machine_learning/ml_infrastructure/inference-server-performance/client/deployment_master.yaml

# Locust master Deployment. Note: the Deployment API moved from
# extensions/v1beta1 (removed in Kubernetes 1.16) to apps/v1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-master
  labels:
    name: locust-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: locust-master
  template:
    metadata:
      labels:
        app: locust-master
    spec:
      containers:
        - name: locust-master
          image: gcr.io/YOUR-PROJECT-ID/locust_tester
          ports:
            - name: loc-master
              containerPort: 8089   # Locust web UI
              protocol: TCP
            - name: loc-master-p1
              containerPort: 5557   # worker-to-master communication
              protocol: TCP
            - name: loc-master-p2
              containerPort: 5558   # worker-to-master communication
              protocol: TCP
          command: ["locust", "-f", "locust/trtis_grpc_client.py"]
          args: ["--host", "CLUSTER-IP-TRTIS", "--master"]
          resources:
            requests:
              cpu: 200m
          env:
            - name: MODEL_NAME
              valueFrom:
                configMapKeyRef:
                  name: locust-config
                  key: model
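The `YOUR-PROJECT-ID` and `CLUSTER-IP-TRTIS` placeholders must be replaced with real values before the manifest is applied. A minimal substitution sketch (the project ID and service IP below are illustrative assumptions, and the full-file `sed`/`kubectl` invocation is shown as a comment; here the substitution is demonstrated on the two placeholder-bearing lines only):

```shell
# Assumed example values -- substitute your own GCP project ID and the
# ClusterIP of the TRTIS inference service.
PROJECT_ID="my-gcp-project"
TRTIS_IP="10.0.0.42"

# In practice, render and apply the whole manifest in one pipeline:
#   sed -e "s/YOUR-PROJECT-ID/${PROJECT_ID}/" \
#       -e "s/CLUSTER-IP-TRTIS/${TRTIS_IP}/" \
#       deployment_master.yaml | kubectl apply -f -

# Demonstration on the two lines of the manifest that carry placeholders:
printf '%s\n' \
  'image: gcr.io/YOUR-PROJECT-ID/locust_tester' \
  'args: ["--host", "CLUSTER-IP-TRTIS", "--master"]' |
sed -e "s/YOUR-PROJECT-ID/${PROJECT_ID}/" \
    -e "s/CLUSTER-IP-TRTIS/${TRTIS_IP}/"
```

Keeping the placeholders out of version control and substituting at deploy time avoids committing project-specific values into the manifest.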