# integrations/nvidia-triton/documentation.yaml
exporter_type: included
app_name_short: NVIDIA Triton
app_name: {{app_name_short}}
app_site_name: {{app_name_short}}
app_site_url: https://developer.nvidia.com/triton-inference-server
exporter_name: the {{app_name_short}} exporter
exporter_repo_url: https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/user_guide/metrics.html
gke_setup_url: /kubernetes-engine/docs/tutorials/online-ml-inference
additional_prereq_info: |
  {{app_site_name}} exposes Prometheus-format metrics automatically; you do
  not have to install a separate exporter. To verify that {{exporter_name}}
  is emitting metrics on the expected endpoint, do the following:

  1. Set up port forwarding by using the following command:

     <pre class="devsite-click-to-copy">
     kubectl -n {{namespace_name}} port-forward {{pod_name}} 8002:8002
     </pre>

  2. Access the endpoint `localhost:8002/metrics` by using a browser
     or the `curl` utility in another terminal session.
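     For example, while the port-forwarding command above is running, a
     command like the following retrieves the metrics:

     <pre class="devsite-click-to-copy">
     curl http://localhost:8002/metrics
     </pre>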
dashboard_available: true
multiple_dashboards: false
dashboard_display_name: {{app_name_short}} Prometheus Overview
podmonitoring_config: |
  apiVersion: monitoring.googleapis.com/v1
  kind: PodMonitoring
  metadata:
    name: triton
    labels:
      app.kubernetes.io/name: triton
      app.kubernetes.io/part-of: google-cloud-managed-prometheus
  spec:
    endpoints:
    - port: 8002
      scheme: http
      interval: 30s
      path: /metrics
    selector:
      matchLabels:
        app: triton
additional_podmonitoring_info: |
  Ensure that the values of the `port` and `matchLabels` fields match those
  of the {{app_name_short}} pods that you want to monitor.
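  To apply the configuration, save it to a file and use a command similar
  to the following; the file name shown here is only an example:

  <pre class="devsite-click-to-copy">
  kubectl -n {{namespace_name}} apply -f pod-monitoring-triton.yaml
  </pre>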
sample_promql_query: up{job="triton", cluster="{{cluster_name}}", namespace="{{namespace_name}}"}