# integrations/nvidia-triton/metadata.yaml
id: nvidia-triton
short_name: nvidia-triton
display_name: NVIDIA Triton
description: 'NVIDIA Triton Inference Server is open-source inference serving software that streamlines AI inferencing. This integration collects metrics for throughput, latency, resource usage, and errors.'