The TensorZero Gateway exposes runtime metrics through a Prometheus-compatible endpoint, so you can monitor gateway performance, track usage patterns, and set up alerting with standard Prometheus tooling.

This endpoint provides operational metrics about the gateway itself; it is not meant to replace TensorZero's observability features.

You can access the metrics by scraping the /metrics endpoint. The gateway currently exports two metrics:
  • inference_count
  • request_count
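To verify that the endpoint is reachable, you can fetch it directly and print the raw exposition text. This is a minimal sketch that assumes the gateway listens at http://localhost:3000 (adjust the host and port to match your deployment):

```python
import requests

# Assumption: the gateway is reachable at localhost:3000.
GATEWAY_METRICS_URL = "http://localhost:3000/metrics"

response = requests.get(GATEWAY_METRICS_URL, timeout=5)
response.raise_for_status()

# The response body is plain text in the Prometheus exposition format.
print(response.text)
```

In a typical deployment you would instead point a Prometheus scrape job at this URL rather than polling it by hand.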
The metrics include relevant labels such as endpoint, function_name, model_name, and metric_name. For example:
GET /metrics
inference_count{endpoint="inference",function_name="draft_email"} 10
request_count{endpoint="inference",function_name="draft_email"} 10
request_count{endpoint="feedback",metric_name="draft_accepted"} 10