The TensorZero Gateway also provides a Prometheus-compatible metrics endpoint at `/metrics`.
This endpoint includes metrics about the gateway itself rather than the data processed by the gateway.
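As a quick sanity check, you can fetch the endpoint directly; this sketch assumes the gateway is listening on `localhost:3000`:

```bash
# Fetch the gateway's own Prometheus-compatible metrics
curl http://localhost:3000/metrics
```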
See Export Prometheus metrics for more details.

Configure
You can find a complete runnable example exporting traces to Jaeger on GitHub.
1
Set up the configuration
Enable `export.otlp.traces.enabled` in the `[gateway]` section of the `tensorzero.toml` configuration file:
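A minimal sketch of this setting in `tensorzero.toml`:

```toml
[gateway]
# Export traces to an OpenTelemetry-compatible service via OTLP
export.otlp.traces.enabled = true
```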
2
Configure the OTLP traces endpoint
Set the `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` environment variable in the gateway container to the endpoint of your OpenTelemetry service.

Example: TensorZero Gateway and Jaeger with Docker Compose

For example, if you’re deploying the TensorZero Gateway and Jaeger in Docker Compose, you can set the following environment variable:
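A minimal sketch, assuming the gateway image is `tensorzero/gateway` and a Jaeger service named `jaeger` that accepts OTLP over gRPC on its default port 4317:

```yaml
services:
  gateway:
    image: tensorzero/gateway
    environment:
      # Point OTLP trace export at the Jaeger container
      OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: http://jaeger:4317
```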
3
Browse the exported traces
Once configured, the TensorZero Gateway will begin sending traces to your OpenTelemetry-compatible service.

Traces are generated for each HTTP request handled by the gateway (excluding auxiliary endpoints).
For inference requests, these traces additionally contain spans that represent the processing of functions, variants, models, and model providers.

Customize
Send custom HTTP headers
You can attach custom HTTP headers to the outgoing requests made to `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT`.

When making a request to a TensorZero HTTP endpoint, add a header prefixed with `tensorzero-otlp-traces-extra-header-`. For example, sending a request with the headers `tensorzero-otlp-traces-extra-header-my-first-header: my-first-value` and `tensorzero-otlp-traces-extra-header-my-second-header: my-second-value` will cause the gateway to attach the headers `my-first-header: my-first-value` and `my-second-header: my-second-value` when exporting any span associated with your TensorZero API request.

TensorZero API requests without these headers set will be unaffected.
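For instance, a request like the following would produce that behavior; the function name `my_function` and the gateway address `localhost:3000` are illustrative assumptions:

```bash
# Inference request carrying two prefixed headers for OTLP export
curl -X POST http://localhost:3000/inference \
  -H "Content-Type: application/json" \
  -H "tensorzero-otlp-traces-extra-header-my-first-header: my-first-value" \
  -H "tensorzero-otlp-traces-extra-header-my-second-header: my-second-value" \
  -d '{
    "function_name": "my_function",
    "input": {
      "messages": [{"role": "user", "content": "Hello, world!"}]
    }
  }'
```

The spans exported for this request would then carry `my-first-header: my-first-value` and `my-second-header: my-second-value`, with the prefix stripped.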