Exporting traces via OpenTelemetry does not replace the core observability features built into TensorZero. Many key TensorZero features (including optimization) require richer observability data that TensorZero collects and stores in your ClickHouse database.
Traces exported through OpenTelemetry are for external observability only and are not sufficient for these built-in TensorZero capabilities.
You can find a complete runnable example exporting traces to Jaeger on GitHub.
Setup
- Enable `export.otlp.traces.enabled` in the `[gateway]` section of the `tensorzero.toml` configuration file.
- Set the `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` environment variable in the gateway container to the endpoint of your OpenTelemetry service.
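The first step might look like the following sketch; the exact key layout (`[gateway.export.otlp.traces]` as a nested table) is an assumption, so verify it against the configuration reference for your TensorZero version:

```toml
# tensorzero.toml — enable OTLP trace export in the [gateway] section
[gateway.export.otlp.traces]
enabled = true
```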
Example: TensorZero Gateway and Jaeger with Docker Compose
For example, if you’re deploying the TensorZero Gateway and Jaeger with Docker Compose, you can point the gateway at Jaeger’s OTLP endpoint with an environment variable.
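A minimal Docker Compose sketch is shown below; the service name `jaeger`, the image tags, and the OTLP gRPC port `4317` are assumptions, so adjust them to your deployment:

```yaml
# docker-compose.yml (excerpt)
services:
  gateway:
    image: tensorzero/gateway
    environment:
      # Point the gateway at Jaeger's OTLP gRPC endpoint
      - OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://jaeger:4317

  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "16686:16686" # Jaeger UI
```

With Docker Compose, the hostname `jaeger` resolves to the Jaeger container on the shared network, so the gateway can reach it without exposing the OTLP port on the host.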
Traces
Once configured, the TensorZero Gateway will begin sending traces to your OpenTelemetry-compatible service. Traces are generated for each HTTP request handled by the gateway (excluding auxiliary endpoints). For inference requests, these traces additionally contain spans that represent the processing of functions, variants, models, and model providers.
The TensorZero Gateway also provides a Prometheus-compatible metrics endpoint at `/metrics`. This endpoint exposes metrics about the gateway itself rather than the data processed by the gateway. See Auxiliary Endpoints for more details.

Custom HTTP headers
You can attach custom HTTP headers to the outgoing requests made to `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT`. When making a request to a TensorZero HTTP endpoint, add a header prefixed with `tensorzero-otlp-traces-extra-header-`; the gateway strips the prefix and attaches the remaining header to the export requests. For example, sending `tensorzero-otlp-traces-extra-header-my-first-header: my-first-value` and `tensorzero-otlp-traces-extra-header-my-second-header: my-second-value` causes the gateway to attach `my-first-header: my-first-value` and `my-second-header: my-second-value` when exporting any span associated with your TensorZero API request. TensorZero API requests without these headers set are unaffected.
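Concretely, an inference request carrying two prefixed headers might look like the following sketch (the endpoint path, host, and header names are illustrative):

```http
POST /inference HTTP/1.1
Host: localhost:3000
Content-Type: application/json
tensorzero-otlp-traces-extra-header-my-first-header: my-first-value
tensorzero-otlp-traces-extra-header-my-second-header: my-second-value
```

The OTLP export requests for the spans generated by this inference would then carry `my-first-header: my-first-value` and `my-second-header: my-second-value`, which is useful for per-request routing or authentication at your OpenTelemetry collector.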