The TensorZero Gateway can export traces to an external OpenTelemetry-compatible observability system using OTLP. Exporting traces via OpenTelemetry allows you to monitor the TensorZero Gateway in external observability platforms such as Jaeger, Datadog, or Grafana. This integration enables you to correlate gateway activity with the rest of your infrastructure, providing deeper insights and unified monitoring across your systems.

Exporting traces via OpenTelemetry does not replace the core observability features built into TensorZero. Many key TensorZero features (including optimization) require richer observability data that TensorZero collects and stores in your ClickHouse database. Traces exported through OpenTelemetry are for external observability only.
The TensorZero Gateway also provides a Prometheus-compatible metrics endpoint at /metrics. This endpoint includes metrics about the gateway itself rather than the data processed by the gateway. See Export Prometheus metrics for more details.
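For example, you can quickly check that the metrics endpoint is reachable with a short script. This is a minimal sketch, assuming the gateway is listening on localhost:3000; adjust the URL for your deployment:
import requests

# Assumes the TensorZero Gateway is reachable at localhost:3000
response = requests.get("http://localhost:3000/metrics")
response.raise_for_status()
print(response.text)  # Prometheus text exposition format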

Configure

You can find a complete runnable example exporting traces to Jaeger on GitHub.
1. Set up the configuration

Set export.otlp.traces.enabled = true in the [gateway] section of the tensorzero.toml configuration file:
[gateway]
# ...
export.otlp.traces.enabled = true
# ...
2. Configure the OTLP traces endpoint

Set the OTEL_EXPORTER_OTLP_TRACES_ENDPOINT environment variable in the gateway container to the endpoint of your OpenTelemetry service.
For example, if you’re deploying the TensorZero Gateway and Jaeger in Docker Compose, you can set the following environment variable:
services:
  gateway:
    image: tensorzero/gateway
    environment:
      OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: http://jaeger:4317
    # ...

  jaeger:
    image: jaegertracing/jaeger
    ports:
      - "4317:4317"
    # ...
3. Browse the exported traces

Once configured, the TensorZero Gateway will begin sending traces to your OpenTelemetry-compatible service. Traces are generated for each HTTP request handled by the gateway (excluding auxiliary endpoints). For inference requests, these traces additionally contain spans that represent the processing of functions, variants, models, and model providers.

[Screenshot of TensorZero Gateway traces in Jaeger]
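
To generate a trace to inspect, send any inference request through the gateway. Here is a minimal sketch using an HTTP client, assuming the gateway is listening on localhost:3000 and your configuration defines a function named your_function_name:
import requests

# Assumes the gateway is reachable at localhost:3000 and that
# `your_function_name` is defined in your tensorzero.toml
response = requests.post(
    "http://localhost:3000/inference",
    json={
        "function_name": "your_function_name",
        "input": {
            "messages": [
                {
                    "role": "user",
                    "content": "Write a haiku about artificial intelligence.",
                }
            ]
        },
    },
)
print(response.json())
The resulting trace, along with its function, variant, model, and model provider spans, should appear in your observability platform shortly afterward.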

Customize

Send custom HTTP headers

You can attach custom HTTP headers to the outgoing OTLP export requests made to OTEL_EXPORTER_OTLP_TRACES_ENDPOINT.

Define custom headers in the configuration

You can configure static headers that will be included in all OTLP export requests by adding them to the export.otlp.traces.extra_headers field in your configuration file:
tensorzero.toml
[gateway.export.otlp.traces]
# ...
extra_headers.space_id = "my-workspace-123"
extra_headers."X-Environment" = "production"
# ...

Define custom headers during inference

You can also send custom headers dynamically on a per-request basis. When a static and a dynamic header conflict, the dynamic header takes precedence. When making a request to a TensorZero HTTP endpoint, add a header prefixed with tensorzero-otlp-traces-extra-header-:
tensorzero-otlp-traces-extra-header-user-id: user-123
tensorzero-otlp-traces-extra-header-request-source: mobile-app
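
For example, here is a minimal sketch using an HTTP client, assuming the gateway is listening on localhost:3000 and your configuration defines a function named your_function_name:
import requests

# The `tensorzero-otlp-traces-extra-header-` prefix is stripped before the
# remaining header is attached to the spans exported for this request.
response = requests.post(
    "http://localhost:3000/inference",
    headers={
        "tensorzero-otlp-traces-extra-header-user-id": "user-123",
        "tensorzero-otlp-traces-extra-header-request-source": "mobile-app",
    },
    json={
        "function_name": "your_function_name",
        "input": {
            "messages": [
                {
                    "role": "user",
                    "content": "Write a haiku about artificial intelligence.",
                }
            ]
        },
    },
)
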
This will attach the headers user-id: user-123 and request-source: mobile-app when exporting any span associated with that specific API request.

When using the TensorZero Python SDK, you can pass dynamic OTLP headers using the otlp_traces_extra_headers parameter in the inference method. The headers will be automatically prefixed with tensorzero-otlp-traces-extra-header- for you:
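# `t0` is a TensorZero client that has already been initialized and connected to your gateway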
response = t0.inference(
    function_name="your_function_name",
    input={
        "messages": [
            {
                "role": "user",
                "content": "Write a haiku about artificial intelligence.",
            }
        ]
    },
    otlp_traces_extra_headers={
        "user-id": "user-123",
        "request-source": "mobile-app",
    },
)
This will attach the headers user-id: user-123 and request-source: mobile-app when exporting any span associated with that specific inference request.

Export OpenInference traces

By default, TensorZero exports traces with attributes that follow the OpenTelemetry Generative AI semantic conventions. You can instead choose to export traces with attributes that follow the OpenInference semantic conventions by setting export.otlp.traces.format = "openinference" in your configuration file. See Configuration Reference for more details.
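
For reference, mirroring the configuration shown earlier, this would look roughly as follows (a sketch; adjust to your existing [gateway] settings):
[gateway]
# ...
export.otlp.traces.enabled = true
export.otlp.traces.format = "openinference"
# ...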