This Quickstart guide shows how we’d upgrade an OpenAI wrapper to a minimal TensorZero deployment with built-in observability and fine-tuning capabilities — in just 5 minutes.
From there, you can take advantage of dozens of features to build best-in-class LLM applications. This Quickstart gives a brief tour of TensorZero's features.
If you’re only interested in inference with the gateway, see the shorter How to call any LLM guide.
You can also find the runnable code for this example on GitHub.
TensorZero offers dozens of features covering inference, observability, optimization, evaluations, and experimentation. But the minimal setup requires just a simple configuration file: tensorzero.toml.
tensorzero.toml
```toml
# A function defines the task we're tackling (e.g. generating a haiku)...
[functions.generate_haiku]
type = "chat"

# ... and a variant is one of many implementations we can use to tackle it (a choice of prompt, model, etc.).
# Since we only have one variant for this function, the gateway will always use it.
[functions.generate_haiku.variants.gpt_4o_mini]
type = "chat_completion"
model = "openai::gpt-4o-mini"
```
This minimal configuration file tells the TensorZero Gateway everything it needs to replicate our original OpenAI call.
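For context, the original OpenAI wrapper we're replacing is just a plain chat completion call. A minimal sketch (the runnable before.py is in the GitHub repo):

```python
# before.py: a sketch of the original OpenAI wrapper we're replacing
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "Write a haiku about artificial intelligence.",
        }
    ],
)

print(response)
```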
We’re almost ready to start making API calls.
Let’s launch TensorZero.
1. Set the environment variable OPENAI_API_KEY.
2. Place our tensorzero.toml in the ./config directory.
3. Download the following sample docker-compose.yml file.
This Docker Compose configuration sets up a development ClickHouse database (where TensorZero stores data), the TensorZero Gateway, and the TensorZero UI.
docker-compose.yml

```yaml
# This is a simplified example for learning purposes. Do not use this in production.
# For production-ready deployments, see: https://www.tensorzero.com/docs/gateway/deployment

services:
  clickhouse:
    image: clickhouse/clickhouse-server:24.12-alpine
    environment:
      - CLICKHOUSE_USER=chuser
      - CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1
      - CLICKHOUSE_PASSWORD=chpassword
    ports:
      - "8123:8123"
    volumes:
      - clickhouse-data:/var/lib/clickhouse
    healthcheck:
      test: wget --spider --tries 1 http://chuser:chpassword@clickhouse:8123/ping
      start_period: 30s
      start_interval: 1s
      timeout: 1s

  # The TensorZero Python client *doesn't* require a separate gateway service.
  #
  # The gateway is only needed if you want to use the OpenAI Python client
  # or interact with TensorZero via its HTTP API (for other programming languages).
  #
  # The TensorZero UI also requires the gateway service.
  gateway:
    image: tensorzero/gateway
    volumes:
      # Mount our tensorzero.toml file into the container
      - ./config:/app/config:ro
    command: --config-file /app/config/tensorzero.toml
    environment:
      - TENSORZERO_CLICKHOUSE_URL=http://chuser:chpassword@clickhouse:8123/tensorzero
      - OPENAI_API_KEY=${OPENAI_API_KEY:?Environment variable OPENAI_API_KEY must be set.}
    ports:
      - "3000:3000"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on:
      clickhouse:
        condition: service_healthy

  ui:
    image: tensorzero/ui
    volumes:
      # Mount our tensorzero.toml file into the container
      - ./config:/app/config:ro
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY:?Environment variable OPENAI_API_KEY must be set.}
      - TENSORZERO_CLICKHOUSE_URL=http://chuser:chpassword@clickhouse:8123/tensorzero
      - TENSORZERO_GATEWAY_URL=http://gateway:3000
    ports:
      - "4000:4000"
    depends_on:
      clickhouse:
        condition: service_healthy

volumes:
  clickhouse-data:
```
Our setup should look like:
- config/
  - tensorzero.toml
- after.py (see below)
- before.py
- docker-compose.yml
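With these files in place, we can start everything with Docker Compose:

```bash
docker compose up
```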
The gateway will replicate our original OpenAI call and store the data in our database, with less than 1 ms of latency overhead thanks to Rust 🦀. The TensorZero Gateway can be used with the TensorZero Python client, with the OpenAI client (Python, Node, etc.), or via its HTTP API in any programming language.
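For instance, a raw HTTP request to the gateway might look like this with curl (a sketch, assuming the gateway from our Docker Compose setup is listening on localhost:3000 and exposes its inference endpoint at POST /inference):

```bash
# A sketch of the same inference via the gateway's HTTP API
curl -X POST http://localhost:3000/inference \
  -H "Content-Type: application/json" \
  -d '{
    "function_name": "generate_haiku",
    "input": {
      "messages": [
        {
          "role": "user",
          "content": "Write a haiku about artificial intelligence."
        }
      ]
    }
  }'
```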
You can install the TensorZero Python client with:
```bash
pip install tensorzero
```
Then, you can make a TensorZero API call with:
after.py
```python
from tensorzero import TensorZeroGateway

with TensorZeroGateway.build_embedded(
    clickhouse_url="http://chuser:chpassword@localhost:8123/tensorzero",
    config_file="config/tensorzero.toml",
) as client:
    response = client.inference(
        function_name="generate_haiku",
        input={
            "messages": [
                {
                    "role": "user",
                    "content": "Write a haiku about artificial intelligence.",
                }
            ]
        },
    )

print(response)
```
Sample Output
```python
ChatInferenceResponse(
    inference_id=UUID('0191ddb2-2c02-7641-8525-494f01bcc468'),
    episode_id=UUID('0191ddb2-28f3-7cc2-b0cc-07f504d37e59'),
    variant_name='gpt_4o_mini',
    content=[
        Text(
            type='text',
            text='Wires hum with intent, \nThoughts born from code and structure, \nGhost in silicon.'
        )
    ],
    usage=Usage(
        input_tokens=15,
        output_tokens=20
    )
)
```
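Alternatively, you can make the same call with the OpenAI Python client pointed at the gateway. A minimal sketch, assuming the gateway exposes its OpenAI-compatible endpoint at /openai/v1 and routes model names of the form tensorzero::function_name::&lt;name&gt; to TensorZero functions:

```python
from openai import OpenAI

# Point the OpenAI client at the TensorZero Gateway instead of api.openai.com.
# The gateway holds the real provider credentials, so the client-side key is unused.
client = OpenAI(base_url="http://localhost:3000/openai/v1", api_key="not-used")

response = client.chat.completions.create(
    # Route the request to our TensorZero function rather than a raw model
    model="tensorzero::function_name::generate_haiku",
    messages=[
        {
            "role": "user",
            "content": "Write a haiku about artificial intelligence.",
        }
    ],
)

print(response)
```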
The TensorZero UI streamlines LLM engineering workflows like observability and optimization (e.g. fine-tuning). The Docker Compose file we used above also launched the TensorZero UI.
You can visit the UI at http://localhost:4000.
The TensorZero UI provides a dashboard for observability data.
We can inspect data about individual inferences, entire functions, and more.
This guide is minimal, so the observability data here is simple.
Once we start using more advanced features like feedback and multiple variants, the observability UI will let us track metrics, experiments (A/B tests), and more.
The TensorZero UI also provides a workflow for fine-tuning models like GPT-4o and Llama 3.
With a few clicks, you can launch a fine-tuning job.
Once the job is complete, the TensorZero UI will provide a configuration snippet you can add to your tensorzero.toml.
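The exact snippet depends on the provider and job, but it generally amounts to a new variant for the function. A hypothetical example (the fine-tuned model ID below is made up):

```toml
# Hypothetical: a new variant pointing at the fine-tuned model
[functions.generate_haiku.variants.gpt_4o_mini_fine_tuned]
type = "chat_completion"
model = "openai::ft:gpt-4o-mini-2024-07-18:my-org::abc123"  # made-up model ID
```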
We can also send metrics & feedback to the TensorZero Gateway.
This data is used to curate better datasets for fine-tuning and other optimization workflows.
Since we haven’t done that yet, the TensorZero UI will skip the curation step before fine-tuning.
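As a preview, here's a sketch of sending feedback with the Python client. It assumes a hypothetical boolean metric named haiku_rating has been defined in tensorzero.toml (metrics are beyond the scope of this Quickstart):

```python
from tensorzero import TensorZeroGateway

with TensorZeroGateway.build_embedded(
    clickhouse_url="http://chuser:chpassword@localhost:8123/tensorzero",
    config_file="config/tensorzero.toml",
) as client:
    response = client.inference(
        function_name="generate_haiku",
        input={
            "messages": [
                {
                    "role": "user",
                    "content": "Write a haiku about artificial intelligence.",
                }
            ]
        },
    )

    # Assumes a metric like this is defined in tensorzero.toml:
    #   [metrics.haiku_rating]
    #   type = "boolean"
    #   optimize = "max"
    #   level = "inference"
    client.feedback(
        metric_name="haiku_rating",
        inference_id=response.inference_id,
        value=True,  # e.g. a thumbs-up from the user
    )
```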