Setup
For this minimal setup, you'll need just two files in your project directory: `config/tensorzero.toml` and `docker-compose.yml`. You can also find the complete code for this example on GitHub.
Configuration
Create a minimal configuration file that defines a model and a simple chat function:

`config/tensorzero.toml`
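A minimal sketch of such a configuration, assuming a hypothetical function name `my_function_name`, the `gemini-2.0-flash` model, and placeholder project and region values (substitute your own):

```toml
# Define a model backed by the GCP Vertex AI Gemini provider
[models.gemini_2_0_flash]
routing = ["gcp_vertex_gemini"]

[models.gemini_2_0_flash.providers.gcp_vertex_gemini]
type = "gcp_vertex_gemini"
model_id = "gemini-2.0-flash"   # hypothetical model ID
location = "us-central1"        # placeholder region
project_id = "my-project"       # placeholder GCP project ID

# Define a simple chat function with one variant that uses the model above
[functions.my_function_name]
type = "chat"

[functions.my_function_name.variants.baseline]
type = "chat_completion"
model = "gemini_2_0_flash"
```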
Alternatively, you can use the shorthand `gcp_vertex_gemini::model_name` to use a GCP Vertex AI Gemini model with TensorZero if you don't need advanced features like fallbacks or custom credentials. The shorthand takes one of two forms:

`gcp_vertex_gemini::projects/<PROJECT_ID>/locations/<REGION>/publishers/google/models/<MODEL_ID>`

`gcp_vertex_gemini::projects/<PROJECT_ID>/locations/<REGION>/endpoints/<ENDPOINT_ID>`
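For example, a variant could reference the shorthand directly instead of a separately defined model block. This sketch assumes a hypothetical function name `my_function_name` and placeholder project, region, and model values:

```toml
[functions.my_function_name]
type = "chat"

[functions.my_function_name.variants.baseline]
type = "chat_completion"
# Shorthand model name: no [models.*] block needed for this usage
model = "gcp_vertex_gemini::projects/my-project/locations/us-central1/publishers/google/models/gemini-2.0-flash"
```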
Credentials
You must generate a GCP service account key in JWT form (described here) and point to it in the `GCP_VERTEX_CREDENTIALS_PATH` environment variable.
You can customize the credential location by setting `credential_location` to `env::YOUR_ENVIRONMENT_VARIABLE`.
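As a sketch, the provider block could override the default credential location like this (assuming a hypothetical environment variable `MY_GCP_CREDENTIALS_PATH` and placeholder model, region, and project values):

```toml
[models.gemini_2_0_flash.providers.gcp_vertex_gemini]
type = "gcp_vertex_gemini"
model_id = "gemini-2.0-flash"   # hypothetical model ID
location = "us-central1"        # placeholder region
project_id = "my-project"       # placeholder GCP project ID
# Read the credential path from a custom environment variable
credential_location = "env::MY_GCP_CREDENTIALS_PATH"
```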
See the Credential Management guide and Configuration Reference for more information.
Deployment (Docker Compose)
Create a minimal Docker Compose configuration:

`docker-compose.yml`
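A minimal sketch of such a Compose file, assuming the `tensorzero/gateway` image, the configuration file at `./config/tensorzero.toml`, and a service account key whose host path is supplied via `GCP_VERTEX_CREDENTIALS_PATH`:

```yaml
services:
  gateway:
    image: tensorzero/gateway
    volumes:
      # Mount the configuration directory read-only
      - ./config:/app/config:ro
      # Mount the service account key file into the container (host path from the environment)
      - ${GCP_VERTEX_CREDENTIALS_PATH}:/app/gcp-credentials.json:ro
    environment:
      # Point the gateway at the mounted credentials file
      - GCP_VERTEX_CREDENTIALS_PATH=/app/gcp-credentials.json
    command: --config-file /app/config/tensorzero.toml
    ports:
      - "3000:3000"
```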
Then, start the gateway with `docker compose up`.
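Once the gateway is running, you can send it an inference request. A sketch, assuming the gateway listens on port 3000 and the function is named `my_function_name` as in the configuration above:

```shell
# Requires a running gateway; function name is a placeholder from the example config
curl -X POST http://localhost:3000/inference \
  -H "Content-Type: application/json" \
  -d '{
    "function_name": "my_function_name",
    "input": {
      "messages": [
        {"role": "user", "content": "What is the capital of Japan?"}
      ]
    }
  }'
```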