Simple Setup
You can use the short-hand mistral::model_name to use a Mistral model with TensorZero, unless you need advanced features like fallbacks or custom credentials.
You can use Mistral models in your TensorZero variants by setting the model field to mistral::model_name.
For example:
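A minimal sketch of a variant using the short-hand, assuming hypothetical function and variant names (my_function_name, my_variant_name) and the mistral-small-latest model:

```toml
[functions.my_function_name]
type = "chat"

[functions.my_function_name.variants.my_variant_name]
type = "chat_completion"
# The short-hand form: no separate model or provider block is needed.
model = "mistral::mistral-small-latest"
```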
You can also use this short-hand in the model parameter of the OpenAI-compatible inference endpoint to use a specific Mistral model, without having to configure a function and variant in TensorZero.
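A hedged example of such a request, assuming a TensorZero gateway running on localhost:3000 and the mistral-small-latest model (adjust the host, port, and model name for your deployment):

```shell
curl http://localhost:3000/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tensorzero::model_name::mistral::mistral-small-latest",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```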
Advanced Setup
In more complex scenarios (e.g. fallbacks, custom credentials), you can configure your own model and Mistral provider in TensorZero. For this minimal setup, you'll need just two files in your project directory: a configuration file and a Docker Compose file.

Configuration
Create a minimal configuration file that defines a model and a simple chat function:

config/tensorzero.toml
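A minimal sketch of such a configuration file, assuming hypothetical names (mistral_small, my_function_name, my_variant_name) and the mistral-small-latest model:

```toml
[models.mistral_small]
routing = ["mistral"]

[models.mistral_small.providers.mistral]
type = "mistral"
model_name = "mistral-small-latest"

[functions.my_function_name]
type = "chat"

[functions.my_function_name.variants.my_variant_name]
type = "chat_completion"
# References the model defined above by its TensorZero name.
model = "mistral_small"
```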
Reasoning Models
For Magistral reasoning models (magistral-small-latest, magistral-medium-latest), set prompt_mode = "reasoning" in your provider configuration:
config/tensorzero.toml
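A sketch of the provider block with prompt_mode set, assuming a hypothetical model name (magistral_small) and the magistral-small-latest model:

```toml
[models.magistral_small]
routing = ["mistral"]

[models.magistral_small.providers.mistral]
type = "mistral"
model_name = "magistral-small-latest"
# Enables reasoning output for Magistral models.
prompt_mode = "reasoning"
```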
When prompt_mode is set, reasoning output from the model will be returned as thought blocks in the response.
Credentials
You must set the MISTRAL_API_KEY environment variable before running the gateway.
You can customize the credential location by setting the api_key_location to env::YOUR_ENVIRONMENT_VARIABLE or dynamic::ARGUMENT_NAME.
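A sketch of both credential options in a provider block, assuming a hypothetical environment variable name (MY_MISTRAL_API_KEY) and argument name (mistral_api_key):

```toml
[models.mistral_small.providers.mistral]
type = "mistral"
model_name = "mistral-small-latest"
# Read the API key from a custom environment variable...
api_key_location = "env::MY_MISTRAL_API_KEY"
# ...or accept it dynamically at inference time instead:
# api_key_location = "dynamic::mistral_api_key"
```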
See the Credential Management guide and Configuration Reference for more information.
Deployment (Docker Compose)
Create a minimal Docker Compose configuration:

docker-compose.yml
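A minimal sketch of such a Docker Compose file; the image name, config path, and port are assumptions to adapt to your deployment:

```yaml
services:
  gateway:
    image: tensorzero/gateway
    volumes:
      # Mount the config directory created above (read-only).
      - ./config:/app/config:ro
    command: --config-file /app/config/tensorzero.toml
    environment:
      # Fail fast if the API key is not set on the host.
      - MISTRAL_API_KEY=${MISTRAL_API_KEY:?Environment variable MISTRAL_API_KEY must be set.}
    ports:
      - "3000:3000"
```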
You can start the gateway with docker compose up.