- Run different experiments per namespace: override a function’s default A/B test for specific contexts (e.g. customers).
- Restrict models and credentials to a namespace: limit a model or API key to a specific context (e.g. a per-customer fine-tune that should only be served to that customer).
## Run different experiments per namespace
### Configure
By default, a function’s experimentation configuration applies to all inference requests. With namespaces, you can override this for specific customers or contexts. For example, imagine you’re A/B testing prompts and models across your customer base, but an enterprise customer (acme_corp) needs a variant with a prompt tailored to their brand voice.
You can define separate variants with custom prompts for that customer and route them using a namespace:
tensorzero.toml
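A sketch of such a configuration is below. The function and variant syntax follows TensorZero's `tensorzero.toml` conventions, but the namespace-override table is an illustrative assumption — consult the TensorZero configuration reference for the exact keys:

```toml
[functions.support_agent]
type = "chat"

# Default variants, A/B tested across the whole customer base
[functions.support_agent.variants.prompt_a]
type = "chat_completion"
model = "openai::gpt-4o-mini"

[functions.support_agent.variants.prompt_b]
type = "chat_completion"
model = "openai::gpt-4o"

# A variant with a prompt tailored to acme_corp's brand voice
[functions.support_agent.variants.acme_brand_voice]
type = "chat_completion"
model = "openai::gpt-4o"

# Hypothetical: pin the acme_corp namespace to its tailored variant
[functions.support_agent.experimentation.namespaces.acme_corp]
variants = ["acme_brand_voice"]
```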
You can also define a support_agent function that runs independent adaptive experiments for different customers:
tensorzero.toml
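A sketch of this configuration follows. The metric definition uses standard TensorZero syntax, but the experimentation and per-namespace tables are illustrative assumptions rather than verbatim keys:

```toml
[metrics.user_rating]
type = "float"
optimize = "max"
level = "inference"

[functions.support_agent.variants.prompt_a]
type = "chat_completion"
model = "openai::gpt-4o-mini"

[functions.support_agent.variants.prompt_b]
type = "chat_completion"
model = "openai::gpt-4o"

# Hypothetical: the default adaptive experiment, driven by all feedback
[functions.support_agent.experimentation]
metric = "user_rating"

# Hypothetical: an independent experiment scoped to acme_corp,
# driven only by feedback on acme_corp inferences
[functions.support_agent.experimentation.namespaces.acme_corp]
metric = "user_rating"
```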
Both experiments optimize the same metric, user_rating, but the default experiment uses feedback from all inferences, while the acme_corp experiment uses feedback only from acme_corp inferences.
This means each can converge on a different variant.
### Run inference
To use a namespace, pass it at inference time using the `namespace` parameter (or `tensorzero::namespace` in the OpenAI-compatible endpoint).
For example:
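A minimal sketch of the request body for the gateway's inference endpoint, shown here as a Python dict (the message content is illustrative; the `namespace` field is the parameter described above):

```python
import json

# Hypothetical request body for TensorZero's POST /inference endpoint.
# The namespace field routes the request to any namespace-specific
# variants or experiments configured for acme_corp.
payload = {
    "function_name": "support_agent",
    "namespace": "acme_corp",
    "input": {
        "messages": [{"role": "user", "content": "Where is my order?"}],
    },
}
body = json.dumps(payload)
```

If the namespace has no specific configuration, the gateway simply applies the function's default experimentation configuration.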
TensorZero stores the namespace as a `tensorzero::namespace` tag on the inference record.
This lets you filter and query inferences by namespace in the TensorZero UI and directly in the database.
If a request provides a namespace that doesn’t have a specific configuration (e.g. "some_other_customer"), the default experimentation configuration is used.
In other words, unknown namespaces don’t cause errors; they simply fall back to the default configuration.
## Restrict models and credentials to a namespace
Namespaces can also restrict which models and credentials (API keys) are available in a given context. This is particularly helpful when you have per-customer fine-tuned models or API keys that should only be used for the correct customer's inferences.
tensorzero.toml
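A sketch of such a restriction is below. The model and provider tables follow TensorZero's `tensorzero.toml` conventions, but the `namespaces` restriction key and the fine-tuned model name are illustrative assumptions:

```toml
# A per-customer fine-tune that should only serve acme_corp traffic
[models.acme_finetune]
routing = ["openai"]
namespaces = ["acme_corp"]  # hypothetical restriction key

[models.acme_finetune.providers.openai]
type = "openai"
model_name = "ft:gpt-4o-mini:acme::abc123"  # illustrative fine-tune ID
api_key_location = "env::ACME_OPENAI_API_KEY"
```

Scoping the API key alongside the model ensures that a per-customer credential is never used to serve another customer's inferences.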