Learn how to use tool use (function calling) with TensorZero Gateway.
Functions of type `chat` can call tools. Tools are defined in the configuration file and attached to the functions that are allowed to call them.
A tool definition has the following properties:
- `name`: The name of the tool.
- `description`: A description of the tool. The description helps models understand the tool's purpose and usage.
- `parameters`: The path to a file containing a JSON Schema for the tool's parameters.
- `strict` (optional): Enforces type checking for the tool's parameters. This setting is only supported by some model providers and will be ignored otherwise.
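For example, a tool definition and the function that uses it might look like this in `tensorzero.toml` (a sketch; the function name and schema path are illustrative, and the function's variant configuration is omitted):

```toml
# Define the tool; `parameters` points at a JSON Schema file.
[tools.get_temperature]
description = "Get the current temperature for a given location."
parameters = "tools/get_temperature.json"
strict = true

# Attach the tool to a chat function so the model can call it.
[functions.weather_bot]
type = "chat"
tools = ["get_temperature"]
```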
**Example: JSON Schema for the `get_temperature` tool**

For example, if we wanted the `get_temperature` tool to take a mandatory `location` parameter and an optional `units` parameter, we could use the following JSON Schema:
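```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "location": {
      "type": "string",
      "description": "The location to get the temperature for (e.g. \"Tokyo\")"
    },
    "units": {
      "type": "string",
      "description": "The units for the temperature (\"celsius\" or \"fahrenheit\")",
      "enum": ["celsius", "fahrenheit"]
    }
  },
  "required": ["location"],
  "additionalProperties": false
}
```

Here, `location` is required while `units` is optional; the descriptions and enum values above are illustrative.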
If the model decides to use a tool, the gateway will return `tool_call` content blocks in the response.
For multi-turn conversations supporting tool use, you can provide tool results in subsequent inference requests with a `tool_result` content block.
**Example: Multi-turn conversation with tool use**
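A minimal sketch with the Python client (assuming a gateway at `http://localhost:3000`, a `weather_bot` function configured with the `get_temperature` tool, and a single tool call in the response; names are illustrative):

```python
from tensorzero import TensorZeroGateway

with TensorZeroGateway.build_http(gateway_url="http://localhost:3000") as client:
    messages = [{"role": "user", "content": "What is the temperature in Tokyo?"}]

    # First turn: the model should respond with a `tool_call` content block.
    response = client.inference(
        function_name="weather_bot",
        input={"messages": messages},
    )

    # Append the assistant turn (including its tool call) to the history.
    messages.append({"role": "assistant", "content": response.content})

    # Run the tool yourself, then report the output in a `tool_result` block.
    tool_call = response.content[0]  # assumes a single `tool_call` block
    messages.append(
        {
            "role": "user",
            "content": [
                {
                    "type": "tool_result",
                    "id": tool_call.id,
                    "name": tool_call.name,
                    "result": "25",  # the (stringified) output of your tool
                }
            ],
        }
    )

    # Second turn: the model answers using the tool result.
    response = client.inference(
        function_name="weather_bot",
        input={"messages": messages},
    )
    print(response.content)
```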
You can restrict the tools available during an inference with the `allowed_tools` parameter. For example, suppose your TensorZero function has access to several tools, but you only want to allow the `get_temperature` tool to be called during a particular inference. You can achieve this by setting `allowed_tools=["get_temperature"]` in your inference request.
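With the Python client, that could look like the following (reusing the illustrative `weather_bot` setup from above):

```python
from tensorzero import TensorZeroGateway

with TensorZeroGateway.build_http(gateway_url="http://localhost:3000") as client:
    response = client.inference(
        function_name="weather_bot",
        input={
            "messages": [
                {"role": "user", "content": "What is the temperature in Tokyo?"}
            ]
        },
        # Restrict this inference to `get_temperature`, even if the function
        # is configured with additional tools.
        allowed_tools=["get_temperature"],
    )
```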
You can also define tools dynamically at inference time with the `additional_tools` property. (In the OpenAI-compatible API, you can use the `tools` property instead.)

You should only use dynamic tools if your use case requires it. Otherwise, it's recommended to define tools in the configuration file.

The `additional_tools` field accepts a list of objects with the same structure as the tools defined in the configuration file, except that the `parameters` field should contain the JSON Schema itself (rather than a path to a file with the schema).
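For instance, a hypothetical `get_humidity` tool that isn't defined in the configuration file could be supplied at inference time like this (a sketch):

```python
from tensorzero import TensorZeroGateway

with TensorZeroGateway.build_http(gateway_url="http://localhost:3000") as client:
    response = client.inference(
        function_name="weather_bot",
        input={
            "messages": [
                {"role": "user", "content": "How humid is it in Tokyo?"}
            ]
        },
        additional_tools=[
            {
                "name": "get_humidity",
                "description": "Get the current relative humidity for a given location.",
                # The JSON Schema is provided inline, not as a file path.
                "parameters": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                    "additionalProperties": False,
                },
            }
        ],
    )
```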
You can customize the tool calling strategy with the `tool_choice` parameter. The supported tool choice strategies are:

- `none`: The function should not use any tools.
- `auto`: The model decides whether or not to use a tool. If it decides to use a tool, it also decides which tools to use.
- `required`: The model should use a tool. If multiple tools are available, the model decides which tool to use.
- `{ specific = "tool_name" }`: The model should use a specific tool. The tool must be defined in the `tools` section of the configuration file or provided in `additional_tools`.

The `tool_choice` parameter can be set either in your configuration file or directly in your inference request.
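For example, a function could be pinned to a specific tool in the configuration file (a sketch; the function name is illustrative):

```toml
[functions.weather_bot]
type = "chat"
tools = ["get_temperature"]
tool_choice = { specific = "get_temperature" }
```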
You can enable parallel tool calling by setting the `parallel_tool_calling` parameter to `true`. If enabled, the model will be able to request multiple tool calls in a single inference request (conversation turn). You can specify `parallel_tool_calling` in the configuration file or in the inference request.
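For example, in the configuration file (a sketch following the parameter name used above; the function and tool names are illustrative):

```toml
[functions.weather_bot]
type = "chat"
tools = ["get_temperature", "get_humidity"]
parallel_tool_calling = true
```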