Most TensorZero deployments will not require Valkey or Redis.
TensorZero can use a Redis-compatible data store like Valkey as a high-performance backend for its rate limiting functionality.
We recommend Valkey over Postgres if you’re handling 100+ QPS or have strict latency requirements.
TensorZero’s rate limiting implementation can achieve sub-millisecond P99 latency at 10k+ QPS using Valkey.
Deploy
You can self-host Valkey or use a managed Redis-compatible service (e.g. AWS ElastiCache, GCP Memorystore).
Add Valkey to your docker-compose.yml:

```yaml
services:
  valkey:
    image: valkey/valkey:8
    ports:
      - "6379:6379"
    volumes:
      - valkey-data:/data

volumes:
  valkey-data:
```
Alternatively, run Valkey directly with Docker:

```bash
docker run -d --name valkey -p 6379:6379 valkey/valkey:8
```
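To check that Valkey is reachable before pointing the gateway at it, you can ping it with `valkey-cli` (bundled in the official image). This assumes the container name `valkey` from the examples above:

```bash
# Expect the reply "PONG" if the server is up and accepting connections
docker exec valkey valkey-cli ping
```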
To configure TensorZero to use Valkey, set the TENSORZERO_VALKEY_URL environment variable to your Valkey connection URL.

```bash
TENSORZERO_VALKEY_URL="redis://[hostname]:[port]"

# Example:
TENSORZERO_VALKEY_URL="redis://localhost:6379"
```
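If you run the gateway with Docker Compose, you can set the variable on the gateway service and reference Valkey by its service name. This is a minimal sketch, assuming a service named `gateway` and the `valkey` service defined above on the same Compose network:

```yaml
services:
  gateway:
    image: tensorzero/gateway   # adjust to your gateway image and tag
    environment:
      # "valkey" resolves to the Valkey service on the Compose network
      - TENSORZERO_VALKEY_URL=redis://valkey:6379
    depends_on:
      - valkey
```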
TensorZero automatically loads the required Lua functions into Valkey on startup.
No manual setup is required.
If both TENSORZERO_VALKEY_URL and TENSORZERO_POSTGRES_URL are set, the gateway uses Valkey for rate limiting.
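For example, with both variables set, rate limiting traffic goes to Valkey (the connection strings below are illustrative placeholders, not real deployment values):

```bash
# With both set, the gateway uses Valkey for rate limiting
TENSORZERO_POSTGRES_URL="postgres://user:password@postgres:5432/tensorzero"
TENSORZERO_VALKEY_URL="redis://valkey:6379"
```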
Best Practices
Durability
A critical failure of Valkey (e.g. crash or power outage) could mean losing rate limiting data since the last backup.
When your rate limiting windows are short (e.g. minutes), this loss is usually acceptable.
If your rate limiting configuration includes larger windows, we recommend setting up recurring RDB backups (point-in-time snapshots) for stronger durability guarantees, as sketched below.
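For example, one way to enable periodic RDB snapshots when running the container directly is to pass `save` thresholds to `valkey-server` (the image forwards extra arguments to the server); tune the interval and change count to your workload:

```bash
# Snapshot to /data if at least 1 write occurred in the last 60 seconds
docker run -d --name valkey -p 6379:6379 \
  -v valkey-data:/data \
  valkey/valkey:8 valkey-server --save 60 1
```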