TrueFoundry provides an enterprise-ready AI Gateway that adds governance and observability to agentic frameworks like LangChain. The TrueFoundry AI Gateway serves as a unified interface for LLM access, providing:
Unified API Access: Connect to 250+ LLMs (OpenAI, Claude, Gemini, Groq, Mistral) through one API
Low Latency: Sub-3ms internal latency with intelligent routing and load balancing
Enterprise Security: SOC 2, HIPAA, GDPR compliance with RBAC and audit logging
Quota and cost management: Token-based quotas, rate limiting, and comprehensive usage tracking
Observability: Full request/response logging, metrics, and traces with customizable retention
Connect to TrueFoundry by pointing LangChain's ChatOpenAI model at the gateway:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key=TRUEFOUNDRY_API_KEY,
    base_url=TRUEFOUNDRY_GATEWAY_BASE_URL,
    model="openai-main/gpt-4o",  # similarly, you can call any model from any model provider
)

llm.invoke("What is the meaning of life, universe and everything?")
The request is routed through your TrueFoundry gateway to the specified model provider. TrueFoundry automatically handles rate limiting, load balancing, and observability.
With the Metrics Dashboard, you can monitor and analyze:
Performance Metrics: Track key latency metrics like Request Latency, Time to First Token (TTFT), and Inter-Token Latency (ITL) with P99, P90, and P50 percentiles
Cost and Token Usage: Gain visibility into your application’s costs with detailed breakdowns of input/output tokens and the associated expenses for each model
Usage Patterns: Understand how your application is being used with detailed analytics on user activity, model distribution, and team-based usage
Rate Limiting & Load Balancing: Configure limits, distribute traffic across models, and set up fallbacks
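To make the percentile metrics concrete, here is a small standalone illustration (with made-up latency samples, not TrueFoundry output) of how P50, P90, and P99 summarize a latency distribution:

```python
import statistics

# Hypothetical per-request latencies in milliseconds; the dashboard computes
# the same percentiles over real request traces.
latencies_ms = [11, 12, 12, 13, 14, 15, 16, 18, 90, 200]

# statistics.quantiles with n=100 returns the 99 percentile cut points;
# index k-1 corresponds to the k-th percentile.
cuts = statistics.quantiles(latencies_ms, n=100, method="inclusive")
p50, p90, p99 = cuts[49], cuts[89], cuts[98]
# P50 (the median) stays low even when P90/P99 expose a slow tail of requests.
```

This is why dashboards report several percentiles: the median can look healthy while P99 reveals that a small fraction of requests is much slower.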
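The gateway applies rate limits and fallbacks server-side, so no client code is required. As a rough illustration of the fallback idea only, here is a minimal client-side sketch; the `fake_invoke` helper and the model ordering are hypothetical, not part of TrueFoundry's API:

```python
# Minimal sketch of fallback routing: try each candidate model in order and
# return the first successful response. The TrueFoundry gateway performs this
# server-side; this client-side version only illustrates the idea.
def call_with_fallback(models, invoke):
    last_error = None
    for model in models:
        try:
            return invoke(model)
        except Exception as exc:  # in practice, catch rate-limit/availability errors
            last_error = exc
    raise last_error  # every candidate failed


# Hypothetical usage: the primary model is rate-limited, so the fallback answers.
def fake_invoke(model):
    if model == "openai-main/gpt-4o":
        raise RuntimeError("429: rate limited")
    return f"answer from {model}"

result = call_with_fallback(
    ["openai-main/gpt-4o", "anthropic-main/claude-3-5-sonnet"],
    fake_invoke,
)
```

With a gateway-side fallback policy, the client keeps a single model string and endpoint, and rerouting decisions like this one happen transparently.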