Model caches
Caching LLM calls can be useful for testing, cost savings, and speed.
Below are integrations that let you cache the results of individual LLM calls, using different backends and caching strategies.
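The simplest strategy is exact-match caching, where the serialized prompt is the cache key. As a minimal sketch, the snippet below uses the in-memory cache that ships with @langchain/core; it assumes @langchain/openai is installed and OPENAI_API_KEY is set in the environment. Any of the cache integrations listed below can be passed to the model in the same way.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { InMemoryCache } from "@langchain/core/caches";

// Exact-match cache held in process memory: the serialized prompt is the key.
const cache = new InMemoryCache();

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  cache,
});

// The first call goes to the API; the repeated identical call is served
// from the cache, so it returns faster and incurs no additional cost.
console.log(await model.invoke("Tell me a joke"));
console.log(await model.invoke("Tell me a joke"));
```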
Azure Cosmos DB NoSQL Semantic Cache (view guide)
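For contrast with exact-match caching, the sketch below shows roughly how a semantic cache is wired in: responses are keyed by embedding similarity rather than by the exact prompt string. The AzureCosmosDBNoSQLSemanticCache class comes from the @langchain/azure-cosmosdb package, but the constructor options, database and container names, and connection handling shown here are assumptions for illustration; the linked guide is the authoritative reference.

```typescript
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { AzureCosmosDBNoSQLSemanticCache } from "@langchain/azure-cosmosdb";

// Assumed configuration: the cache is presumed to read the Cosmos DB
// connection string from the environment and to accept database/container
// names in its config object (the names here are hypothetical).
const cache = new AzureCosmosDBNoSQLSemanticCache(new OpenAIEmbeddings(), {
  databaseName: "langchainDB",
  containerName: "llmCache",
});

const model = new ChatOpenAI({ model: "gpt-4o-mini", cache });

await model.invoke("What is the capital of France?");
// A semantically similar prompt may be answered from the cache,
// even though the string differs from the original prompt.
await model.invoke("What's France's capital city?");
```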