Caching LLM calls can be useful for testing, cost savings, and speed. Below are some integrations that let you cache the results of individual LLM calls using different cache backends and strategies.
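As a minimal sketch of the general pattern, the snippet below registers an in-memory cache through LangChain's global cache hook. `set_llm_cache` and `InMemoryCache` are the standard entry points; the chat model and model name are only illustrative, and any LangChain LLM would work the same way.

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_openai import ChatOpenAI  # assumes langchain-openai is installed

# Register a process-wide cache; LLM calls made through LangChain
# consult it before hitting the provider.
set_llm_cache(InMemoryCache())

llm = ChatOpenAI(model="gpt-4o-mini")  # model name is a placeholder

llm.invoke("Tell me a joke")  # first call: goes to the API, result is cached
llm.invoke("Tell me a joke")  # identical call: served from the cache
```

Swapping in one of the integrations below is just a matter of passing a different cache implementation to `set_llm_cache`.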
Azure Cosmos DB NoSQL Semantic Cache
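A semantic cache matches a new prompt against previously seen prompts by embedding similarity rather than exact string equality, so paraphrased requests can hit the cache. Below is a sketch of wiring this up with `AzureCosmosDBNoSqlSemanticCache` from `langchain_community`; the endpoint, key, embedding model, and policy values shown are placeholder assumptions, and the exact constructor arguments should be confirmed against the integration's API reference.

```python
from azure.cosmos import CosmosClient, PartitionKey
from langchain_community.cache import AzureCosmosDBNoSqlSemanticCache
from langchain_core.globals import set_llm_cache
from langchain_openai import OpenAIEmbeddings

# Placeholder credentials -- substitute your own account values.
cosmos_client = CosmosClient(
    "https://<your-account>.documents.azure.com:443/", "<your-key>"
)

# Policy shapes follow the Azure Cosmos DB NoSQL vector search docs;
# "dimensions" must match the embedding model's output size.
vector_embedding_policy = {
    "vectorEmbeddings": [
        {
            "path": "/embedding",
            "dataType": "float32",
            "distanceFunction": "cosine",
            "dimensions": 1536,
        }
    ]
}
indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [{"path": '/"_etag"/?'}],
    "vectorIndexes": [{"path": "/embedding", "type": "quantizedFlat"}],
}

# Cached responses are stored as embedded documents; lookups run a
# vector similarity search instead of an exact key match.
set_llm_cache(
    AzureCosmosDBNoSqlSemanticCache(
        cosmos_client=cosmos_client,
        embedding=OpenAIEmbeddings(),
        vector_embedding_policy=vector_embedding_policy,
        indexing_policy=indexing_policy,
        cosmos_container_properties={"partition_key": PartitionKey(path="/id")},
        cosmos_database_properties={},
    )
)
```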

