
Caching LLM calls is useful for testing, reducing cost, and speeding up repeated requests. Below are integrations that let you cache the results of individual LLM calls, using different cache backends and different strategies (for example, exact-match versus semantic caching).
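The core idea behind exact-match caching can be sketched without any particular backend: key the cache on the prompt and return the stored response on a hit. This is a minimal illustration, not the LangChain API; `SimpleLLMCache` and `fake_llm` are hypothetical names, and the stub stands in for a real model call (in LangChain itself, a cache such as `InMemoryCache` is installed globally via `set_llm_cache`).

```python
class SimpleLLMCache:
    """Minimal exact-match cache keyed on the prompt string (illustrative only)."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def get_or_call(self, prompt, llm):
        # On a hit, return the stored response without invoking the model.
        if prompt in self._store:
            self.hits += 1
            return self._store[prompt]
        # On a miss, call the model and store the result for next time.
        self.misses += 1
        response = llm(prompt)
        self._store[prompt] = response
        return response


def fake_llm(prompt):
    # Stand-in for an expensive model call.
    return f"response to: {prompt}"


cache = SimpleLLMCache()
first = cache.get_or_call("Tell me a joke", fake_llm)
second = cache.get_or_call("Tell me a joke", fake_llm)  # served from cache
assert first == second
assert cache.hits == 1 and cache.misses == 1
```

Semantic caches, such as the integration below, generalize this by matching on embedding similarity rather than exact prompt equality, so paraphrased prompts can also hit the cache.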

Azure Cosmos DB NoSQL Semantic Cache