# Cache integrations

> Integrate with caches using LangChain JavaScript.

[Caching LLM calls](/oss/javascript/langchain/models#prompt-caching) can be useful for testing, cost savings, and speed.

Below are integrations that let you cache the results of individual LLM calls, using different backing stores and caching strategies.

<Columns cols={3}>
  <Card title="Azure Cosmos DB NoSQL Semantic Cache" icon="link" href="/oss/javascript/integrations/llm_caching/azure_cosmosdb_nosql" arrow="true" cta="View guide" />
</Columns>
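These integrations share the same basic pattern: look up the prompt in a store before calling the model, and record the response on a miss so identical calls are served from the cache. Below is a minimal sketch of an exact-match variant; the names `ExactMatchCache` and `cachedGenerate` are illustrative, not the LangChain cache API.

```typescript
// Illustrative exact-match cache: maps a prompt string to a stored
// LLM response. Demonstrates the strategy only; this is not the
// LangChain cache interface.
class ExactMatchCache {
  private store = new Map<string, string>();

  // Return the cached response for this prompt, or undefined on a miss.
  lookup(prompt: string): string | undefined {
    return this.store.get(prompt);
  }

  // Record a response so later identical prompts skip the model call.
  update(prompt: string, response: string): void {
    this.store.set(prompt, response);
  }
}

// Wrap a model call so repeated prompts hit the cache instead of
// re-invoking the model (saving cost and latency, as described above).
async function cachedGenerate(
  cache: ExactMatchCache,
  prompt: string,
  callModel: (p: string) => Promise<string>
): Promise<string> {
  const hit = cache.lookup(prompt);
  if (hit !== undefined) return hit; // cache hit: no model call
  const response = await callModel(prompt);
  cache.update(prompt, response);
  return response;
}
```

A semantic cache (like the Azure Cosmos DB integration above) replaces the exact string lookup with a similarity search over prompt embeddings, so near-duplicate prompts can also reuse cached responses.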

