LangChain’s power comes from how its components work together to create sophisticated AI applications. This page provides diagrams showcasing the relationships between different components.

Core component ecosystem

The diagram below shows how LangChain’s major components connect to form complete AI applications:

How components connect

Each component layer builds on the previous ones:
  1. Input processing – Transform raw data into structured documents
  2. Embedding & storage – Convert text into searchable vector representations
  3. Retrieval – Find relevant information based on user queries
  4. Generation – Use AI models to create responses, optionally with tools
  5. Orchestration – Coordinate everything through agents and memory systems
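The first three layers above can be sketched without any framework at all. The snippet below is a toy, framework-free illustration (all names are hypothetical): a character-based splitter for input processing, a bag-of-words counter standing in for a real embedding model, an in-memory list standing in for a vector store, cosine similarity for retrieval, and a stubbed `generate` where a chat model would go.

```python
# Toy sketch of the component layers; every piece here is a stand-in
# for a real LangChain component, not LangChain's actual API.
import math
from collections import Counter

# 1. Input processing: split raw text into small "documents".
def split_text(text: str, chunk_size: int = 40) -> list[str]:
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

# 2. Embedding: a bag-of-words vector (real systems use embedding models).
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# 2b. Storage: an in-memory "vector store" of (chunk, vector) pairs.
store: list[tuple[str, Counter]] = []

def add_documents(chunks: list[str]) -> None:
    for c in chunks:
        store.append((c, embed(c)))

# 3. Retrieval: rank stored chunks against the query vector.
def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return [c for c, _ in sorted(store, key=lambda p: -cosine(q, p[1]))[:k]]

# 4. Generation (stubbed): a chat model would produce the real answer.
def generate(query: str, context: list[str]) -> str:
    return f"Answer to {query!r} using context: {context}"

add_documents(split_text("LangChain connects models, tools, and retrievers. "
                         "Vector stores enable semantic search."))
print(generate("What enables semantic search?", retrieve("semantic search")))
```

Layer 5 (orchestration) is where agents and memory would wrap this pipeline in a loop; the patterns section below shows that shape.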

Component categories

LangChain organizes components into these main categories:
| Category | Purpose | Key Components | Use Cases |
| --- | --- | --- | --- |
| Models | AI reasoning and generation | Chat models, LLMs, embedding models | Text generation, reasoning, semantic understanding |
| Tools | External capabilities | APIs, databases, etc. | Web search, data access, computations |
| Agents | Orchestration and reasoning | ReAct agents, tool-calling agents | Nondeterministic workflows, decision making |
| Memory | Context preservation | Message history, custom state | Conversations, stateful interactions |
| Retrievers | Information access | Vector retrievers, web retrievers | RAG, knowledge base search |
| Document processing | Data ingestion | Loaders, splitters, transformers | PDF processing, web scraping |
| Vector stores | Semantic search | Chroma, Pinecone, FAISS | Similarity search, embedding storage |
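Of these categories, tools are the simplest to picture concretely: a tool is essentially a function plus the metadata an agent needs to decide when to call it. The sketch below is a hedged, framework-free illustration of that shape; the function, dictionary layout, and field names are hypothetical, not LangChain's tool interface.

```python
# Illustrative anatomy of a "tool": a plain function plus metadata
# (names and structure here are hypothetical, not a real API).
def get_weather(city: str) -> str:
    """Return a canned forecast for a city (stand-in for a real API call)."""
    return f"Sunny in {city}"

weather_tool = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {"city": "string"},  # schema the model sees
    "func": get_weather,               # callable the agent executes
}

# An agent matches a model-proposed call against this metadata, then runs it:
print(weather_tool["func"]("Paris"))  # -> Sunny in Paris
```

The description and parameter schema are what the model reasons over when deciding whether and how to invoke the tool; the callable itself stays opaque to the model.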

Common patterns

RAG (Retrieval-Augmented Generation)
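In code, the RAG pattern reduces to "retrieve context, then augment the prompt before generation." The sketch below uses a toy keyword retriever and a stubbed `llm` function; none of these names are LangChain objects, and a real system would rank documents by embedding similarity in a vector store.

```python
# RAG in miniature: retrieve, augment the prompt, generate.
# The retriever and llm are stubs, not LangChain components.
docs = {
    "faq": "Refunds are processed within 5 business days.",
    "policy": "Orders over $50 ship free.",
}

def retrieve(query: str) -> str:
    # Toy keyword-overlap retriever; a vector store would use embeddings.
    words = set(query.lower().split())
    return max(docs.values(),
               key=lambda d: len(words & set(d.lower().split())))

def llm(prompt: str) -> str:
    # Stand-in for a chat model call.
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"

def rag(query: str) -> str:
    context = retrieve(query)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)

print(rag("How long do refunds take?"))
```

The key design point is that retrieval happens before generation, so the model's answer is grounded in the retrieved context rather than in its parametric memory alone.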

Agent with tools
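The agent-with-tools pattern is a loop: the model either requests a tool call or emits a final answer; the runtime executes requested tools and feeds the results back. The sketch below hand-rolls that loop with a scripted stand-in for the model, so every name is illustrative rather than a real agent API.

```python
# A hand-rolled tool-calling loop. scripted_model stands in for a chat
# model with tool calling; all structures here are illustrative.
def calculator(expression: str) -> str:
    # Toy evaluator; never eval untrusted input in real code.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def scripted_model(messages: list[dict]) -> dict:
    # First turn: request a tool. Second turn: answer from the tool result.
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if not tool_msgs:
        return {"tool": "calculator", "input": "6 * 7"}
    return {"answer": f"The result is {tool_msgs[-1]['content']}."}

def run_agent(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        step = scripted_model(messages)
        if "answer" in step:                      # model is done
            return step["answer"]
        output = TOOLS[step["tool"]](step["input"])  # execute the tool
        messages.append({"role": "tool", "content": output})

print(run_agent("What is 6 times 7?"))  # -> The result is 42.
```

This loop is what makes agent workflows nondeterministic: the model, not the programmer, decides which tool to call and when to stop.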

Multi-agent system
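A multi-agent system adds one more layer: a supervising router that dispatches each task to a specialist agent. The skeleton below shrinks the agents to plain functions and the router to a keyword check; in a real system, a supervising model would do the classification, so treat every name here as hypothetical.

```python
# Multi-agent skeleton: a router picks a specialist for each task.
# Agents are plain functions; a real router would be a model call.
def research_agent(task: str) -> str:
    return f"research notes on: {task}"

def writing_agent(task: str) -> str:
    return f"draft about: {task}"

AGENTS = {"research": research_agent, "write": writing_agent}

def router(task: str) -> str:
    # Keyword match stands in for a supervising model's classification.
    return "research" if "find" in task.lower() else "write"

def handle(task: str) -> str:
    return AGENTS[router(task)](task)

print(handle("Find recent papers on RAG"))
print(handle("Summarize these notes"))
```

The same memory and tool machinery from the earlier patterns can sit inside each specialist; the router only decides who handles the task.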

Learn more

Now that you understand how components relate to each other, explore the documentation for each component area in depth.