# Core component ecosystem
The diagram below shows how LangChain’s major components connect to form complete AI applications:

*(Diagram: How components connect)*
Each component layer builds on the previous ones:

- Input processing – Transform raw data into structured documents
- Embedding & storage – Convert text into searchable vector representations
- Retrieval – Find relevant information based on user queries
- Generation – Use AI models to create responses, optionally with tools
- Orchestration – Coordinate everything through agents and memory systems
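The five layers above can be sketched as a plain-Python pipeline. Every helper below (`load`, `embed`, `retrieve`, `generate`) is an illustrative stand-in for the corresponding layer, not a LangChain API — real applications would use document loaders, an embedding model, a vector store, and a chat model instead:

```python
# Illustrative stand-ins for each layer; not LangChain APIs.

def load(raw: str) -> list[str]:
    """Input processing: split raw text into 'documents' (here, paragraphs)."""
    return [p.strip() for p in raw.split("\n\n") if p.strip()]

def embed(text: str) -> list[float]:
    """Embedding: a toy bag-of-letters vector (real systems use a model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def retrieve(query: str, store: list[tuple[list[float], str]], k: int = 1) -> list[str]:
    """Retrieval: rank stored documents by similarity to the query vector."""
    qv = embed(query)
    ranked = sorted(store, key=lambda item: -sum(a * b for a, b in zip(qv, item[0])))
    return [doc for _, doc in ranked[:k]]

def generate(query: str, context: list[str]) -> str:
    """Generation: stand-in for an LLM call that uses the retrieved context."""
    return f"Answer to {query!r} using context: {context[0]}"

raw = "LangChain connects models and tools.\n\nVector stores enable semantic search."
store = [(embed(d), d) for d in load(raw)]   # Embedding & storage
answer = generate("semantic search", retrieve("semantic search", store))
print(answer)
```

The point is the shape of the data flow, not the toy similarity metric: each layer consumes the previous layer's output, which is exactly how the real components compose.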
## Component categories
LangChain organizes components into these main categories:

| Category | Purpose | Key Components | Use Cases |
|---|---|---|---|
| Models | AI reasoning and generation | Chat models, LLMs, Embedding models | Text generation, reasoning, semantic understanding |
| Tools | External capabilities | APIs, databases, etc. | Web search, data access, computations |
| Agents | Orchestration and reasoning | ReAct agents, tool calling agents | Nondeterministic workflows, decision making |
| Memory | Context preservation | Message history, custom state | Conversations, stateful interactions |
| Retrievers | Information access | Vector retrievers, web retrievers | RAG, knowledge base search |
| Document processing | Data ingestion | Loaders, splitters, transformers | PDF processing, web scraping |
| Vector Stores | Semantic search | Chroma, Pinecone, FAISS | Similarity search, embeddings storage |
## Common patterns
### RAG (Retrieval-Augmented Generation)
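A minimal, framework-agnostic sketch of the RAG loop: retrieve relevant context, augment the prompt with it, then generate. The `retrieve` and `llm` helpers here are toy stand-ins, not LangChain classes; in a real app you would use a vector-store retriever and a chat model:

```python
# Toy stand-ins for the RAG pattern; not LangChain APIs.

KNOWLEDGE_BASE = {
    "pricing": "The Pro plan costs $20/month.",
    "support": "Support is available 24/7 via chat.",
}

def retrieve(query: str) -> str:
    """Toy retriever: keyword match against a tiny knowledge base."""
    for topic, doc in KNOWLEDGE_BASE.items():
        if topic in query.lower():
            return doc
    return ""

def llm(prompt: str) -> str:
    """Stand-in for a chat-model call; echoes the grounded context."""
    context = prompt.split("Context: ", 1)[1].split("\n", 1)[0]
    return f"Based on our docs: {context}"

def rag_answer(question: str) -> str:
    context = retrieve(question)                          # 1. Retrieval
    prompt = f"Context: {context}\nQuestion: {question}"  # 2. Augmentation
    return llm(prompt)                                    # 3. Generation

print(rag_answer("What is your pricing?"))
```

The key design choice is step 2: the model only sees knowledge that was retrieved for this specific question, which keeps answers grounded in your data.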
### Agent with tools
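A sketch of the tool-calling agent loop: the model chooses a tool, the agent executes it, and the observation becomes the answer. The `fake_model` function is a scripted stand-in for the LLM's tool-selection step, and both tools are hypothetical, not LangChain built-ins:

```python
# Scripted stand-ins for the agent pattern; not LangChain APIs.

def search_web(query: str) -> str:
    """Hypothetical web-search tool."""
    return f"Top result for {query!r}"

def calculator(expression: str) -> str:
    """Toy calculator tool; never eval untrusted input in real code."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"search_web": search_web, "calculator": calculator}

def fake_model(task: str) -> tuple[str, str]:
    """Stand-in for the LLM deciding which tool to call, and with what input."""
    if any(ch.isdigit() for ch in task):
        return ("calculator", task)
    return ("search_web", task)

def run_agent(task: str) -> str:
    # Agent loop: model picks a tool, agent executes it, result is returned.
    tool_name, tool_input = fake_model(task)
    observation = TOOLS[tool_name](tool_input)
    return f"[{tool_name}] {observation}"

print(run_agent("2 + 3"))           # routes to calculator
print(run_agent("LangChain docs"))  # routes to web search
```

This is what makes agent workflows nondeterministic: which tool runs, and how many times, is decided by the model at runtime rather than fixed in code.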
### Multi-agent system
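One common multi-agent shape is a supervisor that routes each task to a specialist agent. The sketch below uses scripted functions in place of real LLM-backed agents; the keyword routing stands in for what would normally be a model's decision:

```python
# Scripted stand-ins for a supervisor-style multi-agent system; not LangChain APIs.

def researcher(task: str) -> str:
    """Specialist agent: gathers information (stand-in for an LLM agent)."""
    return f"Research notes on: {task}"

def writer(task: str) -> str:
    """Specialist agent: produces prose (stand-in for an LLM agent)."""
    return f"Draft article about: {task}"

AGENTS = {"researcher": researcher, "writer": writer}

def supervisor(task: str) -> str:
    """Routes by keyword; a real supervisor would be an LLM decision."""
    return "writer" if "write" in task.lower() else "researcher"

def handle(task: str) -> str:
    agent = supervisor(task)          # supervisor picks the specialist
    return AGENTS[agent](task)        # specialist handles the task

print(handle("Write a post on vector stores"))
print(handle("Find papers on RAG"))
```

Specialists stay simple because each one only needs the tools and prompt for its own job; the supervisor owns the routing logic.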
## Learn more
Now that you understand how components relate to each other, explore specific areas:

- Building your first RAG system
- Creating agents
- Working with tools
- Setting up memory
- Browse integrations