

Subscribe: Our changelog includes an RSS feed that can integrate with Slack, email, Discord bots like Readybot or RSS Feeds to Discord Bot, and other subscription tools.
May 12, 2026
langgraph

langgraph v1.2

This release adds finer-grained control over node execution (timeouts, error recovery, and graceful shutdown), a new channel type that cuts checkpoint overhead for long-running threads, and a new content-block-centric streaming API (v3) with typed, per-channel projections.
  • DeltaChannel (beta): A new channel type that stores only the incremental delta at each step rather than re-serializing the full accumulated value. Most useful for channels that grow large over time, for example a message list in a long-running thread. Use snapshot_frequency=K to write a full snapshot every K steps and bound read latency.
  • Per-node timeouts: Pass timeout= to add_node to cap how long a single attempt may run. Set a hard wall-clock limit (run_timeout), an idle limit that resets on progress (idle_timeout), or both via TimeoutPolicy. When the limit fires, LangGraph raises NodeTimeoutError, clears writes from that attempt, and hands off to the retry policy. Async nodes only.
  • Node-level error handlers: Pass error_handler= to add_node to run a recovery function after all retries are exhausted. The handler receives a typed NodeError and can return a Command to update state and route to a different node, useful for Saga/compensation patterns.
  • Graceful shutdown: Stop an in-flight run cooperatively after the current superstep completes, and save a resumable checkpoint. Create a RunControl and call request_drain() from any thread; the run raises GraphDrained and can be resumed later with the same config.
  • New event streaming API (beta): Pass version="v3" to stream_events() / astream_events() for a content-block-centric protocol with typed, per-channel projections (run.values, run.messages, run.lifecycle, run.subgraphs) plus opt-in transformers for updates, custom events, checkpoints, tasks, and debug. run.messages yields one ChatModelStream per LLM call with typed sub-projections for text, reasoning, tool calls, and usage. version="v1" and version="v2" are unchanged.
Timeouts and error handlers are Python-only; retry policies continue to work in both Python and TypeScript.
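The delta-vs-snapshot trade-off behind DeltaChannel can be sketched in plain Python. The `DeltaStore` class below and its method names are illustrative stand-ins for the mechanism (store only increments, write a full snapshot every K steps to bound replay cost), not the langgraph API:

```python
# Illustrative sketch of the DeltaChannel storage trade-off.
# DeltaStore, append, and read are hypothetical names, not langgraph's API.

class DeltaStore:
    """Persist per-step deltas, plus a full snapshot every `snapshot_every` steps."""

    def __init__(self, snapshot_every: int = 4):
        self.snapshot_every = snapshot_every
        self.log = []    # (step, kind, payload) records, as a checkpointer might store them
        self.value = []  # accumulated channel value (e.g. a message list)
        self.step = 0

    def append(self, item) -> None:
        self.value.append(item)
        self.step += 1
        if self.step % self.snapshot_every == 0:
            # Periodic full snapshot bounds how many deltas a read must replay.
            self.log.append((self.step, "snapshot", list(self.value)))
        else:
            # Otherwise store only the increment, not the whole accumulated value.
            self.log.append((self.step, "delta", item))

    def read(self):
        """Rebuild the value: latest snapshot plus the deltas recorded after it."""
        value, replayed = [], 0
        for _, kind, payload in self.log:
            if kind == "snapshot":
                value, replayed = list(payload), 0
            else:
                value.append(payload)
                replayed += 1
        return value, replayed
```

A lower `snapshot_every` means faster reads but more snapshot bytes written; a higher one means smaller writes but more deltas to replay, which is the knob `snapshot_frequency=K` exposes.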
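The difference between the two timeout kinds can be illustrated with plain asyncio: a hard wall-clock budget around the whole attempt versus an idle limit that restarts whenever the node makes progress. `run_with_timeouts` and the `NodeTimeout` error below are illustrative stand-ins, not langgraph's `TimeoutPolicy` or `NodeTimeoutError`:

```python
# Sketch of the run_timeout vs idle_timeout distinction using plain asyncio.
import asyncio

class NodeTimeout(Exception):
    pass

async def run_with_timeouts(progress_events, run_timeout: float, idle_timeout: float):
    """Consume an async iterator of progress events under both limits.

    - run_timeout: hard wall-clock cap on the whole attempt.
    - idle_timeout: cap on the gap between consecutive events (resets on progress).
    """
    async def consume():
        results = []
        it = progress_events.__aiter__()
        while True:
            try:
                # Idle limit: each wait for the *next* event starts a fresh clock.
                item = await asyncio.wait_for(it.__anext__(), timeout=idle_timeout)
            except StopAsyncIteration:
                return results
            except asyncio.TimeoutError:
                raise NodeTimeout("idle_timeout exceeded")
            results.append(item)

    try:
        # Run limit: a single wall-clock budget around the whole attempt.
        return await asyncio.wait_for(consume(), timeout=run_timeout)
    except asyncio.TimeoutError:
        raise NodeTimeout("run_timeout exceeded")
```

A slow-but-steady node can outlive `run_timeout` only by finishing, while a stalled node trips `idle_timeout` even when plenty of wall-clock budget remains.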
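The cooperative-drain pattern can be sketched without langgraph at all: a flag set from any thread is polled between supersteps, and draining raises with a checkpoint instead of killing work mid-step. The class names mirror those in the note above, but this loop is an illustration under that assumption, not langgraph internals:

```python
# Cooperative-drain sketch: stop after the current "superstep" and keep a
# resumable checkpoint. RunControl and GraphDrained here are illustrative.
import threading

class GraphDrained(Exception):
    def __init__(self, checkpoint):
        self.checkpoint = checkpoint

class RunControl:
    def __init__(self):
        self._drain = threading.Event()

    def request_drain(self) -> None:
        # Safe to call from any thread: just sets a flag the run loop polls.
        self._drain.set()

    def drain_requested(self) -> bool:
        return self._drain.is_set()

def run_graph(steps, control: RunControl, start: int = 0):
    """Run supersteps; between steps, honor a drain request by raising with a checkpoint."""
    state = []
    for i in range(start, len(steps)):
        state.append(steps[i]())  # always finish the current superstep in full
        if control.drain_requested():
            # The checkpoint records where to resume; no work is lost mid-step.
            raise GraphDrained(checkpoint={"next_step": i + 1, "state": state})
    return state
```

The key property is that `request_drain()` never interrupts a step: the run only checks the flag at superstep boundaries, which is what makes the saved checkpoint safe to resume from.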
Apr 7, 2026
deepagents

deepagents v0.5.0

  • Async subagents: Deep Agents can launch non-blocking background tasks, so users can continue interacting with the agent while subagents work concurrently. Requires LangSmith Deployment for subagents.
  • Multi-modal support: The read_file tool now supports PDFs, audio, and video files in addition to images.
  • Backend changes: We’ve made backward-compatible changes to the Deep Agents backend protocol:
    • Updated the file format stored in State and Store backends to support binary files.
    • Improved error propagation from backends to tools.
    • You can now instantiate StateBackend() and StoreBackend() directly. Passing a factory (e.g., backend=(lambda rt: StateBackend(rt))) is deprecated.
  • Anthropic prompt caching improvements: We’ve improved prompt caching performance for Anthropic models.
Mar 10, 2026
langgraph

langgraph v1.1

  • Type-safe streaming (version="v2"): Pass version="v2" to stream() / astream() for unified StreamPart output with type, ns, and data keys on every chunk. Each mode has its own TypedDict, all importable from langgraph.types. See streaming docs.
  • Type-safe invoke (version="v2"): Pass version="v2" to invoke() / ainvoke() to get a GraphOutput object with .value and .interrupts attributes. See invoke docs.
  • Pydantic and dataclass coercion: With version="v2", invoke() and values-mode stream output are automatically coerced to your declared Pydantic model or dataclass type.
  • Fixed time travel with interrupts and subgraphs: Replays no longer reuse stale RESUME values, and subgraphs correctly restore the checkpoint for the parent’s historical state.
  • Fully backwards compatible: version="v2" is opt-in. GraphOutput supports deprecated dict-style access for gradual migration.
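The shape of a result object that exposes typed attributes while keeping deprecated dict-style access for migration can be sketched in a few lines. This mirrors the `.value` / `.interrupts` attributes named above; everything else (the dataclass layout, the warning text) is an assumption for illustration:

```python
# Sketch of a typed result object with deprecated dict-style access.
# Mirrors the GraphOutput shape described in the notes; details are assumed.
import warnings
from dataclasses import dataclass, field

@dataclass
class GraphOutput:
    value: dict
    interrupts: list = field(default_factory=list)

    def __getitem__(self, key):
        # Old code did `result["messages"]`; keep it working, but warn.
        warnings.warn(
            "dict-style access on GraphOutput is deprecated; use .value",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.value[key]
```

Callers migrate by swapping `result["messages"]` for `result.value["messages"]`, and the deprecation warning points out every remaining call site along the way.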
Feb 10, 2026
deepagents

deepagents v0.4

  • New integration packages for pluggable sandboxes: langchain-modal, langchain-daytona, and langchain-runloop. See sandboxes guide and example data analysis tutorial.
  • Changes to conversation history summarization:
    • Summarization now happens in the model node via wrap_model_call events. As a result, the full message history is retained in the graph state.
    • More accurate token counting.
    • Summarization will now automatically trigger if a chat model raises a ContextOverflowError (defined in langchain-core). Currently langchain-anthropic and langchain-openai support this.
  • We now default to the Responses API for model strings prefixed with "openai:".
    from langchain.chat_models import init_chat_model
    
    agent = create_deep_agent(
        model=init_chat_model(
            "openai:...",
            use_responses_api=True,
            store=False,
            include=["reasoning.encrypted_content"],
        )
    )
    
Dec 15, 2025
langchain

integrations

langchain v1.2.0

Dec 8, 2025
langchain

integrations

langchain-google-genai v4.0.0

We’ve rewritten the Google GenAI integration to use Google’s consolidated Generative AI SDK, which provides access to both the Gemini API and the Vertex AI platform under a single interface. This release includes minimal breaking changes, and several packages in langchain-google-vertexai are now deprecated. See the full release notes and migration guide for details.
Nov 25, 2025
langchain

langchain v1.1.0

  • Model profiles: Chat models now expose supported features and capabilities through a .profile attribute. This data is derived from models.dev, an open-source project providing model capability data.
  • Summarization middleware: Updated to support flexible trigger points using model profiles for context-aware summarization.
  • Structured output: ProviderStrategy support (native structured output) can now be inferred from model profiles.
  • SystemMessage for create_agent: Support for passing SystemMessage instances directly to create_agent’s system_prompt parameter, enabling advanced features like cache control and structured content blocks.
  • Model retry middleware: New middleware for automatically retrying failed model calls with configurable exponential backoff.
  • Content moderation middleware: OpenAI content moderation middleware for detecting and handling unsafe content in agent interactions. Supports checking user input, model output, and tool results.
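The "configurable exponential backoff" in the retry middleware boils down to a delay schedule of the form base × factor^n, usually capped. The parameter names below (base, factor, max_delay, max_retries) and the helper functions are illustrative, not the middleware's actual configuration:

```python
# Sketch of exponential backoff for retrying failed calls.
# Parameter and function names are illustrative, not the middleware's API.

def backoff_delays(max_retries: int, base: float = 0.5, factor: float = 2.0,
                   max_delay: float = 8.0) -> list:
    """Wall-clock waits before each retry: base * factor**n, capped at max_delay."""
    return [min(base * factor ** n, max_delay) for n in range(max_retries)]

def call_with_retries(fn, max_retries: int = 3):
    """Retry `fn` on exception, up to max_retries extra attempts."""
    delays = backoff_delays(max_retries)
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # budget exhausted: surface the last error
            # A real implementation would `time.sleep(delays[attempt])` here.
```

Capping the delay keeps the worst-case wait bounded even when factor^n would otherwise grow without limit.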
Oct 20, 2025
langchain

langgraph

v1.0.0

langchain

langgraph

If you encounter any issues or have feedback, please open an issue so we can improve. To view v0.x documentation, go to the archived content and API reference.