LangGraph provides time travel functionality to resume execution from a prior checkpoint, either replaying the same state or modifying it to explore alternatives. In either case, resuming a past execution produces a new fork in the history.

To time travel using the LangSmith Deployment API (via the LangGraph SDK):
1. **Identify a checkpoint in an existing thread**: Use the `client.threads.get_history` method to retrieve the execution history for a specific `thread_id` and locate the desired `checkpoint_id`.
   Alternatively, set a breakpoint before the node(s) where you want execution to pause. You can then find the most recent checkpoint recorded up to that breakpoint.
2. **(Optional) Modify the graph state**: Use the `client.threads.update_state` method to modify the graph's state at the checkpoint and resume execution from the alternative state.
3. **Resume execution from the checkpoint**: Use the `client.runs.wait` or `client.runs.stream` APIs with an input of `None` and the appropriate `thread_id` and `checkpoint_id`.
```python
from typing_extensions import TypedDict, NotRequired
from langgraph.graph import StateGraph, START, END
from langchain.chat_models import init_chat_model
from langgraph.checkpoint.memory import InMemorySaver


class State(TypedDict):
    topic: NotRequired[str]
    joke: NotRequired[str]


model = init_chat_model(
    "claude-sonnet-4-6",
    temperature=0,
)


def generate_topic(state: State):
    """LLM call to generate a topic for the joke"""
    msg = model.invoke("Give me a funny topic for a joke")
    return {"topic": msg.content}


def write_joke(state: State):
    """LLM call to write a joke based on the topic"""
    msg = model.invoke(f"Write a short joke about {state['topic']}")
    return {"joke": msg.content}


# Build workflow
builder = StateGraph(State)

# Add nodes
builder.add_node("generate_topic", generate_topic)
builder.add_node("write_joke", write_joke)

# Add edges to connect nodes
builder.add_edge(START, "generate_topic")
builder.add_edge("generate_topic", "write_joke")

# Compile
graph = builder.compile()
```
```python
from langgraph_sdk import get_client

client = get_client(url=<DEPLOYMENT_URL>)

# Using the graph deployed with the name "agent"
assistant_id = "agent"

# create a thread
thread = await client.threads.create()
thread_id = thread["thread_id"]

# Run the graph
result = await client.runs.wait(
    thread_id,
    assistant_id,
    input={},
)
```
```javascript
import { Client } from "@langchain/langgraph-sdk";

const client = new Client({ apiUrl: <DEPLOYMENT_URL> });

// Using the graph deployed with the name "agent"
const assistantID = "agent";

// create a thread
const thread = await client.threads.create();
const threadID = thread["thread_id"];

// Run the graph
const result = await client.runs.wait(
  threadID,
  assistantID,
  { input: {} }
);
```
Create a thread:
```bash
curl --request POST \
  --url <DEPLOYMENT_URL>/threads \
  --header 'Content-Type: application/json' \
  --data '{}'
```
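The Python and JavaScript snippets above also start a run on the new thread; a rough REST equivalent, assuming the deployment exposes the standard `/threads/<THREAD_ID>/runs/wait` endpoint and a graph deployed under the name "agent", is:

```shell
# Run the graph on the created thread and wait for the result
curl --request POST \
  --url <DEPLOYMENT_URL>/threads/<THREAD_ID>/runs/wait \
  --header 'Content-Type: application/json' \
  --data '{"assistant_id": "agent", "input": {}}'
```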
```python
# The states are returned in reverse chronological order.
states = await client.threads.get_history(thread_id)
selected_state = states[1]
print(selected_state)
```
```javascript
// The states are returned in reverse chronological order.
const states = await client.threads.getHistory(threadID);
const selectedState = states[1];
console.log(selectedState);
```
```bash
curl --request GET \
  --url <DEPLOYMENT_URL>/threads/<THREAD_ID>/history \
  --header 'Content-Type: application/json'
```
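Continuing from the history snippet, a minimal Python sketch of steps 2-3 follows. It assumes `client`, `thread_id`, `assistant_id`, and `selected_state` from the snippets above; the `{"topic": "chickens"}` value is purely illustrative, and the exact shape of the config returned by `update_state` (read here as `new_config["checkpoint_id"]`) may differ by SDK version, so check the returned object in your deployment:

```python
# (Optional) fork the thread by updating state at the chosen checkpoint.
# The returned config identifies the new checkpoint created by the update.
new_config = await client.threads.update_state(
    thread_id,
    {"topic": "chickens"},  # illustrative replacement value
    checkpoint_id=selected_state["checkpoint_id"],
)

# Resume from the (possibly updated) checkpoint.
# Pass input=None so the run continues from the checkpoint
# instead of starting over.
result = await client.runs.wait(
    thread_id,
    assistant_id,
    input=None,
    checkpoint_id=new_config["checkpoint_id"],
)
```

Resuming without the `update_state` call replays the graph from `selected_state["checkpoint_id"]` unchanged; either way, the resumed run is recorded as a new fork in the thread's history.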