Start by creating a simple agent that can answer questions and call tools. The agent in this example uses the chosen language model, a basic weather function as a tool, and a simple prompt to guide its behavior:
```typescript
import { createAgent, tool } from "langchain";
import * as z from "zod";

const getWeather = tool(
  (input) => `It's always sunny in ${input.city}!`,
  {
    name: "get_weather",
    description: "Get the weather for a given city",
    schema: z.object({
      city: z.string().describe("The city to get the weather for"),
    }),
  },
);

const agent = createAgent({
  model: "gpt-5.4",
  tools: [getWeather],
});

console.log(
  await agent.invoke({
    messages: [{ role: "user", content: "What's the weather in San Francisco?" }],
  }),
);
```
When you run the code and ask about the weather in San Francisco, the agent combines your input with its available context.
It recognizes that you are asking about the weather for the city of San Francisco and therefore calls the `get_weather` tool with that city name.
You can use any supported model by changing the model name in the code and setting up the appropriate API key.
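For example, assuming the matching API key is set in your environment, you could swap in a different provider's model. The model name below is illustrative; check the provider's reference page for the exact model strings it accepts:

```typescript
import { createAgent } from "langchain";

// Illustrative provider-prefixed model name; requires ANTHROPIC_API_KEY
// to be set in the environment. `getWeather` is the tool defined above.
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [getWeather],
});
```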
In the following example you will build a research agent that can answer questions about text files.
Along the way you will explore the following concepts:
Detailed system prompts for better agent behavior
Tools that integrate with external data
Model configuration for consistent responses
Conversational memory for chat-like interactions
Deep agents for built-in features
Testing your agent
1
Define the system prompt
The system prompt defines your agent’s role and behavior. Keep it specific and actionable:
```typescript
const SYSTEM_PROMPT = `You are a literary data assistant.

## Capabilities
- \`fetch_text_from_url\`: loads document text from a URL into the conversation.

Do not guess line counts or positions—ground them in tool results from the saved file.`;
```
2
Create tools
Tools let a model interact with external systems by calling functions you define.
Tools can depend on runtime context and also interact with agent memory. This example uses a tool to load a document from a given URL:
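The `fetch_text_from_url` tool, reproduced from the full example at the end of this guide, fetches the URL with a 120-second timeout and returns either the document text or a readable error string the model can act on:

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const fetchTextFromUrl = tool(
  async ({ url }: { url: string }): Promise<string> => {
    // Abort the request if it takes longer than two minutes.
    const controller = new AbortController();
    const timeoutId = setTimeout(() => controller.abort(), 120_000);
    try {
      const resp = await fetch(url, {
        headers: {
          "User-Agent": "Mozilla/5.0 (compatible; quickstart-research/1.0)",
        },
        signal: controller.signal,
      });
      if (!resp.ok) {
        return `Fetch failed: HTTP ${resp.status} ${resp.statusText}`;
      }
      return await resp.text();
    } catch (e) {
      // Surface network errors as text so the agent can report them.
      const msg = e instanceof Error ? e.message : String(e);
      return `Fetch failed: ${msg}`;
    } finally {
      clearTimeout(timeoutId);
    }
  },
  {
    name: "fetch_text_from_url",
    description: "Fetch the document from a URL.",
    schema: z.object({ url: z.string().url() }),
  },
);
```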
Zod is a TypeScript-first schema declaration and validation library. You can use it to define the input schema for your tools so that the agent only calls a tool with correctly typed arguments. Alternatively, you can define the `schema` property as a JSON schema object. Keep in mind that JSON schemas won't be validated at runtime.
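For example, the `get_weather` input from the quickstart could be described with a plain JSON schema object instead of Zod. The field names mirror the Zod version above; unlike Zod, this object performs no runtime validation:

```typescript
// JSON-schema equivalent of z.object({ city: z.string().describe(...) }).
// Passed as the `schema` property of a tool; not validated at runtime.
const getWeatherJsonSchema = {
  type: "object",
  properties: {
    city: {
      type: "string",
      description: "The city to get the weather for",
    },
  },
  required: ["city"],
};
```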
3
Configure the model
Set up your language model with the right parameters for your use case. For example:
```typescript
import { initChatModel } from "langchain";

const model = await initChatModel("gpt-5.4", {
  temperature: 0.5,
  timeout: 300,
  maxTokens: 25000,
});
```
Depending on the model and provider chosen, initialization parameters may vary; refer to their reference pages for details.
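For instance, the full example later in this guide initializes a Gemini model with an explicit provider; the supported parameters and their units can differ between providers:

```typescript
import { initChatModel } from "langchain";

// The provider is named explicitly when it can't be inferred from the model name.
const model = await initChatModel("gemini-3.1-pro-preview", {
  modelProvider: "google-genai",
  temperature: 0.5,
  timeout: 600_000,
  maxTokens: 25000,
  streaming: true,
});
```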
4
Add memory
Add memory to your agent to maintain state across interactions. This allows
the agent to remember previous conversations and context.
```typescript
import { MemorySaver } from "@langchain/langgraph";

const checkpointer = new MemorySaver();
```
In production, use a persistent checkpointer that saves message history to a database.
See Add and manage memory for more details.
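As a sketch of a production setup, assuming the `@langchain/langgraph-checkpoint-postgres` package and using a placeholder connection string, a Postgres-backed checkpointer might look like:

```typescript
import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";

// Placeholder connection string; point this at your own database.
const checkpointer = PostgresSaver.fromConnString(
  "postgresql://user:password@localhost:5432/agent_db",
);

// Creates the checkpoint tables on first use.
await checkpointer.setup();
```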
5
Create and run the agent
Now assemble your agent with all the components and run it. There are two different frameworks for creating agents: LangChain agents and deep agents.
Both LangChain agents and deep agents give you control over tools, memory, and more.
The main difference between the two is that deep agents come with a range of commonly useful capabilities already built in, such as planning, file system tools, and subagents. Use deep agents when you want maximum capability with minimal setup; choose LangChain agents when you need fine-grained control.
Since the code invokes the model with the entire text of The Great Gatsby, it uses a large number of tokens. You can view example output in the next step.
Let’s try both:
```typescript
async function main() {
  const agent = createAgent({
    model,
    tools: [fetchTextFromUrl],
    systemPrompt: SYSTEM_PROMPT,
    checkpointer,
  });

  const deepAgent = createDeepAgent({
    model,
    tools: [fetchTextFromUrl],
    systemPrompt: SYSTEM_PROMPT,
    checkpointer,
  });

  const content = `Project Gutenberg hosts a full plain-text copy of F. Scott Fitzgerald's The Great Gatsby.
URL: https://www.gutenberg.org/files/64317/64317-0.txt

Answer as much as you can:
1) How many lines in the complete Gutenberg file contain the substring \`Gatsby\` (count lines, not occurrences within a line, each line ends with a line break).
2) The 1-based line number of the first line in the file that contains \`Daisy\`.
3) A two-sentence neutral synopsis.

Do your best on (1) and (2). If at any point you realize you cannot **verify** an exact answer with your available tools and reasoning, do not fabricate numbers: use \`null\` for that field and spell out the limitation in \`how_you_computed_counts\`.
If you encounter any errors please report what the error was and what the error message was.`;

  const agentResult = await agent.invoke(
    { messages: [{ role: "user", content }] },
    { configurable: { thread_id: "great-gatsby-lc" } },
  );

  const deepAgentResult = await deepAgent.invoke(
    { messages: [{ role: "user", content }] },
    { configurable: { thread_id: "great-gatsby-da" } },
  );

  const agentMessages = agentResult.messages;
  const deepMessages = deepAgentResult.messages;

  console.log(agentMessages[agentMessages.length - 1]!.content_blocks);
  console.log("\n");
  console.log(deepMessages[deepMessages.length - 1]!.content_blocks);
}

main().catch((err) => {
  console.error(err);
  process.exitCode = 1;
});
```
Full example code
```typescript
import { MemorySaver } from "@langchain/langgraph";
import { createDeepAgent } from "deepagents";
import { tool } from "@langchain/core/tools";
import { createAgent, initChatModel } from "langchain";
import { z } from "zod";

const SYSTEM_PROMPT = `You are a literary data assistant.

## Capabilities
- \`fetch_text_from_url\`: loads document text from a URL into the conversation.

Do not guess line counts or positions—ground them in tool results from the saved file.`;

const fetchTextFromUrl = tool(
  async ({ url }: { url: string }): Promise<string> => {
    const controller = new AbortController();
    const timeoutId = setTimeout(() => controller.abort(), 120_000);
    try {
      const resp = await fetch(url, {
        headers: {
          "User-Agent": "Mozilla/5.0 (compatible; quickstart-research/1.0)",
        },
        signal: controller.signal,
      });
      if (!resp.ok) {
        return `Fetch failed: HTTP ${resp.status} ${resp.statusText}`;
      }
      return await resp.text();
    } catch (e) {
      const msg = e instanceof Error ? e.message : String(e);
      return `Fetch failed: ${msg}`;
    } finally {
      clearTimeout(timeoutId);
    }
  },
  {
    name: "fetch_text_from_url",
    description: "Fetch the document from a URL.",
    schema: z.object({ url: z.string().url() }),
  },
);

const model = await initChatModel("gemini-3.1-pro-preview", {
  modelProvider: "google-genai",
  temperature: 0.5,
  timeout: 600_000,
  maxTokens: 25000,
  streaming: true,
});

const checkpointer = new MemorySaver();

async function main() {
  const agent = createAgent({
    model,
    tools: [fetchTextFromUrl],
    systemPrompt: SYSTEM_PROMPT,
    checkpointer,
  });

  const deepAgent = createDeepAgent({
    model,
    tools: [fetchTextFromUrl],
    systemPrompt: SYSTEM_PROMPT,
    checkpointer,
  });

  const content = `Project Gutenberg hosts a full plain-text copy of F. Scott Fitzgerald's The Great Gatsby.
URL: https://www.gutenberg.org/files/64317/64317-0.txt

Answer as much as you can:
1) How many lines in the complete Gutenberg file contain the substring \`Gatsby\` (count lines, not occurrences within a line, each line ends with a line break).
2) The 1-based line number of the first line in the file that contains \`Daisy\`.
3) A two-sentence neutral synopsis.

Do your best on (1) and (2). If at any point you realize you cannot **verify** an exact answer with your available tools and reasoning, do not fabricate numbers: use \`null\` for that field and spell out the limitation in \`how_you_computed_counts\`.
If you encounter any errors please report what the error was and what the error message was.`;

  const agentResult = await agent.invoke(
    { messages: [{ role: "user", content }] },
    { configurable: { thread_id: "great-gatsby-lc" } },
  );

  const deepAgentResult = await deepAgent.invoke(
    { messages: [{ role: "user", content }] },
    { configurable: { thread_id: "great-gatsby-da" } },
  );

  const agentMessages = agentResult.messages;
  const deepMessages = deepAgentResult.messages;

  console.log(agentMessages[agentMessages.length - 1]!.content_blocks);
  console.log("\n");
  console.log(deepMessages[deepMessages.length - 1]!.content_blocks);
}

main().catch((err) => {
  console.error(err);
  process.exitCode = 1;
});
```
6
Review the results
The results will differ based on the model and the execution.
LangChain agents
Deep agents
**1) Number of lines containing `Gatsby`:** `null`

**2) First line containing `Daisy`:** `null`

**3) Synopsis:**

The Great Gatsby follows the mysterious millionaire Jay Gatsby and his obsession with reuniting with his former lover, Daisy Buchanan, as narrated by his neighbor Nick Carraway. Set against the backdrop of the Roaring Twenties on Long Island, the novel explores themes of wealth, class, and the elusive nature of the American Dream.

**how_you_computed_counts:**

I successfully fetched the full text of the eBook using the `fetch_text_from_url` tool. However, because I do not have access to a code execution environment (like Python) or text-processing tools (like `grep`), I cannot deterministically split the text by line breaks, iterate through the thousands of lines, and verify the exact line numbers or match counts. LLMs cannot reliably perform exact line-counting or indexing over massive texts within their context window without external computational tools. As instructed, rather than fabricating or guessing a number, I have output `null` for the exact counts and positions.
Based on the text fetched directly from the Gutenberg URL and analyzed using filesystem search tools, here are the answers to your questions:

**1) Lines containing the substring `Gatsby`**

**258** lines contain the exact substring `Gatsby`.

**2) First line containing `Daisy`**

Line **181** is the first line in the file that contains the exact substring `Daisy`.

*(For context, the line reads: "Buchanans. Daisy was my second cousin once removed, and I’d known Tom")*

**3) Two-sentence neutral synopsis**

*The Great Gatsby* follows the mysterious millionaire Jay Gatsby and his obsessive pursuit to reunite with his former lover, Daisy Buchanan, in 1920s Long Island. The story is narrated by Nick Carraway, who observes the tragic consequences of Gatsby's relentless ambition and the shallow materialism of the era's wealthy elite.

***

**How counts were computed:**

When fetching the document from the URL, the file was too large for the standard output and was automatically saved to the local filesystem by the system (`/large_tool_results/x246ax2x`). I then used the `grep` tool to search the saved file for the exact literal substrings `Gatsby` and `Daisy`. The `grep` tool returned every matching line along with its 1-based line number. I manually counted the exact number of lines returned for `Gatsby` (which totaled 258) and identified the first line number returned for `Daisy` (which was 181). I also verified there were no uppercase variations (`GATSBY` or `DAISY`) that would have been missed. No errors were encountered during this process.
If you look at the output on both tabs, you notice that the LangChain agent either returns `null` or rough estimates for the counts: it lacks the tools to compute them reliably. You may also get errors that the prompt is too long. The deep agent, on the other hand, can:
Plan its approach using the built-in `write_todos` tool to break down the research task.
Load the file by calling the `fetch_text_from_url` tool to gather information.
Manage context using the built-in file system tools (`grep` and `read_file`).
Spawn specialized subagents to delegate complex subtasks.
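The deterministic check the deep agent performs with `grep` can be sketched in plain TypeScript (hypothetical helper names):

```typescript
// Count lines that contain a substring (lines, not total occurrences).
function countLinesContaining(text: string, needle: string): number {
  return text.split("\n").filter((line) => line.includes(needle)).length;
}

// 1-based line number of the first line containing a substring, or null if absent.
function firstLineContaining(text: string, needle: string): number | null {
  const index = text.split("\n").findIndex((line) => line.includes(needle));
  return index === -1 ? null : index + 1;
}
```

This is exactly the kind of computation a model cannot perform reliably over a whole novel in context, which is why grounding the answer in tool output matters.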
With LangChain agents, you must implement these capabilities yourself to reach a similar level of service, which also lets you customize them along the way as needed.
Most interesting applications you build with LangChain make many calls to LLMs. As these applications grow more complex, it becomes important to be able to inspect exactly what is going on inside your agent. The best way to do this is with LangSmith. Sign up for a LangSmith account and set these environment variables to start logging traces:
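A minimal setup exports the standard LangSmith environment variables before running your agent (the key value is a placeholder, and the project name is illustrative):

```shell
export LANGSMITH_TRACING="true"
export LANGSMITH_API_KEY="<your-api-key>"
# Optional: group traces under a named project.
export LANGSMITH_PROJECT="quickstart-research"
```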