Welcome to LangChain! This quickstart will take you from zero to a fully functional AI agent in just a few minutes. We’ll start simple and gradually build up to something more sophisticated.

Super quick start

Let’s begin with the absolute basics - creating a simple agent that can answer questions and use tools:
import { createAgent, tool } from "langchain";
import { z } from "zod";

const getWeather = tool(({ city }) => `It's always sunny in ${city}!`, {
    name: "get_weather",
    description: "Get the weather for a given city",
    schema: z.object({
        city: z.string(),
    }),
});

const agent = createAgent({
    model: "anthropic:claude-3-7-sonnet-latest",
    tools: [getWeather],
});

console.log(
    await agent.invoke({
        messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
    })
);
What just happened? We created an agent with:
  • A language model (Claude 3.7 Sonnet)
  • A simple tool (a weather function with a typed input schema)
  • The ability to invoke it with messages (reading the reply is sketched below)
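invoke returns the agent's final state, which includes the full message history. Here is a minimal sketch of pulling the assistant's reply out of that state (assuming the v1 result shape, where the state carries a messages array):
const result = await agent.invoke({
    messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
});
// The last message in the returned state is the model's final answer.
const finalMessage = result.messages.at(-1);
console.log(finalMessage?.content);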

Building a real-world agent

Now let’s create something more practical. We’ll build a weather forecasting agent that demonstrates the key concepts you’ll use in production:
  1. Detailed system prompts for better agent behavior
  2. Real-world tools that integrate with external data
  3. Model configuration for consistent responses
  4. Structured output for predictable results
  5. Conversational memory for chat-like interactions
Let’s walk through each step:
Step 1: Define the system prompt

The system prompt is your agent’s personality and instructions. Make it specific and actionable:
const systemPrompt = `You are an expert weather forecaster, who speaks in puns.

You have access to two tools:

- get_weather_for_location: use this to get the weather for a specific location
- get_user_location: use this to get the user's location

If a user asks you for the weather, make sure you know the location. If you can tell from the question that they mean wherever they are, use the get_user_location tool to find their location.`;
Step 2: Create tools

Tools are functions your agent can call, and they should be well documented. Tools often need to connect to external systems, and they rely on runtime configuration to do so. Notice how the getUserLocation tool below reads the user ID from the runtime context:
import { tool } from "langchain";
import { z } from "zod";

const getWeather = tool(({ city }) => `It's always sunny in ${city}!`, {
    name: "get_weather_for_location",
    description: "Get the weather for a given city",
    schema: z.object({
        city: z.string(),
    }),
});

const USER_LOCATION = {
    "1": "Florida",
    "2": "SF",
} as const;

const getUserLocation = tool(
    (_, config) => {
        // Read the user ID from the runtime context supplied at invoke time.
        const { user_id } = config.context as {
            user_id: keyof typeof USER_LOCATION;
        };
        return USER_LOCATION[user_id];
    },
    {
        name: "get_user_location",
        description: "Retrieve user information based on user ID",
        schema: z.object({}),
    }
);
Zod is a TypeScript-first library for defining and validating schemas. Use it to define the input schema for your tools so the agent can only call them with correctly shaped arguments. Alternatively, you can define the schema property as a JSON schema object; keep in mind that JSON schemas won't be validated at runtime.
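For illustration, here is a minimal sketch of the same weather tool described with a plain JSON schema object instead of Zod (the variable name is ours; note that nothing is validated at runtime):
// Same tool as above, but described with a hand-written JSON schema.
// The schema is shown to the model but NOT enforced at runtime.
const getWeatherJsonSchema = tool(
    (input) => `It's always sunny in ${(input as { city: string }).city}!`,
    {
        name: "get_weather_for_location",
        description: "Get the weather for a given city",
        schema: {
            type: "object",
            properties: {
                city: { type: "string", description: "The city name" },
            },
            required: ["city"],
        },
    }
);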
Step 3: Configure your model

Set up your language model with the right parameters for your use case:
import { initChatModel } from "langchain";

const model = await initChatModel("anthropic:claude-3-7-sonnet-latest", {
    temperature: 0,
});
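Because the model is identified by a provider-prefixed string, swapping providers is a one-line change. A hypothetical sketch (it assumes the matching provider package, such as @langchain/openai, is installed and its API key is set):
// Same call shape, different provider (hypothetical model choice).
const gpt = await initChatModel("openai:gpt-4o", { temperature: 0 });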
Step 4: Define response format

Structured output ensures your agent returns data in a predictable format. Define the shape with Zod, reusing the z import from step 2:
const responseFormat = z.object({
    conditions: z.string(),
    punny_response: z.string(),
});
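The agent will parse its final answer into this shape and expose it as structuredResponse on the result, as step 6 shows. If you want a named TypeScript type for that payload, a small sketch (the type name is ours):
// Derive a static type from the Zod schema.
type WeatherReport = z.infer<typeof responseFormat>;
// => { conditions: string; punny_response: string }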
Step 5: Add memory

Enable your agent to remember conversation history:
import { MemorySaver } from "langchain";

const checkpointer = new MemorySaver();
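MemorySaver keeps checkpoints in process memory, which is convenient for development but means history is lost when the process exits; for production you would typically swap in a persistent checkpointer. Conversations are keyed by the thread_id you pass at invoke time, as a minimal sketch (the same pattern appears in the next step):
// Each thread_id identifies an independent conversation; reusing an id
// resumes that conversation's history.
const threadConfig = { configurable: { thread_id: "1" } };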
Step 6: Bring it all together

Now assemble your agent with all the components:
import { createAgent } from "langchain";

const agent = createAgent({
    model, // the configured model instance from step 3
    prompt: systemPrompt,
    tools: [getUserLocation, getWeather],
    responseFormat,
    checkpointer,
});

const config = {
    configurable: { thread_id: "1" },
    context: { user_id: "1" },
};
const response = await agent.invoke(
    { messages: [{ role: "user", content: "what is the weather outside?" }] },
    config
);
console.log(response.structuredResponse);

const thankYouResponse = await agent.invoke(
    { messages: [{ role: "user", content: "thank you!" }] },
    config
);
console.log(thankYouResponse.structuredResponse);
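Because both calls pass the same thread_id, the checkpointer replays the first exchange into the second call, which is how the agent knows what "thank you!" refers to. A hypothetical sketch of starting fresh instead:
// A new thread_id starts an empty conversation: the agent has no memory
// of the weather exchange above.
const freshResponse = await agent.invoke(
    { messages: [{ role: "user", content: "what did I just ask you?" }] },
    { configurable: { thread_id: "2" }, context: { user_id: "1" } }
);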

What you’ve built

Congratulations! You now have a sophisticated AI agent that can:
  • Understand context and remember conversations
  • Use multiple tools intelligently
  • Provide structured responses in a consistent format
  • Handle user-specific information through context
  • Maintain conversation state across interactions