Deprecation Notice: This tool has been deprecated. Please use the TavilySearch tool in the @langchain/tavily package instead.
Tavily Search is a robust search API tailored specifically for LLM Agents. It seamlessly integrates with diverse data sources to ensure a superior, relevant search experience. This guide provides a quick overview for getting started with the TavilySearchResults tool.

Overview

Integration details

| Class | Package | PY support | Version |
| :--- | :--- | :--- | :--- |
| TavilySearchResults | @langchain/community | ✅ | npm |

Setup

The integration lives in the @langchain/community package, which you can install as shown below:
npm install @langchain/community @langchain/core

Credentials

Set up a Tavily API key and set it as an environment variable named TAVILY_API_KEY.
process.env.TAVILY_API_KEY = "YOUR_API_KEY";
It’s also helpful (but not required) to set up LangSmith for best-in-class observability:
process.env.LANGSMITH_TRACING = "true";
process.env.LANGSMITH_API_KEY = "your-api-key";
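
Because a missing key only surfaces when the first search is attempted, it can be useful to fail fast at startup. A minimal sketch, using a hypothetical `requireEnv` helper (not part of the LangChain API):

```typescript
// Hypothetical helper: read a required environment variable and fail
// fast with a clear message if it is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set`);
  }
  return value;
}

// Usage: call once at startup, before constructing the tool.
// const tavilyKey = requireEnv("TAVILY_API_KEY");
```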

Instantiation

You can import and instantiate the TavilySearchResults tool like this:
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const tool = new TavilySearchResults({
  maxResults: 2,
  // ...
});

Invocation

Invoke directly with args

You can invoke the tool directly like this:
await tool.invoke({
  input: "what is the current weather in SF?",
});
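
The tool returns its results as a single JSON-encoded string. A minimal sketch of parsing that string (the payload below is illustrative, not a real API response; a live `tool.invoke` call would return a string of this general shape):

```typescript
// Illustrative payload in the shape Tavily results typically take;
// a real call would return this string from `tool.invoke`.
const raw = JSON.stringify([
  { title: "SF Weather", url: "https://example.com/sf", content: "Sunny, 18°C" },
]);

// Parse the string back into structured results before further processing.
const results: Array<{ title: string; url: string; content: string }> =
  JSON.parse(raw);

for (const r of results) {
  console.log(`${r.title}: ${r.url}`);
}
```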

Invoke with ToolCall

We can also invoke the tool with a model-generated ToolCall, in which case a ToolMessage will be returned:
// This is usually generated by a model, but we'll create a tool call directly for demo purposes.
const modelGeneratedToolCall = {
  args: {
    input: "what is the current weather in SF?",
  },
  id: "1",
  name: tool.name,
  type: "tool_call",
};

await tool.invoke(modelGeneratedToolCall)
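
The returned ToolMessage carries the search output in its content field together with the id of the tool call that produced it. A plain-object sketch of that shape, built by hand rather than from a live call (field names follow @langchain/core's ToolMessage; the content string is illustrative):

```typescript
// Hand-built sketch of the ToolMessage shape returned by
// `tool.invoke(modelGeneratedToolCall)`; values are illustrative.
const toolMessage = {
  name: "tavily_search_results_json",
  content: '[{"title":"SF Weather","url":"https://example.com/sf"}]',
  tool_call_id: "1",
};

// The message carries the id of the tool call that produced it, which
// lets a chat model match results back to its earlier requests.
console.log(toolMessage.tool_call_id);
```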

Chaining

We can use our tool in a chain by first binding it to a tool-calling model and then calling it:
import { ChatOpenAI } from "@langchain/openai"

const llm = new ChatOpenAI({
  model: "gpt-4.1",
  temperature: 0,
});
import { HumanMessage } from "@langchain/core/messages";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableLambda } from "@langchain/core/runnables";

const prompt = ChatPromptTemplate.fromMessages(
  [
    ["system", "You are a helpful assistant."],
    ["placeholder", "{messages}"],
  ]
)

const llmWithTools = llm.bindTools([tool]);

const chain = prompt.pipe(llmWithTools);

const toolChain = RunnableLambda.from(
  async (userInput: string, config) => {
    const humanMessage = new HumanMessage(userInput);
    const aiMsg = await chain.invoke({
      messages: [humanMessage],
    }, config);
    // The model may not request any tool calls, so default to an empty list.
    const toolMsgs = await tool.batch(aiMsg.tool_calls ?? [], config);
    return chain.invoke({
      messages: [humanMessage, aiMsg, ...toolMsgs],
    }, config);
  }
);

const toolChainResult = await toolChain.invoke("what is the current weather in sf?");
const { tool_calls, content } = toolChainResult;

console.log("AIMessage", JSON.stringify({
  tool_calls,
  content,
}, null, 2));

Agents

For guides on how to use LangChain tools in agents, see the LangGraph.js docs.