Parallel is a real-time web search and content extraction platform designed specifically for LLMs and AI applications. The `ParallelWebSearchTool` provides access to Parallel's Search API, which streamlines the traditional search → scrape → extract pipeline into a single API call, returning structured, LLM-optimized results.
Overview
Integration details
| Class | Package | Serializable | JS support | Package latest |
|---|---|---|---|---|
| ParallelWebSearchTool | langchain-parallel | ❌ | ❌ |  |
Tool features
- Real-time web search: Access current information from the web
- Structured results: Returns compressed, LLM-optimized excerpts
- Flexible input: Support for natural language objectives or specific search queries
- Domain filtering: Include or exclude specific domains with source policy
- Customizable output: Control number of results (1-40) and excerpt length (min 100 chars)
- Rich metadata: Optional search timing, result counts, and query information
- Async support: Full async/await support with proper executor handling
- Error handling: Comprehensive error handling with detailed error messages
Setup
The integration lives in the `langchain-parallel` package.
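You can install it with pip (a standard install command; add `-U` to upgrade an existing installation):

```shell
pip install -U langchain-parallel
```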
Credentials
Head to Parallel to sign up and generate an API key. Once you've done this, set the `PARALLEL_API_KEY` environment variable:
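For example, you can set the variable for the current process like this (in real use, prefer prompting with `getpass.getpass()` or a secrets manager rather than hard-coding the key):

```python
import os

# Set the key for this process only if it isn't already configured.
# Replace the placeholder with your actual Parallel API key.
os.environ.setdefault("PARALLEL_API_KEY", "your-api-key")
```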
Instantiation
Here we show how to instantiate an instance of the `ParallelWebSearchTool`. The tool can be configured with API key and base URL parameters:
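A minimal instantiation sketch (assumes `langchain-parallel` is installed and `PARALLEL_API_KEY` is set; the commented-out parameter names illustrate the configurable options described above):

```python
from langchain_parallel import ParallelWebSearchTool

# By default the tool reads the key from the PARALLEL_API_KEY
# environment variable.
tool = ParallelWebSearchTool(
    # api_key="...",   # override the environment variable
    # base_url="...",  # point at a different Parallel endpoint
)
```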
Invocation
Invoke directly with args
You can invoke the tool with either an `objective` (a natural language description) or specific `search_queries`. The tool supports various configuration options, including domain filtering and metadata collection:
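For example, a direct invocation might look like this (a sketch; assumes a configured `ParallelWebSearchTool` instance named `tool`, with argument names taken from the parameter list later in this page):

```python
result = tool.invoke({
    "objective": "Summarize this week's developments in solid-state batteries",
    "max_results": 5,
    "source_policy": {"exclude_domains": ["pinterest.com"]},
    "include_metadata": True,
})
print(result)
```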
Invoke with ToolCall
We can also invoke the tool with a model-generated ToolCall, in which case a ToolMessage will be returned:
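A sketch of invoking with a ToolCall (the tool name, `id`, and args below are illustrative; a real ToolCall would come from a model response):

```python
# A model-generated tool call in LangChain's ToolCall format.
tool_call = {
    "name": "parallel_web_search",  # assumed tool name for illustration
    "args": {"objective": "Current weather in Paris"},
    "id": "call_123",
    "type": "tool_call",
}

# Invoking a tool with a ToolCall returns a ToolMessage.
tool_message = tool.invoke(tool_call)
```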
Async usage
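A minimal async sketch (assumes `langchain-parallel` is installed and `PARALLEL_API_KEY` is set):

```python
import asyncio

from langchain_parallel import ParallelWebSearchTool


async def main() -> None:
    tool = ParallelWebSearchTool()
    result = await tool.ainvoke(
        {"search_queries": ["langchain parallel integration"]}
    )
    print(result)


asyncio.run(main())
```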
The tool supports full async/await operations for better performance in async applications.

Parameter details and validation
The tool performs comprehensive input validation and supports the following parameters.

Required parameters
At least one of the following must be provided:

- `objective`: Natural language description (max 5000 characters)
- `search_queries`: List of search queries (max 5 queries, 200 chars each)
Optional parameters:
- `max_results`: Number of results to return (1-40, default: 10)
- `excerpts`: Excerpt settings dict (e.g., `{"max_chars_per_result": 1500}`)
- `mode`: Search mode; `"one-shot"` for comprehensive results, `"agentic"` for token-efficient results
- `source_policy`: Domain filtering with `include_domains` and/or `exclude_domains` lists
- `fetch_policy`: Cache control dict (e.g., `{"max_age_seconds": 86400, "timeout_seconds": 60}`)
- `include_metadata`: Include search timing and statistics (default: True)
- `timeout`: Request timeout in seconds (optional)
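As a rough sketch of the validation rules above, expressed as a standalone helper (this is a hypothetical function for illustration, not the library's actual implementation):

```python
def validate_search_args(objective=None, search_queries=None, max_results=10):
    """Sketch of the documented input rules; not the library's code."""
    # At least one of objective / search_queries is required.
    if objective is None and search_queries is None:
        raise ValueError("Provide 'objective' and/or 'search_queries'.")
    if objective is not None and len(objective) > 5000:
        raise ValueError("'objective' is limited to 5000 characters.")
    if search_queries is not None:
        if len(search_queries) > 5:
            raise ValueError("At most 5 search queries are allowed.")
        if any(len(q) > 200 for q in search_queries):
            raise ValueError("Each query is limited to 200 characters.")
    if not 1 <= max_results <= 40:
        raise ValueError("'max_results' must be between 1 and 40.")
```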
Error handling:
The tool provides detailed error messages for validation failures and API errors.

Chaining
We can use our tool in a chain by first binding it to a tool-calling model and then calling it. Supported providers include:

- OpenAI
- Anthropic
- Azure
- Google Gemini
- AWS Bedrock
- HuggingFace
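A sketch of the binding step with OpenAI as the provider (assumes `langchain`, `langchain-openai`, and `langchain-parallel` are installed and the relevant API keys are set):

```python
from langchain.chat_models import init_chat_model
from langchain_parallel import ParallelWebSearchTool

tool = ParallelWebSearchTool()
llm = init_chat_model("gpt-4o-mini", model_provider="openai")
llm_with_tools = llm.bind_tools([tool])

# The model decides whether and how to call the search tool.
ai_msg = llm_with_tools.invoke("What happened in AI research this week?")
for tool_call in ai_msg.tool_calls:
    print(tool_call["name"], tool_call["args"])
```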
Best practices
- Use specific objectives: More specific objectives lead to better, more targeted results
- Apply domain filtering: Use `source_policy` to focus on authoritative sources or exclude unreliable domains
- Include metadata: Set `include_metadata: True` for debugging and performance optimization
- Handle errors gracefully: The tool provides detailed error messages for validation and API failures
- Use async for performance: Use `ainvoke()` in async applications for better performance
Response format
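An illustrative sketch of what a response might look like (the field names here are assumptions for illustration; consult the API reference below for the authoritative schema):

```python
# Hypothetical response shape, based on the features described above
# (per-result excerpts plus optional search metadata).
response = {
    "results": [
        {
            "url": "https://example.com/article",
            "title": "Example article",
            "excerpts": ["A compressed, LLM-optimized excerpt..."],
        }
    ],
    "metadata": {
        "search_time_ms": 1234,
        "num_results": 1,
    },
}

urls = [r["url"] for r in response["results"]]
```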
The tool returns a structured dictionary containing the search results and, when `include_metadata` is enabled, search timing and result-count statistics.

API reference
For detailed documentation of all features and configuration options, head to the `ParallelWebSearchTool` API reference or the Parallel search reference.