Parallel is a real-time web search and content extraction platform built for LLMs and AI applications. The Task API runs research-grade tasks across a tiered processor menu (lite → ultra, plus matching -fast variants). langchain-parallel exposes it as four LangChain surfaces, all on this page so you can pick the right shape for your workload.
Which surface should I use?
- One ad-hoc question, agent-callable: ParallelTaskRunTool (BaseTool).
- One long-running, multi-source report: ParallelDeepResearch (Runnable).
- Bulk enrichment over a list, with typed inputs and outputs: ParallelEnrichment (Runnable).
- Low-level batch when you need full control of the run envelope: ParallelTaskGroup (plain class).
Every tier has a -fast processor variant (2-5x faster than the corresponding non-fast tier at similar accuracy). Drop the -fast suffix when latency matters less than maximum quality. See Choose a processor for the full menu.
Overview
Integration details
| Class | Shape | Default processor | Package |
|---|---|---|---|
| ParallelTaskRunTool | BaseTool | lite-fast | langchain-parallel |
| ParallelDeepResearch | Runnable | pro-fast | langchain-parallel |
| ParallelTaskGroup | Plain class | lite-fast | langchain-parallel |
| ParallelEnrichment | Runnable | core-fast | langchain-parallel |
Setup
The integration lives in the langchain-parallel package; install it with `pip install -U langchain-parallel`.
Credentials
Head to Parallel to sign up and generate an API key. Set PARALLEL_API_KEY in your environment:
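For example (the `prl-...` placeholder is illustrative, not a guaranteed key format):

```python
import os

# Make sure the key is available before constructing any langchain-parallel
# class. Replace the placeholder with your real key, or export it in the shell.
os.environ.setdefault("PARALLEL_API_KEY", "prl-your-key-here")
```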
ParallelTaskRunTool
ParallelTaskRunTool is an agent-callable BaseTool. It runs one task synchronously and returns the structured output, per-field basis citations, and the run_id.
result["output"] is always a dict; the answer text lives at result["output"]["content"] and per-field citations at result["output"]["basis"]. Pass a task_output_schema to have content arrive as a parsed pydantic-shaped dict instead of a free-text string:
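Concretely, that documented shape can be walked like this. The result dict below is a hand-written stand-in for a real tool call's return value; the field values and ids are made up for illustration:

```python
# Stand-in result mirroring the documented shape: output.content holds the
# answer, output.basis holds per-field citations, run_id sits at the top level.
result = {
    "run_id": "trun_abc123",                # placeholder id
    "interaction_id": "int_xyz789",         # placeholder id
    "output": {
        "content": {"founded_year": 2023},  # parsed dict because a schema was passed
        "basis": [
            {
                "field": "founded_year",
                "confidence": "high",
                "citations": [{"url": "https://example.com/about"}],
            }
        ],
    },
}

answer = result["output"]["content"]
citations = {b["field"]: b["citations"] for b in result["output"]["basis"]}

print(answer["founded_year"])               # 2023
print(citations["founded_year"][0]["url"])  # https://example.com/about
```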
parse_basis: citations + low-confidence fields
Every consumer that cares about confidence ends up writing the same boilerplate to walk a result for citations, low-confidence fields, and the interaction_id. parse_basis() does that for you:
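The boilerplate it replaces looks roughly like the hand-rolled version below. This is a sketch against the result shape described above, not the library's actual implementation:

```python
def parse_basis_by_hand(result: dict) -> dict:
    """Collect citations, low-confidence fields, and the interaction_id
    from a task-run result dict (what parse_basis() automates)."""
    basis = result.get("output", {}).get("basis", [])
    citations = {}
    low_confidence = []
    for entry in basis:
        field = entry.get("field")
        citations[field] = [c.get("url") for c in entry.get("citations", [])]
        if entry.get("confidence") == "low":
            low_confidence.append(field)
    return {
        "citations": citations,
        "low_confidence_fields": low_confidence,
        "interaction_id": result.get("interaction_id"),
    }

# Illustrative sample result with one low-confidence field.
sample = {
    "interaction_id": "int_123",
    "output": {
        "basis": [
            {"field": "ceo", "confidence": "low",
             "citations": [{"url": "https://example.com/team"}]},
            {"field": "hq", "confidence": "high",
             "citations": [{"url": "https://example.com/contact"}]},
        ]
    },
}
parsed = parse_basis_by_hand(sample)
print(parsed["low_confidence_fields"])   # ['ceo']
```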
Multi-turn chaining
Result dicts surface interaction_id at the top level. Pass it as previous_interaction_id on the next call to chain context across turns:
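As a sketch of that pattern (the result dict and call arguments are illustrative; only the previous_interaction_id parameter name comes from the docs):

```python
# First-turn result: a stand-in for what a real tool call returns.
first_result = {"interaction_id": "int_turn1", "output": {"content": "..."}}

# Carry the interaction id into the next call's arguments so the second
# turn runs with the first turn's context.
next_call_args = {
    "input": "Follow up: what changed since then?",
    "previous_interaction_id": first_result["interaction_id"],
}
print(next_call_args["previous_interaction_id"])   # int_turn1
```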
ParallelDeepResearch
ParallelDeepResearch is a Runnable. It defaults to pro-fast (the -fast variant of “Exploratory web research”). For the most thorough multi-source reports, pass processor="ultra".
Deep research runs are not instant.
pro-fast typically takes a few minutes; pro and ultra can take longer. Wire up a webhook (see Webhook signature verification) for production usage rather than blocking on invoke.
output_schema:
ParallelTaskGroup
ParallelTaskGroup creates a Task Group, fans out runs, and collects results. Use it directly when you need fine-grained control over the batch envelope; otherwise prefer ParallelEnrichment for typed bulk runs.
ParallelTaskGroup exposes run (sync) and arun (async). Latency is dictated by the slowest run in the batch and the chosen processor — lite-fast typically resolves in seconds, higher tiers in minutes.
ParallelEnrichment
ParallelEnrichment wraps ParallelTaskGroup with a default_task_spec built from your input/output pydantic schemas. It coerces pydantic instances into dicts, fans out the batch, and returns results in input order.
ParallelEnrichment blocks until every input has resolved. With the default core-fast processor, expect a few minutes for a non-trivial batch; pass a faster processor for short-form fields, or run in a background worker for large batches.
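A sketch of the input-order guarantee: even if runs resolve out of order, results can be re-aligned by the index of their originating input. The run dicts here are illustrative, not the library's wire format:

```python
inputs = [{"company": "Acme"}, {"company": "Globex"}, {"company": "Initech"}]

# Completed runs arriving out of order, each tagged with its input index.
completed = [
    {"input_index": 2, "output": {"company": "Initech"}},
    {"input_index": 0, "output": {"company": "Acme"}},
    {"input_index": 1, "output": {"company": "Globex"}},
]

# Re-align outputs to match the order of `inputs`.
ordered = [run["output"] for run in sorted(completed, key=lambda r: r["input_index"])]
print([o["company"] for o in ordered])   # ['Acme', 'Globex', 'Initech']
```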
build_task_spec
build_task_spec accepts pydantic classes, raw JSON-schema dicts, or text descriptions and returns a TaskSpec dict ready for client.task_run.create or add_runs(default_task_spec=...). Use it when you want full control of the run envelope on a ParallelTaskRunTool or ParallelTaskGroup.
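A simplified stand-in for the normalization build_task_spec is described as doing. The TaskSpec field names below are assumptions for illustration, not the library's actual schema, and the pydantic-class branch is omitted:

```python
def build_task_spec_sketch(output_schema) -> dict:
    """Normalize a raw JSON-schema dict or a text description into a
    TaskSpec-shaped dict (a sketch, not the real build_task_spec)."""
    if isinstance(output_schema, dict):
        # Already a JSON schema: wrap it as a structured output spec.
        output = {"type": "json", "json_schema": output_schema}
    elif isinstance(output_schema, str):
        # A text description becomes a free-text output spec.
        output = {"type": "text", "description": output_schema}
    else:
        raise TypeError("expected a JSON-schema dict or a text description")
    return {"output_schema": output}

spec = build_task_spec_sketch(
    {"type": "object", "properties": {"ceo": {"type": "string"}}}
)
print(spec["output_schema"]["type"])   # json
```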
BYOMCP: bring your own MCP server
ParallelTaskRunTool and ParallelDeepResearch accept mcp_servers=[McpServer(...)] to expose Streamable-HTTP MCP endpoints to the run.
Webhook signature verification
Long-running tasks can deliver results via webhook. Verify the signature with verify_webhook (Standard Webhooks scheme: HMAC-SHA256 over <webhook-id>.<webhook-timestamp>.<body>, base64-encoded, v1,<sig> with replay protection). See webhook setup for the delivery contract.
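For intuition, the described scheme can be checked by hand as below; verify_webhook is the supported way to do this, and the exact header layout here follows the general Standard Webhooks convention rather than anything Parallel-specific:

```python
import base64
import hashlib
import hmac
import time

def verify_by_hand(secret: bytes, webhook_id: str, timestamp: str,
                   body: bytes, signature_header: str,
                   tolerance_s: int = 300) -> bool:
    """Check a Standard Webhooks v1 signature: HMAC-SHA256 over
    '<webhook-id>.<webhook-timestamp>.<body>', base64-encoded, 'v1,<sig>'."""
    # Replay protection: reject timestamps outside the tolerance window.
    if abs(time.time() - int(timestamp)) > tolerance_s:
        return False
    signed = f"{webhook_id}.{timestamp}.".encode() + body
    expected = base64.b64encode(
        hmac.new(secret, signed, hashlib.sha256).digest()
    ).decode()
    # The header may carry several space-separated signatures; accept any match.
    for part in signature_header.split():
        version, _, sig = part.partition(",")
        if version == "v1" and hmac.compare_digest(sig, expected):
            return True
    return False

# Round-trip demo with a made-up secret, id, and body.
secret = b"test-secret"
ts = str(int(time.time()))
body = b'{"status": "completed"}'
digest = base64.b64encode(
    hmac.new(secret, f"msg_1.{ts}.".encode() + body, hashlib.sha256).digest()
).decode()
print(verify_by_hand(secret, "msg_1", ts, body, f"v1,{digest}"))   # True
```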
Chaining
Bind ParallelTaskRunTool to any tool-calling chat model and drive an agent with create_agent:
API reference
For detailed documentation, head to the ParallelTaskRunTool, ParallelDeepResearch, ParallelTaskGroup, or ParallelEnrichment API references, or the Parallel Task API guides.

