Interpreters give agents a programmable workspace where they can explore data, coordinate tool calls, and keep intermediate work out of the model context. The agent writes code to express its intent, then an in-memory runtime executes that code and returns the relevant results. Where sandboxes are a code-first way of acting on an environment (running commands, installing dependencies, and editing files), interpreters are a code-first way of acting inside the agent loop: composing tools, preserving state, and deciding what information should return to the model.
## When to use an interpreter
Most agent work alternates between model reasoning and tool execution. That works for simple actions, but it becomes awkward when the agent needs to compose many steps, reason over structured data, or manage intermediate state. An interpreter gives the agent a runtime for that work. Instead of asking the model to choose every next step one tool call at a time, the agent can write a small program that runs control flow, calls allowlisted tools, stores variables, and returns a compact result to the model.

Use interpreters when the agent needs to:

- Compose multiple tool calls with code, including loops, branching, retries, and concurrency.
- Coordinate subagents from code by splitting work into focused calls, storing their results, and stitching those results into a final synthesis.
- Keep intermediate values in runtime state instead of sending every temporary result back through the model context.
- Transform structured data deterministically, such as sorting, grouping, parsing, validating, scoring, or aggregating.
- Explore a large variable space and return only selected evidence, summaries, or outputs to the model.
## Choose the right execution path
| Need | Use |
|---|---|
| One or two simple external calls | Normal tool calling |
| A small program that loops, branches, retries, or aggregates results | Interpreter |
| Many selected tool calls that should run from code | Interpreter with programmatic tool calling |
| Reusable helpers used across threads | Interpreter with interpreter skills |
| Shell commands, package installs, tests, or full OS filesystem access | Sandboxes |
## Add an interpreter to an agent
Install the QuickJS middleware package, then add the middleware when creating the agent.

## Run code in the interpreter
The middleware adds an `eval` tool to the agent. The tool runs TypeScript in a persistent context, captures `console.log` output, and returns the value of the last expression.
The agent can write code like this:
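A hedged sketch of the kind of code the agent might write (the variable names and data are illustrative; the persistent state, captured `console.log`, and last-expression return behavior are described above):

```typescript
// State persists across eval calls, so `orders` stays available later.
const orders = [
  { id: "a1", total: 40 },
  { id: "a2", total: 125 },
  { id: "a3", total: 80 },
];

// Deterministic transformation work stays in the runtime.
const large = orders.filter((o) => o.total > 50);
console.log(`kept ${large.length} of ${orders.length} orders`);

// The value of the last expression is what returns to the model.
large.map((o) => o.id);
```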
## Programmatic tool calling
Programmatic tool calling (PTC) exposes selected agent tools inside the interpreter under the global `tools` namespace. Instead of asking the model to issue one tool call, wait for the result, and then decide the next call, the agent can write code that calls tools in loops, branches, retries, or parallel batches.
This is useful when intermediate tool results are only inputs to the next step. The interpreter can process, filter, or aggregate those results before anything returns to the model context, which can make multi-tool/multi-step workflows more token efficient.
PTC is model-agnostic in Deep Agents. It is implemented by middleware rather than a provider-specific code-execution or tool-calling API.
### How it works
1. You choose which tools the interpreter can call with the `ptc` allowlist.
2. The middleware exposes those tools as async JavaScript functions under `tools`.
3. The agent writes interpreter code that calls those functions with `await`.
4. The interpreter runs the tool bridge, receives the tool result, and continues executing code.
5. The model receives the final interpreter output, not every intermediate value.
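The name mapping in step 2 can be sketched as follows (an illustration of the idea, not the middleware's actual implementation):

```typescript
// Each allowlisted tool is wrapped as an async function on a `tools` object.
type ToolFn = (input: string) => Promise<string>;

function buildToolsNamespace(
  allowlist: Record<string, ToolFn>,
): Record<string, ToolFn> {
  const ns: Record<string, ToolFn> = {};
  for (const [name, fn] of Object.entries(allowlist)) {
    // snake_case tool names become camelCase interpreter functions.
    const camel = name.replace(/_([a-z])/g, (_m: string, c: string) => c.toUpperCase());
    ns[camel] = fn;
  }
  return ns;
}

// Stub tool standing in for a real bridged tool.
const tools = buildToolsNamespace({
  web_search: async (query) => `results for ${query}`,
});
```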
Snake_case tool names are exposed as camelCase functions, so `web_search` becomes `tools.webSearch(...)`:
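For example (a self-contained sketch: `tools.webSearch` is stubbed here so the snippet runs outside the interpreter; inside the interpreter the bridged function is provided for you):

```typescript
// Stub standing in for the PTC bridge.
const tools = {
  webSearch: async (query: string): Promise<string[]> => [`result for ${query}`],
};

// Fan out independent searches, then keep only a compact summary.
const queries = ["quickjs isolation", "deep agents ptc"];
const batches = await Promise.all(queries.map((q) => tools.webSearch(q)));
const summary = batches.flat();
console.log(`collected ${summary.length} results`);
summary;
```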
### Useful patterns
| Pattern | What the interpreter can do |
|---|---|
| Batch processing | Loop over many inputs and call a tool for each one. |
| Parallel work | Use `Promise.all` for independent calls. |
| Conditional logic | Choose the next tool call based on earlier results. |
| Early termination | Stop calling tools once a success condition is met. |
| Data filtering | Return only relevant rows, snippets, errors, or summaries to the model. |
| Recursive orchestration | Call `task` repeatedly, then combine subagent results in code. |
### Enable PTC
Enable PTC by passing an explicit allowlist when creating the middleware; no agent tools are available inside the interpreter by default.

## Recursive language models
Recursive language models use an interpreter as a workspace for decomposition. The model keeps a large input or working set in runtime variables, writes code to inspect and split it, calls subagents or other model tools on smaller pieces, and then stitches the returned results together in code.

This separates the variable space from the agent's context. The variable space is the information stored in the interpreter, and the agent's context is what the model actually processes in the next model call. The model can decide which snippets become subagent tasks, which results need another pass, and what final synthesis should return to the main conversation. For background on this pattern, see the Recursive Language Models paper.

In Deep Agents, the recursive call is often the `task` tool exposed through programmatic tool calling. The interpreter can call subagents over many slices, combine their answers, and return a single synthesized result:
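A self-contained sketch of the pattern, with `tools.task` stubbed in place of the real subagent bridge:

```typescript
// Stub standing in for the `task` subagent tool exposed through PTC.
const tools = {
  task: async (prompt: string): Promise<string> =>
    `summary(${prompt.length} chars)`,
};

// The large working set lives in interpreter state, not model context.
const doc = "lorem ipsum ".repeat(200);

// Split into slices, fan out subagent calls, and synthesize in code.
const chunkSize = 500;
const chunks: string[] = [];
for (let i = 0; i < doc.length; i += chunkSize) {
  chunks.push(doc.slice(i, i + chunkSize));
}
const partials = await Promise.all(chunks.map((c) => tools.task(c)));
const synthesis = partials.join("; ");
console.log(`combined ${partials.length} subagent results`);
synthesis;
```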
## Interpreter skills
Interpreter skills are skills that expose code modules to an interpreter. When configured with interpreter middleware, the agent can import these modules from code and use them for deterministic helper logic. Interpreter skills are useful when the agent needs reusable helpers for structured data workflows, such as sorting, grouping, scoring, parsing, validating, or aggregating data. For setup details, see Interpreter skills.

## Security and limits
Interpreters use QuickJS to run untrusted JavaScript with strict default isolation. Treat that as a scoped interpreter runtime, not a full production sandbox backend.

Every tool you expose through PTC is an outside capability that interpreter code can use. Treat the PTC allowlist as a permission boundary: expose only the tools the agent needs, and avoid bridging broad tools that can access sensitive systems, spend money, mutate data, or call unrestricted networks unless that behavior is intentional.

| Capability | Available by default | How to expose it |
|---|---|---|
| JavaScript execution | Yes | Add interpreter middleware |
| Top-level `await` | Yes | Use promises in interpreter code |
| `console.log` capture | Yes | Disable with `captureConsole: false` |
| Agent tools | No | Add a PTC allowlist |
| Interpreter skill modules | No | Add a module entry and configure `skills_backend` or `skillsBackend` |
| Filesystem access | No | Add the built-in filesystem tools via the PTC allowlist |
| Network access | No | Expose a specific network tool through PTC |
| Wall-clock or datetime access | No | Expose an explicit time tool if needed |
| Shell commands, package installs, tests, OS-level execution | No | Use a sandbox backend |
## Middleware options
`createCodeInterpreterMiddleware` accepts the following options:
| Option | Default | Purpose |
|---|---|---|
| `ptc` | omitted | PTC allowlist: array of tool names or `StructuredToolInterface` instances. |
| `memoryLimitBytes` | `64 * 1024 * 1024` (64 MB) | QuickJS memory limit in bytes. |
| `maxStackSizeBytes` | `320 * 1024` | QuickJS stack size limit in bytes. |
| `executionTimeoutMs` | `5000` | Per-eval timeout in milliseconds. Negative values disable the timeout. |
| `systemPrompt` | `null` | Override the built-in interpreter system prompt. |
| `skillsBackend` | omitted | Backend used to resolve interpreter skill modules. |
| `maxPtcCalls` | `256` | Maximum `tools.*` calls per eval. Use `null` only in trusted environments. |
| `maxResultChars` | `4000` | Maximum characters retained from console output, result, and error strings. |
| `toolName` | `"eval"` | Name of the interpreter tool exposed to the model. |
| `captureConsole` | `true` | Whether `console.log`, `console.warn`, and `console.error` output is captured. |
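Putting several options together — a hedged configuration sketch: the import path and the `createDeepAgent` constructor are assumptions, while the middleware option names follow the table above.

```typescript
import { createDeepAgent, createCodeInterpreterMiddleware } from "deepagents";

const interpreter = createCodeInterpreterMiddleware({
  ptc: ["web_search", "task"],        // allowlist bridged into `tools`
  executionTimeoutMs: 5_000,          // per-eval timeout
  memoryLimitBytes: 64 * 1024 * 1024, // QuickJS memory cap
  maxPtcCalls: 256,                   // cap on tools.* calls per eval
  maxResultChars: 4_000,              // truncate long results
});

const agent = createDeepAgent({ middleware: [interpreter] });
```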