Execute code in isolated environments with sandbox backends
Agents generate code, interact with filesystems, and run shell commands. Because we can't predict what an agent might do, its environment must be isolated so it can't access credentials, files, or the network. Sandboxes provide this isolation by creating a boundary between the agent's execution environment and your host system.

In deep agents, sandboxes are backends that define the environment where the agent operates. Unlike other backends (State, Filesystem, Store), which only expose file operations, sandbox backends also give the agent an execute tool for running shell commands. When you configure a sandbox backend, the agent gets:
All standard filesystem tools (ls, read_file, write_file, edit_file, glob, grep)
The execute tool for running arbitrary shell commands in the sandbox
Sandboxes are a security measure: they let agents execute arbitrary code, access files, and use the network without compromising your credentials, local files, or host system.
This isolation is essential when agents run autonomously.

Sandboxes are especially useful for:
Coding agents — Run shell commands, clone repositories with git (many providers offer native git APIs, e.g., Daytona's git operations), and run Docker-in-Docker for build and test pipelines
Data analysis agents — Load files, install data analysis libraries (pandas, numpy, etc.), run statistical calculations, and create outputs like PowerPoint presentations in a safe, isolated environment
The agent runs inside the sandbox and you communicate with it over the network. You build a Docker or VM image with your agent framework pre-installed, run it inside the sandbox, and connect from outside to send messages.

Benefits:
✅ Mirrors local development closely.
✅ Tight coupling between agent and environment.
Trade-offs:
🔴 API keys must live inside the sandbox (security risk).
🔴 Updates require rebuilding images.
🔴 Requires infrastructure for communication (WebSocket or HTTP layer).
To run an agent in a sandbox, build an image and install deepagents on it.
```dockerfile
FROM python:3.11
RUN pip install deepagents-cli
```
Then run the agent inside the sandbox.
To use the agent inside the sandbox you have to add additional infrastructure to handle communication between your application and the agent inside the sandbox.
The agent runs on your machine or server. When it needs to execute code, it calls sandbox tools (such as execute, read_file, or write_file), which invoke the provider's APIs to run operations in a remote sandbox.

Benefits:
✅ Update agent code instantly without rebuilding images.
✅ Cleaner separation between agent state and execution.
✅ API keys stay outside the sandbox.
✅ Sandbox failures don't lose agent state.
✅ Option to run tasks in multiple sandboxes in parallel.
✅ Pay only for execution time.
Trade-offs:
🔴 Network latency on each execution call.
Example:
```python
from dotenv import load_dotenv
from daytona import Daytona
from langchain_daytona import DaytonaSandbox
from deepagents import create_deep_agent

load_dotenv()

# Can also do this with E2B, Runloop, Modal
sandbox = Daytona().create()
backend = DaytonaSandbox(sandbox=sandbox)

agent = create_deep_agent(
    backend=backend,
    system_prompt="You are a coding assistant with sandbox access. You can create and run code in the sandbox.",
)

try:
    result = agent.invoke(
        {
            "messages": [
                {
                    "role": "user",
                    "content": "Create a hello world Python script and run it",
                }
            ]
        }
    )
    print(result["messages"][-1].content)
except Exception:
    # Optional: delete the sandbox proactively on an exception
    sandbox.stop()
    raise
```
The examples in this doc use the sandbox as a tool pattern.
Choose the agent in sandbox pattern when your provider’s SDK handles the communication layer and you want production to mirror local development.
Choose the sandbox as tool pattern when you need to iterate quickly on agent logic, keep API keys outside the sandbox, or prefer cleaner separation of concerns.
These examples assume you have already created a sandbox/devbox using the provider’s SDK and have credentials set up. For signup, authentication, and provider-specific lifecycle details, see Available providers.
Modal
Runloop
Daytona
```shell
pip install langchain-modal
```

```python
import modal
from langchain_anthropic import ChatAnthropic
from deepagents import create_deep_agent
from langchain_modal import ModalSandbox

app = modal.App.lookup("your-app")
modal_sandbox = modal.Sandbox.create(app=app)
backend = ModalSandbox(sandbox=modal_sandbox)

agent = create_deep_agent(
    model=ChatAnthropic(model="claude-sonnet-4-20250514"),
    system_prompt="You are a Python coding assistant with sandbox access.",
    backend=backend,
)

try:
    result = agent.invoke(
        {
            "messages": [
                {
                    "role": "user",
                    "content": "Create a small Python package and run pytest",
                }
            ]
        }
    )
finally:
    modal_sandbox.terminate()
```
```shell
pip install langchain-runloop
```

```python
import os

from runloop_api_client import RunloopSDK
from langchain_anthropic import ChatAnthropic
from deepagents import create_deep_agent
from langchain_runloop import RunloopSandbox

client = RunloopSDK(bearer_token=os.environ["RUNLOOP_API_KEY"])
devbox = client.devbox.create()
backend = RunloopSandbox(devbox=devbox)

agent = create_deep_agent(
    model=ChatAnthropic(model="claude-sonnet-4-20250514"),
    system_prompt="You are a Python coding assistant with sandbox access.",
    backend=backend,
)

try:
    result = agent.invoke(
        {
            "messages": [
                {
                    "role": "user",
                    "content": "Create a small Python package and run pytest",
                }
            ]
        }
    )
finally:
    devbox.shutdown()
```
```shell
pip install langchain-daytona
```

```python
from daytona import Daytona
from langchain_anthropic import ChatAnthropic
from deepagents import create_deep_agent
from langchain_daytona import DaytonaSandbox

sandbox = Daytona().create()
backend = DaytonaSandbox(sandbox=sandbox)

agent = create_deep_agent(
    model=ChatAnthropic(model="claude-sonnet-4-20250514"),
    system_prompt="You are a Python coding assistant with sandbox access.",
    backend=backend,
)

try:
    result = agent.invoke(
        {
            "messages": [
                {
                    "role": "user",
                    "content": "Create a small Python package and run pytest",
                }
            ]
        }
    )
finally:
    sandbox.stop()
```
All sandbox providers protect your host system from the agent’s filesystem and shell operations. The agent cannot read your local files, access environment variables on your machine, or interfere with other processes. However, sandboxes alone do not protect against:
Context injection — An attacker who controls part of the agent’s input can instruct it to run arbitrary commands inside the sandbox. The sandbox is isolated, but the agent has full control within it.
Network exfiltration — Unless network access is blocked, a context-injected agent can send data out of the sandbox over HTTP or DNS. Some providers support blocking network access (e.g., blockNetwork: true on Modal).
Sandbox backends have a simple architecture: the only method a provider must implement is execute(), which runs a shell command and returns its output. Every other filesystem operation — read, write, edit, ls, glob, grep — is built on top of execute() by the BaseSandbox base class, which constructs scripts and runs them inside the sandbox via execute().
This design means:
Adding a new provider is straightforward. Implement execute() — the base class handles everything else.
The execute tool is conditionally available. On every model call, the harness checks whether the backend implements SandboxBackendProtocol. If not, the tool is filtered out and the agent never sees it.
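To make the architecture concrete, here is a toy sketch of the pattern (the class and method names are illustrative, not the actual deepagents BaseSandbox): derived filesystem operations are just shell commands routed through a single execute() method, and a "provider" only has to supply that method. The local subprocess backend below stands in for a remote provider API.

```python
import shlex
import subprocess
from dataclasses import dataclass


@dataclass
class ExecuteResult:
    output: str  # combined stdout/stderr
    exit_code: int


class ToySandboxBase:
    """Illustrative stand-in: filesystem ops built on top of execute()."""

    def execute(self, command: str) -> ExecuteResult:
        raise NotImplementedError  # the only method a provider must supply

    def read_file(self, path: str) -> str:
        # Derived operation: just a shell command under the hood
        return self.execute(f"cat {shlex.quote(path)}").output

    def ls(self, path: str) -> list:
        return self.execute(f"ls -1 {shlex.quote(path)}").output.splitlines()


class LocalSubprocessSandbox(ToySandboxBase):
    """'Provider': implements execute(); everything else is inherited."""

    def execute(self, command: str) -> ExecuteResult:
        proc = subprocess.run(
            command,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,  # combine streams, like the real tool
            text=True,
        )
        return ExecuteResult(output=proc.stdout, exit_code=proc.returncode)


sandbox = LocalSubprocessSandbox()
print(sandbox.execute("echo hello").output.strip())  # hello
```

A real provider's execute() would call the remote sandbox API instead of subprocess, but the shape is the same.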
When the agent calls the execute tool, it provides a command string and gets back the combined stdout/stderr, the exit code, and a truncation notice if the output was too large. For example, a failed command returns:

```
bash: foobar: command not found
[Command failed with exit code 127]
```

You can also call the backend execute() method directly in your application code.
If a command produces very large output, the result is automatically saved to a file and the agent is instructed to use read_file to access it incrementally. This prevents context window overflow.
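The spill-to-file pattern can be sketched as follows (the threshold, message wording, and function name are hypothetical; the real limits belong to the harness):

```python
import tempfile

# Hypothetical threshold; the real cutoff is chosen by the harness
MAX_TOOL_OUTPUT_CHARS = 2_000


def format_tool_result(output: str) -> str:
    """If output fits in context, return it; otherwise spill it to a
    file and tell the agent to read it incrementally with read_file."""
    if len(output) <= MAX_TOOL_OUTPUT_CHARS:
        return output
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".log", delete=False
    ) as f:
        f.write(output)
        spill_path = f.name
    return (
        f"[Output truncated: {len(output)} chars. "
        f"Full output saved to {spill_path}; use read_file to inspect it.]\n"
        + output[:MAX_TOOL_OUTPUT_CHARS]
    )
```

The agent then sees only a bounded preview plus instructions, instead of a context-breaking wall of text.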
There are two distinct ways files move in and out of a sandbox, and it's important to understand when to use each:

Agent filesystem tools — read_file, write_file, edit_file, ls, glob, grep, and execute are the tools the LLM calls during its execution. These go through execute() inside the sandbox. The agent uses them to read code, write files, and run commands as part of its task.

File transfer APIs — the upload_files() and download_files() methods that your application code calls. These use the provider's native file transfer APIs (not shell commands) and are designed for moving files between your host environment and the sandbox. Use these to:
Seed the sandbox with source code, configuration, or data before the agent runs
Retrieve artifacts (generated code, build outputs, reports) after the agent finishes
Pre-populate dependencies that the agent will need
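The seed → run → retrieve lifecycle can be sketched with an in-memory stand-in (the class below and its method signatures are illustrative only; real backends expose upload_files()/download_files() per their provider SDKs, as in the examples that follow):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class TransferResult:
    path: str
    content: Optional[bytes] = None
    error: Optional[str] = None


@dataclass
class InMemorySandboxStandIn:
    """Hypothetical stand-in for a provider backend; files live in a dict."""

    files: dict = field(default_factory=dict)

    def upload_files(self, files: dict) -> None:
        self.files.update(files)

    def download_files(self, paths: list) -> list:
        return [
            TransferResult(path=p, content=self.files[p])
            if p in self.files
            else TransferResult(path=p, error="not found")
            for p in paths
        ]


backend = InMemorySandboxStandIn()

# 1. Seed the sandbox with source code before the agent runs
backend.upload_files({"/src/app.py": b"print('hello')"})

# 2. ... agent.invoke(...) would run here, reading and writing files ...

# 3. Retrieve artifacts after the agent finishes
for result in backend.download_files(["/src/app.py", "/missing.txt"]):
    if result.content is not None:
        print(f"{result.path}: {result.content.decode()}")
    else:
        print(f"Failed to download {result.path}: {result.error}")
```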
Use download_files() to retrieve files from the sandbox after the agent finishes:
Modal
Runloop
Daytona
```python
import modal

from langchain_modal import ModalSandbox

app = modal.App.lookup("your-app")
modal_sandbox = modal.Sandbox.create(app=app)
backend = ModalSandbox(sandbox=modal_sandbox)

results = backend.download_files(["/src/index.py", "/output.txt"])
for result in results:
    if result.content is not None:
        print(f"{result.path}: {result.content.decode()}")
    else:
        print(f"Failed to download {result.path}: {result.error}")
```
```shell
pip install langchain-runloop
```

```python
from runloop_api_client import RunloopSDK
from langchain_runloop import RunloopSandbox

api_key = "..."
client = RunloopSDK(bearer_token=api_key)
devbox = client.devbox.create()
backend = RunloopSandbox(devbox=devbox)

results = backend.download_files(["/src/index.py", "/output.txt"])
for result in results:
    if result.content is not None:
        print(f"{result.path}: {result.content.decode()}")
    else:
        print(f"Failed to download {result.path}: {result.error}")
```
```shell
pip install langchain-daytona
```

```python
from daytona import Daytona
from langchain_daytona import DaytonaSandbox

sandbox = Daytona().create()
backend = DaytonaSandbox(sandbox=sandbox)

results = backend.download_files(["/src/index.py", "/output.txt"])
for result in results:
    if result.content is not None:
        print(f"{result.path}: {result.content.decode()}")
    else:
        print(f"Failed to download {result.path}: {result.error}")
```
Inside the sandbox, the agent uses filesystem tools (read_file, write_file). The upload_files and download_files methods are for your application code to move files across the boundary between your host and the sandbox.
Sandboxes consume resources and cost money until they're shut down, so shut them down as soon as your application no longer needs them.
TTL for chat applications. When users can re-engage after idle time, you often don’t know if or when they’ll return. Configure a time-to-live (TTL) on the sandbox—for example, TTL to archive or TTL to delete—so the provider automatically cleans up idle sandboxes. Many sandbox providers support this.
In chat applications, a conversation is typically represented by a thread_id.
Generally, each thread_id should use its own unique sandbox. Store the mapping between thread_id and sandbox ID in your application, or attach it to the sandbox itself if the provider supports metadata or labels.

The following example shows a get-or-create pattern using Daytona.
For other providers, consult the sandbox provider API for the equivalent labels, metadata, and TTL options:
```python
import uuid

from daytona import CreateSandboxFromSnapshotParams, Daytona
from langchain_daytona import DaytonaSandbox
from deepagents import create_deep_agent

client = Daytona()
thread_id = str(uuid.uuid4())

# Get or create sandbox by thread_id
try:
    sandbox = client.find_one(labels={"thread_id": thread_id})
except Exception:
    params = CreateSandboxFromSnapshotParams(
        labels={"thread_id": thread_id},
        # Add TTL so the sandbox is cleaned up when idle
        auto_delete_interval=3600,
    )
    sandbox = client.create(params)

backend = DaytonaSandbox(sandbox=sandbox)

agent = create_deep_agent(
    backend=backend,
    system_prompt="You are a coding assistant with sandbox access. You can create and run code in the sandbox.",
)

try:
    result = agent.invoke(
        {
            "messages": [
                {
                    "role": "user",
                    "content": "Create a hello world Python script and run it",
                }
            ]
        },
        config={
            "configurable": {
                "thread_id": thread_id,
            }
        },
    )
    print(result["messages"][-1].content)
except Exception:
    # Optional: delete the sandbox proactively on an exception
    client.delete(sandbox)
    raise
```
Sandboxes isolate code execution from your host system, but they don’t protect against context injection. An attacker who controls part of the agent’s input can instruct it to read files, run commands, or exfiltrate data from within the sandbox. This makes credentials inside the sandbox especially dangerous.
Never put secrets inside a sandbox. API keys, tokens, database credentials, and other secrets injected into a sandbox (via environment variables, mounted files, or the secrets option) can be read and exfiltrated by a context-injected agent. This applies even to short-lived or scoped credentials — if an agent can access them, so can an attacker.
If your agent needs to call authenticated APIs or access protected resources, you have two options:
Keep secrets in tools outside the sandbox. Define tools that run in your host environment (not inside the sandbox) and handle authentication there. The agent calls these tools by name, but never sees the credentials. This is the recommended approach.
Use a network proxy that injects credentials. Some sandbox providers support proxies that intercept outgoing HTTP requests from the sandbox and attach credentials (e.g., Authorization headers) before forwarding them. The agent never sees the secret — it just makes plain requests to a URL. This approach is not yet widely available across providers.
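The first option can be sketched as follows (the API URL, env var name, and function names are placeholders invented for illustration). The credential is read from the host environment inside the tool function, so only response data ever crosses into the agent's context:

```python
import os
import urllib.request

# Hypothetical internal API; URL and env var name are placeholders
BILLING_API = "https://billing.internal.example.com/v1/invoices"


def build_invoice_request(customer_id: str) -> urllib.request.Request:
    """Attach the credential on the host; the sandboxed agent never sees it."""
    token = os.environ["BILLING_API_TOKEN"]  # lives only in the host env
    return urllib.request.Request(
        f"{BILLING_API}?customer={customer_id}",
        headers={"Authorization": f"Bearer {token}"},
    )


def fetch_invoices(customer_id: str) -> bytes:
    """Host-side tool the agent calls by name. Only the response body
    is returned to the agent, never the token."""
    with urllib.request.urlopen(build_invoice_request(customer_id)) as resp:
        return resp.read()
```

Registered as a regular tool (outside the sandbox backend), the agent can request invoices by customer ID while the token stays in host memory.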
If you must inject secrets into a sandbox (not recommended), take these precautions:
Enable human-in-the-loop approval for all tool calls, not just sensitive ones
Block or restrict network access from the sandbox to limit exfiltration paths
Use the narrowest possible credential scope and shortest possible lifetime
Monitor sandbox network traffic for unexpected outbound requests
Even with these safeguards, this remains an unsafe workaround. A sufficiently creative context injection attack can bypass output filtering and HITL review.