Run Claude Code, Goose, and Gemini CLI in an Isolated Sandbox
ML coding agents want to run shell commands, edit files, and install packages. That’s what makes them useful. It’s also what makes them dangerous on your local machine.
What if you could hand an ML agent a full Linux environment where it can do whatever it wants - and your laptop stays untouched?
The Problem with Running ML Agents Locally
Claude Code, Goose, and Gemini CLI all share a common pattern: they read your codebase, propose changes, and execute commands in your terminal. They’re powerful. They’re also running with your permissions.
When an ML agent runs `rm -rf node_modules && npm install`, that's fine. When it runs something unexpected against your filesystem, your database, or your cloud credentials - that's a problem.
Sandboxing solves this. Give the agent its own container. Let it break things. Snapshot it. Throw it away.
One Bootstrap, Four Tools
We built a bootstrap script that installs the major ML coding CLIs into an unsandbox service:
| Agent | Install Method | Binary |
|---|---|---|
| Claude Code | curl installer → `/usr/local/bin/claude` | `claude` |
| Goose (Block) | curl installer → `/usr/local/bin/goose` | `goose` |
| Google Gemini CLI | npm global | `gemini` |
| Aider | pip (python) | `aider` |
Deploy it with your API keys using the vault:
```bash
un service --name ai-coder --ports 8000 -n semitrusted \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  -e GEMINI_API_KEY=... \
  --bootstrap-file ai-coder-bootstrap.sh
```
The `-e` flags store your keys in the unsandbox vault - encrypted at rest with AES-256-GCM, automatically injected into the container on every start and wake. The bootstrap itself never touches your keys. It just installs the tools.
You can also set keys after creation, or update them later:
```bash
# From a .env file
un service env set <ID> --env-file .env

# Check vault status (never exposes values)
un service env status <ID>
```
Keys persist across freezes, wakes, and redeployments. They’re never logged, never returned by the API, and never stored on the pool node’s disk.
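Inside the container, the injected keys look like ordinary environment variables. A minimal sketch in plain POSIX shell (the key names match the `-e` flags above) that confirms which keys arrived without ever printing a value:

```bash
# Check vault-injected keys without leaking them: print "set" or
# "missing" for each expected variable, never the value itself.
for key in ANTHROPIC_API_KEY GEMINI_API_KEY; do
  if [ -n "$(printenv "$key")" ]; then
    echo "$key: set"
  else
    echo "$key: missing"
  fi
done
```

Handy as a first command after connecting, before any agent burns an API call on a key that was never set.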
Aider works with any of these keys - it supports Claude, Gemini, DeepSeek, and local models.
Connecting and Using the Tools
Once the service is running, connect with an interactive session or execute commands directly:
```bash
# Interactive shell inside the container
un service --execute ai-coder 'bash'

# Or check what's installed
un service --execute ai-coder 'claude --version'
un service --execute ai-coder 'goose --version'
un service --execute ai-coder 'gemini --version'
un service --execute ai-coder 'aider --version'
```
The ralph-claude Alias: Full Autonomy Inside a Container
Once you’re inside an unsandbox container, there’s no reason to babysit Claude Code with permission prompts. Everything is disposable. Let it run free.
The bootstrap installs this alias automatically:
```bash
alias ralph-claude="IS_SANDBOX=1 claude --dangerously-skip-permissions"
```
What this does:

- `IS_SANDBOX=1` tells Claude Code it's running in a sandbox environment
- `--dangerously-skip-permissions` auto-approves all tool use: file edits, shell commands, package installs, everything
On your local machine this would be reckless. Inside an unsandbox container it’s the point. The container is the permission boundary. Let the agent work without interruption, snapshot before risky operations, throw the container away if things go sideways.
```bash
# Connect and use it
un service --execute ai-coder 'bash'
ralph-claude "refactor the entire auth module"
```
The name is a reminder: you’re giving Ralph the keys. Inside a sandbox, that’s fine.
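If you worry about the alias ever ending up in a dotfile on your real machine, a hedged variant is a shell function that refuses to expand the dangerous flag unless the sandbox marker is already set. The function name and guard below are our own; only `IS_SANDBOX=1` and `--dangerously-skip-permissions` come from the alias above:

```bash
# Hypothetical guarded wrapper: only pass --dangerously-skip-permissions
# when IS_SANDBOX=1 is set, i.e. inside a disposable container.
ralph_claude() {
  if [ "${IS_SANDBOX:-0}" = "1" ]; then
    claude --dangerously-skip-permissions "$@"
  else
    echo "ralph_claude: refusing to skip permissions outside a sandbox" >&2
    return 1
  fi
}
```

Outside a container it fails loudly instead of silently granting full autonomy.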
Why Sandbox Your ML Agent?
Blast radius containment
An ML agent that hallucinates a destructive command destroys a disposable container, not your dev machine. Snapshot before risky operations. Roll back if things go wrong.
Reproducible environments
Every service starts from the same golden image. No “works on my machine” problems. No leftover state from previous experiments. Clone a service, hand the same environment to a different agent, compare results.
Network control
Run in zerotrust mode and the agent can’t phone home, exfiltrate code, or hit external APIs you didn’t expect. Run in semitrusted mode when it needs to pip install or git clone.
```bash
# Fully isolated - agent can't reach the internet
un service --name ai-isolated --ports 8000 \
  --bootstrap-file ai-coder-bootstrap.sh

# With network - agent can install packages, clone repos
un service --name ai-connected --ports 8000 -n semitrusted \
  --bootstrap-file ai-coder-bootstrap.sh
```
Parallel experimentation
Spin up multiple containers with different agents working on the same problem:
```bash
un service --name claude-attempt --ports 8000 -n semitrusted \
  --bootstrap-file ai-coder-bootstrap.sh

un service --name goose-attempt --ports 8000 -n semitrusted \
  --bootstrap-file ai-coder-bootstrap.sh
```
Give each one the same repo. Compare their approaches. Keep the best result.
What’s in the Bootstrap
The bootstrap script is intentionally minimal. The unsandbox golden image already includes Node.js, Python, git, and Caddy, so the bootstrap just installs the tools:
```bash
# Claude Code (curl installer)
curl -fsSL https://claude.ai/install.sh | sh

# Goose by Block (curl installer)
curl -fsSL https://github.com/block/goose/releases/latest/download/download_cli.sh | CONFIGURE=false bash

# npm-based tools
npm install -g @google/gemini-cli

# Aider in an isolated Python venv
python3 -m venv /opt/ai-coder/aider-venv
/opt/ai-coder/aider-venv/bin/pip install aider-chat
ln -sf /opt/ai-coder/aider-venv/bin/aider /usr/local/bin/aider
```
No key management in the script. The vault handles that - API keys are injected at the container level before the bootstrap even runs, available to every process as standard environment variables.
A Caddy instance serves a status page on port 8000 with tool versions and a health endpoint at `/health`.
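After a deploy or wake, Caddy can take a moment to come up, so a small poll loop against the health endpoint is useful in scripts. This is a generic sketch: the function name, retry counts, and example URL are illustrative, not part of the service:

```bash
# Poll a health URL until it answers or we give up.
# Prints the outcome; returns 0 on healthy, 1 otherwise.
wait_for_health() {
  url="$1"
  tries="${2:-10}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "not ready after $tries attempts"
  return 1
}

# e.g. wait_for_health https://ai-coder.on.unsandbox.com/health
```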
Snapshots: Save Your Setup
Once you’ve cloned repos and configured tools, snapshot the service so you don’t repeat setup:
```bash
# Snapshot the configured environment
un service --snapshot ai-coder

# Later, restore from snapshot
un session --restore my-ai-snapshot
```
Your repos and tool configs are preserved in the snapshot. Vault secrets are stored separately in the portal database, so they persist independently - across snapshots, freezes, wakes, and redeployments.
The Bigger Picture
ML coding agents are getting more capable. They’re also getting more autonomous - executing multi-step plans, installing dependencies, running tests, and modifying files across entire codebases.
The more autonomous an agent becomes, the more important it is that it runs somewhere disposable. Your laptop is not disposable. An unsandbox container is.
Give your ML agents a sandbox. Let them code. Keep your machine clean.
Get Started
- **Grab the bootstrap:** The `ai-coder-bootstrap.sh` script is available in our samples repository.
- **Deploy with your keys in the vault:**

  ```bash
  un service --name ai-coder --ports 8000 -n semitrusted \
    -e ANTHROPIC_API_KEY=sk-ant-... \
    -e OPENAI_API_KEY=sk-... \
    -e GEMINI_API_KEY=... \
    --bootstrap-file ai-coder-bootstrap.sh
  ```

- **Connect and code:**

  ```bash
  un service --execute ai-coder 'bash'
  ```

- **Visit the status page:** https://ai-coder.on.unsandbox.com