I let Claude Code do whatever it wants without asking permission.
Before you spit out your coffee: I keep it in a jail. A container jail.
I’ve written before about using Claude Code with dangerous permissions inside VS Code devcontainers. That approach works great, but I wanted something more flexible. Something I could use from the command line. Something that didn’t require VS Code at all. Something with all my favorite tools pre-installed.
So I built localdev.
What is localdev?
It’s a feature-rich containerized development environment specifically designed for running Claude Code CLI (and other AI assistants) in “dangerous mode” without risking your host system. Think of it as a padded room where Claude can go absolutely wild, and the worst that happens is you blow away the container and start fresh.
The key insight: if you’re going to give an AI full permissions, make sure there’s nothing important it can touch. Mount only what you need. Keep everything else locked away.
Why Podman, Not Docker?
I use Podman instead of Docker. A few reasons:
- Rootless by default – No daemon running as root on your system
- Better security model – User namespaces keep things properly isolated
- Drop-in Docker compatibility – Same commands, same Containerfiles
- No daemon required – It just… runs
Oh, and it’s completely open source.
On Ubuntu/Debian:
sudo apt-get install podman
Or if you’ve cloned the repo:
make pre
For macOS users, you’ll want to give the Podman machine adequate memory – I recommend at least 16GB if you’ve got it:
podman machine stop
podman machine set --memory 16384
podman machine start
What’s In the Container?
Everything I want in a Linux development environment:
Languages:
- Go 1.25.0 with full toolchain – golangci-lint, staticcheck, Delve debugger
- Node.js via NVM (multiple versions: 14.16.0, 18.18.2, LTS)
- Python 3 with pip and uv
- Java (Eclipse Temurin JDK 17)
Development Tools:
- Git and GitHub CLI (gh)
- Atlassian CLI (acli) for Jira integration
- Podman (yes, containers in containers!)
- Homebrew for installing whatever else I need
Documentation and Media:
- Marp for slide decks
- mermaid-cli for diagrams
- md-to-pdf for markdown conversion
- ffmpeg, ImageMagick, qpdf
AI Assistants:
- Claude Code CLI with automatic /claude directory integration
- GitHub Copilot CLI
And here’s the beautiful part: I created aliases so I don’t even have to remember the dangerous flags:
alias clauded="claude --dangerously-skip-permissions"
alias copilotd="copilot --allow-all-tools"
Just type clauded and you’re off to the races.
The Mount Strategy
The real magic is in how directories get mounted. The localdev script handles this intelligently:
./localdev # Current directory
./localdev /path/to/repo1 /path/to/repo2 # With external mounts
LOCALDEV_MOUNTS="/path1;/path2" ./localdev # Via environment variable
Here’s what happens:
- /<project-name>/ → Your current working directory (read-write)
- /claude/ → Host’s ~/.claude directory (read-write) – this is where your global CLAUDE.md and configs live
- /external/<name>/ → Any external directories (read-only)
That read-only bit is crucial. When I’m porting code from another project, I mount the reference code as read-only. Claude can see it, learn from it, but it can’t accidentally modify the source I’m studying. I’ve been burned before by AI tools “helpfully” making changes to code I didn’t want touched.
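Under the hood this comes down to ordinary Podman volume flags. Here’s a minimal sketch of the kind of invocation the script ends up building – the image name, mount points, and reference path below are illustrative, not lifted from the actual script:
# Illustrative sketch – the real localdev script assembles these flags for you
# Project directory read-write, ~/.claude at /claude, reference code read-only
podman run -it --rm \
  --userns=keep-id \
  -v "$PWD:/myproject:rw" \
  -v "$HOME/.claude:/claude:rw" \
  -v "/path/to/ref-code:/external/ref-code:ro" \
  localdev:latest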
The Global Claude Config
One thing I really like: your host’s ~/.claude directory automatically mounts to /claude inside the container. This means:
- Your global CLAUDE.md instructions persist across projects
- Shared configurations and slash commands are available
- The Claude CLI wrapper automatically adds --add-dir /claude (a sketch of what that might look like is below)
So you can have project-specific instructions in your repo AND global instructions that apply everywhere. Best of both worlds.
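For the curious, here’s a minimal sketch of what such a wrapper might look like – the real wrapper shipped in localdev may differ, but the idea is the same:
# Hypothetical sketch – only add the directory when /claude is actually mounted
claude() {
  if [ -d /claude ]; then
    command claude --add-dir /claude "$@"
  else
    command claude "$@"
  fi
}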
My Actual Workflow
Inside the container, it looks like this:
cd /<project-name>
cat /claude/CLAUDE.md # Check global instructions
cat /external/ref-code/file # Reference that read-only external code
clauded # Start Claude in dangerous mode
go build ./...
npm test
I wrote extensively about using this setup for real work in my gocat project – porting RFCat from Python to Go. The container let me mount the original Python code read-only while giving Claude full access to my Go workspace. The result? A fully functional RF communication library with 100% packet success rates.
Performance Notes
First run of the container is slow – like 30-60 seconds slow. That’s because:
- User namespace setup
- Overlay filesystem initialization
- Device permission checks for USB passthrough
Subsequent runs are much faster. If you’re impatient like me, just grab another cup of coffee on that first spin-up.
Security Model
Let me be clear about what isolation you actually get:
- Filesystem: Only mounted directories are accessible
- Network: Isolated container networking
- User: Non-root developer account inside the container
- User namespaces: --userns=keep-id maintains proper file ownership
- Read-only mounts: External directories are protected from modification (demonstrated below)
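You can verify the read-only protection yourself. Any attempt to write into an external mount fails at the filesystem level (the path here is hypothetical):
# Inside the container – /external mounts reject writes
touch /external/ref-code/scratch.txt
# touch: cannot touch '/external/ref-code/scratch.txt': Read-only file system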
This isn’t a VM – a determined attacker could probably escape. But for protecting yourself from AI oopsies? It’s more than enough. Claude can rm -rf / all day long and the worst that happens is you restart the container – and restore anything it wiped from your mounted project directory with git, which brings me to the next point.
Git: Your Real Safety Net
I need to be blunt about this: git is not optional.
Container isolation is great. Read-only mounts are great. But your actual safety net – the thing that will save your bacon when Claude decides to “refactor” your entire codebase at 2am – is version control.
Commit early. Commit often. Commit constantly.
Here’s my workflow:
- Before starting any AI session: git status to make sure you’re clean, then commit anything outstanding
- Before any significant prompt: Quick commit of what you have so far
- After Claude makes changes: Review, test, commit if good
- If things go sideways: git diff to see what happened, git checkout . to nuke it all
I’m not kidding about the frequency. When I’m working with Claude in dangerous mode, I might commit every few minutes. They don’t have to be beautiful commits with perfect messages. They’re checkpoints. Safety saves. You can squash them later with an interactive rebase to make the history clean.
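A tiny alias makes the checkpoint habit nearly free. This one is just a suggestion, not part of localdev:
# Hypothetical convenience alias – commit everything as a timestamped checkpoint
alias ckpt='git add -A && git commit -m "checkpoint $(date +%H:%M:%S)"'
# Later, collapse the checkpoints into clean history with an interactive rebase
git rebase -i HEAD~10   # mark the checkpoint commits as "squash" or "fixup"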
Why does this matter so much? Two reasons:
Reason 1: Undo button for AI mistakes
Claude sometimes gets… enthusiastic. It might decide that your function really should be split into six microservices, or that your entire error handling approach is wrong and needs rewriting. Having frequent commits means you can easily see exactly what changed and roll back to any previous state.
# See what Claude did
git diff HEAD~1
# Nope, don't like it
git reset --hard HEAD~1
Reason 2: Recovery from deletion
Even inside a container, Claude has write access to your mounted project directory. If it decides to delete files – maybe it thinks they’re “unused” or it’s “cleaning up” – those files are gone from disk. But if you committed them? They’re still in git.
# Claude deleted my config file
git checkout HEAD -- config.yaml
I’ve had Claude delete files it thought were obsolete. I’ve had it overwrite files with completely different content. I’ve had it rename things in ways that broke imports across the codebase. Every single time, git saved me.
The container protects your host system. Git protects your work.
Best Practices
After using this setup for months, here’s what I’ve learned:
- Mount only what you need – Don’t mount your home directory. Don’t mount ~/.ssh. Just don’t.
- Use read-only mounts for reference – Any code you’re studying but not modifying should be :ro
- Commit constantly – Git is your backup plan when Claude gets enthusiastic. Every few minutes is not too often.
- Rebuild often – Containers are cheap. Debugging weird state is expensive. When in doubt, make clean && make
- Keep secrets out – No API keys in environment variables, no credentials files mounted (a couple of quick checks are sketched below)
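Before kicking off a dangerous-mode session, a couple of quick checks inside the container help confirm those rules are holding. These are just illustrative one-liners, not something localdev runs for you:
# Confirm reference mounts really are read-only (look for "ro" in the options)
mount | grep /external
# Make sure nothing credential-shaped leaked into the environment
env | grep -iE 'key|token|secret'
# And confirm the host's ~/.ssh didn't come along for the ride
ls ~/.ssh 2>/dev/null && echo "WARNING: ssh keys are visible!"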
Building and Running
Clone the repo and build:
git clone https://github.com/gherlein/localdev
cd localdev
make # Standard build with platform detection
make no-cache # Build without cache if something's weird
Then just run:
./localdev
Or if you prefer the Makefile:
make run
Conclusion
Look, I get it. Running AI tools with dangerous permissions sounds… well, dangerous. And it is – on your host system. But inside a container with carefully controlled mounts? It’s actually pretty safe.
The workflow is liberating. No more “mother may I” for every file listing. No more clicking approve fifty times to run a test suite. Just pure, uninterrupted AI-assisted coding.
And if something goes wrong? Blow away the container and start fresh. That’s the beauty of disposable environments.
If you’re doing serious AI-assisted development, you owe it to yourself to try a setup like this. The productivity gains from not having to babysit every permission request are substantial.
Just remember: containers only, minimal mounts, version control everything. Follow those rules and you can let Claude run wild without losing sleep.
Check out localdev on GitHub and give it a spin. Let me know what you think.