Why We Containerize Our Code But Not Our Tools: Introducing dc-toolbelt
Here’s a question that’s been bothering me: We religiously containerize our applications with Docker. We preach about reproducibility, isolation, and “it works on my machine” being unacceptable for production code. Yet somehow, we’re perfectly fine installing Node.js, Python, PostgreSQL clients, cloud CLIs, and dozens of other tools directly on our host machines.
Why do we containerize our code but not our toolbelt?
The Contradiction We Live With
Think about your typical development workflow:
You run your application in Docker because you want:
- Reproducibility: Same environment everywhere
- Isolation: No conflicts between project dependencies
- Portability: Works on any machine with Docker
- Version control: Lock specific tool versions per project
But then you install your development tools directly on your host:
- Node.js via nvm (or was it brew? Or the installer?)
- Python via pyenv
- PostgreSQL client from who-knows-where
- AWS CLI manually downloaded
- Google Cloud SDK from a custom installer
- Azure CLI from another package manager
- TypeScript globally via npm
- ESLint, Prettier, and fifty other npm packages
Each installed differently. Each with its own update mechanism. Each potentially conflicting with another project’s requirements.
The Real Cost of Host-Installed Tools
Let’s be honest about what this approach actually costs us:
Time lost to setup: A new developer joins your team. They spend their first day (or two, or three) following a 50-step setup document that’s already outdated. They hit issues the document doesn’t cover. They ask for help, interrupting someone else’s work.
The “works on my machine” epidemic: “Strange, the tests pass for me.” You spend an hour debugging only to discover they have PostgreSQL 15 while you have 14. Or their Node version is slightly different. Or they installed that global package you forgot to document.
Configuration drift: Your laptop has tools configured one way. Your desktop has a slightly different setup. Your colleague has their own preferences. Three months later, nobody remembers why something works in one place but not another.
Upgrade anxiety: You need to upgrade Node.js for a new project, but you’re worried about breaking your other five projects. So you set up nvm, create a complex switching mechanism, and add cognitive overhead every time you switch contexts.
Onboarding friction: Every new tool added to your stack means updating documentation, helping teammates install it, and fielding “how do I install X on Windows?” questions.
The accumulation problem: Over time, your machine becomes a graveyard of installed tools. Old versions, deprecated packages, orphaned dependencies. You’re afraid to clean up because something might still depend on them.
We Already Solved This Problem
Here’s the irony: we already solved this problem for our applications.
When we containerize an application, we get:
- A declarative definition of the entire environment
- Reproducibility across machines and team members
- Isolation between projects
- Version locking that actually works
- Instant setup for new developers
- No “it works on my machine” excuses
So why aren’t we doing the same for our development tools?
Enter Development Containers
Development containers (devcontainers) apply the same containerization philosophy to your development environment itself.
Instead of installing tools on your host machine, you define them in a container image:
- Node.js version? In the container.
- PostgreSQL client? In the container.
- Cloud CLIs? In the container.
- Git configuration? In the container.
- Shell preferences? In the container.
- VS Code extensions? Defined in configuration.
Clone the repository, open it in VS Code, and you’re working in a fully configured environment — minutes for the first build, seconds on every open after that. No installation steps. No conflicts. No “works on my machine.”
The same environment your teammate uses. The same environment that runs in CI. True reproducibility.
Why This Changes Everything
Onboarding goes from hours to minutes: New developer? Clone, open, done. The devcontainer builds automatically. Every tool, every configuration, ready to go.
No more environment debugging: When something doesn’t work, it’s not because of subtle environment differences. Everyone has the identical setup.
Per-project tool versions: Need Node 18 for one project and Node 24 for another? No problem. Each project’s container has its own versions, with zero conflicts.
Your host stays clean: Your laptop runs just Docker. Everything else lives in containers. No more accumulation of installed tools. No more version conflicts.
Try before you commit: Evaluating a new tool or language? Spin up a devcontainer, experiment, delete it when done. Your host machine remains pristine.
True portability: Switch from Mac to Windows to Linux? The devcontainer works identically. Work from your laptop, desktop, or that borrowed machine? Same experience.
But What About Performance?
I hear this objection a lot: “Won’t running in a container be slow?”
The answer is nuanced:
- On Linux: Near-native performance. You probably won’t notice the difference.
- On macOS/Windows: Docker Desktop has improved dramatically. For most development work, the performance is perfectly acceptable. File system operations can be slower, but Docker’s volume mounts have gotten much better.
And here’s the thing: even if there’s a small performance cost, it’s vastly outweighed by the time you save on setup, debugging environment issues, and helping teammates.
Plus, you’re already running your application in Docker. Your development environment is just one more container.
The Missing Piece: Pre-Configured Images
Okay, so devcontainers are great in theory. But there’s a problem: someone has to build and maintain these container images.
Most teams end up:
- Starting from scratch with base images
- Spending hours figuring out how to install and configure each tool
- Maintaining custom Dockerfiles that become complex and fragile
- Reinventing the wheel that other teams have already built
This is where the barrier to adoption really sits. Devcontainers are conceptually simple, but practically tedious.
Introducing dc-toolbelt
This is why I built dc-toolbelt.
dc-toolbelt is a collection of production-ready, pre-configured devcontainer images that give you the “containerize everything” workflow without the setup burden.
The philosophy:
- Batteries included: Essential tools come pre-installed and configured
- Consistent environment: Same setup across projects and machines
- Developer friendly: Oh My Zsh, helpful aliases, and quality-of-life improvements out of the box
- Production ready: Based on official, minimal base images
What’s Inside: Everything Containerized
Remember all those tools you used to install on your host machine? They’re all in the container now.
System essentials:
- Zsh with Oh My Zsh (robbyrussell theme)
- Git, Git LFS, and GitHub CLI
- PostgreSQL client tools
- Modern utilities like ripgrep, jq, curl, wget
Node.js tooling:
- Node.js 24 (latest LTS)
- TypeScript, ESLint, and Prettier pre-installed globally
- tsx for TypeScript execution
- npm-check-updates for dependency management
Developer experience enhancements:
- Common git aliases (gs, gc, gco)
- Shell completion configured
- Passwordless sudo
- UTF-8 locale properly set up
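To make the alias list above concrete, here is a sketch of the kind of shell fragment involved. The exact definitions shipped in the image may differ; these are the conventional expansions for those names:

```shell
# Illustrative ~/.zshrc fragment — the actual definitions in the
# dc-toolbelt image may differ slightly.
alias gs='git status'
alias gc='git commit'
alias gco='git checkout'
```

Because the aliases live in the container's shell config, every teammate gets the same muscle memory without touching their host dotfiles.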
Cloud Platform Variants: Pick Your Stack
Here’s where dc-toolbelt really shines. Instead of a monolithic “kitchen sink” image, you choose the variant that matches your project.
Working on a GCP project? Use node24-gcloud. Moving to Azure? Use node24-azure. Each image contains only what you need, properly configured and ready to go.
This is the beauty of containerized development environments: different projects can use different stacks without any conflicts. No more “I need the AWS CLI for project A but it breaks something in project B.”
node24-gcloud
Everything from the base image plus:
- Google Cloud CLI (gcloud, gsutil, bq)
- GKE Auth Plugin for Kubernetes
- VS Code’s Google Cloud Code extension pre-configured
node24-azure
Everything from the base image plus:
- Azure CLI with all extensions
- Azure Account and Resource Groups VS Code extensions
- Ready-to-use Azure configuration
node24-aws
Everything from the base image plus:
- AWS CLI v2 with all services
- AWS Toolkit VS Code extension pre-configured
- AWS configuration directory mounted
Persistent Configuration Done Right
One key feature I’m proud of is how dc-toolbelt handles persistent configuration. Instead of bind mounts that tie you to specific host filesystem locations, it uses named Docker volumes.
This approach gives you:
- Per-devcontainer isolation (each project can have its own credentials)
- Persistence across container rebuilds
- No host filesystem dependencies
- Better performance, especially on macOS and Windows
The default volume names are:
- `dc-toolbelt-gh-config` for GitHub CLI
- `dc-toolbelt-gcloud-config` for Google Cloud
- `dc-toolbelt-azure-config` for Azure
- `dc-toolbelt-aws-config` for AWS
You can easily customize these names in your devcontainer.json if you need separate configurations per project.
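For example, to give one project its own GitHub CLI credentials, point the mount at a differently named volume (the volume name here is illustrative; the target path matches the base template):

```json
{
  "mounts": [
    "source=my-project-gh-config,target=/home/node/.config/gh,type=volume"
  ]
}
```

Docker creates the named volume on first use, so no extra setup is needed.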
Getting Started in 60 Seconds
Getting started is incredibly simple. Here’s a complete devcontainer.json for Node.js development:
```json
{
  "name": "Node 24 Toolbelt",
  "image": "ghcr.io/totophe/dc-toolbelt:node24",
  "remoteUser": "node",
  "mounts": [
    "source=dc-toolbelt-gh-config,target=/home/node/.config/gh,type=volume"
  ],
  "customizations": {
    "vscode": {
      "settings": {
        "terminal.integrated.defaultProfile.linux": "zsh",
        "editor.formatOnSave": true,
        "editor.defaultFormatter": "esbenp.prettier-vscode"
      },
      "extensions": [
        "github.copilot",
        "github.copilot-chat",
        "esbenp.prettier-vscode",
        "dbaeumer.vscode-eslint"
      ]
    }
  }
}
```
Need Google Cloud tools? Just change the image to node24-gcloud and add the gcloud volume mount. The repository includes ready-to-use templates for all variants.
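As a sketch of what that swap looks like (the repository’s templates are the authoritative versions; the gcloud target path below is the CLI’s standard config location and is assumed here):

```json
{
  "name": "Node 24 Toolbelt (GCP)",
  "image": "ghcr.io/totophe/dc-toolbelt:node24-gcloud",
  "remoteUser": "node",
  "mounts": [
    "source=dc-toolbelt-gh-config,target=/home/node/.config/gh,type=volume",
    "source=dc-toolbelt-gcloud-config,target=/home/node/.config/gcloud,type=volume"
  ]
}
```

Two changed lines — the image tag and one extra mount — and the whole Google Cloud stack is available.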
Who Is This For?
dc-toolbelt is perfect if you:
- Work on multiple Node.js projects and want consistency
- Collaborate with teams and need reproducible environments
- Develop for cloud platforms (GCP, Azure, or AWS)
- Are tired of “works on my machine” problems
- Want to onboard new developers in minutes, not hours
- Value developer experience and productivity
Real-World Impact: The Containerization Dividend
The benefits I described earlier about containerizing applications? They all apply to your development environment now.
Faster onboarding: New team members clone the repo and open it in VS Code. The devcontainer builds, and they’re coding in minutes. No setup documentation. No troubleshooting. No “it works on my machine.”
True reproducibility: When something doesn’t work, it’s not an environment issue. Everyone has the exact same setup. Your local environment matches CI. Debugging becomes about code, not configuration.
Isolation without complexity: Work on five projects with five different Node.js versions? No problem. Each project’s container is isolated. Switch between them instantly. No nvm, no version managers, no conflicts.
Clean host machine: Your laptop runs Docker. That’s it. No accumulated cruft. No version conflicts. No wondering “what happens if I upgrade this?”
Simplified CI/CD: The same container that developers use locally can run in your CI pipeline. The build environment matches the development environment. No more “passes locally, fails in CI.”
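One straightforward way to reuse the image in CI is to run the test step inside the same container you develop in. A sketch, assuming a standard npm project — the mount paths and npm scripts are illustrative, only the image tag comes from this post:

```shell
# Run the project's tests inside the same image used for development.
# Mount the checkout as the working directory and invoke the test script.
docker run --rm \
  -v "$(pwd)":/workspace \
  -w /workspace \
  ghcr.io/totophe/dc-toolbelt:node24 \
  bash -lc "npm ci && npm test"
```

The same `docker run` line works locally and in any CI runner with Docker available, which is exactly what keeps the two environments from drifting apart.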
Cross-platform consistency: macOS, Windows, Linux—the environment is identical. “It works on my machine” stops being an excuse because everyone truly has the same machine (virtually).
Experimentation without risk: Want to try a new tool? Install it in the devcontainer. Hate it? Delete the container. Your host stays pristine.
Try It Today
dc-toolbelt is open source and available on GitHub. The images are automatically built and published to GitHub Container Registry, so you can start using them immediately.
Check out dc-toolbelt on GitHub
```shell
# Pull the base Node.js image
docker pull ghcr.io/totophe/dc-toolbelt:node24
```

Or reference a variant directly in your devcontainer.json:

```json
"image": "ghcr.io/totophe/dc-toolbelt:node24-gcloud"
```
The repository includes templates for all variants in the templates/ directory, making it even easier to get started.
What’s Next?
I have several ideas for expanding dc-toolbelt:
- Additional language stacks (Python, Go, Rust)
- More cloud platform integrations
- Database-specific variants with pre-configured clients
- Framework-specific optimizations
But I’d love to hear what you need. If there’s a development stack you’d like to see, open an issue on GitHub or reach out directly.
Final Thoughts: Complete the Circle
We containerized our applications because it solved real problems: reproducibility, isolation, portability, consistency.
Those same problems exist in our development environments. And the same solution works.
dc-toolbelt isn’t revolutionary—it’s applying proven container principles to an area we’ve inexplicably left uncontainerized. It’s finishing what we started when we adopted Docker.
Your application runs in a container. Your development tools should too.
The images are opinionated because good defaults matter. But they’re also flexible enough to adapt to your needs. And most importantly, they’re ready to use right now.
Stop installing tools on your host machine. Stop fighting version conflicts. Stop spending hours on environment setup.
Containerize your toolbelt. You already know it works.
🍴 Fork it on GitHub
Want to try dc-toolbelt or contribute to the project?
⭐ Star the repository if you find it useful!
🐛 Report issues or suggest new features
🤝 Contribute devcontainer configurations for your favorite stack