
Deployment of Docker containers is nice, but deploying a VM with Vagrant was also fine. I avoided learning anything about Docker for about two years because I thought it was just a fad.

However, I would add that for my own personal use, it's invaluable for development work -- all the work you do _before_ CI or deployment.

1) When I'm working with a collection of tools that I need but that are a complete mess with lots of state (think: compiler toolchains, LaTeX, things like that), `docker image build`, with its incremental way of running each command piece by piece and saving the state after each RUN, is a lifesaver. You follow the steps of some instructions, and of course, as usual, there's one extra step not documented in the manual, so you add that to your Dockerfile. If you make a mistake, no big deal: just change the command, the bad state is discarded, and you get to try again. You don't have to run the whole thing over again, and thanks to layer caching it's nearly instantaneous.
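That caching workflow can be sketched with a minimal Dockerfile (the base image and package names here are just illustrative):

```dockerfile
FROM debian:bookworm-slim

# Each RUN produces a cached layer. Editing a later line only
# re-executes from that line onward; everything above is reused.
RUN apt-get update && apt-get install -y --no-install-recommends \
    texlive-latex-base latexmk

# The undocumented extra step you discovered goes here. If it fails,
# fix the command and rebuild -- the layers above come from cache.
RUN mkdir -p /opt/texmf-local
```

Editing that last RUN and rebuilding replays only that one command, which is what makes trial-and-error cheap.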

2) When I have to work with a client's codebase as a consultant, you'd be surprised how many projects don't have a reproducible build, with Docker or anything else. So I end up building my own Dockerfile. The number of times I've heard "but you just have to run this setup script once" -- well, those scripts never work (why would they? nobody runs them anymore). Especially when it begins with `npm` or `pip` -- almost guaranteed to fail catastrophically, with some g++ compile error, or a segfault, or just a backtrace that means nothing. For example, I recently had to run an `npm` install command and it failed with `npm ERR! write after end`. I re-ran the container, and again once more, and then it succeeded (https://gist.github.com/chrisdone/ea6e4ba3d8bf2d02f491b4a17f...). npm has a race condition (https://github.com/npm/npm/issues/19989; fixed in the latest version). I wouldn't have been able to share this situation with my client with any confidence unless I had that reproducibility.
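The kind of Dockerfile I end up writing for a client project looks roughly like this (the image tag and build commands are illustrative, not the client's actual setup):

```dockerfile
# Pin the exact runtime the project was last known to build with,
# so every run starts from an identical, disposable environment.
FROM node:8.11.3

WORKDIR /app

# Copy only the manifests first, so the dependency-install layer is
# cached until package.json or the lockfile actually changes.
COPY package.json package-lock.json ./
RUN npm ci

COPY . .
RUN npm run build
```

Because every `docker build` starts from the same pinned image, a flaky failure like the `write after end` error above is demonstrably the tool's fault, not leftover state on my machine.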

3) It's trivial to share my Dockerfile with anyone else and they can run the same build process. I don't have to share an opaque VM that's hundreds of megs and decide where to put it and how long I want to keep it there, etc.
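Concretely, "sharing" amounts to sending a few kilobytes of text, and the recipient rebuilds locally (the tag and repo URL here are hypothetical):

```shell
# Recipient side: one command recreates the whole environment
# from the Dockerfile sitting in the current directory.
docker build -t client-project .

# docker build also accepts a git URL directly, so you don't even
# need to send the file separately:
docker build -t client-project https://github.com/example/client-project.git
```

Compare that with uploading a multi-hundred-megabyte VM image somewhere and hoping it stays available.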

4) It's a small one, but: speed. Spinning up or resuming a VirtualBox machine is just slow. I can run Docker containers like scripts; there isn't the same overhead.
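"Like scripts" is nearly literal: a throwaway container starts in well under a second and cleans up after itself (the image and command here are just examples):

```shell
# Run a one-off command in a fresh container; --rm discards the
# container on exit, -v mounts the current directory read-write.
docker run --rm -v "$PWD":/src -w /src node:8 npm test
```

The equivalent VirtualBox round trip -- boot or resume the VM, SSH in, run the command, suspend -- takes tens of seconds at best.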

5) Popularity is a blessing: the fact that I _can_ share my Dockerfile with someone is a network effect, like using Git instead of e.g. darcs or bzr.

By the way, you can also do nice container management with systemd and git. There's nothing inherently technologically _new_ about Docker; it's the workflow and the network effects. It lets me treat a system's state like a Git repo.
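For the systemd side, a minimal sketch of driving a container from a unit file looks like this (the unit name, image, and paths are hypothetical):

```ini
# /etc/systemd/system/myapp-container.service
[Unit]
Description=Run the myapp container under systemd
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container from a previous run; "-" ignores failure.
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp myorg/myapp:latest
ExecStop=/usr/bin/docker stop myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With that, `systemctl start myapp-container` and journald logging come for free, which is most of what a small deployment needs.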

Nix has similar advantages, but I see Docker as one small layer above Nix.



