I've been playing with booting CoreOS images via iPXE, though not on iron; instead I've been using this http://kimizhang.wordpress.com/2013/08/26/create-pxe-boot-im... technique with a couple of OpenStack zones to get a bare-PXE boot image that makes OpenStack happy. Pro tip: disable "OpenStack network" DHCP on the subnet where you're rolling out these instances.
This is an especially exciting time for container virtualization, considering Docker has its first 1.0.0 release candidate (Docker 0.11).
I hope the CoreOS team manages to reach what they would consider a "stable, production-ready" version of CoreOS sometime around (after, but close to) when Docker 1.0.0 lands.
Does anyone know of a tutorial on how to easily get, for example, a Django app (nginx + uwsgi + postgres + redis) running on CoreOS with Docker? I'm afraid all these parts of the stack are too many to wrap one's head around without being familiar with each layer.
I have exactly this setup running for my startup right now. I am planning to blog it after my wedding and honeymoon next week (so early June), but if you'd like to pick my brain, ping me, and I'll see what I can do.
I'm also interested in this. I'm used to running my Django/(formerly gunicorn but soon switching to uwsgi or passenger)/nginx/PG stack on EC2. I've been exploring Docker and Vagrant, and have almost finished setting up a useful dev environment. I'm trying to figure out the right way to deploy my containers, and where CoreOS fits into everything.
* Install Docker on your dev machine (remember, there is a Mac version too)
* Add a Dockerfile to your source repo, specifying how to assemble a container image from source.
* Use a combination of "docker build" and "docker run" to test during development. You can use docker tags to build a separate image for each git commit/tag/branch.
* When ready to deploy, use "docker push" to upload your image to a registry (you can run your own, or use the official registry at https://index.docker.io). Note the official registry supports private images.
* From your production machines (presumably CoreOS but you may have a mix of other distros too) run "docker pull" and "docker run" to deploy your app.
* Use Links ("docker run --link") to interconnect containers, for example your frontend to your database, etc.
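The steps above might look something like this end to end (image and container names like "yourname/myapp" are illustrative, not from any real project):

```shell
# Build an image from the Dockerfile in your repo, tagging it
# with the current git commit so each commit gets its own image.
docker build -t yourname/myapp:$(git rev-parse --short HEAD) .

# Test it locally during development.
docker run -d -p 8000:8000 yourname/myapp:$(git rev-parse --short HEAD)

# When ready to deploy, push the image to the registry.
docker push yourname/myapp

# On the production machine: pull the image, then start a database
# container and link the app container to it.
docker pull yourname/myapp
docker run -d --name db postgres
docker run -d -p 80:8000 --link db:db yourname/myapp
```

With --link, the app container gets the database's address injected as environment variables and an /etc/hosts entry under the alias "db".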
Mostly, if you use Docker for development, you don't need Vagrant; specifically, the Dockerfile is basically a replacement for the Vagrantfile. The caveat is that you can use Vagrant for machine deployment to get to a working Docker deployment. We used to recommend this, but people got confused between the two, so now we recommend boot2docker instead.
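For reference, the boot2docker workflow on a Mac is roughly the following (the IP address is just the tool's usual default; `boot2docker up` prints the exact value to use):

```shell
# Create and start the small VM that hosts the Docker daemon.
boot2docker init
boot2docker up

# Point the local docker client at the daemon inside the VM.
export DOCKER_HOST=tcp://192.168.59.103:2375

# The docker CLI now talks to the VM's daemon transparently.
docker version
```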
> Use Links ("docker run --link") to interconnect containers, for example your frontend to your database, etc.
What's the current best-practice to interconnect containers running on different hosts? Is Docker going to add this capability itself, or will this always be something built on top of Docker by e.g. CoreOS?
CoreOS is awesome and "makes you do cloud right" by forcing you to do things like make sure your app can die and restart cleanly, and store your data in a resilient way.
I'm super excited by this release and look forward to this shaping the way people do cloud.
Locksmith looks awesome. Is there a way for services to cause their node to retain its lock for some time after a reboot, for example to allow replication to catch up or similar?
...and also a new locksmithctl binary that has options for setting and unsetting locks among other things. I guess you could create a systemd unit that unlocks any set lock, and run it 10 minutes after boot with a systemd timer, as a start.
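A sketch of that idea as a unit/timer pair (the unit names are made up, and you should check `locksmithctl help` for the exact unlock invocation on your version):

```ini
# /etc/systemd/system/release-reboot-lock.service
[Unit]
Description=Release this machine's locksmith reboot lock

[Service]
Type=oneshot
ExecStart=/usr/bin/locksmithctl unlock

# /etc/systemd/system/release-reboot-lock.timer
[Unit]
Description=Release the reboot lock 10 minutes after boot

[Timer]
OnBootSec=10min

[Install]
WantedBy=timers.target
```

That gives dependent services a fixed 10-minute window after boot to catch up on replication before another node in the cluster can take the lock and reboot.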
What hoops do you have to jump through to download a release of this system? Either I'm blind (very possible), or you have to either join their developer network, pay them, or build it from scratch from GitHub?
robszumski is right. There are lots of various "platforms" listed with great docs for each at https://coreos.com/docs/. I helped with the OpenStack docs, and if you happen to be trying to do something with CoreOS on OpenStack I'm happy to help.
The IRC channel #coreos on Freenode is also a great place to get help.
There are also scripts to generate ISOs from other images. If you didn't get all the way through @robszumski's link there are quite a number of options here:
http://storage.core-os.net/coreos/amd64-usr/beta/
I use CoreOS for my research on VM allocation problems, and it works well: I send the nodes jobs via the Docker HTTP API, they're easy to maintain, and they're fast. Hope to see the final release soon.
I'm not sure that could work, at least at the moment. Right now etcd ties into fleet very tightly.
If you stopped etcd on all your CoreOS nodes, installed consul, and had all of your CoreOS systemd units register with consul (say, with an ExecStartPre step), you'd technically be 'running CoreOS with Consul' - but fleet would be just straight up broken without etcd, meaning there'd be no way to submit units to different nodes in your cluster, or view logs, or really manage the cluster at all.
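The ExecStartPre approach would look roughly like this (the service name and port are hypothetical; the `/v1/agent/service/register` endpoint is Consul's standard agent HTTP API, though the exact deregister semantics vary by Consul version):

```ini
# myapp.service -- a unit that registers itself with a local
# consul agent before starting, instead of relying on etcd.
[Unit]
Description=My app, registered in Consul

[Service]
# Register this service with the local consul agent's HTTP API.
ExecStartPre=/usr/bin/curl -s -X PUT \
    -d '{"Name": "myapp", "Port": 8000}' \
    http://127.0.0.1:8500/v1/agent/service/register
ExecStart=/usr/bin/docker run --rm -p 8000:8000 myapp
# Deregister when the service stops.
ExecStopPost=/usr/bin/curl -s \
    http://127.0.0.1:8500/v1/agent/service/deregister/myapp
```

But as noted, this only gets you service discovery through Consul; fleet itself would still be broken without etcd.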
Looking at the fleet source code (https://github.com/coreos/fleet), it looks like swapping out etcd for consul would take a lot more effort than 'sed -i -e 's/etcd/consul/g' *.go'. You'd essentially have to do a rewrite of fleet from scratch.
I don't think they should be mutually exclusive. etcd is too baked-in to CoreOS. My plan is to try out Consul in containers rather than in CoreOS itself.
https://coreos.com/docs/running-coreos/bare-metal/booting-wi...
http://coreos.com/blog/boot-on-bare-metal-with-pxe/
But I've not heard a lot from people with clusters of them. Perhaps I'll have to snag a small cluster of lab boxes and give it a go myself!