For those leaning toward containerized instances, CoreOS launches them in a controlled procedure, then monitors them for health. Managing the life cycle of a CoreOS instance isn't tough; RESTful commands do much of the heavy lifting.
Inside CoreOS is a Linux kernel, LXC capabilities, and etcd, the service discovery/control daemon, along with Docker, the application containerization system, and systemd--the start/stop process controller that has replaced various init daemons in many distros.
Fleet adds multiple-instance management--a key benefit for those regularly launching pools, even oceans, of app/OS instances.
Like Ubuntu and Red Hat, it uses the systemd daemon as an interface control mechanism, and it's up to date with the same kernel used by Ubuntu 14.04 and Red Hat EL7. Many of your updated systemd-based scripts will work without changes.
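As an illustration of that compatibility, a minimal systemd unit of the kind CoreOS (and fleet) consumes might look like the sketch below; the `myapp` service name and the `nginx` image are hypothetical choices, not part of our test setup:

```ini
# myapp.service -- hypothetical unit; the service name and image are illustrative
[Unit]
Description=Example web app in a Docker container
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container, then run the app in the foreground
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp -p 80:80 nginx
ExecStop=/usr/bin/docker stop myapp
```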
The fleetd daemon is controlled by the user-space command fleetctl and instantiates processes; the etcd daemon provides service discovery (acting like a communications bus) and is monitored with etcdctl--all at a low level, CLI-style.
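For flavor, a typical low-level session might look like this sketch; `myapp.service` is a hypothetical unit file, and the commands assume a running CoreOS cluster:

```shell
# Submit and start a unit somewhere in the cluster (hypothetical unit name)
fleetctl start myapp.service

# See where fleet scheduled the unit and whether it is running
fleetctl list-units

# Publish and read back a key through etcd's discovery store
etcdctl set /services/myapp/endpoint "10.0.0.12:80"
etcdctl get /services/myapp/endpoint
```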
Etcd accepts REST commands using simple verbs. It exposes a RESTful API set; it's not Puppet, Chef, or another service-bus controller, but a lean, tight communications methodology. It works, and it's understandable by Unix/Linux coders and admins.
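As a sketch of that lean methodology, the same key operations can be driven with plain HTTP verbs against etcd's keys API--here the v2 endpoint on etcd's classic client port 4001, both assumptions about your particular setup:

```shell
# PUT a value into etcd's key space over plain HTTP (key path is illustrative)
curl -L -X PUT http://127.0.0.1:4001/v2/keys/services/myapp/endpoint \
     -d value="10.0.0.12:80"

# GET it back -- the response is a small JSON document
curl -L http://127.0.0.1:4001/v2/keys/services/myapp/endpoint
```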
A downside is that container and instance sprawl become amazingly easy. You can fire up instances, huge numbers of them, at will. There aren't any clever system-wide monitoring mechanisms to warn you that your accounting department will simply explode when it sees your sprawl bill from AWS or GCE. Teardown isn't enforced--but it's not tough to do.
We ran a test to determine the memory differences between Ubuntu 14.04 and CoreOS, configuring each OS as a 1GB-memory machine on the same platform. Both reported the same kernel (Linux 3.12) and were used with default settings.
We found roughly 28% to 44% more memory available for apps with CoreOS -- before "swap" started churning the CPU/memory balances within the state machine.
This means an uptick in execution speed for apps until they need I/O or other services, less memory churn, and perhaps more cache hits. Actual state-machine performance improvements depend on how the app uses the host, but we feel the efficiencies of memory use and the overall reduction in bloat (and potential security attack surface) are worth the drill.
These results were typical across AWS, GCE, and our own hosted platform, which ran on a 60-core HP DL-580 Gen8. The HP server could probably handle several hundred instances if we expanded its memory to its 6TB max--not counting Docker instances.