if you wanna run a simple service with docker, just spin up a server with a regular distro like debian and install docker there
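roughly this much work, assuming debian's stock docker.io package (nginx is just a stand-in for whatever your service is):
```sh
# install docker from debian's own repo and start it
sudo apt-get update && sudo apt-get install -y docker.io
sudo systemctl enable --now docker
# run an example service
sudo docker run -d --name web -p 80:80 nginx
```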
if you want a reliable cluster of machines where you can quickly provision another aws ec2 instance or even a physical machine to run more containers (with features like service discovery, so you know where e.g. your database or microservice #17 is), there's container linux
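the service discovery bit with etcd looks about like this (old etcdctl v2 syntax from the coreos era, the path and address are made up):
```sh
# register where the database is currently running
etcdctl set /services/postgres '10.0.0.5:5432'
# any other node in the cluster asks etcd instead of hardcoding the address
etcdctl get /services/postgres
```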
@maffsie that's why there's ignition, which applies a config on the very first boot and only that boot, in the initramfs, so you can set up disks (partition/format), write files (for very specific things), and set up users
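e.g. a minimal ignition config (spec 2.2.0 syntax, values are obviously placeholders) that writes /etc/hostname and drops an ssh key onto the core user:
```json
{
  "ignition": { "version": "2.2.0" },
  "storage": {
    "files": [{
      "filesystem": "root",
      "path": "/etc/hostname",
      "mode": 420,
      "contents": { "source": "data:,node01" }
    }]
  },
  "passwd": {
    "users": [{
      "name": "core",
      "sshAuthorizedKeys": ["ssh-ed25519 AAAA...replace-me"]
    }]
  }
}
```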
"ct" has no place being on the install medium because in a deployment, the config is meant to be pulled over the network (url, a service like etcd (high-reliability networked key-value store for config), hypervisor bridge), you'd never create it on the machine
you gotta understand the concept before shitting on it
the whole idea of container linux is that it's meant to be the foundation for a cluster of containers, and to make that more reliable and independent of the state of individual servers, coreos is meant to be immutable, so everything is in a well-known state, which is not a guarantee you get if you just install ubuntu server
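and that immutability is concrete, not just a slogan: on container linux /usr is mounted read-only and OS updates land on a second /usr partition that gets swapped in on reboot, so e.g.:
```sh
# fails on a container linux node, because /usr is a read-only partition
touch /usr/bin/whatever
# updates go to the passive usr partition instead and take effect atomically on the next boot
```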