Containerization has taken the datacenter by storm. Led by Docker, a startup that's on a mission to make development and deployment as simple as it should be, Linux containers are fast changing the way developers work and devops teams deploy.

Containerization is so successful and so powerful an idea that it's only slightly hyperbolic to suggest that the future of servers will not include operating systems as we think of them today.

To be sure, it's still a ways off, but containerization will likely replace traditional operating systems -- whether Linux, Windows, Solaris or FreeBSD -- on servers entirely. Instead, servers will likely consist of simple, single-user installs of hypervisors optimized for the specific hardware. Atop that bare-metal layer will sit the containers full of applications.

Like many things to come out of Linux, containerization is not new; in fact, the tools have been part of the kernel since 2008. But just as it took GitHub to finally push Git into the developer mainstream, the containerization tools in Linux did not really start to catch on until Docker came along.

Docker is not the only containerization tool out there, but it is currently leading the pack in both mind share and actual use. Google, Amazon and even Microsoft are all tripping over themselves to make sure their clouds offer full Docker integration. Google has even open sourced its own Docker management tool.

But what is a "container" and why is it suddenly such a big deal?

The shortest answer is that containers are static application environments, which makes for much more reliable deployments.

Solomon Hykes, Docker's founder and CTO, likes to compare Docker containers to shipping containers (the company's logo is a collection of shipping containers riding on the back of a whale). Like the devops world today, the shipping industry of old lacked standards. To ship something you just stuck it in whatever container you liked and it was loaded on a ship. That meant ships carried thousands upon thousands of different containers of all shapes and sizes.

Then the shipping industry standardized around the colorful but uniformly sized shipping containers you see stacked all over the docks today (this is the origin of Docker's name). Standardized containers mean that the shipping companies no longer need to worry about the actual freight; they can just stack containers on ship after ship without worrying about what will fit where.

Docker allows developers to do the same with code. In the shipping metaphor your applications and code are the goods, the Docker images are the containers and the ship is the server, virtual server or cloud where you're deploying your application. The server can just stack Docker images up without ever worrying about what's inside them.

Another way to think of a container is as a virtual machine without the operating system: a unit that holds an application and all its prerequisites in one self-contained package, hence the name. That container can be moved from one machine to another, from virtual to dedicated hardware, or from a Fedora installation to Ubuntu, and it all just works.

Or at least that's the latest wave of the "write-once-run-anywhere" dream that Docker has been riding to fame for the past two years. The reality, of course, is a little different.

Imagine if you could fire up a new virtual environment on your Linux laptop, write an application in Python 3 and then send it to your co-worker without needing to worry about the fact that she's running Windows and only has Python 2 installed. If you send her your work as a container, then Python 3 and everything else needed to recreate the environment you were working in comes along with your app. All she has to do is download it and run it with Docker.
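
In concrete terms, that hand-off might look something like the following sketch. The Dockerfile here is hypothetical -- app.py stands in for whatever your application happens to be -- but the commands are standard Docker:

    # Dockerfile: package a Python 3 app with everything it needs
    # (app.py is a hypothetical stand-in for your actual code)
    FROM python:3
    WORKDIR /app
    COPY app.py .
    CMD ["python3", "app.py"]

Your co-worker builds and runs it in two commands, with no Python of any version installed on her machine:

    docker build -t myapp .    # build an image from the Dockerfile
    docker run myapp           # run it; Python 3 rides along inside the image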

Then, after your co-worker finishes up the app, you can pull in her changes and send the whole thing up to your company's AWS EC2 server, again without worrying about the OS or environment particulars beyond knowing that Docker is installed.
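
Under a couple of assumptions -- a hypothetical image name and a registry you can both reach -- that deployment step is equally short:

    docker push example/myapp      # publish the finished image to a registry

    # then, on the EC2 instance, where only Docker itself is installed:
    docker pull example/myapp
    docker run -d example/myapp    # run it detached, same as it ran locally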

But there's the rub -- your app is now tied to Docker, which in turn means the future of your app is tied to the future of Docker. 

From a high-level view, what Docker does is nothing new. Linux containers have been part of the kernel since 2008, but Docker has packaged up a very slick system for quickly and easily creating, running and connecting lightweight Linux containers. With Docker you don't need to configure a whole new virtual machine every time you want another instance.
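
To see just how lightweight that is, compare the ceremony of provisioning a fresh virtual machine against the single command it takes to drop into a disposable Ubuntu container:

    docker run -it ubuntu /bin/bash   # an interactive shell in a brand-new container, in seconds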

That does not, however, make Docker a panacea. Not yet, anyway. The containerization of all the things has a few flaws in its current form. The good news is that Docker is no longer the only story in the world of containerization. Competitors like Joyent and Canonical have both open sourced their own takes on the containerization concept.

Canonical's take is particularly interesting since it focuses much of its effort on security. The effort is twofold, built around two tools: LXC (pronounced "lex-cee"), the client, and LXD (pronounced "lex-dee"), the server. Given that Canonical's Ubuntu OS is the basis of many of the Docker containers out there, a system specifically optimized for that setup will no doubt have appeal.
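
For a feel of Canonical's approach, here's a rough sketch of the LXD workflow -- the lxc client talking to the lxd daemon, with the container name entirely hypothetical:

    lxc launch ubuntu:14.04 web1   # create and start an Ubuntu container
    lxc list                       # show running containers
    lxc exec web1 -- /bin/bash     # open a shell inside the container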

Another rapidly growing take on containerization is Rocket, created by the developers behind CoreOS. Rocket launched with a rather inflammatory post from the CoreOS developers claiming that "Docker is fundamentally flawed." The post calls out Docker for being insecure, but the main difference really comes down to... Ready for it? Systemd.

Rocket uses systemd, launching containers so that the init system can supervise them like any other process. Docker, on the other hand, uses a central daemon to spawn a child process that becomes PID 1 of the container -- the role systemd would normally play -- leaving the host's init system out of the loop. In the end Docker looks no more "fundamentally flawed" than Rocket; it just lacks systemd integration at the moment.
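
The practical difference shows up in how a container gets started and supervised. A hedged, side-by-side sketch, with the etcd image borrowed from CoreOS's own examples:

    # rkt has no daemon: the container is a direct child of whatever
    # invokes it, so systemd can supervise it like any other service
    rkt run coreos.com/etcd:v2.0.0

    # docker's CLI instead asks the always-running daemon to start the
    # container as the daemon's child
    docker run -d quay.io/coreos/etcd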

Given the popularity of CoreOS among developers using Docker -- it really is one of the nicest ways to run Docker -- Rocket's tighter, possibly more secure integration with CoreOS just might win some converts from the Docker fold.

That leaves the container space in a situation somewhat akin to what happened when Google Chrome came along and stole much of Firefox's developer mindshare. As the Firefox of 2008 was to web developers, Docker is very much the darling of devops. And yet, just as Chrome made speed and web standards top priorities -- something Firefox's developers clearly wanted to do but couldn't always manage -- Rocket offers some real advantages over Docker. It's more modular, less dependent on a single central daemon and, arguably, more in line with the popular Unix philosophy of small parts loosely joined.

In the end, though, the competition between the two will result in the same thing that happened with Firefox and Chrome -- the whole ecosystem ends up benefiting. In the browsers' case, the web got faster and better for everyone. In the case of containerization, the datacenter becomes more user-friendly and development gets a little bit easier.

At this point perfection would be a lot to ask of a project as new as Docker, which hit 1.0 less than a year ago in June 2014, or Rocket, which is not even a year old. Neither is perfect, but both are already giving developers and devops much simpler ways to deploy, which is part of why the biggest cloud hosts in the world -- Google, Amazon and Microsoft, to name a few -- are lining up to make sure their clouds work with Docker. That in turn means Rocket and other competitors are also poised to take the datacenter by storm.

One of the most interesting tangential effects of containerization is that, from a developer's perspective, it makes all cloud environments effectively equal. There's no meaningful competition in terms of base functionality, and no developer lock-in. Unhappy with your current host? Just deploy your container to another one, switch over the DNS and you're done. That means cloud hosts are no longer competing strictly on the features of their underlying systems, but on the extras they can offer. The full effect of this transition hasn't really been felt yet, but Google and others are already ramping up their "extras".
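
That migration really can be a handful of commands. A sketch, with a hypothetical image name and host, and assuming nothing about the new provider beyond a Docker install:

    docker save example/myapp > myapp.tar   # export the image to a tarball
    scp myapp.tar newhost:                  # copy it to the new provider
    ssh newhost 'docker load < myapp.tar && docker run -d example/myapp'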

Google released an open source Docker management tool, Kubernetes -- the Greek word for a ship's helmsman -- which the company claims will let you turn your cluster of container-filled virtual machines into a virtual Google data center.
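
The commands have shifted over Kubernetes' young life, but the flavor of it looks roughly like this, with the image name hypothetical and a running cluster assumed:

    kubectl run myapp --image=example/myapp --replicas=3   # schedule three containers across the cluster
    kubectl get pods                                       # see where the cluster put them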

What's perhaps most interesting about Docker and its competitors is that in every case, from Canonical to Google, the message is the same: the future of development and deployment, and especially of the so-called cloud hosting market, is containers.

Containers won't solve every problem and won't be right for every deployment, but for the 80 percent use case -- and perhaps a good deal more -- containerization will trump a dedicated virtual machine.