Windows Server 2016 will still have the familiar Hyper-V virtualization, but it will add a new way of compartmentalizing applications: containers. Even more confusingly, it will have not just one but two styles of container – Windows Server containers, which work the way Docker containers do on Linux, and Hyper-V containers, which wrap that isolation in a thin layer of virtualization. What’s the difference?
Don’t think of containers as replacing virtual machines, or even as being a form of virtualization; they solve different problems, so the way you work with them – and think about them – is different. A virtual machine isolates an entire operating system; a container isolates a user space. Containers isolate applications and all their associated files and services, but not the whole OS. With containers, a single OS hosts multiple applications, each of which thinks it has its own OS – but it doesn’t.
Containers are best thought of as a lightweight way of composing applications from microservices that also takes into account how you develop, manage, and co-ordinate them. Unlike a virtual machine, where you run a workload – the OS and however many applications you need to deliver that workload – a container runs a single application. Often, that’s not an application in the sense of Exchange or SQL Server; it’s a single microservice, with lots of microservices orchestrated together to make up the application. You don’t create a general-purpose container and then install applications the way you do in a VM; you create a container that’s specifically set up for a particular application or microservice. An application might easily use a dozen containers to provide the microservices it’s composed of, so if you’re running a dozen applications you’ll have hundreds of containers. Google deploys over a billion application containers a week (using what became Kubernetes).
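As a sketch of that one-container-per-microservice pattern – the image and container names here are hypothetical, and this assumes a host with the Docker command line available – an application might be started as a set of purpose-built containers:

```shell
# Hypothetical microservice images; each container runs exactly one service
docker run -d --name catalog  myorg/catalog-service
docker run -d --name orders   myorg/orders-service
docker run -d --name frontend myorg/frontend

# The application is the orchestrated set of containers, not any single one
docker ps
```

None of these containers is general purpose; each image is built to run its one service and nothing else.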
A container doesn’t have its own copy of the operating system the way a virtual machine does; it shares the operating system the host is running. That’s why containers can be created so quickly. To the container, the disk looks like a brand new copy of the OS that’s just booted, but in reality that’s namespace isolation at work: each container gets its own virtual namespace, with its own view of files and network ports and its own list of running processes.
That makes containers very efficient for deploying similar applications that can share the same kernel. But it also means you can’t mix and match operating systems with containers the way you can with VMs. You can run a Linux VM on Windows Server using Hyper-V, or a Windows Server VM on a Linux host. But if you create a Linux Docker container, it’s not going to run on Windows Server – unless you spin up a Linux VM running the Docker engine to put it in. Windows Server uses the same Docker engine, but the Docker containers you run on it will hold Windows apps and microservices.
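A minimal sketch of that constraint, assuming the Docker engine on a Windows Server 2016 host: because containers share the host kernel, only Windows-based images will actually run.

```shell
# A Windows base image works, because it shares the host's Windows kernel
docker run -it microsoft/windowsservercore cmd

# A Linux image such as ubuntu will pull, but trying to run it fails with
# an error along the lines of "image operating system linux cannot be
# used on this platform"
docker run -it ubuntu bash
```

The same Docker CLI and engine, but the kernel underneath decides which images are viable.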
A container is not a security boundary in the same way that a virtual machine is. With virtual machines, the guest OSes are completely isolated from each other, with the hypervisor handling interactions between the guest and host OS. Containers can only see what’s in their own namespace; they shouldn’t be able to interfere with other containers because they can’t see the files, network ports, or processes in other namespaces. But many of the files, directories, and services of the OS are actually shared between containers and projected into each namespace. What containers have is userspace isolation – file changes made within one container don’t affect another container.
Hyper-V containers exist somewhere between the two. They use Hyper-V (and, optionally, the Nano Server deployment option that gives you a minimal version of Windows Server 2016) to provide a very thin layer that runs in the hypervisor, with the container on top of that. You get the usual application isolation that a container provides, but the OS is also isolated by Hyper-V.

There’s one benefit of this belt-and-braces approach that most businesses and developers won’t need – you don’t usually need to protect one of your own containers from attacks by another of your containers – but it’s ideal for cloud providers (whether that’s a public cloud or a private cloud infrastructure), because code running in one container can’t affect either the host operating system or other containers running on the same host.

And if you need to guarantee the kernel version a container runs on, for compatibility, while letting another container take advantage of new features in a later version of the Windows kernel, Hyper-V containers let you do that without going back to a virtual machine architecture.

Development using containers and microservices is a new frontier (even companies like Netflix, Apple, and Bloomberg that are building large-scale container systems are still building their own tools https://www.linkedin.com/pulse/scaling-mesos-apple-bloomberg-netflix-chuck-taylor); with two styles of container, Windows Server 2016 promises a more fundamental choice of how to isolate your applications.
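In Docker terms, the choice between the two styles is exposed as an isolation mode at run time. A sketch, assuming the Docker CLI on a Windows Server 2016 host (image names as Microsoft published them at the time):

```shell
# A Windows Server container: shares the host kernel, starts fastest
docker run -it microsoft/nanoserver cmd

# A Hyper-V container: the same image, but run on its own thin,
# Hyper-V-isolated copy of the kernel, selected with the isolation flag
docker run -it --isolation=hyperv microsoft/nanoserver cmd
```

The image doesn’t change between the two commands; only the isolation boundary around it does, which is what makes the choice feel more fundamental than picking a VM size.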
Mary Branscombe is a freelance technology journalist for a wide range of sites. She has been a technology writer for more than two decades, covering everything from early versions of Windows and Office to the first smartphones, the arrival of the Web, and most things in between, from consumer and small business technology, to enterprise architecture and cloud services. She also dabbles in mystery fiction about the world of technology and startups. Visit www.marybranscombe.com or follow @marypcbuk on Twitter.