When Microsoft and Docker first announced that the next version of Windows Server would support the increasingly popular containerisation technology, there wasn't a lot of technical detail about what it would take to make that work, and questions were raised about how native Docker could really be.
"The Docker engine for Windows Server will have effective feature parity with the Linux side," Docker vice president David Messina told us. "There will be no difference in using a container, other than the OS inside the container. The APIs and everything else application developers need to rely on will be the same."
But on Linux, the Docker engine (which runs the containers and is separate from the Docker client you use to manage them) uses kernel features like namespaces and cgroups. Will Windows Server have that, along with a registry, service hosting and Access Control Lists for each container?
"We'll do all that," Azure CTO Mark Russinovich confirmed to TechRadar Pro at TechEd Europe. "There wouldn't be much point if we didn't." The more interesting question is how this fits in with the Windows Server application model, and how it can take advantage of some types of virtualisation Microsoft has already put into its operating system for backwards compatibility.
Different approach
Docker containers take a very different approach from putting an entire OS and one or more applications in a virtual machine, and running that as a single system that might or might not communicate with other virtual machines across a virtual network.
Docker is about building a workload up from microservices, with one service per container. "Those containers can be distributed," Messina explained, "there can be multiple copies of each of those services distributed across the environment." Docker is about splitting things up into containers and then plugging them together like Lego blocks to make the system you need.
But how does that fit with Windows Server? "One of the key questions is how far we can take app compatibility with this," Russinovich pointed out. "When you look at the Windows app ecosystem it's very complex, and the applications are very complex in terms of their dependencies and the different services the server makes available to them. We're figuring out which services can be virtualised and which need to be virtualised so we can present that per container view of services. The easiest apps to handle are fully isolated, so they're not taking advantage of Windows Service Control Manager services."
Each to their own
Each container is going to need its own Registry as well, so applications can write into it, but that's something that has been in Windows for several years, as part of the move to stop users having to log in as administrators to install desktop applications.
"What we're doing there is more sophisticated but we're leveraging the file system virtualisation and registry virtualisation we've done, as well as network virtualisation." Getting the way services are virtualised right is the key to bringing Docker to Windows, Russinovich explained, and namespaces are part of making that work.
"There are some things that are unique when it comes to these containers. The virtualisation you typically see on Windows is all one level, but the Docker model is stacked virtualisation." That means that one Docker image might be just a reference to another image, plus some extra code.
"You start with a base image that's a virtual file system, then you layer on top another image with its own virtual file system and that composes on top [of the first image]. You can compose a number of these different layers together – that's the value of the Docker packaging format.
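That layering can be pictured as a stack of file-system views in which each upper layer shadows the one beneath it. The sketch below is deliberately simplified (real Docker engines use union file systems such as AUFS, and the `compose` helper and sample paths are invented for illustration):

```python
# Illustrative sketch only: a Docker-style image is a stack of layers;
# when the container reads a path, upper layers shadow lower ones.
def compose(layers):
    """Merge layers in order; later (upper) layers win on conflicts."""
    view = {}
    for layer in layers:
        view.update(layer)
    return view

# A base image layer, and an application layer built on top of it.
base = {"/bin/sh": "shell binary", "/etc/hosts": "127.0.0.1 localhost"}
app = {"/app/run.py": "my service", "/etc/hosts": "patched hosts file"}

fs = compose([base, app])
print(fs["/app/run.py"])   # comes from the app layer: my service
print(fs["/etc/hosts"])    # upper layer shadows the base copy
```

The key property is that the base layer is never modified; the application layer records only its differences, which is what makes the images cheap to stack and share.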
"You pull down an image from the Docker hub and activate that container, then you can inject code, using say a package manager, and create another image from that. And that image I'm creating doesn't copy the base image – it just references it.
"And then I can take that layer I just created and post it into the Docker Hub… When somebody else pulls down that image, it has a reference to the base image – the Docker engine says 'this image from Mark that you're pulling down references Ubuntu's image and that's not already present on the host', so it goes and gets it. So you need that layered namespace virtualisation for those references to work.
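The pull logic Russinovich describes can be sketched in a few lines: an image stores only a reference to its parent, so the engine walks that chain and fetches any ancestor the host doesn't already have. Everything here (the `REGISTRY` dict, image names, the `pull` function) is a hypothetical simplification, not Docker's actual implementation:

```python
# Hypothetical registry: each image records a reference to its parent
# rather than copying it. "mark/app:1" builds on the Ubuntu base image.
REGISTRY = {
    "ubuntu:14.04": {"parent": None, "diff": ["/bin", "/etc"]},
    "mark/app:1": {"parent": "ubuntu:14.04", "diff": ["/app"]},
}

def pull(name, local):
    """Fetch `name` plus every ancestor not already present on the host."""
    while name is not None and name not in local:
        image = REGISTRY[name]
        local[name] = image          # "download" this layer
        name = image["parent"]       # then chase the parent reference
    return local

host = {}                            # a host with no images yet
pull("mark/app:1", host)
print(sorted(host))                  # → ['mark/app:1', 'ubuntu:14.04']
```

Pulling the same image onto a host that already holds the Ubuntu base would download only the one new layer, which is why distributing stacked images is so much cheaper than shipping whole virtual machine disks.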
"The other aspect is resource controls. We're strengthening the resource isolation to provide the level and the types of controls cgroups do on Linux."
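On Linux, cgroups cap the CPU, memory and I/O of whole groups of processes via the kernel's cgroup hierarchy. The rough per-process analogue available from Python's standard library is `setrlimit`, shown here applied to a child process before it runs; this is only an illustration of the idea, not how Docker or Windows containers actually enforce limits:

```python
# Illustrative only: cgroups constrain groups of processes; rlimits are
# the simpler per-process cousin, set here in the child before it execs.
import resource
import subprocess
import sys

def limit_cpu():
    # Runs in the child just before exec: cap CPU time at 2 seconds.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))

proc = subprocess.Popen(
    [sys.executable, "-c", "print('constrained child')"],
    preexec_fn=limit_cpu,
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate()
print(out.decode().strip())
```

A container runtime does the equivalent for every process in the container at once, which is the isolation Russinovich says Windows is being strengthened to match.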
Drawbridge lessons
And yes, Microsoft is using what it learned from building its Drawbridge research operating system to build in Docker support, just as it's taking advantage of having already done file and registry virtualisation in the Windows client. But what you'll get isn't the same as the Drawbridge library OS containers. "Drawbridge was one layer again; it didn't have this stacking concept," explained Russinovich.
"In some senses we have the pieces we've developed over time in different places, and we're putting them together and packing them up to create this Windows container concept that can work well with the Docker APIs."
Russinovich also confirmed that Azure and Windows Server will have the same Docker support. The Windows Server team are building the Docker support, as they build everything in Windows Server (which Azure runs on). "We're working closely with the Windows team, giving them the requirements – they're implementing them in the next version of Windows Server and Azure is their primary customer. It's a combined effort."
Speedy deployment
Azure is the natural home for Docker containers because the cloud gives you much faster deployment. The real advantage of Docker is "incredible portability and much faster development cycles," claims Messina. Instead of taking months to get changes to a system approved, being able to show that the change doesn't affect anything except the container speeds up getting it approved. And you can start building new code far more quickly because you can use stacked reference images.
At the BBC, Docker has taken the process of starting a new piece of code "from a day of wasted time to minutes through automation," according to Messina. "You can have apps iterating dozens or hundreds of times a day, because of the portability and the focus on fast differential change."