This is part one in a three-part series on working with Windows Server Containers. The guidance here is drawn from a real-world implementation Zero Friction performed, migrating from an unreliable Cloud Services Worker Role to 17 independently deployable, scalable and monitored containers on AKS.
Containerisation on Linux has been around for years. Both the process of working with containers and the tooling around it are very mature.
Over the last few years, Microsoft has made a big push toward embracing open-source and cross-platform approaches across its entire ecosystem, especially in the developer space. Support for Windows Server containers is rapidly building, as the container community embraces and integrates Windows as a containerisable OS, and as Microsoft slims down its kernel and builds out features to make Windows Server containers smaller, faster, and natively supported.
However, it's still early days for Windows containers. If Linux containers are young adults, then Windows Server containers are somewhere between a toddler and a 5-year-old (with all the associated tantrum-throwing you'd expect 😉).
In this series of three articles, we document the challenges encountered and the learnings gathered at Zero Friction in adopting Windows Server containers as the hosting mechanism for one of our clients. It will cover the benefits of containers as a way to host your .NET Framework applications, a pathway to modernising older "legacy" applications, how to choose the appropriate Windows Server tier and version to base your images on, and the key things to keep in mind when building out a continuous integration and delivery pipeline to automate the build and release of your containerised apps.
The focus of this article is why you would choose to containerise a .NET Framework app using Windows Server Containers.
A pathway for modernising .NET Framework apps
Unless you're having big problems with the way your .NET Framework app is hosted right now (as our customer was), you're not likely to gain much immediate benefit just from containerising. In fact, you'll be adding a lot more complexity into your environment, with new tools to learn and new services to run.
The big benefit that migrating .NET Framework apps to containers brings is a pathway forward. Once you've got container infrastructure up and running:
- You have a consistent approach that you can use to host both your newer apps (eg .NET Core services) alongside your older apps
- Deployment can be identical across environments
- You are free to upgrade the framework versions your app is built on (eg to .NET Standard as a path to .NET Core) without having to worry about how it will fare in production: if the image works locally, it will work anywhere
- Deployments become less cloud-specific and much more portable: containers are far easier to lift and shift between providers.
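As a rough illustration of how small the containerisation step itself can be, here is a minimal Dockerfile sketch for an ASP.NET (.NET Framework) app. The image tag and the `./publish` folder are assumptions for the example, not taken from this project:

```dockerfile
# Minimal sketch: containerise a published ASP.NET (.NET Framework) site.
# Assumes the app has already been published to a local ./publish folder.
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
# The base image already has IIS and ASP.NET configured, so all that's
# left is copying the published site into the default IIS site directory.
COPY ./publish/ /inetpub/wwwroot
```

Because the base image carries IIS and the .NET Framework runtime, the app-specific layer is often just a `COPY` of your build output.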
For the particular project these articles are based upon, the roadmap and strategy start with containerising the applications, unlocking the freedom to upgrade these containerised services one by one to modern framework versions, and eventually .NET Core. Over time this will give us a dramatically more modern tech stack, reduce our resource usage, and lower the overall cloud bill (running .NET Core on Linux in a dense container environment is much cheaper than paying for hefty VMs and the associated Windows Server licenses). Along the way we also gain greater visibility and transparency through the built-in metrics tooling AKS provides.
Microsoft's architectural guidance
Microsoft provides a great set of architectural patterns for various scenarios, one of them being modernising legacy .NET Framework apps; their recommended approach is containerisation.
If you're not sure exactly where to start on containerising a .NET Framework app, you can start by reading the rest of this series. A good next step would be to look into Microsoft's architecture guides, specifically the eShop Modernisation Guide, which takes you step-by-step through the process of modernising a legacy .NET Web Application.
The state of Windows Server Containers
Containers on Windows had quite a rocky start to life. Windows has never been known for its lightness or portability: historically it was heavy, slow, full of legacy, and moved at a glacial pace. After years of concerted work on the platform, things are vastly different today. You can read an interesting history of how Windows Server Containers came to life on MSDN. With each release of Docker, Windows Server and surrounding tooling, the support and performance increases significantly.
Their usage hasn't yet taken off in the same way it has in the Linux community, so in some areas this project felt like forging new ground. Here are some of the things you'll need to know if you want to get involved with Windows Server Containers.
Windows Server Containers vs Linux Containers
Linux containers were built upon an ecosystem that was already used to rolling different distributions of the OS as standard. Their open-source nature made it very easy for the community to customise and tailor the OS to support this new approach to thinking about and hosting applications.
The slimmest of Linux base images, alpine, weighs in at a tiny 5.58MB, which makes pulling an image based on it incredibly fast and lightweight.
By contrast, Windows Server containers are gigantic. With a minimum image size of 256MB, anyone who has previously worked with Linux containers will find them incredibly heavyweight at first. Luckily, Docker's approach to image layers and caching means that most of the bandwidth cost is paid on initial set-up: once you've got the base OS image cached locally, the rest is pretty light. Likewise, if you're running newer releases of Windows for both the host OS and the container OS, you get much better memory usage thanks to improvements the Windows team has made to its containerisation approach, allowing Windows containers to share a single kernel in much the same way Linux containers do.
To get an idea of the differences in start-up times and weight, here's a comparison of the base Alpine image time to get to a command prompt, and the base Windows Server images doing the same. The timings were averaged over 10 iterations, with an initial warm-up iteration excluded. This was run on my Surface Pro 6 with Windows 10 Professional as a host - it's a pretty solid dev machine, but definitely not a beefy server.
| Base image | Size (pull) | Size (disk) | Time (average) | Time (max) | Time (min) |
| --- | --- | --- | --- | --- | --- |
Clearly, the Windows team has done some pretty epic work on getting those start-up times down for containers. In fact, I'm surprised that nanoserver is almost equivalent in start-up time to alpine. But the image sizes are where the real hit comes for Windows Server Containers right now: the initial pull or push of any Windows Server container image to prime the cache will take a while, depending on your connection speed and the speed of the registry serving or receiving the image.
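The timing methodology above (10 iterations, warm-up excluded) can be sketched as a small harness. This is a generic illustration, not the exact script used to produce the table, and the commented-out `docker run` invocation is an assumption about how you'd point it at a container image:

```python
import statistics
import subprocess
import time

def time_command(cmd, iterations=10, warmup=1):
    """Run `cmd` repeatedly and return (avg, max, min) wall-clock seconds.

    Warm-up runs are executed but excluded from the stats, mirroring the
    methodology used for the container start-up timings above.
    """
    for _ in range(warmup):
        subprocess.run(cmd, check=True, capture_output=True)
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), max(samples), min(samples)

# Example (assumes Docker is installed and the image is already pulled):
# avg, hi, lo = time_command(
#     ["docker", "run", "--rm",
#      "mcr.microsoft.com/windows/nanoserver:1809", "cmd", "/c", "exit"])
```

Timing the full `docker run` round-trip like this includes container creation and teardown, which is what you actually pay per cold start, rather than just the in-container boot time.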
Summary - why would you use Windows Server Containers?
- For .NET Framework apps, the only containerisation option (without huge changes to the app itself) is Windows Server Containers
- Containerising your .NET Framework apps provides a migration pathway to incremental modernisation
- Containerising your .NET Framework apps makes your deployments a lot more portable and cloud-agnostic
- Check out the eShop Modernisation Guide from Microsoft for a worked example of migrating a legacy .NET Framework app
Check out the next article in this series, Windows Server Containers, Part 2 - Choosing the right Base Image, to start to learn more about the Windows Server Containers ecosystem, the different versions available to you, and under which circumstances you'd choose to use each of them.