Conversation about containers

Richard Lander

Containers have become the most popular way to deploy cloud-based apps. They are also one of the most common topics that web developers ask us about. We’ve been spending more and more effort on improving .NET for containers with each release. Containers are now assumed to be the primary deployment model for new features and scenarios. They are also the way our TechEmpower benchmark tests are run.

We’re using the conversation format again, this time with engineers who work on improving .NET for containers and ensuring we have good end-to-end workflows.

Glenn: They provide a greater level of assurance that your build output will work in your target environment than we have ever had before. In the case of .NET, this can seem negligible because we’ve got a pretty good track record of abstracting the underlying platform. But I have Docker images with apps built using early versions of .NET Core that run perfectly today without me ever touching them. All the requirements, including the old runtime they work with, are encapsulated in that one Docker artifact.

Manish: I’m not sure developers care as much about where their applications run, in contrast with DevOps folks, who have to manage things in real time and want isolation, density, and the ability to seamlessly move workloads between VMs. Containers also provide a fast, elastic way of scaling up and down as required.

Michael: Containers are great for both development and app deployment scenarios. They are a convenient means to capture the environment necessary to run either in. They provide isolation and are scalable. The tooling ecosystem has really flourished and makes working with containers simple and easy.

Rich: DevOps folks talk to us about environment promotion. That idea sums up the value of containers really well. As you promote your app from dev, to test, to staging, and eventually to production (where there may even be multiple rings), you get to count on certain aspects of containers being immutable (the files) and others being trivially easy to change (the configuration). As you promote the app through the environments, your confidence builds due to the image immutability, while the production nature of the app increases as it gets access to real data and secrets.

This model is also nicely aligned with a secure supply chain, since the container image isn’t changed — and shouldn’t be changed — as it goes through the environment promotion process. As image signing becomes more commonplace, this model will become standard in a lot of organizations. It’s very similar to the GitOps approach.

What’s the difference between containers on one hand and orchestrators like Kubernetes and Docker Swarm on the other? Where does docker-compose fit in?

Manish: The granularity for containers is at the OS or VM level, whereas orchestrators manage containers across VMs, machines, and even deployments. docker-compose feels like more of an orchestration tool, since it needs to take the “environment” aspects into account as well. But the interpretation is specific to the DevOps team building a set of applications.

Michael: Containers are the components involved in your system, such as an app and a database. Orchestrators define the configuration for how the containers are run, such as how multiple containers communicate with each other.

Glenn: I think Manish and Michael have it covered. Orchestrators run containers across a set of hardware, and I like to think of compose as a dev-machine-focused orchestrator.

Rich: I run pi-hole on my network with docker-compose.

I love the simplicity of the model:

  • docker-compose commands match the Docker CLI, like docker-compose pull.
  • The UX is super simple, with docker-compose up and docker-compose down.
  • docker-compose comes with Docker, so you don’t need to install another system.

In my opinion, pi-hole doesn’t need a more sophisticated system.

Tell us about the changes that have been made to improve the container experience for .NET developers.

Maoni: We added job support in .NET Framework 4.6.2, meaning the GC started to specifically check whether a process was running inside a job (i.e., how a container is implemented on Windows) so it could react to the memory pressure inside a container. However, if you used Server GC with lots of cores and a small memory limit, the GC may not have been able to react fast enough, so in .NET Core 3.0 we tightened this and made Server GC not throw premature OOMs for this scenario.
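
As a rough illustration, this is the kind of limit the GC now reacts to; the image name is a placeholder and the values are arbitrary:

    # Run a .NET app container with a hard memory limit and a CPU limit.
    # The runtime reads these cgroup/job limits, so Server GC sizes its heaps
    # against 256 MB rather than assuming it can use all of the host's memory.
    docker run --rm --memory=256m --cpus=2 myregistry/myapp:latest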

Manish: There have been many improvements in the .NET container experience over the past few years. Since we are currently working on .NET 6, I wanted to call out the following:

  1. Crossgen2: a new tool to precompile IL that replaces the current crossgen. We have enabled more optimizations as part of that work to improve startup, including the ability to specify instruction sets and a new composite mode that compiles all native code into a single binary, which has shown further improvements in startup time (see the sketch after this list).
  2. We recently improved support for Windows containers based on process isolation to honor the CPU limits set on them.
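
As a rough sketch of opting into crossgen2’s ReadyToRun compilation at publish time, assuming a .NET 6 SDK and a self-contained Linux x64 publish (adapt the runtime identifier to your target):

    # Precompile IL with crossgen2 via ReadyToRun; PublishReadyToRunComposite
    # opts into the composite mode described above (a single native binary).
    dotnet publish -c Release -r linux-x64 --self-contained \
      -p:PublishReadyToRun=true \
      -p:PublishReadyToRunComposite=true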

Glenn: It’s not distinct to .NET developers, but the VS family that almost all .NET developers use now has some great container tooling that you should check out. For .NET-specific stuff, I think the other folks have it covered. Maoni’s work in particular is the sort of thing that just makes .NET work better or more predictably without you even noticing, because parts of the runtime, in this case the GC, are aware of the primitives that Docker uses for its isolation and constraints.

Michael: The size of the .NET Docker images has been reduced since they were initially released. Part of this has come from product packaging optimizations, but other significant gains have come from maximizing the number of layers shared across the .NET images (e.g. runtime, aspnet, sdk).

We have also made changes to the distros and OS versions we offer official images for in response to feedback from the community. For example, Alpine was added in response to customers requesting a more secure container distro.

We recently added a diagnostics image variant (dotnet/monitor) that offers, in containers, the same diagnostics tools that are useful for diagnosing .NET Core issues in other scenarios.
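
A minimal sketch of wiring that up, assuming the default dotnet-monitor port (52323) and that both containers share the directory holding the .NET diagnostic sockets; the app image and names are placeholders, authentication setup is omitted, and the exact options should be checked against the dotnet/monitor documentation:

    # Run an app container and a dotnet/monitor sidecar that share /tmp, which is
    # where the .NET diagnostic IPC sockets live on Linux.
    docker run -d --name myapp -v /tmp:/tmp myregistry/myapp:latest
    docker run -d --name monitor -v /tmp:/tmp -p 52323:52323 mcr.microsoft.com/dotnet/monitor
    # The monitor's HTTP API (e.g. /processes, /dump, /trace) is then exposed on port 52323.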

Rich: We use containers pervasively within our own engineering infrastructure. That gives us a lot of confidence that .NET works well in containers, although that just covers the basics. The next step was making changes to ensure that the runtime honors the environment, which is cgroups on Linux, and Job objects on Windows. That’s what Maoni and Manish are talking about. We made changes in .NET 6 to complete our offering. There were some CPU-related settings that we’d missed.

Our forward-looking plan is to make the container images we publish more opinionated. The official builds of .NET are intended to work on a very wide variety of hardware and operating systems. That’s not changing. We can significantly optimize the container images we publish for modern environments if we’re willing to reduce the scope where those images are intended to run (think hardware made in the last five years). We’re hoping that .NET 6 has the first set of changes in that model. In a world where most server compute is in the cloud, and climate change is threatening our way of life, it makes sense to take advantage of modern hardware and software to the greatest degree possible. That’s what we’re going to do.

What’s OOMKill and is it gone for .NET apps?

Manish: OOMKill is the OS flavor of OutOfMemoryException. There were cases where .NET wasn’t good at detecting low-memory conditions, leading to the container exceeding its memory limits. We have progressively improved the handling to be better aware of memory constraints in various environments.

We keep finding new cases, like this one: Memory load includes the file cache on Docker Linux · Issue #49058 · dotnet/runtime (github.com), where the PAL wasn’t accounting for file caches, leading to excessive GC-ing. Not exactly related to OOMKill, but in a similar realm.

Maoni: OOMKill is a concept on Linux where you can specify how you want the OS to select processes to kill when memory is tight. On Windows, when you are successful at committing memory, you are guaranteed that you can use that memory. On Linux, this is not the case. There are many ways you can configure this on Linux; I often see folks just disable it.
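
For example, Docker exposes a flag for that last option; the image name is a placeholder, and disabling the OOM killer without an explicit memory limit is generally discouraged:

    # Disable the kernel OOM killer for this container; only sensible together
    # with a memory limit, otherwise the container can starve the host of memory.
    docker run --rm --memory=512m --oom-kill-disable myregistry/myapp:latest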

Rich: In the early days of .NET Core, we saw a lot of reports of OOMKill and people were naturally unhappy. The memory limits model we included in .NET Core 3.0 changed that. I no longer see folks asking about this, at least not for the basic scenario. Certainly, there are always reasons why an app could be OOMKilled, but that’s going to be more nuanced.

Maoni wrote some great (now historical) posts on this topic.

One of the most recent changes is enabling custom values for Environment.ProcessorCount. In what scenario would you recommend that?

Context: dotnet/runtime #48094

Manish: ProcessorCount is used to configure a few things within the runtime plumbing, like the number of GC heaps. By limiting or increasing the count, one can influence how the runtime behaves from a scaling perspective.

David: Lots of algorithms in .NET tune themselves according to the available processors. This ranges from concurrent data structures to the number of IO threads used by sockets and things like Kestrel’s IO thread queues.

It’s extremely widespread in the BCL: timers, sockets, concurrent data structures, the array pool; the list goes on.

Maoni: I view this as a way for users to manually influence concurrency. When you set the CPU limit on a container, the runtime will take that limit and return the number of processors calculated from it, so if the limit is 0.1 and you are on a 10-CPU system, when various components (like the GC) ask for the number of processors they’ll get 1. You can change the number of processors the process thinks it has access to via this setting to influence those components.

Rich: There are two main ways to configure CPUs with containers. The first is by specifying --cpus. That’s what Maoni is talking about, and there is a rounding algorithm that uses the next whole number if a decimal is provided. The second model is CPU affinity, which can be specified with --cpuset-cpus. That means you specify exactly which cores you want to use, which by extension defines the number of cores. In both cases, you may want to tell Docker one thing and your scaling algorithm another.

From what we have seen, there is a set of folks who want to scale more aggressively (provide a higher value) than the --cpus value, in particular, would allow. That’s why we enabled setting Environment.ProcessorCount with an environment variable. It is very similar to GOMAXPROCS in Go.
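
A minimal sketch of the three knobs, using a placeholder image name and assuming the environment variable from the linked issue is named DOTNET_PROCESSOR_COUNT:

    # CPU limit: 1.5 is rounded up, so Environment.ProcessorCount reports 2.
    docker run --rm --cpus=1.5 myregistry/myapp:latest

    # CPU affinity: pin the container to cores 0 and 1; ProcessorCount reports 2.
    docker run --rm --cpuset-cpus=0,1 myregistry/myapp:latest

    # Override: make the runtime behave as if 4 processors are available,
    # independent of the --cpus limit given to Docker.
    docker run --rm --cpus=1 -e DOTNET_PROCESSOR_COUNT=4 myregistry/myapp:latest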

If you worked at either a startup or a big bank, what would you look for in a container-friendly platform like .NET, but not necessarily .NET?

Michael: Security is the first thing that comes to mind. I would look for a platform that addresses security vulnerabilities in a timely fashion. If the platform offers Docker images, I would expect the images to be updated as part of the product release, not as an afterthought. I would also expect the Docker images to be updated within hours of any base image (e.g. distro) updates, as well as any time other components included in the images receive critical security updates.

Glenn: Images supported by the team that makes the product. That indicates they understand the importance of the scenarios and that you are going to get the images as fast as they can be delivered. I’d also look for indicators that the team values the scenarios, i.e. features like the ones we discussed in other answers. There are a few stacks that meet those criteria, and I think they lead to the best customer experiences.

Maoni: Aside from things like security as Michael pointed out, and assuming the platform meets my basic functional/perf needs, I would look for a platform that gives me the best tools for doing diagnostics in a container ;).

Manish: A few things that are important for any platform:

  • Ease of development
  • Ease of deployment
  • Monitoring and diagnosability in container environments
  • Performance in container environments
  • A platform that is under continuous development, so any bugs/issues can be resolved in good time

Rich: I would look for the following:

On the last point, we get the strongest requirements from within Microsoft and the US Government. It’s a rare day when a business (of any size) provides us with new security requirements.

The one clear exception is container image signing. We made a conscious decision not to support Docker Content Trust, even though we’ve been asked to by some customers. We are waiting for Notary v2 and plan to support it (sign our images and validate signatures of dependent base images) when it is ready.

Do you think of containers as primarily a deployment story or do you think devs should develop in containers, like VS Code Remote or Docker Tools for Visual Studio?

Brigit: With the advancement and growing popularity of dev containers, .NET dev containers provide current and potential .NET developers a lot of great options and flexibility.

In the Visual Studio Code Remote – Containers extension, we have a Remote-Containers: Try a Development Container Sample... command that allows users to quickly try different sample apps in dev containers. We have one for .NET, which clones https://github.com/microsoft/vscode-remote-try-dotnetcore in a container volume.

We have several .NET definitions in our dev containers definitions repo, which form the basis of definitions users can leverage in Remote – Containers and GitHub Codespaces.

With this in mind, I think developing in containers is a great story for developers from a variety of tech stacks, including .NET. It can be helpful in scenarios where folks want to get started quickly and haven’t installed .NET on their machine yet (a new computer, or students/devs who don’t know how to install it or are new to .NET), or have apps tied to specific versions of .NET or other toolsets (e.g. an app works specifically with .NET Core 3.1, but maybe I have .NET 5.0 or 2.1 on my local machine or in use in other apps, or I’m using different versions of Node.js and .NET across this and other projects).

I also believe .NET is looking at how to get students up and running. We’ve found dev containers can be a great resource in education, such as saving time at the beginning of the semester when students need to install new tools for their classes. We have a blog post about leveraging dev containers in education and how educators have found success with them.

Rich: If you want to learn containers or prototype with one of your apps, then Visual Studio and Visual Studio Code tools are great options. If you want to move a suite of apps to a container hosted service like AKS, then I’d suggest learning more about that service, prototyping in terms of the service and then working your way back towards your actual apps. We’ve seen folks prototype with our samples apps for that purpose.

In terms of daily development, that’s more of a toss-up in my mind. One of the great things about .NET is that it is a true cross-platform runtime and does a good job of hiding operating system differences. In most cases, you can successfully develop on one operating system and deploy to another, at least for web apps and services (which is what we’re focused on for containers).

If I want Linux (and I’m on Windows), I typically reach for WSL2 first. That gives me a persistent file system and all the Unix-style commands I want (like time, xargs, and grep). I’d say I split about halfway between using WSL2 as a terminal session and using the WSL2 Remote feature of VS Code. Both are excellent.

I also use zsh on macOS a fair bit and bash on my Linux machine. I consider those as similar to the experience I just described with WSL2.

My next step toward fidelity with a prod environment is volume-mounting source or binaries into an SDK or runtime container, respectively. I do this frequently. It’s a great way to validate that an app works with a given distro, like Alpine Linux, or with container limits set.
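
A rough sketch of both variants, assuming .NET 6 image tags and a placeholder app name:

    # Mount source into an SDK container and run it there, without building an image.
    docker run --rm -v "$(pwd)":/source -w /source mcr.microsoft.com/dotnet/sdk:6.0 dotnet run

    # Mount published binaries into a runtime container, optionally with limits set.
    dotnet publish -c Release -o ./publish
    docker run --rm -v "$(pwd)/publish":/app -w /app --memory=256m \
      mcr.microsoft.com/dotnet/aspnet:6.0-alpine dotnet myapp.dll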

In terms of development, I rarely actually build app container images. It takes too long and, for a development workflow, provides little value beyond the other options.

If I’m targeting Windows containers, then developing on Windows (uncontainerized) is likely the easiest.

I’m glad to see Microsoft investing so much in both Linux and container options for developers. It provides a lot of choice for developers, and enables people to be successful with their preferred workflows.

Are we at “peak containers” yet or is there still a lot of container growth left?

Brigit: There’s a lot of growth in the dev containers space. We’re excited to see how folks continue to adopt them and to hear their feedback.

We have a set of dev container definitions, as I mentioned above. We collect feedback through that repo and also accept community contributions for additional definitions or updates to the current ones.

We’re also constantly working with the community to improve the dev containers experience (Remote-Containers extension, dev containers features/workflows/properties).

Rich: This is one of those typical adoption-curve questions. I think that containers are completely accepted and ubiquitous at this point. At the same time, we’re still a ways out from peak containers. From talking to my friends on the AKS team, their service continues to grow, which is a pretty good indication that we’re not at peak containers.

Certainly, some folks are looking at using wasm as the next generation application deployment and execution model for cloud apps. It’s possible that wasm on the server may become a reality soon, but it would take some time to slow down the momentum of containers. In terms of investments on our team, we’re betting that containers remain the most popular cloud deployment model through 2025. It’s hard to predict further than that.

I guess we’ll really have reached peak containers when deploying software to a bare VM is something you did “back in the day”.

Closing

One of the engineers on the team likes to say that “containers are like water”. That captures our philosophy pretty well. We think of containers as being just another option for .NET apps, and do our part such that all the low-level details are taken care of. At the same time, there is complexity with using multiple containers in production, and there are industry solutions for that. Our goal is to enable you to use those systems — like from CNCF — with the same ease as any other cloud-oriented development platform.

Thanks again to Michael, Maoni, Manish, Glenn, David, and Brigit for sharing your insights on containers.

4 comments


    • Richard Lander (Microsoft)

      Thanks. Yes, targeting the runtime-deps image is absolutely intended for what you are doing in that tutorial. We created that image type as a building block to enable a correct and secure layer for folks to use w/o needing to worry about the complexity that we do. As you know, it’s the same layer that we use for the higher-level images.

  • Robert Slaney

    The Docker tooling for Visual Studio adds extra parameters to the docker build/run process when you hit F5 (user secrets, remote debugging, etc.). Sometimes we just want to run locally outside of VS, especially when used in a local ecosystem of microservices.

    Are the enhancements added by the tooling available for a command-line build/run, like ‘dotnet run’, or are we still going to be forced to start everything from VS?
