Show dotnet: Investigating Alpine Linux CVEs in .NET container images

Richard Lander

Update: We recently published Container Vulnerability Workflow. It is intended to help guide you to the appropriate course of action when you encounter reported vulnerabilities in .NET container images.

CVE management is an increasingly important topic. I wrote about it earlier this year in Staying safe with .NET containers, to provide more insight and guidance. In this post, I walk through a CVE investigation I did earlier this week for a customer and the new guidance that I shared with them. The investigation was focused on Alpine Linux, but the guidance is general.

Quick clarification about Alpine Linux. It’s great. We have relationships with Alpine Linux folks and are fans of their work. I’m using Alpine Linux as the example because that’s what the “household brand” customer was using. Great choice. That said, we also publish Debian- and Ubuntu-based containers. From what we’ve seen, they also have solid practices. I’m not picking favorites.

This focus on CVEs is quickly becoming an (unintentional) series of posts. It’s a reflection of how often we get asked to help folks understand why CVEs are showing up in their scan reports.

I’ll start with the investigation, and then follow up with the guidance that I shared.


The investigation

The customer was using two .NET images:

  • dotnet/aspnet:5.0-alpine
  • dotnet/sdk:5.0-alpine

They were seeing the following CVEs in their scan reports for those images:

  • CVE-2021-30139 (apk-tools)
  • CVE-2021-29468 (git)
  • CVE-2021-22890 (curl)
  • CVE-2021-22876 (curl)

The last three were SDK-only.

Let’s go through them one at a time, and then switch to updated guidance.

You’ll soon see that the primary issue is that the customer’s copies of these images are stale. If you pull the latest .NET Alpine images, you’ll find that three of the CVEs are resolved and one doesn’t apply.


CVE-2021-30139

CVE-2021-30139 is an issue with tar, present in both the aspnet and sdk images. From a quick read, it looks like an “untrusted data” style of issue. So, you don’t have to worry at all if you’re not using tar, and not too much if you are only reading your own .tar or other files.

First, it’s good to realize that tar ships with Alpine, as demonstrated below:

rich@wayfarer ~ % docker run --rm alpine tar
BusyBox v1.32.1 () multi-call binary.

Usage: tar c|x|t [-ZzJjahmvokO] [-f TARFILE] [-C DIR] [-T FILE] [-X FILE] [--exclude PATTERN]... [FILE]...

On the CVE page, you’ll see that we’re looking for 2.12.5 or higher as a fixed version, in the apk-tools package. Let’s check the same alpine image.

rich@wayfarer ~ % docker run --rm alpine apk list apk-tools
apk-tools-2.12.5-r0 aarch64 {apk-tools} (GPL-2.0-only) [installed]

That’s the good version. Let’s see if the .NET runtime image for Alpine has the same fix.

rich@wayfarer ~ % docker run --rm mcr.microsoft.com/dotnet/aspnet:5.0-alpine apk list apk-tools
apk-tools-2.12.5-r0 aarch64 {apk-tools} (GPL-2.0-only) [installed]

Perfect. Same version.
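If you are checking many images or many packages, you can script this comparison instead of eyeballing `apk list` output. Here’s a minimal sketch, assuming the common `X.Y.Z-rN` version shape; the full Alpine version grammar (suffixes like `_rc1`) would need more handling, and the function names are just illustrative:

```python
# Sketch: compare an installed Alpine package version against the fixed
# version from a CVE report. Only handles the common X.Y.Z-rN shape.

def parse_apk_version(version: str):
    """Split '2.12.5-r0' into ((2, 12, 5), 0) for tuple comparison."""
    if "-r" in version:
        base, rev = version.rsplit("-r", 1)
    else:
        base, rev = version, "0"
    return tuple(int(part) for part in base.split(".")), int(rev)

def is_patched(installed: str, fixed: str) -> bool:
    """True if the installed package is at or above the fixed version."""
    return parse_apk_version(installed) >= parse_apk_version(fixed)

# The versions from the transcript above: installed apk-tools vs the
# CVE-2021-30139 fixed version
print(is_patched("2.12.5-r0", "2.12.5"))  # True
print(is_patched("2.12.4-r1", "2.12.5"))  # False
```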

Let’s look at how old these images are:

rich@wayfarer ~ % docker inspect alpine | grep Created
        "Created": "2021-04-14T18:42:38.108586646Z",
rich@wayfarer ~ % docker inspect mcr.microsoft.com/dotnet/aspnet:5.0-alpine | grep Created
        "Created": "2021-05-11T15:05:46.29582297Z",

The fix has been in Alpine since at least mid-April, and in .NET since mid-May. I wonder why it took so long to get .NET images updated. We’re actually looking at .NET 5.0.6. If we dig deeper, maybe we’ll find that .NET 5.0.5 had this fix, too.

rich@wayfarer ~ % docker run --rm mcr.microsoft.com/dotnet/aspnet:5.0.5-alpine apk list apk-tools
apk-tools-2.12.5-r0 aarch64 {apk-tools} (GPL-2.0-only) [installed]
rich@wayfarer ~ % docker inspect mcr.microsoft.com/dotnet/aspnet:5.0.5-alpine | grep Created
        "Created": "2021-04-14T19:24:08.18300291Z",

Nice. The .NET image had this fix within an hour of it being in Alpine, or so it seems. As I’ve said in previous posts, we update .NET images quickly based on base image updates.

This confirms that the customer images were stale. It means that they were pulled from MCR well over a month ago. You are all but guaranteed to have CVEs in your images if you wait that long to update.

We can safely check this CVE off the list.


CVE-2021-29468

CVE-2021-29468 is an issue in git, present in the sdk image. It’s another “untrusted data” sort of issue. That mostly strikes it off the list on its own. What’s interesting (comical, really) is that this flaw in git appears to only be a problem for git when compiled and used on Cygwin. Cygwin is Windows software. The git in Alpine is not compiled on Cygwin, nor is Cygwin present in Alpine.

A linked mail thread says that this issue has not been resolved in the upstream git sources because it is considered a Cygwin issue. A fixed version is not listed; git versions up to 2.31.1-1 are affected.

Let’s check the .NET SDK image.

rich@wayfarer ~ % docker run --rm mcr.microsoft.com/dotnet/sdk:5.0-alpine apk list git
git-2.30.2-r0 x86_64 {git} (GPL-2.0-or-later) [installed]

Unsurprisingly, the .NET SDK image isn’t patched. As suggested, it doesn’t matter. I’m considering this a false hit, as in “N/A”.

We can safely check this CVE off the list.


CVE-2021-22890

CVE-2021-22890 is an issue in curl, present in the sdk image. It’s another “untrusted data” issue, at least if you consider a malicious proxy the same as untrusted data. Does your service use curl to call into an untrusted proxy with TLS 1.3, in particular?

.NET doesn’t use curl. It used to use libcurl (the library, not the tool), I believe back in the .NET Core 1.0 days, but hasn’t in a long time. .NET now has its own HTTP stack based on sockets.

This issue is resolved with versions 7.76.0+. Let’s check the latest .NET 5 SDK image.

rich@wayfarer ~ % docker run --rm mcr.microsoft.com/dotnet/sdk:5.0-alpine apk list libcurl
libcurl-7.76.1-r0 x86_64 {curl} (MIT) [installed]

That’s a good patch.

I’m going to skip the last CVE, CVE-2021-22876. It’s another libcurl issue, and is also resolved with version 7.76.0.

We can safely check both of those CVEs off the list.

Let’s now transition to guidance.

Managing a private registry

Most of the challenges we see are due to companies maintaining a private registry. On the one hand, we recommend using private registries. On the other, they make it more likely than not that you’ll find CVEs in your registry images.

By definition, disconnecting yourself from an upstream registry (like Docker Hub or Microsoft Container Registry) means that you are uniformly preventing both good and bad changes from naturally flowing into your environment. That means that one form of protection is at odds with another, and this tension needs to be explicitly and intentionally managed.

Most companies we talk to don’t have a model in place to keep their registry fresh with upstream content or have an explicit policy intentionally preventing it. Therein lies the problem.

I described in an earlier post that .NET images are frequently rebuilt due to Linux base image updates. Those base image updates frequently include CVE patches. If you don’t pull updated images at the same pace as we publish them, then you will see new CVEs in your scan reports. That’s the reality of the situation.

That’s the problem statement. What can you do?

Scan your registry AND upstream

The number one problem we see is that companies are scanning stale content, as I’ve already suggested. Invariably, we find that most or all of the CVEs in the scan reports they share with us have been resolved in the latest .NET image present in the upstream registry (MCR). I just proved that with the investigation.

There is an easy solution to this. Please scan your private registry AND your upstream one(s). If you find that the CVEs are only present in your registry and not upstream, then one or both of the following is true:

  • Your images are stale and should be re-built with re-pulled upstream images (like .NET).
  • You are installing components above and beyond the set present in upstream images (like .NET) and those components should be updated.

From experience, it’s almost always the first issue that causes .NET users to see unpatched CVEs in their reports. This is, in part, because .NET is a fully featured runtime environment and provides much of the functionality developers need to satisfy their business requirements.
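To make that comparison concrete, here is a small sketch of the triage logic. It assumes your scanner can emit a plain list of CVE IDs per image (the sets below are hypothetical scanner output):

```python
# Sketch: triage CVEs found in a private-registry image by diffing against
# a scan of the matching upstream image. One caveat: components you install
# on top of the upstream image won't be explained by this diff alone.

def classify_cves(private_scan: set, upstream_scan: set) -> dict:
    return {
        # Fixed upstream but present locally: your copy is stale; re-pull.
        "stale_image": private_scan - upstream_scan,
        # Present in both: track the upstream fix instead.
        "upstream_issue": private_scan & upstream_scan,
    }

# Hypothetical scan results
private = {"CVE-2021-30139", "CVE-2021-22890", "CVE-2021-29468"}
upstream = {"CVE-2021-29468"}  # say the git issue is still open upstream

result = classify_cves(private, upstream)
print(sorted(result["stale_image"]))  # ['CVE-2021-22890', 'CVE-2021-30139']
```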

Some companies have a policy against ad hoc pulls from upstream registries. That makes scanning upstream difficult, which leads to the next point of guidance.

Invest in automated infrastructure

It’s critical for you to know when the images you depend on have been updated. That enables you to make decisions. In the absence of that information, you need to rely on a reactive model. And invariably, that reactive model is CVE reports.

The challenge with the reactive model is that you need a set of folks who are trained to correctly and pragmatically interpret scan results, which also requires working knowledge of your software. People with these skills seem to be in short supply. You can try to reach out to Microsoft for help, but helping with Linux CVEs really is outside our responsibility. It’s yours. We’re doing our part by keeping our images up to date and by writing this guidance.

Various services offer notifications when base images are updated. You might consider using one. You don’t necessarily even need to pull the images into your environment; a system like that can simply send you some form of notification (like an email).

Alternatively, you can poll images (once a day is a good cadence) to see if they have changed. A system like that still needs to be automated.
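A sketch of that polling loop’s core: remember the last manifest digest you saw per image and compare. The digest fetcher is passed in so the logic stays testable; in practice it would wrap something like `docker manifest inspect` or a registry API call (that wiring is not shown, and the names and digests here are illustrative):

```python
# Sketch: detect that an image was republished by comparing its current
# manifest digest to the last one recorded. fetch_digest is any callable
# that maps an image reference to its digest string.

def check_for_update(image: str, last_seen: dict, fetch_digest) -> bool:
    """Return True (and record the new digest) when the image has changed."""
    current = fetch_digest(image)
    if last_seen.get(image) == current:
        return False
    last_seen[image] = current  # record it; the caller sends the notification
    return True

# Stand-in for a real registry query (fake digests)
digests = {"mcr.microsoft.com/dotnet/aspnet:5.0-alpine": "sha256:aaa"}
image = "mcr.microsoft.com/dotnet/aspnet:5.0-alpine"
seen = {}
print(check_for_update(image, seen, digests.get))  # True  (first observation)
print(check_for_update(image, seen, digests.get))  # False (unchanged)
digests[image] = "sha256:bbb"                      # upstream pushed an update
print(check_for_update(image, seen, digests.get))  # True  (digest changed)
```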

Establish a staging registry

We recommend establishing a staging registry for upstream content that is scanned and tested before being transitioned to your production registry. The intention would be that your staging registry is kept up to date (no more than a day out of date) and that scan results are compared between the staging and production registries.

Driving the point home: the staging registry should be at least as current as the CVE database your scanning software uses.
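That rule can be expressed as a simple gate. The following is an illustrative sketch, not a prescribed policy; the one-day threshold and the function names are assumptions:

```python
from datetime import datetime, timedelta

# Sketch: a freshness gate for staging-registry scan results. If staged
# content is older than the CVE database the scanner uses, stale-image
# findings and real upstream issues become hard to tell apart.

def scan_is_trustworthy(image_pulled: datetime,
                        cve_db_updated: datetime,
                        now: datetime,
                        max_age: timedelta = timedelta(days=1)) -> bool:
    fresh_enough = now - image_pulled <= max_age
    at_least_as_current = image_pulled >= cve_db_updated
    return fresh_enough and at_least_as_current

now = datetime(2021, 6, 10, 12, 0)
print(scan_is_trustworthy(datetime(2021, 6, 10, 9, 0),
                          datetime(2021, 6, 9, 0, 0), now))  # True
print(scan_is_trustworthy(datetime(2021, 6, 1, 9, 0),
                          datetime(2021, 6, 9, 0, 0), now))  # False
```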

We’ve also heard of developers manually pulling upstream images to their desktop machines and then manually pushing them to a production registry. That’s an insecure practice we recommend against. We have secure automation that does this for us for .NET images.

This might be starting to sound expensive and complicated, and to be making containers seem like a worse option than VMs. I have a few thoughts on that.

Let’s consider how containers compare to VMs, since CVE topics seem to have been less common when everyone was using VMs. Zooming out, the operating system you host in VMs is the same one that’s in containers. So, the container vs VM conversation is largely moot. All the CVEs you saw me investigate earlier would equally apply to an Alpine VM. You might say “well, we don’t use Alpine in VMs.” That doesn’t matter. I believe only the tar CVE was Alpine-specific.

I did one of these investigations for a customer a couple years ago, focused on Ubuntu. Fixes were not yet available for the CVEs that were found. That was unfortunate. They moved their app from Ubuntu in containers to Ubuntu in a VM to get around the container scanning issues. Same Ubuntu version. That’s a broken model and obviously wasn’t an improvement in application security. I am sure that this customer is not alone in that practice.

The more interesting focus area is on how VMs are updated. By default, they are connected to an upstream package feed that you do not control. That’s the dynamic you are trying to avoid by having a private container registry. Containers are oriented around predictable and immutable artifacts; VMs are not. Unless you go to great lengths with feed management for VMs, containers offer a more secure, supply-chain-friendly model than VMs.

Differentiating between sdk and runtime images

We see CVEs showing up in both runtime and SDK images. While you should take all CVEs seriously, it’s the CVEs that are present in production apps that are the most concerning. That means you should treat CVEs in runtime images quite differently than those in SDK images, assuming SDK images run exclusively within your environment and exclusively process source code you trust.

Imagine you are scanning new .NET images to ingest into your environment and find that the new images resolve a medium severity runtime image CVE and add a new medium severity SDK CVE. All things being equal, you should ingest these new images into your environment without any additional scrutiny. If you are holding off ingesting improved runtime images due to issues present in SDK ones, you have a priority inversion.
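That prioritization can even be mechanized when automating ingestion decisions. A hypothetical sketch (the severity scale and the rule are illustrative, and it assumes SDK images never run in production):

```python
# Sketch: decide whether to ingest a new runtime/SDK image pair. A fix in
# the production (runtime) image outweighs a new finding of equal or lower
# severity in the build-only (SDK) image.

SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def should_ingest(runtime_cves_fixed: list, sdk_cves_added: list) -> bool:
    fixed = max((SEVERITY[s] for s in runtime_cves_fixed), default=0)
    added = max((SEVERITY[s] for s in sdk_cves_added), default=0)
    return fixed >= added

# The example from the text: fixes a medium runtime CVE, adds a medium SDK CVE
print(should_ingest(["medium"], ["medium"]))  # True
```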

Consider using a non-root user

This same customer raised another scan finding: .NET images are all configured to run as the root user. That’s true. We’ve discussed this on our team many times and have decided to leave our images configured this way.

All the base images we use are configured the same way, with the root user. That makes them super easy to use. We don’t think we should get in the way of that. If you didn’t use our images, but instead added .NET on top of Alpine (or another distro) yourself, you’d have this exact same issue. We don’t see it as our role to configure images on your behalf beyond what is required for .NET to run.

Instead, we encourage users to consider changing the user in their production images. If you enable a non-root user in your production images, you should consider doing the same in the SDK containers too, particularly if you run tests in containers. If you only use SDK containers for building code, then adopting a non-root user isn’t critical.
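For illustration, here is what a non-root production image might look like on top of the Alpine-based aspnet image. This is a sketch, not official guidance from the image team; the app name, UID, and port are placeholders, and the `adduser` flags shown are the BusyBox (Alpine) ones:

```dockerfile
# Sketch: run a published .NET app as a non-root user (placeholder names).
FROM mcr.microsoft.com/dotnet/aspnet:5.0-alpine

# BusyBox adduser: -D = no password, -u = explicit UID
RUN adduser -D -u 1000 appuser

WORKDIR /app
COPY --chown=appuser:appuser ./publish/ .

# Non-root processes cannot bind ports below 1024; listen on a high port
ENV ASPNETCORE_URLS=http://+:8080
USER appuser
ENTRYPOINT ["dotnet", "myapp.dll"]
```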

We’ve seen multiple organizations that have a policy against even pulling images with the root user enabled. We think this is a bad rule that encourages bad outcomes, not good ones. Microsoft doesn’t have this rule for our own operations. This is very similar to organizations that don’t allow developers to be admins on their own machines.


I hope you find these posts useful. I continue to write them because my team continues to be asked these questions. The industry is still in its infancy in establishing secure flows of publicly available assets. It will take at least another five years until the issues I’m talking about are clearly behind us.

I know that many companies operate in regulated environments and don’t have the luxury of applying “pragmatic guidance from Microsoft.” I sympathize with your situation. At the same time, it’s my job to offer that pragmatic guidance and to shine a light on best-practice patterns. For some companies, it is great guidance that is straightforward to apply; for others, it isn’t. We’re all trying to establish safe and efficient computing environments. We’re committed to helping you with that.

