Embracing nullable reference types

Mads Torgersen

Probably the most impactful feature of C# 8.0 is Nullable Reference Types (NRTs). The feature lets you make the flow of nulls explicit in your code, and warns you when you don’t act according to that intent.

The NRT feature holds you to a higher standard on how you deal with nulls, and as such it issues new warnings on existing code. So that those warnings (however useful) don’t break you, the feature must be explicitly enabled in your code before it starts complaining. Once you do that on existing code, you have work to do to make that code null-safe and satisfy the compiler that you did.

How should you think about when to do this work? That’s the main subject of this post, and we propose below that there’s a “nullable rollout phase” until .NET 5 ships (November 2020), wherein popular libraries should strive to embrace NRTs.

But first a quick primer.

Remind me – what is this feature again?

Up until now, in C# we allow references to be null, but we also allow them to be dereferenced without checks. This leads to what is by far the most common exception – the NullReferenceException – when nulls are accidentally dereferenced. An undesired null coming from one place in the code may lead to an exception being thrown later, from somewhere else that dereferences it. This makes null bugs hard to discover and annoying to fix. Can you spot the bug?

static void M(string s)
{
    Console.WriteLine(s.Length);
}

static void Main(string[] args)
{
    string s = (args.Length > 0) ? args[0] : null;
    M(s);
}

In C# 8.0 we want to help get rid of this problem by being stricter about nulls. This means we’re going to start complaining when values of ordinary reference types (string, object, IDisposable etc) are null. However, new warnings on existing code aren’t something we can just do, no matter how good it is for you! So NRT is an optional feature – you have to turn it on to get new warnings. You can do that either at the project level, or directly in the source code with a new directive:

#nullable enable

If you put this on the example above (e.g. at the top of the file) you’ll get a warning on this line:

    string s = (args.Length > 0) ? args[0] : null; // WARNING!

saying you shouldn’t assign the right-hand-side value to the string variable s because it might be null! Ordinary reference types have become non-nullable! You can fix the warning by giving a non-null value:

    string s = (args.Length > 0) ? args[0] : "";

If you want s to be able to be null, however, that’s fine too, but you have to say so, by using a nullable reference type – i.e. tagging a ? on the end of string:

    string? s = (args.Length > 0) ? args[0] : null;

Now the warning on that line goes away, but of course it shows up on the next line where you’re now passing something that you said may be null (a string?) to something that doesn’t want a null (a string):

    M(s); // WARNING!

Now again you can choose whether to change the signature of M (if you own it) to accept nulls or whether to make sure you don’t pass it a null to begin with.

C# is pretty smart about this. Let’s only call M if s is not null:

    if (s != null) M(s);

Now the warning disappears. This is because C# tracks the null state of variables across execution flow. In this case, even though s is declared to be a string?, C# knows that it won’t be null inside the true-branch of the if, because we just tested that.
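The flow analysis handles other control-flow shapes the same way. For instance, an early return on null also updates the null state for the rest of the method (a small sketch; the Describe method is invented for illustration):

```csharp
#nullable enable
using System;

class NullFlowDemo
{
    public static string Describe(string? s)
    {
        if (s == null) return "(no value)";
        // Past the early return, the compiler knows s is not null,
        // so dereferencing it produces no warning:
        return s.ToUpperInvariant();
    }

    public static void Main()
    {
        Console.WriteLine(Describe(null));   // prints "(no value)"
        Console.WriteLine(Describe("abc"));  // prints "ABC"
    }
}
```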

In summary the nullable feature splits reference types into non-nullable reference types (such as string) and nullable reference types (such as string?), and enforces their null behavior with warnings.

This is enough of a primer for the purposes of this post. If you want to go deeper, please visit the docs on Nullable Reference Types, or check some of the earlier posts on the topic (Take C# 8.0 for a spin, Introducing Nullable Reference Types in C#).

There are many more nuances to how you can tune your nullable annotations, and we use a good many of them in our “nullification” of the .NET Core Libraries. The post Try out Nullable Reference Types explores those in great detail.

How and when to become “null-aware”?

Now to the meat of this post. When should you adopt nullable reference types? How to think about that? Here are some observations about the interaction between libraries and clients. Afterwards we propose a shared timeline for the whole ecosystem – the “nullable rollout phase” – to guide the adoption based on what you are building.

What happens when you enable nullable reference types in your code?

You will have to go over your signatures to decide in each place where you have a reference type whether to leave it non-nullable (e.g. string) or make it nullable (e.g. string?). Does your method handle null arguments gracefully (or even meaningfully), or does it immediately check and throw? If it throws on null you want to keep it non-nullable to signal that to your callers. Does your method sometimes return null? If so you want to make the return type nullable to “warn” your callers about it.
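In code, those two decisions might look like this (a sketch; the Catalog type and its members are invented for illustration):

```csharp
#nullable enable
using System;
using System.Collections.Generic;

class Catalog
{
    private readonly List<string> titles = new List<string>();

    // Throws on null: keep the parameter non-nullable to signal that to callers.
    public void Add(string title)
    {
        if (title == null) throw new ArgumentNullException(nameof(title));
        titles.Add(title);
    }

    // "Not found" is a legitimate result: make the return type nullable.
    public string? Find(string prefix)
    {
        foreach (var t in titles)
        {
            if (t.StartsWith(prefix)) return t;
        }
        return null;
    }
}
```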

You’ll also start getting warnings when you use those members wrong. If you dereference the result of a method that returns string? and you don’t check it for null first, then you’ll have to fix that.
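On the consuming side, a null check – or a fallback – is what clears such a warning. A sketch (FindMiddleName and MiddleNameLength are invented for illustration):

```csharp
#nullable enable

static class Names
{
    // May return null, and says so in its signature.
    public static string? FindMiddleName(string fullName)
    {
        var parts = fullName.Split(' ');
        return parts.Length == 3 ? parts[1] : null;
    }

    public static int MiddleNameLength(string fullName)
    {
        string? middle = FindMiddleName(fullName);
        // middle.Length on its own would warn: middle may be null.
        // ?. and ?? handle the null and supply a fallback:
        return middle?.Length ?? 0;
    }
}
```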

What happens when you call libraries that have the feature enabled?

If you yourself have the feature enabled and a library you depend on has already been compiled with the feature on, then it too will have nullable and non-nullable types in its signatures, and you will get warnings if you use those in the wrong way.

This is one of the core values of NRTs: that libraries can accurately describe the null behavior of their APIs, in a way that is checkable in client code at the call site. This raises expressiveness on API boundaries so that everyone can get a handle on the safe propagation and dereferencing of nulls. Nobody likes null reference exceptions or argument-null exceptions! This helps you write the code right the first time, and avoid the sources of those exceptions before you even compile and run the code.

What happens when you call libraries that have not enabled the feature?

Nothing! If a library was not compiled with the feature on, your compiler cannot assume one way or the other about whether types in the signatures were supposed to be nullable or not. So it doesn’t give you any warnings when you use the library. In nullable parlance, the library is “null-oblivious”. So even though you have opted in to getting the null checking, it only goes as far as the boundary to a null-oblivious library.

When that library later comes out in a new version that does enable the feature, and you upgrade to that version, you may get new warnings! All of a sudden, your compiler knows what is “right” and “wrong” in the consumption of those APIs, and will start telling you about the “wrong”!

This is good of course. But if you adopt NRTs before the libraries you depend on, it does mean that you’ll get some churn as they “come online” with their null annotations.

The nullable rollout phase

Here comes the big ask of you. In order to minimize the impact and churn, I want to recommend that we all think about the next year’s time until .NET 5 (November 2020) as the “nullable rollout phase”, where certain behaviors are encouraged. After that, we should be in a “new normal” where NRTs are everywhere, and everyone can use this feature to track and be explicit about nullability.

What should library authors do?

We strongly encourage authors of libraries (and similar infrastructure, such as code generators) to adopt NRTs during the nullable rollout phase. Pick a time that’s natural according to your shipping schedule, and that lets you get the work done, but do it within the next year. If your clients pester you to do it quicker, you can tell them “No! Go away! It’s still the nullable rollout phase!”

If you do go beyond the nullable rollout phase, however, your clients start having a point that you are holding back their adoption, and causing them to risk churn further down the line.

As a library writer you always face a dilemma between reach of your library and the feature set you can depend on in the runtime. In some cases you may feel compelled to split your library in two so that one version can target e.g. the classic .NET Framework, while a “modern” version makes use of e.g. new types and features in .NET Core 3.1.

However, with Nullable Reference Types specifically, you should be able to work around this. If you multitarget your library (e.g. in Visual Studio) to .NET Standard 2.0 and .NET Core 3.1, you will get the reach of .NET Standard 2.0 while benefitting from the nullable annotations of the .NET Core 3.1 libraries.

You also have to set the language version to C# 8.0, of course, and that is not a supported scenario when one of the target versions is below .NET Core 3.0. However, you can still do it manually in your project settings, and unlike many C# 8.0 features, the NRT feature specifically happens to not depend on specific elements of .NET Core 3.1. But if you try to use other language features of C# 8.0 while targeting .NET Standard 2.0, all bets are off!
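Put together, the relevant settings in an SDK-style project file might look like this (a sketch; adjust the target frameworks to your own situation):

```xml
<PropertyGroup>
  <TargetFrameworks>netstandard2.0;netcoreapp3.1</TargetFrameworks>
  <!-- Opt in to C# 8.0 and nullable reference types manually -->
  <LangVersion>8.0</LangVersion>
  <Nullable>enable</Nullable>
</PropertyGroup>
```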

What should library users do?

You should be aware that there’s a nullable rollout phase where things will be in flux. If you don’t mind the flux, by all means turn the feature on right away! It may be easier to fix bugs gradually, as libraries come online, rather than in bulk.

If you do want to save up the work for one fell swoop, however, you should wait for the nullable rollout phase to be over, or at least for all the libraries you depend on to have enabled the feature.

It’s not fair to nag your library providers about nullability annotations until the nullable rollout phase is over. Engaging them to help get it done, through OSS or as early adopters or whatever, is of course highly encouraged, as always.

What will Microsoft do?

We aim to be done with null-annotating our core libraries when .NET 5 comes around – and we are currently on track to do so. (Tracking issue: Annotate remainder of .NET Core assemblies for nullable reference types).

We will also keep a keen eye on the usage and feedback during this time, and we will feel free to make adjustments anywhere in the stack, whether library, compilers or tooling, in order to improve the experience based on what we hear. Adjustments, not sweeping changes. For instance, issues filed by users of CoreFx on GetMethodInfo and ResolveEventArgs were already addressed by fixes in the CoreFx repo (GetMethodInfo and ResolveEventArgs).

When .NET 5 rolls around, if we feel the nullable rollout phase has been a success, I could see us turning the feature on by default for new projects in Visual Studio. If the ecosystem is ready for it, there is no reason why any new code should ignore the improved safety and reliability you get from nullability annotations!

At that point, the mechanisms for opt-in and opt-out become effectively obsolete – merely a way to deal with legacy code.

Call to action

Make a plan! How are you going to act on nullable reference types? Try it out! Turn it on in your code and see what happens. Scary many warnings? That may happen until you get your signatures annotated right. After that, the remaining warnings are about the quality of your consuming code, and those are the reward: an opportunity to fix the places where your code is probably not null safe!

And as always: Have fun exploring!

Happy hacking,

Mads Torgersen, C# lead designer



  • Stilgar Naib 1

    Do you intend to increase the expressiveness of the language further regarding null constructs? For example, I was looking at this question yesterday – https://stackoverflow.com/questions/59018601/can-i-tell-c-sharp-nullable-references-that-a-method-is-effectively-a-null-check . It seems like this is a valid construct not covered by the current tools in the language. Maybe more attributes could be added, or something more drastic like TypeScript’s assertions.

    • Mads Torgersen (Microsoft employee) 0

      We will consider improving expressiveness if we see a significant scenario that’s not covered. Most of our experience comes from annotating the Core Libraries, and the current set of attributes does a pretty good job with that. As we see more situations in the wild we’ll definitely ponder further improvements.

      • Wolfgang Rues 0

        I am using custom assertion methods for “not null” pre-condition, runtime and post-condition checks. At the moment the nullable checks do not recognize these checks and thus report invalid warnings. Currently I do not see how I could tell the C# compiler about these custom assert methods, and as I have understood the post above, that capability does not currently exist. However, I am looking for a solution similar to what ReSharper provides for its [NotNull] checks with an [AssertionMethod] attribute, which allows using the nullable checks in combination with custom assertion methods.

  • Wil Wilder Apaza Bustamante 0

    Trying to adopt, but I’m having trouble annotating some of my APIs.

    Question on Stack Overflow
    GitHub Issue

    Foo<string?> Bar(Foo<string?> foo)
    {
        if (foo.Value is null) { /* Something */ }
        else { /* Some other thing */ }
        return foo;
    }

    var foo1 = new Foo<string>("s");
    var ret1 = Bar(foo1); // Warns because foo1's type parameter is not nullable

    var foo2 = new Foo<string?>("s");
    var ret2 = Bar(foo2); // Ok because foo2's type parameter is nullable

    I’ve failed to annotate the nullability of T of the returning Foo to be the same as the nullability of T of the parameter Foo

    • Mads Torgersen (Microsoft employee) 0

      Yes, this is not currently expressible when the type is sealed, like your string above. Otherwise you can use generics to establish a relationship between input and output.

      • Wil Wilder Apaza Bustamante 0

        this comment has been deleted.

  • Onur Gümüş 1

    Mads, I respect your work, but I have a different opinion. If the null concept was a billion dollar mistake, nullable references are another billion dollar mistake. The reason is, it encourages people to use regular reference types for existence checks. What we need instead is option types. I have already seen the Elvis (?.) operator doing big damage as developers lazily apply it everywhere. Before it is too late, please bring the option type as a standard concept to the language

    • Joseph Musser 0

      Now that the type system is no longer blind to reference-type nullability, there is an equivalence with option types already which is more idiomatic of C#. By that I mean that you can transcribe any code using option checks into code that works with nullable reference types and vice versa without loss of semantics or safety. The other side of this is that you can use an Option type today already and it will get even easier when discriminated unions are added to the language, but it will not do any good when interacting with the decades’ worth of existing code that uses C#’s null idiomatically.

    • Alfonso Ramos Gonzalez 0

      You could have used FSharp.Core.Option in C# for a long time, but you didn’t.

      • Francesco Cossu 0

        this answer doesn’t make sense

    • Sava Hmelnitski 0

      I agree. The current feature encourages omitting null checks, while it does not guarantee non-nullability. I would vote for an option for a strong non-nullability mode where you are forced to use nullable types whenever the compiler cannot guarantee non-nullability.

      There is a related discussion here: https://github.com/dotnet/csharplang/issues/2244

    • Francesco Cossu 1

      Totally agree. Null references are a problem the Option pattern solves brilliantly, and it was something that could have been easily added to C# as well. What was instead done is exactly as you said: tweaking normal reference types for existence checks. This is also wrong in other respects; I will try to list the main ones:

      1) the usage of “?” is inconsistent with how it works on value types. My expectation, when I first heard of this new feature, was that “ReferenceType” and “ReferenceType?” were 2 different data types, as is the case for “ValueType” and “ValueType?”. This would have been more in line with the Option pattern and consistent.

      2) the “!” to override the null check is also wrong. At some point we might know that a variable won’t be NULL and we’ll force the compiler to ignore the check, but all these “!” need to be maintained over time against changes to the code that could result in such an assumption no longer being true. This defeats the purpose quite a lot in my view.

      In my view the way nullable references are done is conceptually wrong and makes the language confusing; this is something that should be re-designed before it gets adopted too widely.

  • Jonas Nyrup 0

    Is there a write-up somewhere with the difference between .Net Core 3.0 and .Net Core 3.1 with respect to NRT?
    I’m interested in what the benefits there are of bumping requirement from .Net Core 3.0 to 3.1.

    • Phillip Carter (Microsoft employee) 0

      Hello Jonas,

      .NET Core 3.1 is an LTS release and .NET Core 3.0 is a Current release. Aside from the fact that there are further bug fixes and performance improvements, the branding of .NET Core 3.1 as an LTS release means it is very stable and will remain supported for 3 or more years.

      Further improvements to NRT and other compiler features will be in the .NET 5 release cycle, so if you’re fine using previews, then that’s your best choice for more and better features.

      • Jonas Nyrup 0

        Hi Phillip.

        Thanks for the reply, but my question is only about the NRT differences between 3.1 vs 3.0.

        NRT was added in 3.0, but this article recommends using 3.1.
        As a library author I would like to target the lowest possible framework.
        What, if any, benefits do I or my consumers get by targeting my library against 3.1 instead of 3.0?

        • Phillip Carter (Microsoft employee) 0

          .NET Core 3.0 is only a Current release, whereas .NET Core 3.1 is a LTS release. Most users of 3.0 will move to 3.1, since it has bug fixes and features that 3.0 will not receive, and it is a long-term supported release. In a few months, 3.0 will go out of support and users will be asked to move to 3.1.

          • Jonas Nyrup 1

            I’m well aware of the EOL of 3.0…
            Again, I’m only interested in the NRT aspect.
            Are there any NRT benefits for me as a library author or my users, if I re-target my library from 3.0 to 3.1?

            I do understand if Microsoft would like me to bump my library from 3.0 to 3.1, to force my users to bump from 3.0 to 3.1. I don’t think it’s the task of libraries to force users to upgrade their target framework, but not to hinder them in upgrading.

  • Erik Berthold 0

    It could have been a nice feature, but the way it isn’t compatible with .NET Framework (recommended not to use C# 8 with the old project format in particular) is just a huge fail. (https://github.com/dotnet/project-system/issues/5551)
    I’m developing Visual Studio extensions at work and in my free time, but VSIX projects don’t seem to be supported by SDK-style csproj, so no “nullable enable” in the csproj for us extension developers.

    I think C# 8/NetStandard2.1/NetCore3 just prevents a big part of developers in LOB environments from benefiting from all those new technologies.

    • Charles Roddie 0

      Is there a timeline for the Visual Studio application itself moving to .Net Core? This would be a good reason for the next release being VS2020 instead of VS2021.

    • James Chaldecott 0

      Yeah. I don’t like being “that guy” but the “What should library authors do?” section is pretty embarrassing.

      What will it mean for our expectations of quality when almost all libraries in the ecosystem have been compiled in an unsupported configuration?

      This comment from Immo (on the issue you linked) shows that they really mean it about not supporting this configuration, even though he explicitly mentions that the problem with that is that you are blocked from multi-targeting without it: https://github.com/dotnet/project-system/issues/5551#issuecomment-560524198

  • Rafael Herscovici 0

    Stopped reading when I saw ‘args[0] : “”’ instead of ‘args[0] : string.Empty’

  • Shimmy Weitzhandler 0

    I’m having an issue enabling it in WPF projects. Reported.

  • Alfonso Ramos Gonzalez 0

    What is the future of the “notnull” constraint? Currently I find it virtually useless; I’d rather have two versions of my code, one with the “struct” constraint and one with the “class” (but not “class?”) constraint, which is a sad situation. Also, code which has the “notnull” constraint does not seem to play well with generics with other constraints, or at least I have not figured it out well yet.

  • Daniel Payne 0

    The feature feels like it is not quite ready for prime time. It is lacking two major capabilities, at least. Please feel free to correct me if I’m missing something, because I really want this feature to work.

    First, I should be able to succinctly and clearly tell the compiler, “This nullable value is not null” in the same way I can tell it, “This BarBase is actually a FuBar” without the compiler verifying that I have actually checked it. I shouldn’t need to subvert the compiler with #pragma tags to avoid extraneous null checks. Let me explicitly cast T? to T.

    Second, I should not have to restrict T to class to use T?. What, it’s impossible to write generics that work with both classes and structs if we enable nullable reference types? Or we must subvert the compiler and muddy our code with #pragma tags? This is silly and makes our code objectively worse.

    Writing any kind of algorithm using nullable reference types is just a nightmare. This is already complex code. A feature designed to improve our code shouldn’t make it even worse.

    • Jonathon Marolf (Microsoft employee) 0

      First, I should be able to succinctly and clearly tell the compiler, “This nullable value is not null”

      You can use the bang (!) operator to do that (docs here)

      Second, I should not have to restrict T to class to use T?.

      The [MaybeNull] attribute indicates that a generic type is nullable if it’s a reference type:

      [return: MaybeNull]
      public static T FirstOrDefault<T>(this IEnumerable<T> enumerable)
      {
          if (enumerable.Any())
          {
              return enumerable.First();
          }
          return default;
      }

    • Eugene Ivanoff 0

      I fully agree with you. I have already written about it here: for a code reader it’s just a nightmare. A code reader needs to keep in mind all possible situations of “HOW does this function handle reference types?”. He needs to track all these #pragmas disabled and enabled, and also the nullable setting in the project file! I called it a zoo – and that perfectly reflects the current situation with this “feature”. And more importantly, there are blogs out there which show that this feature doesn’t shield a developer from nulls. As for me – I will never use this feature.

    • Keith Leonard 0

      You are right that this feature is not ready for prime time. I just spent a couple of weeks incorporating it into a project and it was a waste of time. Main problems: 1) False positives, 2) lack of useful info from compiler about nullability differences, 3) inability to mix nullable classes and nullable structs when calling generic methods (which is unintuitive since there is never a problem mixing non-nullable classes and structs)

  • Marcel 0

    Thanks for the great write-up Mads.

    Any plans in store for adopting Swift as an official language for .NET? (perhaps through a Silver acquisition similar to what was done for Xamarin? https://www.elementscompiler.com/elements/silver/)

    As great as C# has been in the era of Java as its main competition, the world has long moved on, and despite the great efforts that I highly praise for bringing NRT into the language, the language will just never catch up to Swift/Kotlin despite the 4+ years of lead time to try to do so. Neither of the features which C# 8 has been trying to adopt (i.e. NRT, switch expressions – which don’t work with statement blocks, making them no better/different than conditionals, lack of immutability everywhere, i.e. let/var, val/var) match their equivalents in the Swift/Kotlin world, and at some point you have to recognize that just as C# was born out of abandoning Java/J#, we are at a time where it’s time to leave behind the Java/C# era and leap forward into the next generation that will breathe new life into the entire ecosystem.

    With Swift being open-sourced, and all the strategic advantages that MS/Azure has on the cloud front, I can’t think of a better ecosystem to make accessible from Swift than a .NET Core / ASP.NET / EF one. It would put Swift on the server-side immediately to shame and place Microsoft in the leading position for server-side innovation rather than trailing and held back by previous-generation language catch-ups.

    This strategy wouldn’t negate any of the efforts going into C# modernization either, as both languages could become first-class citizens of the ecosystem – a generational opportunity to expand .NET adoption far beyond the current ‘.NET devs’ / legacy audiences which, outside of VS Code / JS, .NET Core unfortunately continues to cater to.

    Very curious to your thoughts/plans on this Mads – be they ‘official stances’ on it or otherwise.

    • Charles Roddie 0

      Swift is a very decent language but there is little benefit in bringing it to .NET. With Swift you get some of the functional features of F#, but F# does it more completely (e.g. type inference and pattern matching). The main thing Swift would bring that existing .NET languages don’t have is deterministic GC, but the algorithm (reference counting) is not very interesting and it wouldn’t be the natural alternative to the regular .NET GC. A more interesting approach is Project Snowflake, which supplements the regular .NET GC with a safe manual system inspired by Rust. Unfortunately there doesn’t seem to be much activity now in trying to get this productized.

      • Mads Torgersen (Microsoft employee) 0

        I absolutely want to second the F# sentiment! If C# is not functional enough for you, F# is a great functional-first language that at the same time benefits from the whole breadth of the .NET ecosystem.

    • Mads Torgersen (Microsoft employee) 0

      No plans to support Swift on .NET – from our side at least! 😉

      Thanks for your thoughts! Languages differ, and that’s ok. The specific things you list are certainly within the scope of what C# could embrace in a future version. There may come a day where C# is fundamentally unable to embrace an important new direction in programming, but that doesn’t seem to be the case yet.

      • Marcel 0

        Thank you for your thoughts on this Mads.

        One such example is the generational leap to declarative UIs that SwiftUI and Flutter are leading the way on. These are both examples where control and direct embedding into the language has unlocked the ability to define UIs in the language rather than in markup (https://www.swiftbysundell.com/articles/the-swift-51-features-that-power-swiftuis-api/, https://hackernoon.com/why-flutter-uses-dart-dd635a054ebf).

        Another benefit as mentioned is being able to lead in the market with the best back-end stack for swathes of new developers that are looking to adopt a great Swift back-end – which ASP.NET could immediately provide.

        It’s not always about just trying to hold on to all the existing .NET devs while new audiences adopt newer ecosystems. I think it would be wise to align the .NET ecosystem to be able to adopt things like SwiftUI once they inevitably open-source in the year or so ahead – rather than trying to push age-old technology (such as WPF) and spin up a competing UI platform that only .NET folks would ever consider adopting.

        Appreciate your consideration of this.
