Probably the most impactful feature of C# 8.0 is Nullable Reference Types (NRTs). They let you make the flow of nulls explicit in your code, and warn you when you don’t act according to that intent.
The NRT feature holds you to a higher standard on how you deal with nulls, and as such it issues new warnings on existing code. So that those warnings (however useful) don’t break you, the feature must be explicitly enabled in your code before it starts complaining. Once you do that on existing code, you have work to do to make that code null-safe and satisfy the compiler that you did.
How should you think about when to do this work? That’s the main subject of this post, and we propose below that there’s a “nullable rollout phase” until .NET 5 ships (November 2020), wherein popular libraries should strive to embrace NRTs.
But first a quick primer.
Remind me – what is this feature again?
Up until now, in C# we allow references to be null, but we also allow them to be dereferenced without checks. This leads to what is by far the most common exception – the NullReferenceException – when nulls are accidentally dereferenced. An undesired null coming from one place in the code may lead to an exception being thrown later, from somewhere else that dereferences it. This makes null bugs hard to discover and annoying to fix. Can you spot the bug?
static void M(string s)
{
    Console.WriteLine(s.Length);
}

static void Main(string[] args)
{
    string s = (args.Length > 0) ? args[0] : null;
    M(s);
}
In C# 8.0 we want to help get rid of this problem by being stricter about nulls. This means we’re going to start complaining when values of ordinary reference types (string, object, IDisposable, etc.) are null. However, new warnings on existing code aren’t something we can just do, no matter how good it is for you! So NRT is an optional feature – you have to turn it on to get new warnings. You can do that either at the project level, or directly in the source code with a new directive:
#nullable enable
If you put this on the example above (e.g. at the top of the file) you’ll get a warning on this line:
string s = (args.Length > 0) ? args[0] : null; // WARNING!
saying you shouldn’t assign the right-hand-side value to the string variable s because it might be null! Ordinary reference types have become non-nullable! You can fix the warning by giving a non-null value:
string s = (args.Length > 0) ? args[0] : "";
If you want s to be able to be null, however, that’s fine too, but you have to say so, by using a nullable reference type – i.e. tagging a ? on the end of string:
string? s = (args.Length > 0) ? args[0] : null;
Now the warning on that line goes away, but of course it shows up on the next line, where you’re now passing something that you said may be null (a string?) to something that doesn’t want a null (a string):
M(s); // WARNING!
Now again you can choose whether to change the signature of M (if you own it) to accept nulls, or whether to make sure you don’t pass it a null to begin with.
C# is pretty smart about this. Let’s only call M if s is not null:
if (s != null) M(s);
Now the warning disappears. This is because C# tracks the null state of variables across execution flow. In this case, even though s is declared to be a string?, C# knows that it won’t be null inside the true branch of the if, because we just tested for that.
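The flow analysis understands other common null-checking patterns as well, not just an explicit != null comparison. Here is a small self-contained sketch (GetValueOrNull is a hypothetical helper introduced for illustration) showing two more ways to satisfy the compiler:

```csharp
#nullable enable
using System;

class Program
{
    static void M(string s) => Console.WriteLine(s.Length);

    // Hypothetical helper that is declared as possibly returning null.
    static string? GetValueOrNull() => null;

    static void Main()
    {
        string? s = GetValueOrNull();

        // Pattern matching narrows string? to string inside the branch:
        if (s is string nonNull)
            M(nonNull);

        // Null-coalescing guarantees the argument is never null:
        M(s ?? "fallback");
    }
}
```

Since GetValueOrNull returns null here, the pattern match fails and only the last call runs, printing 8 (the length of "fallback").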
In summary, the nullable feature splits reference types into non-nullable reference types (such as string) and nullable reference types (such as string?), and enforces their null behavior with warnings.
This is enough of a primer for the purposes of this post. If you want to go deeper, please visit the docs on Nullable Reference Types, or check some of the earlier posts on the topic (Take C# 8.0 for a spin, Introducing Nullable Reference Types in C#).
There are many more nuances to how you can tune your nullable annotations, and we use a good many of them in our “nullification” of the .NET Core Libraries. The post Try out Nullable Reference Types explores those in great detail.
How and when to become “null-aware”?
Now to the meat of this post. When should you adopt nullable reference types? How to think about that? Here are some observations about the interaction between libraries and clients. Afterwards we propose a shared timeline for the whole ecosystem – the “nullable rollout phase” – to guide the adoption based on what you are building.
What happens when you enable nullable reference types in your code?
You will have to go over your signatures to decide, in each place where you have a reference type, whether to leave it non-nullable (e.g. string) or make it nullable (e.g. string?). Does your method handle null arguments gracefully (or even meaningfully), or does it immediately check and throw? If it throws on null you want to keep it non-nullable, to signal that to your callers. Does your method sometimes return null? If so you want to make the return type nullable, to “warn” your callers about it.
You’ll also start getting warnings when you use those members wrong. If you dereference the result of a method that returns string? and you don’t check it for null first, then you’ll have to fix that.
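As a concrete sketch, a hypothetical annotated API (Capitalize and FindTitle are invented names for illustration) might look like this, with the non-nullable parameter documenting a throw-on-null contract and the nullable return type warning callers to check:

```csharp
#nullable enable
using System;

static class Library
{
    // Non-nullable parameter: callers get a warning if they pass a
    // possibly-null value, and the method still throws as a backstop.
    public static string Capitalize(string name)
    {
        if (name is null) throw new ArgumentNullException(nameof(name));
        if (name.Length == 0) return name;
        return char.ToUpper(name[0]) + name.Substring(1);
    }

    // Nullable return type: callers get a warning if they dereference
    // the result without checking it for null first.
    public static string? FindTitle(int id)
        => id == 0 ? null : "Title " + id;
}
```

A caller who writes Library.FindTitle(0).Length gets a warning; Library.FindTitle(0)?.Length compiles cleanly.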
What happens when you call libraries that have the feature enabled?
If you yourself have the feature enabled and a library you depend on has already been compiled with the feature on, then it too will have nullable and non-nullable types in its signatures, and you will get warnings if you use those in the wrong way.
This is one of the core values of NRTs: that libraries can accurately describe the null behavior of their APIs, in a way that is checkable in client code at the call site. This raises expressiveness on API boundaries so that everyone can get a handle on the safe propagation and dereferencing of nulls. Nobody likes null reference exceptions or argument-null exceptions! This helps you write the code right the first time, and avoid the sources of those exceptions before you even compile and run the code.
What happens when you call libraries that have not enabled the feature?
Nothing! If a library was not compiled with the feature on, your compiler cannot assume one way or the other about whether types in the signatures were supposed to be nullable or not. So it doesn’t give you any warnings when you use the library. In nullable parlance, the library is “null-oblivious”. So even though you have opted in to getting the null checking, it only goes as far as the boundary to a null-oblivious library.
When that library later comes out in a new version that does enable the feature, and you upgrade to that version, you may get new warnings! All of a sudden, your compiler knows what is “right” and “wrong” in the consumption of those APIs, and will start telling you about the “wrong”!
This is good of course. But if you adopt NRTs before the libraries you depend on, it does mean that you’ll get some churn as they “come online” with their null annotations.
The nullable rollout phase
Here comes the big ask of you. In order to minimize the impact and churn, I want to recommend that we all think about the next year’s time until .NET 5 (November 2020) as the “nullable rollout phase”, where certain behaviors are encouraged. After that, we should be in a “new normal” where NRTs are everywhere, and everyone can use this feature to track and be explicit about nullability.
What should library authors do?
We strongly encourage authors of libraries (and similar infrastructure, such as code generators) to adopt NRTs during the nullable rollout phase. Pick a time that’s natural according to your shipping schedule, and that lets you get the work done, but do it within the next year. If your clients pester you to do it quicker, you can tell them “No! Go away! It’s still the nullable rollout phase!”
If you do go beyond the nullable rollout phase, however, your clients start having a point that you are holding back their adoption, and causing them to risk churn further down the line.
As a library writer you always face a dilemma between reach of your library and the feature set you can depend on in the runtime. In some cases you may feel compelled to split your library in two so that one version can target e.g. the classic .NET Framework, while a “modern” version makes use of e.g. new types and features in .NET Core 3.1.
However, with Nullable Reference Types specifically, you should be able to work around this. If you multitarget your library (e.g. in Visual Studio) to .NET Standard 2.0 and .NET Core 3.1, you will get the reach of .NET Standard 2.0 while benefitting from the nullable annotations of the .NET Core 3.1 libraries.
You also have to set the language version to C# 8.0, of course, and that is not a supported scenario when one of the target versions is below .NET Core 3.0. However, you can still do it manually in your project settings, and unlike many C# 8.0 features, the NRT feature specifically happens to not depend on specific elements of .NET Core 3.1. But if you try to use other language features of C# 8.0 while targeting .NET Standard 2.0, all bets are off!
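As a sketch, a multitargeting project file along these lines should work (the exact target frameworks are up to you; setting LangVersion to 8.0 while targeting .NET Standard 2.0 is the manual, unsupported part mentioned above):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netstandard2.0;netcoreapp3.1</TargetFrameworks>
    <LangVersion>8.0</LangVersion>
    <Nullable>enable</Nullable>
  </PropertyGroup>
</Project>
```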
What should library users do?
You should be aware that there’s a nullable rollout phase where things will be in flux. If you don’t mind the flux, by all means turn the feature on right away! It may be easier to fix bugs gradually, as libraries come online, rather than in bulk.
If you do want to save up the work for one fell swoop, however, you should wait for the nullable rollout phase to be over, or at least for all the libraries you depend on to have enabled the feature.
It’s not fair to nag your library providers about nullability annotations until the nullable rollout phase is over. Engaging them to help get it done, through OSS or as early adopters or whatever, is of course highly encouraged, as always.
What will Microsoft do?
We will also aim to be done with null-annotating our core libraries when .NET 5 comes around – and we are currently on track to do so. (Tracking issue: Annotate remainder of .NET Core assemblies for nullable reference types).
We will also keep a keen eye on the usage and feedback during this time, and we will feel free to make adjustments anywhere in the stack, whether library, compilers or tooling, in order to improve the experience based on what we hear. Adjustments, not sweeping changes. For instance, issues filed by users of CoreFx on GetMethodInfo and ResolveEventArgs were already addressed by fixes in the CoreFx repo (GetMethodInfo and ResolveEventArgs).
When .NET 5 rolls around, if we feel the nullable rollout phase has been a success, I could see us turning the feature on by default for new projects in Visual Studio. If the ecosystem is ready for it, there is no reason why any new code should ignore the improved safety and reliability you get from nullability annotations!
At that point, the mechanisms for opt-in and opt-out become effectively obsolete – just a way to deal with legacy code.
Call to action
Make a plan! How are you going to act on nullable reference types? Try it out! Turn it on in your code and see what happens. Scary many warnings? That may happen until you get your signatures annotated right. After that, the remaining warnings are about the quality of your consuming code, and those are the reward: an opportunity to fix the places where your code is probably not null safe!
And as always: Have fun exploring!
Happy hacking,
Mads Torgersen, C# lead designer
In my 15-odd years of using C#, rarely have I come across a null reference exception – it's a newbie mistake to not check for nulls.
A reference type is basically a pointer, and our code treats them as such. There are all kinds of places where there are loops and conditions that use it to check for availability and validity of data. I certainly do not want to go through all our code and...
What happens if you have the example in your text but in a multithreaded environment?
if (s != null) M(s);
Will the compiler flag it?
Unfortunately, most of my projects target .NET Standard versions lower than 2.0. Most of the attributes needed to configure this are unavailable, and I cannot adopt this feature yet.
I would love to use nullable reference types. However, I am unable to since I have many ASP.NET Web Forms projects that I need to support and Microsoft refuses to move ASP.NET Web Forms forward onto .NET Core. It is ridiculous.
Thanks for the great write-up Mads.
Any plans in store for adopting Swift as an official language for .NET? (perhaps through a Silver acquisition similar to what was done for Xamarin? https://www.elementscompiler.com/elements/silver/)
As great as C# has been in the era of Java as its main competition, the world has long moved on, and despite the great efforts that I highly praise for bringing NRT into the language, the language will just never catch up to...
No plans to support Swift on .NET – from our side at least! 😉
Thanks for your thoughts! Languages differ, and that’s ok. The specific things you list are certainly within the scope of what C# could embrace in a future version. There may come a day where C# is fundamentally unable to embrace an important new direction in programming, but that doesn’t seem to be the case yet.
Thank you for your thoughts on this Mads.
One such example is the generational leap to declarative UIs that SwiftUI and Flutter are leading the way on. These are both examples where control and direct embedment into the language has unlocked the ability to define UIs in language rather than markup (https://www.swiftbysundell.com/articles/the-swift-51-features-that-power-swiftuis-api/, https://hackernoon.com/why-flutter-uses-dart-dd635a054ebf).
Another benefit as mentioned is being able to lead in the market with the best back-end stack for swathes of new developers that are...
Swift is a very decent language but there is little benefit in bringing it to .NET. With Swift you get some of the functional features of F#, but F# does it more completely (e.g. type inference and pattern matching). The main thing that Swift would bring that existing .NET languages don't have is deterministic GC, but the algorithm (reference counting) is not very interesting and it wouldn't be the natural alternative to the regular .NET GC....
I absolutely want to second the F# sentiment! If C# is not functional enough for you, F# is a great functional-first language that at the same time benefits from the whole breadth of the .NET ecosystem.
The feature feels like it is not quite ready for prime time. It is lacking two major capabilities, at least. Please feel free to correct me if I'm missing something, because I really want this feature to work.
First, I should be able to succinctly and clearly tell the compiler, "This nullable value is not null" in the same way I can tell it, "This BarBase is actually a FuBar" without the compiler verifying...
You are right that this feature is not ready for prime time. I just spent a couple of weeks incorporating it into a project and it was a waste of time. Main problems: 1) False positives, 2) lack of useful info from compiler about nullability differences, 3) inability to mix nullable classes and nullable structs when calling generic methods (which is unintuitive since there is never a problem mixing non-nullable classes and structs)
I fully agree with you. I have already written about it here: for a code reader it's just a nightmare. A code reader needs to keep in mind all possible situations about "HOW does this function handle reference types?". He needs to track all these #pragmas, disabled and enabled, and also the nullable setting in the project file! I called it a zoo – and that perfectly reflects the current situation with this "feature". And more importantly there...
First, I should be able to succinctly and clearly tell the compiler, “This nullable value is not null”
You can use the bang (!) operator to do that (docs here)
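For example (a small sketch; Find is a hypothetical lookup that this caller happens to know won't return null for this key):

```csharp
#nullable enable
using System;

class Example
{
    // Hypothetical lookup that is declared as possibly returning null.
    static string? Find(string key) => key == "known" ? "value" : null;

    static void Main()
    {
        string? s = Find("known");
        // The null-forgiving operator '!' suppresses the warning:
        // we assert to the compiler that s is not null here.
        Console.WriteLine(s!.Length); // prints 5
    }
}
```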
Second, I should not have to restrict T to class to use T?.
The [MaybeNull] attribute indicates that a generic return value may be null when T is a reference type:

[return: MaybeNull]
public static T FirstOrDefault<T>(this IEnumerable<T> enumerable)
{
    if (enumerable.Any())
        return enumerable.First();
    return default!; // default(T) is null for reference types
}
What is the future of the “notnull” constraint? Currently I find it virtually useless; I’d rather have two versions of my code, one with the “struct” constraint and one with the “class” (but not “class?”) constraint, which is a sad situation. Also, code which has the “notnull” constraint does not seem to play well with other generics with other constraints, or at least I have not figured it out well yet.
I’m having an issue enabling it in WPF projects. Reported.
Stopped reading when I saw ‘args[0] : “”‘ instead of ‘args[0] : string.Empty’
Why? Doesn’t the runtime cache all distinct string literals? https://docs.microsoft.com/en-us/dotnet/api/system.string.intern?view=netframework-4.8#remarks
LOLOLOLOL… right? Looks like someone isn’t using ReSharper. lololololol.
It could have been a nice feature, but the way it isn't compatible with .NET Framework (it's recommended not to use C# 8 with the old project format in particular) is just a huge fail. (https://github.com/dotnet/project-system/issues/5551)
I'm developing Visual Studio extensions at work and in my free time. But VSIX projects don't seem to be supported by the SDK-style csproj, so no "nullable enable" in the csproj for us extension developers.
I think C# 8/NetStandard2.1/NetCore3 just prevents a...
Yeah. I don't like being "that guy" but the "What should library authors do?" section is pretty embarrassing.
What will it mean for our expectations of quality when almost all libraries in the ecosystem have been compiled in an unsupported configuration?
This comment from Immo (on the issue you linked) shows that they really mean it about not supporting this configuration, even though he explicitly mentions that the problem with that is that you are blocked from multi-targeting...
Is there a timeline for the Visual Studio application itself moving to .Net Core? This would be a good reason for the next release being VS2020 instead of VS2021.