February 29th, 2012

What’s New for Parallelism in .NET 4.5 Beta

Stephen Toub - MSFT
Partner Software Engineer

At //BUILD/ in September, we blogged about the wealth of new support available for parallelism in the .NET Framework 4.5 Developer Preview.  Since then, we’ve been hard at work on the .NET 4.5 Beta.  With the beta just released, here are a few interesting things related to parallelism and asynchrony that are new or have changed since the Developer Preview.

More Async Methods

If you browse around the Framework you’ll find many more async method implementations than were there previously, many of which are themselves utilizing the async method support provided by the .NET 4.5 compilers. For example, BufferedStream and CryptoStream now override ReadAsync and WriteAsync. HttpClient has been augmented with really helpful convenience methods like GetStringAsync, GetByteArrayAsync, and GetStreamAsync. System.Data.dll exposes new async methods like DbDataReader.GetFieldValueAsync and SqlBulkCopy.WriteToServerAsync, just to name a few.
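For example, here’s a small sketch of using one of those new HttpClient convenience methods; the URL is just a placeholder, and the snippet assumes the usual using directives (System, System.Net.Http, System.Threading.Tasks):

public static async Task PrintPageLengthAsync()
{
    using (var client = new HttpClient())
    {
        // GetStringAsync asynchronously downloads the response body as a string.
        string page = await client.GetStringAsync("http://www.example.com/");
        Console.WriteLine(page.Length);
    }
}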

One new method I want to highlight in particular is SemaphoreSlim.WaitAsync.

As some of you may have discovered, the C# and Visual Basic compilers disallow using the await keyword inside the scope of a lock or SyncLock block.  This is for good reason, as the effect of using await inside of such a block would almost never be what you want.
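For example, a sketch like the following won’t compile, because the await occurs inside the body of the lock statement (the m_gate field here is just illustrative):

private static readonly object m_gate = new object();

public static async Task BrokenAsync()
{
    lock (m_gate)
    {
        await Task.Delay(1000); // compiler error: can't await inside a lock block
    }
}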

SyncLock and lock are built on top of the .NET Monitor class, which is thread-affine.  When you acquire a lock on an object, that lock is then associated with a specific thread: any attempt to acquire the lock from another thread will block, the owning thread is allowed to enter the lock again (Monitors are reentrant), and only the thread that holds the lock may release it.  That’s all fine and dandy when writing synchronous code, but what about when writing asynchronous code?  When you await, the code that comes after the await is likely to run on another thread.  That means that things could go horribly wrong if you await while having entered a Monitor.  First, when your continuation runs and tries to release the lock, it could be doing so on a different thread than the one on which the lock was acquired.  Second, at the await point you likely allowed other code to be interleaved on the thread that still holds the lock.  As such, any work performed on that thread would see the lock already acquired, and thus would be allowed access even though it technically should be denied.

All that said, there are still some async scenarios that require mutual exclusion, and so we need a synchronization primitive that is not thread-affine.  Semaphores to the rescue.  By their very nature, semaphores aren’t thread-affine.  They’re often used for producer-consumer scenarios, where a producer thread “releases” the semaphore (adding to its count) and a consumer thread “waits” on the semaphore (decrementing its count).  More generally, they’re used to gate access to a limited resource, such that multiple threads could successfully wait on the semaphore concurrently (in the case of producer-consumer, those resources are the items created by the producer).  And mutual exclusion is an extreme example of a limited resource: there’s only one resource, the ability to be inside the critical section.

In .NET, SemaphoreSlim is the recommended semaphore type for most situations that require semaphore behavior.  Even with the .NET 4.5 Developer Preview, we can use awaits with semaphores:

private static SemaphoreSlim m_lock = new SemaphoreSlim(initialCount:1);

public static async Task DoWorkAsync()
{
    m_lock.Wait();
    try
    {
        … // code here with awaits
    }
    finally { m_lock.Release(); }
}

However, we’re trying to write an asynchronous method here, and yet our call to SemaphoreSlim.Wait() will block if another thread is currently inside of the critical section.  That blocking largely defeats the purpose of having this be async.

To address this, in .NET 4.5 Beta we’ve added a WaitAsync method to SemaphoreSlim (for those of you who read my series on building async coordination primitives, this should feel very familiar).  As you might expect, WaitAsync returns a Task which will complete when the semaphore has been successfully waited on (WaitAsync also has overloads that accept a timeout and/or a CancellationToken).  This allows you to write code almost identical to the previous snippet, but using WaitAsync instead of Wait:

private static SemaphoreSlim m_lock = new SemaphoreSlim(initialCount:1);

public static async Task DoWorkAsync()
{
    await m_lock.WaitAsync();
    try
    {
        … // code here with awaits
    }
    finally { m_lock.Release(); }
}

Now, the body of DoWorkAsync will run atomically with respect to all other invocations of the method, with access gated by the SemaphoreSlim.  While one invocation is waiting for another to complete, no thread will be blocked.

Resource throttling is also a typical need in asynchronous applications, and this WaitAsync capability on SemaphoreSlim enables easy implementation of such scenarios. In the previous example, we configured SemaphoreSlim with an initialCount of 1, meaning that only one resource is protected by the semaphore.  If, however, we configured it with a higher number, like 10, we’d then be throttling DoWorkAsync calls to 10 at a time, again without synchronously blocking any additional callers that might come along.
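For example, here’s a rough sketch of that kind of throttling; the DoThrottledWorkAsync and RunManyAsync names are just illustrative, and Task.Delay stands in for real asynchronous work:

private static SemaphoreSlim m_throttle = new SemaphoreSlim(initialCount: 10);

public static async Task DoThrottledWorkAsync()
{
    await m_throttle.WaitAsync();
    try
    {
        await Task.Delay(100); // stand-in for the real asynchronous work
    }
    finally { m_throttle.Release(); }
}

public static async Task RunManyAsync()
{
    // Kick off 100 invocations; at most 10 will be inside the guarded region
    // at any time, and none of the others will block a thread while they wait.
    var tasks = new List<Task>();
    for (int i = 0; i < 100; i++) tasks.Add(DoThrottledWorkAsync());
    await Task.WhenAll(tasks);
}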

Await Pattern

In the .NET 4.5 Developer Preview, the C# and Visual Basic compilers allowed you to await anything that followed a particular pattern.  Specifically, you could await types that exposed a GetAwaiter method (either instance or extension), and which returned a type that exposed IsCompleted, OnCompleted, and GetResult members of a particular form.  In .NET 4.5 Beta, that’s still true… mostly.  There are two new interfaces in the System.Runtime.CompilerServices namespace that are integral to the await pattern as well.  The first interface is INotifyCompletion, which exposes one method, OnCompleted, that has the same signature as the OnCompleted method previously expected by the compiler.  Now to implement a valid awaiter, you must implement INotifyCompletion.  Thus, where you previously had an awaiter:

public class MyCoolAwaiter
{
    public bool IsCompleted { get { … } }
    public void OnCompleted(Action continuation) { … }
    public void GetResult() { … }
}

for .NET 4.5 Beta you simply need to add “: INotifyCompletion” to the declaration:

public class MyCoolAwaiter : INotifyCompletion
{
    public bool IsCompleted { get { … } }
    public void OnCompleted(Action continuation) { … }
    public void GetResult() { … }
}

The compilers will enforce this, refusing to treat types that don’t implement this interface as awaiters.  I’ll explain in a moment why we made this requirement.

The second new interface is ICriticalNotifyCompletion, which inherits from INotifyCompletion.  In addition to the OnCompleted method, it also exposes an UnsafeOnCompleted method:

public interface ICriticalNotifyCompletion : INotifyCompletion
{
    [SecurityCritical]
    void UnsafeOnCompleted(Action continuation);
}

For those of you familiar with ThreadPool.UnsafeQueueUserWorkItem, you might be able to guess what UnsafeOnCompleted is.  In short, it’s exactly the same as INotifyCompletion.OnCompleted, except that it doesn’t need to flow ExecutionContext.

If an awaiter implements just INotifyCompletion, the compiler will target its OnCompleted method when implementing the async method’s state machine.  If, however, an awaiter also implements ICriticalNotifyCompletion, the compiler will target its UnsafeOnCompleted method.

For those of you familiar with ExecutionContext, at this point you’re likely asking yourself, “What?  How is that safe?  Doesn’t ExecutionContext need to flow across await points?”  Yes, it does.  However, what we discovered with the Async CTP and with the .NET 4.5 Developer Preview is that many folks didn’t realize that their awaiters needed to flow ExecutionContext in order to ensure context flowed across await points (and even for those that did, this was a bit of a headache to implement).  So, for .NET 4.5 Beta, we’ve modified the async method builders in the Framework (e.g. AsyncTaskMethodBuilder); these are the types targeted by the compilers when building the state machines for async methods.  The builders now themselves flow ExecutionContext across await points, taking that responsibility away from the awaiters.  That’s why it’s ok for the compilers to target UnsafeOnCompleted: the builders will handle ExecutionContext flow.  That’s also why we needed the interfaces for awaiters, so that the builders could be passed references to awaiters and invoke their {Unsafe}OnCompleted methods polymorphically.

Your next question then is likely, “Great, the builder is handling ExecutionContext flow… so why do we need two *OnCompleted methods?  Why can’t we just have an OnCompleted that doesn’t flow context?”  I’m glad you asked.  If you’re building an assembly with AllowPartiallyTrustedCallersAttribute (APTCA) applied to it, you need to ensure that any publicly exposed APIs from your assembly correctly flow ExecutionContext across async points… failure to do so can be a big security hole.  As awaiter types will often be implemented in APTCA assemblies, and since OnCompleted could be called directly by a user (even though it’s really meant to be used by the compiler), OnCompleted needs to flow ExecutionContext.  But if OnCompleted flows ExecutionContext, and if the builder flows ExecutionContext, we’d be flowing ExecutionContext twice.  While that’s not a problem functionally, it is an unnecessary and (potentially) non-trivial performance overhead.  So, we also have UnsafeOnCompleted, which doesn’t need to flow ExecutionContext, but which is also marked as SecurityCritical, such that partially trusted code can’t call it.

If you’re implementing your own awaiter, whenever possible implement both INotifyCompletion and ICriticalNotifyCompletion, flowing ExecutionContext in the former and not flowing it in the latter.  The only good reason not to implement both is if you’re implementing an awaiter in a situation where you can’t flow ExecutionContext, e.g. where your awaiter is partially trusted, where you otherwise don’t have the ability to use ExecutionContext, or where the APIs on which your awaiter relies don’t give you any option as to whether to flow context or not… in such cases, you can just implement INotifyCompletion.
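Putting that guidance together, here’s a rough sketch (not the Framework’s actual implementation) of an awaiter that implements both interfaces, flowing ExecutionContext in OnCompleted but not in UnsafeOnCompleted; the QueueContinuation helper is hypothetical and stands in for however the awaiter schedules its continuations:

public sealed class MyCoolAwaiter : ICriticalNotifyCompletion
{
    public bool IsCompleted { get { … } }
    public void GetResult() { … }

    // May be called directly by (potentially partially trusted) user code,
    // so it captures ExecutionContext and runs the continuation under it.
    public void OnCompleted(Action continuation)
    {
        ExecutionContext ec = ExecutionContext.Capture();
        QueueContinuation(() => ExecutionContext.Run(ec, state => ((Action)state)(), continuation));
    }

    // Called by the compiler-generated code via the async method builders,
    // which flow ExecutionContext themselves, so this overload doesn't.
    [SecurityCritical]
    public void UnsafeOnCompleted(Action continuation)
    {
        QueueContinuation(continuation);
    }

    // Hypothetical helper: schedules the action to run when the awaited operation completes.
    private void QueueContinuation(Action continuation) { … }
}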

One other note on awaiters.  In the Async CTP and in the .NET 4.5 Developer Preview, the compilers employed some expensive tactics in order to support a very rare kind of awaiter: mutable value type awaiters.  For .NET 4.5 Beta, these kinds of awaiters are no longer allowed (or, more specifically, they’ll no longer behave correctly).  So, if you’re implementing an awaiter, either make it a reference type, or if you make it a value type, make sure it’s immutable (it’s ok for it to have fields of mutable reference types; it’s just not ok to expect that mutations made to the value type’s fields in the OnCompleted method will persist).
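For example, a value-type awaiter that simply wraps a Task can follow that guidance by keeping its only field readonly; here’s a rough sketch (the type name is just illustrative):

public struct MyCoolValueAwaiter : INotifyCompletion
{
    private readonly Task m_task; // readonly: the struct itself never mutates

    public MyCoolValueAwaiter(Task task) { m_task = task; }

    public bool IsCompleted { get { return m_task.IsCompleted; } }
    public void OnCompleted(Action continuation) { m_task.GetAwaiter().OnCompleted(continuation); }
    public void GetResult() { m_task.GetAwaiter().GetResult(); }
}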

System.Threading.Tasks.Dataflow.dll

Lots of great improvements have come to TPL Dataflow for .NET 4.5 Beta since the .NET 4.5 Developer Preview.  You might already be familiar with some of these from the TPL Dataflow CTP, as described at https://blogs.msdn.com/b/pfxteam/archive/2011/09/27/10217461.aspx. Here are some of the more prominent changes:

  • DataflowBlockOptions has new properties, including NameFormat (which can lead to an improved debugging experience) and SingleProducerConstrained (which can yield significantly better performance in certain constrained scenarios).
  • DataflowLinkOptions is a new type which enables you to control how your links behave, with properties like PropagateCompletion, Append, and MaxMessages.
  • DataflowBlock has new methods like NullTarget and cancelable overloads of SendAsync and OutputAvailableAsync (NullTarget and the new link options both appear in the sketch after this list).
  • Improved semantics in a variety of cases, for example support for broadcasting to multiple targets with back pressure through observables.
  • Lots of significant performance improvements, including much improved memory management in execution blocks (e.g. ActionBlock) and “fast paths” for many common operations (e.g. SendAsync when the target can immediately accept the data).
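For example, here’s a small sketch (assuming the .NET 4.5 Beta surface area and a reference to System.Threading.Tasks.Dataflow.dll) exercising two of the additions above:

var transform = new TransformBlock<int, int>(x => x * 2);
var print = new ActionBlock<int>(x => Console.WriteLine(x));

// With PropagateCompletion, completion (and faults) flow from transform to print automatically.
transform.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

for (int i = 0; i < 10; i++) transform.Post(i);
transform.Complete();
print.Completion.Wait();

// NullTarget provides a target that accepts and discards any message offered to it.
ITargetBlock<int> sink = DataflowBlock.NullTarget<int>();
sink.Post(42);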

One other interesting change is that the ConcurrentExclusiveSchedulerPair type that was previously in this assembly has now been moved into mscorlib.dll (if you’re using the new Metro style surface area, it’s now exposed through the System.Threading.Tasks.dll reference assembly).

Async Unit Testing

With Visual Studio 11 Beta, you can now write async unit tests.  This means that instead of writing a unit test method like “public void TestMethod1()”, you can now write “public async Task TestMethod1()”, and use await inside of the test.

[Image: an async unit test method in Visual Studio 11]

Note that “async void” is not supported in the beta, only “async Task”.  There is also no SynchronizationContext or TaskScheduler used to force continuations back to the original thread; for more information on how to achieve such behavior if it’s desired, see the three-part blog series on Await, SynchronizationContext, and Console Apps.
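For example, a test along these lines now works as you’d expect, with the test framework waiting for the returned Task to complete before judging the test (ComputeAsync here is a hypothetical method under test):

[TestClass]
public class MyTests
{
    // Hypothetical async method under test.
    private static async Task<int> ComputeAsync()
    {
        await Task.Delay(10);
        return 42;
    }

    [TestMethod]
    public async Task TestMethod1()
    {
        int result = await ComputeAsync();
        Assert.AreEqual(42, result);
    }
}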

Concurrency Visualizer

As was shown in the Developer Preview, the Concurrency Visualizer in Visual Studio 11 has been significantly overhauled for better performance and usability.  Work here continued for Beta, with some really nice improvements showing up.  One of my favorites is that GC activity is now highlighted as “memory management”, such that it’s clear visually when looking at a trace where threads were being blocked due to GC-related operations.

[Image: a Concurrency Visualizer trace with GC-related blocking highlighted as “memory management”]

Another one of my favorite additions is support in the Concurrency Visualizer for async awaits.  TPL is now instrumented to output ETW events when tasks are awaited, and these events are viewable in the Concurrency Visualizer.

[Image: the Concurrency Visualizer displaying task await events]

And More

These are just some of the more noticeable changes around asynchrony, concurrency, and parallelism for .NET 4.5 Beta.  There have also been many performance improvements, such that your applications will hopefully just run faster and with a smaller memory footprint.  And there are of course other feature additions as well; for example, TaskContinuationOptions now has a LazyCancellation value that prevents a continuation from being canceled until all of its antecedents have completed.
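Here’s a rough sketch of that LazyCancellation behavior (with Task.Delay standing in for the antecedent’s work):

var cts = new CancellationTokenSource();
Task antecedent = Task.Delay(1000);
Task continuation = antecedent.ContinueWith(
    _ => Console.WriteLine("antecedent finished"),
    cts.Token,
    TaskContinuationOptions.LazyCancellation,
    TaskScheduler.Default);

// Even though the token is canceled immediately, with LazyCancellation the
// continuation won't transition to its final (canceled) state until the
// antecedent itself has completed.
cts.Cancel();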

Enjoy!  And as always, we look forward to your feedback.
