Dissecting the async methods in C#

Sergey Tepliakov


The async series

The C# language is great for developer productivity, and I’m glad for the recent push towards making it more suitable for high-performance applications.

Here is an example: C# 5 introduced ‘async’ methods. The feature is very useful from a user’s point of view because it helps combine several task-based operations into one. But this abstraction comes at a cost. Tasks are reference types, causing heap allocations everywhere they’re created, even in cases where the ‘async’ method completes synchronously. With C# 7, async methods can return task-like types such as ValueTask to reduce the number of heap allocations or avoid them altogether in some scenarios.

In order to understand how all of this is possible, we need to look under the hood and see how async methods are implemented.

But first, a little bit of history.

Classes Task and Task<T> were introduced in .NET 4.0 and, from my perspective, caused a huge mental shift in the area of asynchronous and parallel programming in .NET. Unlike older asynchronous patterns, such as the BeginXXX/EndXXX pattern from .NET 1.0 (also known as the “Asynchronous Programming Model”) or the Event-based Asynchronous Pattern (e.g. the BackgroundWorker class) from .NET 2.0, tasks are composable.

A task represents a unit of work with a promise to give you the results back in the future. That promise could be backed by an IO operation or represent a computation-intensive operation; it doesn’t matter. What does matter is that the result of the operation is self-sufficient and is a first-class citizen. You can pass a “future” around: you can store it in a variable, return it from a method, or pass it to another method. You can “join” two “futures” together to form another one, you can wait for the results synchronously, or you can “await” the result by adding a “continuation” to the “future”. You can decide what to do if the operation succeeded, faulted or was canceled, just by using a task instance alone.

The Task Parallel Library (TPL) changed the way we think about concurrency, and the C# 5 language took a step forward by introducing async/await. Async/await helps to compose tasks and gives the user the ability to use well-known constructs like try/catch, using, etc. But like any other abstraction, the async/await feature has its cost. And to understand what that cost is, you have to look under the hood.

Async method internals

A regular method has just one entry point and one exit point (it could have more than one return statement, but at runtime there is just one exit point for a given call). But async methods (*) and iterators (methods with yield return) are different. In the case of an async method, a method caller can get the result (i.e. Task or Task<T>) almost immediately and then “await” the actual result of the method via the resulting task.

(*) Let’s define the term “async method” as a method marked with contextual keyword async. It doesn’t necessarily mean that the method executes asynchronously. It doesn’t mean that the method is asynchronous at all. It only means that the compiler performs some special transformation to the method.

Let’s consider the following async method:

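Something along these lines (the InitializeMapIfNeededAsync helper and the caching details are illustrative, not the exact original listing):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

class StockPrices
{
    private Dictionary<string, decimal> _stockPrices;

    public async Task<decimal> GetStockPriceForAsync(string companyId)
    {
        // Make sure the cache is populated before reading from it.
        await InitializeMapIfNeededAsync();

        _stockPrices.TryGetValue(companyId, out var result);
        return result;
    }

    private async Task InitializeMapIfNeededAsync()
    {
        if (_stockPrices != null)
            return;

        // Simulate fetching the prices from an external source and caching them.
        await Task.Delay(42);
        _stockPrices = new Dictionary<string, decimal> { { "MSFT", 42 } };
    }
}
```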
The GetStockPriceForAsync method ensures that the _stockPrices map is initialized and then gets the value from the cache.

To better understand what the compiler does or can do, let’s try to write a transformation by hand.

Deconstructing an async method by hand

The TPL provides two main building blocks that help us construct and join tasks: task continuation using Task.ContinueWith and the TaskCompletionSource<T> class for constructing tasks by hand.

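Here is a sketch of such a hand-written transformation. The state machine is nested inside StockPrices so it can access its private state; the member names are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class StockPrices
{
    private Dictionary<string, decimal> _stockPrices;

    // Hand-rolled state machine built only from TaskCompletionSource<T> and ContinueWith.
    class GetStockPriceForAsync_StateMachine
    {
        enum State { Start, Step1 }

        private readonly StockPrices _this;
        private readonly string _companyId;
        private readonly TaskCompletionSource<decimal> _tcs = new TaskCompletionSource<decimal>();
        private Task _initializeMapIfNeededTask;
        private State _state = State.Start;

        public GetStockPriceForAsync_StateMachine(StockPrices thisRef, string companyId)
        {
            _this = thisRef;
            _companyId = companyId;
        }

        public Task<decimal> Task => _tcs.Task;

        public void Start()
        {
            try
            {
                if (_state == State.Start)
                {
                    // Block 1: from the start of the method to the first 'await'.
                    _initializeMapIfNeededTask = _this.InitializeMapIfNeededAsync();

                    // Remember where to resume and subscribe Start as the continuation.
                    _state = State.Step1;
                    _initializeMapIfNeededTask.ContinueWith(_ => Start());
                }
                else if (_state == State.Step1)
                {
                    // Block 2: from the first 'await' to the end of the method.
                    if (_initializeMapIfNeededTask.Status == TaskStatus.Canceled)
                    {
                        _tcs.SetCanceled();
                    }
                    else if (_initializeMapIfNeededTask.Status == TaskStatus.Faulted)
                    {
                        _tcs.SetException(_initializeMapIfNeededTask.Exception.InnerException);
                    }
                    else
                    {
                        _this._stockPrices.TryGetValue(_companyId, out var result);
                        _tcs.SetResult(result);
                    }
                }
            }
            catch (Exception e)
            {
                _tcs.SetException(e);
            }
        }
    }

    // The public method only creates and starts the state machine.
    public Task<decimal> GetStockPriceForAsync(string companyId)
    {
        var stateMachine = new GetStockPriceForAsync_StateMachine(this, companyId);
        stateMachine.Start();
        return stateMachine.Task;
    }

    private async Task InitializeMapIfNeededAsync()
    {
        if (_stockPrices != null) return;
        await Task.Delay(42);
        _stockPrices = new Dictionary<string, decimal> { { "MSFT", 42 } };
    }
}
```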
The code is verbose but relatively straightforward. All the logic from GetStockPriceForAsync is moved into the GetStockPriceForAsync_StateMachine.Start method, which uses “continuation passing style”. The general algorithm of our async transformation is to split the original method into pieces at await boundaries. The first block is the code from the start of the method to the first await, the second block is the code from the first await to the second await, the third block runs from the second await to the third one or to the end of the method, and so forth.

Every awaited task now becomes a field of the state machine, and the Start method subscribes itself as a continuation of each of these tasks:

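In the sketch above, that is this fragment:

```csharp
// The awaited task is stored in a field of the state machine...
private Task _initializeMapIfNeededTask;

// ...and Start subscribes itself as the continuation of that task
// after recording which step to resume from.
_state = State.Step1;
_initializeMapIfNeededTask.ContinueWith(_ => Start());
```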
Then, when the task finishes, the Start method is called back and the _state field is checked to understand what stage we’re in. The logic then checks whether the task failed, was canceled, or completed successfully. In the latter case, the state machine moves forward and runs the next block of code. When everything is done, the state machine sets the result of the TaskCompletionSource<T> instance, and the resulting task returned from GetStockPriceForAsync changes its state to completed.

This “implementation” has a few serious drawbacks:

  • Lots of heap allocations: 1 allocation for the state machine, 1 allocation for TaskCompletionSource<T>, 1 allocation for task inside a TaskCompletionSource<T>, 1 allocation for continuation delegate.
  • Lack of “hot path optimizations”: if the awaited task has already finished, there is no reason to create a continuation.
  • Lack of extensibility: the implementation is tightly coupled with the Task-based classes, which makes it impossible to use in other scenarios, like awaiting other types or returning types other than Task or Task<T>.

Now let’s take a look at the actual async machinery to see how these concerns are addressed.

Async machinery

The overall approach the compiler takes for the async method transformation is very similar to the one described above. To get the desired behavior, the compiler relies on the following types:

  1. Generated state machine that acts like a stack frame for an asynchronous method and contains all the logic from the original async method (**).
  2. AsyncTaskMethodBuilder<T> that keeps the resulting task (very similar to the TaskCompletionSource<T> type) and manages the state transitions of the state machine.
  3. TaskAwaiter<T> that wraps a task and schedules continuations of it if needed.
  4. MoveNextRunner that calls the IAsyncStateMachine.MoveNext method in the correct execution context.

The generated state machine is a class in debug mode and a struct in release mode. All the other types (except MoveNextRunner class) are defined in the BCL as structs.

(**) The compiler generates a type name like <YourMethodNameAsync>d__1 for the state machine. To avoid name collisions, the generated name contains characters that are invalid in identifiers, so the type can’t be declared or referenced in user code. But for the sake of simplicity, in all the following examples I will use valid identifiers by replacing the < and > characters with _ and by using slightly easier to understand names.

The original method

The original “asynchronous” method creates a state machine instance, initializes it with the captured state (including the this pointer if the method is not static), and then starts the execution by calling AsyncTaskMethodBuilder<T>.Start with the state machine instance passed by reference.

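With the cleaned-up names, a simplified sketch of what the compiler emits for the method body looks roughly like this:

```csharp
public Task<decimal> GetStockPriceForAsync(string companyId)
{
    var stateMachine = new GetStockPriceForAsync_StateMachine();

    // Capture the arguments and the 'this' pointer in the state machine.
    stateMachine._this = this;
    stateMachine._companyId = companyId;

    // Create the builder and mark the state machine as "not started yet".
    stateMachine._builder = AsyncTaskMethodBuilder<decimal>.Create();
    stateMachine._state = -1;

    // Run the first block of the method; the state machine is passed
    // by reference to avoid copying the struct.
    stateMachine._builder.Start(ref stateMachine);

    return stateMachine._builder.Task;
}
```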
Passing by reference is an important optimization, because a state machine tends to be a fairly large struct (>100 bytes) and passing it by reference avoids a redundant copy.

The state machine

The generated state machine looks complicated, but in essence, it is very similar to the one we created by hand.

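Here is a condensed sketch of it (with the cleaned-up names, nested inside StockPrices so it can access its private fields); it is not the exact compiler output, but it captures the overall shape:

```csharp
struct GetStockPriceForAsync_StateMachine : IAsyncStateMachine
{
    public int _state;                              // -1: running, 0: awaiting, -2: finished
    public AsyncTaskMethodBuilder<decimal> _builder;
    public StockPrices _this;
    public string _companyId;
    private TaskAwaiter _initializeMapIfNeededAwaiter;

    public void MoveNext()
    {
        decimal result;
        try
        {
            TaskAwaiter awaiter;
            if (_state != 0)
            {
                // Block 1: from the start of the original method to the first 'await'.
                awaiter = _this.InitializeMapIfNeededAsync().GetAwaiter();

                // "Hot path": if the awaited task has already finished,
                // skip the continuation and keep running synchronously.
                if (!awaiter.IsCompleted)
                {
                    _state = 0;
                    _initializeMapIfNeededAwaiter = awaiter;

                    // Register this state machine as the continuation of the awaited task.
                    _builder.AwaitUnsafeOnCompleted(ref awaiter, ref this);
                    return;
                }
            }
            else
            {
                // We are being resumed after the awaited task has finished.
                awaiter = _initializeMapIfNeededAwaiter;
                _initializeMapIfNeededAwaiter = default(TaskAwaiter);
                _state = -1;
            }

            // Throws if the awaited task failed or was canceled.
            awaiter.GetResult();

            // Block 2: from the first 'await' to the end of the original method.
            _this._stockPrices.TryGetValue(_companyId, out result);
        }
        catch (Exception exception)
        {
            _state = -2;
            _builder.SetException(exception);
            return;
        }

        _state = -2;
        _builder.SetResult(result);
    }

    void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine stateMachine)
        => _builder.SetStateMachine(stateMachine);
}
```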
Even though the state machine is similar to a hand-crafted one, it has a few very important differences:

1. “Hot path” optimization

Unlike our naive approach, the generated state machine is aware that an awaited task might be completed already.

If the awaited task is already finished (successfully or not) the state machine moves forward to the next step:
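This is the relevant part of the MoveNext sketch above:

```csharp
awaiter = _this.InitializeMapIfNeededAsync().GetAwaiter();

if (!awaiter.IsCompleted)
{
    // "Slow path": the task is still running, so register a continuation and return.
    _state = 0;
    _initializeMapIfNeededAwaiter = awaiter;
    _builder.AwaitUnsafeOnCompleted(ref awaiter, ref this);
    return;
}

// "Hot path": the task has already finished (successfully or not),
// so stay on the current thread and run the next block right away.
awaiter.GetResult();
```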

This means that if all awaited tasks are already completed, the entire state machine stays on the stack. Even today, an async method can have an extremely small memory overhead if all awaited tasks are already completed or complete synchronously. The only remaining allocation would be for the task itself!

2. Error handling

There is no special logic that covers the faulted or canceled state of the awaited tasks. The state machine calls awaiter.GetResult(), which throws a TaskCanceledException if the task was canceled, or another exception if the task failed. This is an elegant solution that works fine here, because GetResult() is a bit different in terms of error handling than task.Wait() or task.Result.

Both task.Wait() and task.Result throw an AggregateException even when there is only a single exception that caused the task to fail. The reason for this is pretty simple: a task can represent not only IO-bound operations, which usually have just one failure, but the result of a parallel computation as well. In the latter case, the operation can have more than one error, and AggregateException is designed to carry all these errors in one place.

But the async/await pattern is designed specifically for asynchronous operations that usually have at most one error. So the language authors decided that it makes more sense for awaiter.GetResult() to “unwrap” the AggregateException and throw just the first failure. This design decision is not perfect, and in one of the next posts we’ll see where this abstraction can leak.
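A small sketch shows the difference:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        Task failed = Task.FromException(new InvalidOperationException("boom"));

        try
        {
            // Synchronous wait: the failure is wrapped.
            failed.Wait();
        }
        catch (AggregateException e)
        {
            Console.WriteLine(e.InnerException.GetType().Name); // InvalidOperationException
        }

        try
        {
            // 'await' calls GetResult(), which unwraps the AggregateException
            // and throws the first inner exception.
            await failed;
        }
        catch (InvalidOperationException e)
        {
            Console.WriteLine(e.Message); // boom
        }
    }
}
```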

The async state machine represents just one piece of the puzzle. To understand the whole picture we need to know how a state machine instance interacts with TaskAwaiter<T> and AsyncTaskMethodBuilder<T>.

How are the different pieces glued together?

(Diagram: the async method sequence — how the state machine, the AsyncTaskMethodBuilder, the TaskAwaiter and the MoveNextRunner interact.)

The chart looks overly complicated, but each piece is well-designed and plays an important role. The most interesting collaboration happens when an awaited task is not finished (marked with the brown rectangle in the diagram):

  • The state machine calls __builder.AwaitUnsafeOnCompleted(ref awaiter, ref this); to register itself as the task’s continuation.
  • The builder makes sure that when the task is finished, the IAsyncStateMachine.MoveNext method gets called:
    • The builder captures the current ExecutionContext and creates a MoveNextRunner instance to associate it with the current state machine instance. Then it creates an Action instance from MoveNextRunner.Run that will move the state machine forward under the captured execution context.
    • The builder calls TaskAwaiter.UnsafeOnCompleted(action), which schedules the given action as a continuation of the awaited task.

When the awaited task completes, the given callback is called and the state machine runs the next code block of the asynchronous method.
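Here is a heavily simplified sketch of that collaboration; this is not the actual BCL code, and the names and signatures are only illustrative:

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading;

// Associates a captured execution context with a state machine instance.
sealed class MoveNextRunner
{
    private static readonly ContextCallback InvokeMoveNext =
        state => ((IAsyncStateMachine)state).MoveNext();

    private readonly ExecutionContext _context;
    private readonly IAsyncStateMachine _stateMachine;

    public MoveNextRunner(ExecutionContext context, IAsyncStateMachine stateMachine)
    {
        _context = context;
        _stateMachine = stateMachine;
    }

    // Moves the state machine forward under the captured execution context.
    public void Run()
    {
        if (_context != null)
            ExecutionContext.Run(_context, InvokeMoveNext, _stateMachine);
        else
            _stateMachine.MoveNext();
    }
}

static class BuilderSketch
{
    // Roughly what AwaitUnsafeOnCompleted does: capture the execution context,
    // wrap MoveNext into an Action and schedule it as the task's continuation.
    public static void AwaitUnsafeOnCompleted<TAwaiter, TStateMachine>(
        ref TAwaiter awaiter, ref TStateMachine stateMachine)
        where TAwaiter : ICriticalNotifyCompletion
        where TStateMachine : IAsyncStateMachine
    {
        // Note: this boxes the struct state machine; the real builder is more careful here.
        var runner = new MoveNextRunner(ExecutionContext.Capture(), stateMachine);
        Action continuation = runner.Run;

        // The awaiter invokes the callback when the awaited task completes.
        awaiter.UnsafeOnCompleted(continuation);
    }
}
```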

Execution Context

One may wonder: what is the execution context and why do we need all this complexity?

In the synchronous world, each thread keeps ambient information in thread-local storage. It can be security-related information, culture-specific data, or something else. When three methods are called sequentially in one thread, this information flows naturally between all of them. But this is no longer true for asynchronous methods. Each “section” of an asynchronous method can run on a different thread, which makes thread-local information unusable.

Execution context keeps the information for one logical flow of control even when it spans multiple threads.

Methods like Task.Run or ThreadPool.QueueUserWorkItem do this automatically. Task.Run method captures ExecutionContext from the invoking thread and stores it with the Task instance. When the TaskScheduler associated with the task runs a given delegate, it runs it via ExecutionContext.Run using the stored context.

We can use AsyncLocal<T> to demonstrate this concept in action:

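For instance, the following sketch (a stand-in for the original sample) stores a value in an AsyncLocal<string> and reads it back from a Task.Run delegate and from a continuation:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    private static readonly AsyncLocal<string> Context = new AsyncLocal<string>();

    static async Task Main()
    {
        Context.Value = "42";
        Console.WriteLine($"Before Task.Run: {Context.Value}");

        await Task.Run(() =>
        {
            // Task.Run captured the execution context at the call site,
            // so the AsyncLocal value is visible here.
            Console.WriteLine($"Inside Task.Run: {Context.Value}");
        })
        .ContinueWith(_ =>
        {
            // ...and it flows into the continuation as well.
            Console.WriteLine($"Inside ContinueWith: {Context.Value}");
        });
    }
}
```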
In these cases, the execution context flows through Task.Run and then to the Task.ContinueWith method, so if you run this code you’ll see the same value printed in the calling method, inside the Task.Run delegate, and inside the continuation.

But not all methods in the BCL automatically capture and restore the execution context. Two exceptions are TaskAwaiter<T>.UnsafeOnCompleted and AsyncTaskMethodBuilder<T>.AwaitUnsafeOnCompleted. It may look weird that the language authors decided to add “unsafe” methods and flow the execution context manually using AsyncTaskMethodBuilder<T> and MoveNextRunner instead of relying on built-in facilities like AwaitTaskContinuation. I suspect there were performance reasons or other restrictions on the existing implementation.

Here is an example that demonstrates the difference:

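One way to see a similar contrast between the “safe” and “unsafe” flavors (this sketch uses ThreadPool.QueueUserWorkItem versus ThreadPool.UnsafeQueueUserWorkItem and may differ from the original sample):

```csharp
using System;
using System.Threading;

class Program
{
    private static readonly AsyncLocal<string> Context = new AsyncLocal<string>();

    static void Main()
    {
        Context.Value = "42";

        // QueueUserWorkItem captures and restores the execution context,
        // so the AsyncLocal value flows into the callback.
        ThreadPool.QueueUserWorkItem(
            _ => Console.WriteLine($"QueueUserWorkItem: '{Context.Value}'"));

        // UnsafeQueueUserWorkItem does not flow the execution context,
        // so the AsyncLocal value is lost in the callback.
        ThreadPool.UnsafeQueueUserWorkItem(
            _ => Console.WriteLine($"UnsafeQueueUserWorkItem: '{Context.Value}'"), null);

        Thread.Sleep(100); // give the callbacks a chance to run
    }
}
```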
If you run it, the AsyncLocal<T> value is visible in the callback scheduled via the “safe” method but not in the callback scheduled via the “unsafe” one, because the latter does not flow the execution context.

Conclusion

  • Async methods are very different from synchronous methods.
  • The compiler generates a state machine for each async method and moves all the logic of the original method into it.
  • The generated code is highly optimized for a synchronous scenario: if all awaited tasks are completed, then the overhead of an async method is minimal.
  • If an awaited task is not completed, the logic relies on a lot of helper types to get the job done.

References

If you want to learn more about execution context, I highly recommend the following two blog posts:

Next: we’ll explore the extensibility model of asynchronous methods in C#.
