.NET Parallel Programming

All about Async/Await, System.Threading.Tasks, System.Collections.Concurrent, System.Linq, and more…

Tasks and the APM Pattern

The Asynchronous Programming Model (APM) in the .NET Framework has been around since .NET 1.0 and is the most common pattern for asynchrony in the Framework.  Even if you’re not familiar with the name, you’re likely familiar with the core of the pattern.  For a given synchronous operation Xyz, the asynchronous version ...
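For a flavor of how the pattern surfaces and how it composes with Tasks, here is a minimal sketch of wrapping an APM Begin/End pair (Stream.BeginRead/EndRead) in a Task via Task.Factory.FromAsync; the file name and buffer size are just placeholders for illustration.

    using System;
    using System.IO;
    using System.Threading.Tasks;

    class ApmSample
    {
        static void Main()
        {
            // "data.bin" is a placeholder path for this sketch.
            using (var stream = new FileStream("data.bin", FileMode.Open, FileAccess.Read,
                                               FileShare.Read, 4096, FileOptions.Asynchronous))
            {
                var buffer = new byte[4096];

                // FromAsync wraps the BeginRead/EndRead pair in a Task<int>,
                // so the APM operation composes with the rest of the TPL.
                Task<int> readTask = Task<int>.Factory.FromAsync(
                    stream.BeginRead, stream.EndRead, buffer, 0, buffer.Length, null);

                Console.WriteLine("Read {0} bytes", readTask.Result);
            }
        }
    }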

Mechanisms for Creating Tasks

The core entity in the Task Parallel Library around which everything else revolves is System.Threading.Tasks.Task.  The most common way of creating a Task will be through the StartNew method on the TaskFactory class, a default instance of which is exposed through a static property on Task, e.g. var t = Task.Factory.StartNew(() => { ...
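As a minimal sketch of that most common creation pattern (the delegate bodies here are just illustrative):

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    class StartNewSample
    {
        static void Main()
        {
            // StartNew queues the delegate to the default scheduler (the ThreadPool).
            var t = Task.Factory.StartNew(() => Console.WriteLine("Hello from a Task"));

            // The generic form produces a Task<TResult> that carries a result.
            var sum = Task.Factory.StartNew(() => Enumerable.Range(1, 100).Sum());

            t.Wait();
            Console.WriteLine("Sum = {0}", sum.Result);
        }
    }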

CLR 4 – Inside the ThreadPool

As we’ve mentioned previously, the .NET ThreadPool has undergone some serious renovations in .NET 4, improvements on which the Task Parallel Library and PLINQ both rely.  Erika Parsons and Eric Eilebrecht are the PM and developer on the CLR team for the ThreadPool, and they’re featured in a great new Channel9 video covering ...

Tasks and Unhandled Exceptions

Prior to the .NET Framework 2.0, unhandled exceptions were largely ignored by the runtime.  For example, if a work item queued to the ThreadPool threw an exception that went unhandled by that work item, the ThreadPool would eat that exception and continue on its merry way.  Similarly, if a finalizer running on the finalizer thread ...
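As a small sketch of how a task's exception gets observed, rather than left unhandled (the thrown exception is a stand-in):

    using System;
    using System.Threading.Tasks;

    class ExceptionSample
    {
        static void Main()
        {
            Task t = Task.Factory.StartNew(() =>
            {
                throw new InvalidOperationException("boom");
            });

            try
            {
                // Waiting on (or accessing the Result of) a faulted task rethrows
                // its exception wrapped in an AggregateException, marking it observed.
                t.Wait();
            }
            catch (AggregateException ae)
            {
                foreach (var ex in ae.InnerExceptions)
                    Console.WriteLine(ex.Message);
            }
        }
    }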

ParallelOptions.MaxDegreeOfParallelism vs PLINQ’s WithDegreeOfParallelism

We exert a good deal of effort ensuring that the APIs we provide are consistent within Parallel Extensions as well as with the rest of the .NET Framework.  This consistency spans many angles, including behavior and general design, but also naming.  So when there are slight differences in naming, it raises questions.  One occurrence of such a ...
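To make the naming difference concrete, here is a minimal sketch of the two knobs side by side; the data and the Process method are placeholders:

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    class DopSample
    {
        static void Main()
        {
            int[] data = Enumerable.Range(0, 1000).ToArray();

            // Parallel.For/ForEach: concurrency is bounded via ParallelOptions.
            var options = new ParallelOptions { MaxDegreeOfParallelism = 2 };
            Parallel.ForEach(data, options, item => Process(item));

            // PLINQ: the degree of parallelism is set with WithDegreeOfParallelism.
            var results = data.AsParallel()
                              .WithDegreeOfParallelism(2)
                              .Select(item => Process(item))
                              .ToArray();
        }

        static int Process(int x) { return x * x; }
    }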

Partitioning in PLINQ

Here’s a simple way to look at it.  On a 4-core machine, take 4 million elements, divide this into 4 partitions of 1 million elements each, and give each of the 4 cores a million elements of data to process.  Assuming that the data and the processing of the data are uniform, that all of the cores operate with the same ...
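As a rough sketch of how partitioning can be influenced (the array size and the Work delegate are placeholders), one can compare PLINQ's default partitioning of an array with an explicitly load-balancing partitioner, which hands out smaller chunks on demand when per-element cost varies:

    using System;
    using System.Collections.Concurrent;
    using System.Linq;

    class PartitionSample
    {
        static void Main()
        {
            int[] data = new int[4000000];

            // Default partitioning: PLINQ picks a strategy for the array
            // and splits the input across the available cores.
            var sum1 = data.AsParallel().Select(Work).Sum();

            // Custom partitioning: a load-balancing partitioner hands out
            // chunks on demand rather than statically up front.
            var sum2 = Partitioner.Create(data, loadBalance: true)
                                  .AsParallel()
                                  .Select(Work)
                                  .Sum();

            Console.WriteLine("{0} {1}", sum1, sum2);
        }

        static int Work(int x) { return x + 1; }
    }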