November 29th, 2007

Task Parallel Library changes since the MSDN Magazine article

Stephen Toub - MSFT
Partner Software Engineer

Back in the October 2007 issue of MSDN Magazine, we published an article on the beginning stages of what has become the Task Parallel Library (TPL) that’s part of the Parallel Extensions to the .NET Framework.  While the core of the library and the principles behind it have remained the same, as with any piece of software in the early stages of its lifecycle, the design changes frequently. In fact, one of the reasons we’ve put out an early community technology preview (CTP) of Parallel Extensions is to solicit your feedback on the APIs so that we know if we’re on the right track, how we may need to change them further, and so forth.  Aspects of the API have already changed since the article was published, and here we’ll go through the differences.

First, the article refers to System.Concurrency.dll and the System.Concurrency namespace.  We’ve since changed the name of the DLL to System.Threading.dll, and TPL is now contained in two different namespaces, System.Threading and System.Threading.Tasks.  AggregateException, the higher-level Parallel class, and Parallel’s supporting types (ParallelState and ParallelState<TLocal>) are contained in System.Threading, while all of the lower-level task parallelism types are in System.Threading.Tasks.
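
In practice, that simply means referencing the CTP’s System.Threading.dll and importing both namespaces:

using System.Threading;        // AggregateException, Parallel, ParallelState, ParallelState<TLocal>
using System.Threading.Tasks;  // Task, Future<T>, TaskManager, and the other lower-level task types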

Next, the second page of the article states “if any exception is thrown in any of the iterations, … the first thrown exception is rethrown in the calling thread.”  The semantics here have changed such that we now have a common exception handling model across all of the Parallel Extensions, including PLINQ.  If an exception is thrown, we still cancel all unstarted iterations, but rather than just rethrowing one exception, we bundle all thrown exceptions into an AggregateException container exception and throw that new exception instance.  This allows developers to see all errors that occurred, which can be important for reliability.  It also preserves the stack traces of the original exceptions.
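
Here’s a minimal sketch of what this looks like to calling code; it assumes AggregateException exposes the bundled exceptions through an InnerExceptions collection:

try
{
    Parallel.For(0, 100, i =>
    {
        if (i % 7 == 0) throw new InvalidOperationException("unhappy iteration " + i);
    });
}
catch (AggregateException ae)
{
    // Every exception thrown by the loop body is available here,
    // each with its original stack trace preserved.
    foreach (Exception inner in ae.InnerExceptions)
        Console.WriteLine(inner);
}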

In the section on aggregation, the article talks about the Parallel.Aggregate API.  Since the article was published, we’ve dropped this method from the Parallel class.  Why?  Because a) PLINQ already supports parallel aggregation through the ParallelEnumerable.Aggregate extension method, and b) aggregation can be implemented with little additional effort on top of Parallel.For if PLINQ’s support for aggregation isn’t enough.  Consider the example shown in the article:

int sum = Parallel.Aggregate(0, 100, 0,
                   delegate(int i) { return isPrime(i) ? i : 0; },
                   delegate(int x, int y) { return x + y; });

We can implement this with PLINQ as follows:

int sum = (from i in ParallelEnumerable.Range(0, 100)
           where isPrime(i)
           select i).Sum();

Of course, this benefits from LINQ and PLINQ already supporting a Sum method.  We can do the same thing using the Aggregate method for general reduction support:

int sum = (from i in ParallelEnumerable.Range(0, 100)
           where isPrime(i)
           select i).Aggregate((x, y) => x + y);

If we prefer not to use PLINQ, we can do a similar operation using Parallel.For (in fact, this is very similar to how Parallel.Aggregate was implemented internally):

int sum = 0;
Parallel.For(0, 100, () => 0, (i, state) =>
{
    if (isPrime(i)) state.ThreadLocalState += i;
},
partialSum => Interlocked.Add(ref sum, partialSum));

Here, we’re taking advantage of the overload of Parallel.For that supports thread-local state.  On each thread involved in the Parallel.For loop, the thread-local state is initialized to 0 and is then incremented by the value of each prime number processed by that thread.  After the thread has finished processing all of the iterations assigned to it, it uses an Interlocked.Add call to add its partial sum into the total sum.  In fact, if you found yourself needing Aggregate functionality a lot, you could generalize this into your own ParallelAggregate method, something like the following:

static T ParallelAggregate<T>(
    int fromInclusive, int toExclusive, T seed,
    Func<int, T> selector, Func<T, T, T> aggregator)
{
    // Note: each thread's local state starts at seed, so seed should be an
    // identity value for the aggregation (e.g., 0 for addition).
    T result = seed;
    object aggLock = new object();
    Parallel.For(fromInclusive, toExclusive, () => seed, (i, state) =>
    {
        state.ThreadLocalState =
            aggregator(state.ThreadLocalState, selector(i));
    },
    partial => { lock (aggLock) result = aggregator(partial, result); });
    return result;
}

With this in place, you can use ParallelAggregate just as Parallel.Aggregate is used in the article:

int sum = ParallelAggregate(0, 100, 0,
                   delegate(int i) { return isPrime(i) ? i : 0; },
                   delegate(int x, int y) { return x + y; });

Moving on to the Task class, we’ve made some fairly substantial changes to the public-facing API.  Here’s a summary of the differences from what’s described in the article:

  • As shown in the article, Task does not have a base class.  In the CTP, Task now derives from TaskCoordinator, a base class which provides all of the waiting and cancelation functionality.
  • The Task class described in the article exposes a constructor that accepts an Action delegate, and during construction the Task is queued for execution.  We changed this so that Task doesn’t expose any constructors; tasks are now created using the Task.Create static method (there’s a short sketch after this list).  We felt this factory approach was clearer than the constructor approach… please let us know what you think regarding this change.  Were we right?  Is one approach better than the other?  Would it be a positive or a negative if both approaches were supported?
  • The type of the delegate used by Task.Create is Action<Object> rather than just Action, and overloads of Task.Create support providing a state argument that will be passed to the Task’s action.
  • The article states that calling Cancel on a Task will cancel that task and all of the tasks that task created.  We’ve modified this approach so that the model is now opt-in, meaning that by default tasks created by a task will not be canceled when the parent task is canceled.  That behavior can be opted into, however, by specifying the RespectParentCancelation flag when the child Task instances are created.
  • The article describes a Task<T> type that derives from Task.  This has been renamed Future<T>.  As with Task, it no longer exposes a constructor that accepts a Func<T> delegate.  Instead, Future<T> provides a static Create factory method.  In addition, to better support type inference, we provide a non-generic Future class that exposes generic static Create<T> methods.  This makes it possible to create a Future<T> where T is an anonymous type returned from the provided Func<T>.
  • The article describes the ReplicableTask class as deriving from Task.  For the CTP, we’ve gotten rid of ReplicableTask, but we’ve kept the concept.  To create a Task as a replicable task, provide the TaskCreationOptions.SelfReplicating value to the Task.Create method when creating the Task.
  • The article describes the exception handling model for self-replicating tasks being one where only one exception is rethrown even if multiple exceptions were thrown from multiple invocations of the replicable task.  As with the loop exception handling model described earlier, we now collect all thrown exceptions into one new AggregateException and throw that exception instead of picking one of the thrown exceptions at random.
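
To make that a bit more concrete, here’s a rough sketch that exercises a few of the items above.  It’s based only on the API shape described in this post, so treat the exact overloads as approximations; DoWork and the values passed around are just placeholders.

// Sketch only: overload shapes are approximations of the CTP API described above.

// Create a task via the static factory, passing a state object to its Action<object>.
Task t = Task.Create(state => Console.WriteLine("processing " + state), "some state");

// Child tasks are not canceled with their parent unless they opt in.
Task parent = Task.Create(_ =>
{
    Task child = Task.Create(
        __ => DoWork(),
        TaskCreationOptions.RespectParentCancelation);
});

// The non-generic Future class lets T be inferred from the delegate,
// which also makes anonymous types possible.
var f = Future.Create(() => new { Value = 42, IsPrime = false });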

In addition to changes to the Task class, there have also been changes to the TaskManager class described in the article:

  • In the article, TaskManager exposed two constructors, one that’s parameterless and one that accepts an integer representing the maximum concurrency level for the TaskManager.  We’ve modified that second constructor to instead accept a TaskManagerPolicy, a type which represents the allowed behavior of the TaskManager, providing information such as desired concurrency level and execution context flow behavior.
  • Similarly, rather than exposing a MaxConcurrentThreads property, TaskManager now exposes a TaskManagerPolicy property, which allows the current policy to be retrieved and a new policy to be provided dynamically.
  • The article shows Task providing a constructor that accepts a TaskManager in order to associate that Task with a specific TaskManager.  Such behavior is still possible, but the TaskManager is now provided to overloads of Task.Create (see the sketch below).
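
Putting those pieces together, usage ends up looking roughly like the following.  This is only a sketch: the parameterless TaskManagerPolicy constructor and the exact Task.Create overload shown here are assumptions, so check the CTP documentation for the real signatures.

// Sketch only: the constructor and overload shapes here are assumptions.
TaskManagerPolicy policy = new TaskManagerPolicy();   // describes concurrency level, context flow, etc.
TaskManager manager = new TaskManager(policy);

// Associate a task with that specific TaskManager via a Task.Create overload.
Task t = Task.Create(state => Console.WriteLine("running under a custom manager"), manager);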

That sums up the changes we’ve made.  Even with these changes, the article should still provide you with a good overview of the library and its intended usage.  For more information, check out the documentation that’s included with the CTP, and stay tuned to this blog!  As mentioned before, we’re very interested in your feedback on the API, so please let us know what you think (the good and the bad).

Author

Stephen Toub is a developer on the .NET team at Microsoft.
