Feedback requested: TaskManager shutdown, Fair scheduling
One of the primary reasons we’ve released CTPs of Parallel Extensions is to solicit feedback on the design and functionality it provides. Does it provide all of the APIs you need to get your job done? Are there scenarios you wish the APIs supported but that you currently have to work around in clunky ways? And so forth. We’ve received some terrific feedback thus far: we’ve already made changes based on it, we’re currently making changes based on it, and we’ll continue making changes based on it moving forward.
We continually have discussions internally about additional support we could provide. Frequently these discussions result in our needing to know more about our customers’ needs. There’s a practically unlimited amount of functionality we could bake into Parallel Extensions, but each additional piece not only requires design, development, testing, and support; it can also complicate other aspects of the design, potentially slow down other primary scenarios, and so forth. In order to focus our efforts, we need feedback.
Two such discussions occurred recently, and any feedback you provide would be useful in our deciding how to move forward.
The first is about shutting down a TaskManager instance. In the current implementation, we only support one kind of shutdown, triggered through a call to Dispose on TaskManager. This is a synchronous invocation that blocks until all of the Tasks previously scheduled to the TaskManager have completed. However, there are other semantics we could potentially implement. For example, we could provide an asynchronous shutdown option: you would call Shutdown without blocking, and the TaskManager would be disposed of only once all of the tasks scheduled to it had completed. Or, with a bit more internal reworking, we could support automatically canceling all of the tasks scheduled to the TaskManager. Again, that could be done synchronously (canceling all tasks and then waiting for all of them) or asynchronously (canceling all tasks and then cleaning up after the TaskManager once any currently executing tasks had completed). How useful would such capabilities be to you? In what scenarios would you find them useful or, more importantly, necessary? Would such capabilities be dangerous at all in your scenarios?
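To make the asynchronous option concrete, here’s one way you could approximate it today on top of the existing synchronous Dispose. This is purely a hypothetical sketch, not a proposed API: BeginShutdown and the onCompleted callback are names we’ve made up for illustration, and the sketch assumes the CTP’s TaskManager type with its blocking Dispose semantics.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks; // Parallel Extensions CTP

public static class TaskManagerExtensions
{
    // Hypothetical helper: approximates an asynchronous shutdown by
    // running the blocking Dispose call on a ThreadPool thread and
    // invoking a callback once all scheduled tasks have completed.
    public static void BeginShutdown(this TaskManager manager, Action onCompleted)
    {
        if (manager == null) throw new ArgumentNullException("manager");
        ThreadPool.QueueUserWorkItem(delegate
        {
            manager.Dispose(); // blocks until all scheduled tasks finish
            if (onCompleted != null) onCompleted();
        });
    }
}
```

Of course, a workaround like this still burns a thread waiting on the shutdown, which is part of why first-class support in the TaskManager itself could be more efficient.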
The second is about scheduling order. One of the benefits that a work-stealing scheduler (the kind of scheduler employed by the Task Parallel Library) provides is a distribution of work across all cores, where each core prefers to schedule new work to and pull work from its local queue(s). This scheduling can be made extremely efficient, and it can improve locality and the like by using LIFO ordering, meaning that the task most recently scheduled is the one that executes first. This is separate from work scheduled from other threads (such as an application’s main thread), which will typically still be scheduled in a generally FIFO order. The question, then, is whether there are scenarios where you always want that FIFO-ish order, regardless of where the work is scheduled from. Such an option would likely decrease performance in some key scenarios, but it would also be more fair in terms of the order in which work gets executed. Do you have any scenarios that would require such a PreferFairness option? We’d love to hear about them if you do.
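If the LIFO-local / FIFO-steal distinction isn’t familiar, here’s a conceptual sketch of the idea (this is not the TPL’s actual implementation, which uses lock-free structures; the types and names here are invented for illustration). The worker that owns the deque pushes and pops at one end, getting LIFO order and good cache locality, while thieves take from the other end, getting the oldest, FIFO-ish work:

```csharp
using System.Collections.Generic;

// Conceptual sketch only: a per-worker double-ended queue. The owning
// worker pushes and pops at the tail (LIFO, favoring recently created,
// cache-warm work); other workers steal from the head (FIFO, taking
// the oldest queued work).
public class WorkStealingDeque<T>
{
    private readonly LinkedList<T> _items = new LinkedList<T>();
    private readonly object _sync = new object();

    // Called by the owning worker when it creates new work.
    public void LocalPush(T item)
    {
        lock (_sync) _items.AddLast(item);
    }

    // Called by the owning worker: LIFO, takes the most recent item.
    public bool TryLocalPop(out T item)
    {
        lock (_sync)
        {
            if (_items.Count == 0) { item = default(T); return false; }
            item = _items.Last.Value;
            _items.RemoveLast();
            return true;
        }
    }

    // Called by an idle worker stealing work: FIFO, takes the oldest item.
    public bool TrySteal(out T item)
    {
        lock (_sync)
        {
            if (_items.Count == 0) { item = default(T); return false; }
            item = _items.First.Value;
            _items.RemoveFirst();
            return true;
        }
    }
}
```

A PreferFairness-style option would essentially bypass the LIFO local end for marked work, routing it through the FIFO path so that older work isn’t starved by newly created tasks.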