Since the release of the December 2007 CTP of Parallel Extensions, we’ve received several questions about whether Parallel Extensions can be used from C++/CLI. In short, yes, it can! (It can be used from any .NET language; that’s one of the beauties of providing this functionality through a library.) To demonstrate, we included in the June 2008 CTP an example of doing just that, using Parallel Extensions to compute the Mandelbrot fractal. (To use Parallel Extensions in your own C++ project, you’ll need to add references to both System.Threading.dll and System.Core.dll.)
The sample application loads and displays the Mandelbrot fractal. You can click-and-drag to move around in the fractal, and on each move the image will be redrawn. However, as soon as you move again, the previous rendering will be canceled and a new rendering will begin; this keeps the UI responsive. You can also double-click with the left mouse button to zoom in or double-click with the right mouse button to zoom out. The application starts out in sequential rendering mode, signified by the "(1x)" in the title bar of the app. If you press the ‘p’ key while the application is running, it will switch over into parallel mode, and the title bar will display "(2x)" on a dual-core, "(4x)" on a quad-core, and so forth. The title bar also displays the time it took to perform the last rendering, and the image will be re-rendered every time you press ‘p’ or ‘s’ (for sequential), so you can quickly flip back and forth between the two modes to see the difference in rendering times.
While Parallel Extensions can be used from C++/CLI, it’s admittedly not the best experience due to C++’s current lack of support for lambda expressions and anonymous methods. In C# I could write a method something like the following to render an image (this is just pseudo-code):
static Bitmap Create(MandelbrotPosition position, int width, int height)
{
    Bitmap bmp = new Bitmap(width, height);
    // Each iteration renders one row; the anonymous method captures
    // bmp, width, and position from the enclosing scope.
    // (Pseudo-code: Bitmap.SetPixel is not safe for concurrent use
    // from multiple threads; the real sample writes pixel data differently.)
    Parallel.For(0, height, delegate(int j)
    {
        for (int i = 0; i < width; i++)
        {
            bmp.SetPixel(i, j, RenderPixel(i, j, position));
        }
    });
    return bmp;
}
This works because C# supports closures: the compiler captures references to variables like width, height, position, and bmp so that they can be used inside of the delegate. In C++/CLI, in contrast, I need to write this more like the following:
static Bitmap^ Create(MandelbrotPosition position, int width, int height)
{
    Bitmap^ bmp = gcnew Bitmap(width, height);
    RenderImageData^ rid =
        gcnew RenderImageData(bmp, position, width, height);
    Parallel::For(0, height,
        gcnew Action<int>(rid, &RenderImageData::RenderRow));
    return bmp;
}
Not shown here, RenderImageData is a class I’ve defined that stores the various captured values (the bitmap, position, width, and height) and that exposes a RenderRow method implementing the inner loop, which renders a single row. Parallel::For can then invoke RenderRow in parallel through the delegate passed to it.
Some of you who are familiar with OpenMP will note that this particular task of parallelizing a loop is OpenMP’s bread-and-butter and doesn’t require these funky gyrations to manually implement a closure, since the compiler handles much of it for you. As it turns out, the nice folks in Visual C++ land implemented support for OpenMP 2.0 in C++/CLI, so you can actually use OpenMP from managed code. To do so, you need to throw the /openmp switch in your project, which can be done by setting the OpenMP Support value to Yes in the property pages for the project under Configuration Properties | C/C++ | Language. Under Configuration Properties | General, you’ll also need to set Common Language Runtime support to Common Language Runtime support (/clr). Now, on a for loop (like the one used to render the image), you can add an OpenMP pragma like:
#pragma omp parallel for
for (int j = 0; j < height; j++) { … }
and the system will automatically parallelize the loop (you may need to help the system find the OpenMP runtime library VCOMP90.DLL, such as by copying that DLL into the directory containing your binary). Of course, the OpenMP approach has its downsides, too, besides all of the configuration we just went through. For one, OpenMP doesn’t provide an easy mechanism to jump out of a parallelized loop, such as by using a break statement. This makes it difficult to cancel a rendering mid-loop, as we do with Parallel::For. Additionally, OpenMP doesn’t currently provide good support for task parallelism as is available through the System::Threading::Tasks namespace in Parallel Extensions, or for the plethora of new coordination and synchronization constructs available in System::Threading and System::Threading::Collections. All of these Parallel Extensions constructs should be usable from C++/CLI.
For more details on this application, download and install the June 2008 CTP. The Mandelbrot sample is installed by default at %PROGRAMFILES%\Microsoft Parallel Extensions Jun08 CTP\Samples\MandelbrotFractals. We’d be very interested in hearing about your experiences using Parallel Extensions from C++/CLI, so please send any and all feedback our way, especially if you have suggestions for things we could do to improve the scenario.