This week at the Supercomputing 2010 conference in New Orleans, we announced Service Pack 1 for Windows HPC Server 2008 R2, which integrates Windows Azure compute cycles with HPC Server. Often, HPC customers are interested in augmenting their on-premises HPC systems with cloud computing resources when bursty workloads create high demand for compute cycles that can’t be met by their on-premises resources. Other customers have occasional, large compute demands that are infrequent and don’t justify owning and maintaining a private cluster. For these customers, the cloud is an ideal, cost-effective solution.
Windows HPC Server makes it easy to integrate with Azure through a simple wizard. Just enter your Azure credentials and HPC Server connects to Windows Azure, provisioning cloud resources as you need them and shutting those resources down when you’re finished. At Supercomputing 2010, the HPC team is demonstrating offloading parallel applications built with Visual Studio, as well as large parallel Microsoft Excel computations, to the cloud through Windows HPC Server. Cluster users can submit jobs to the cluster as they did before, and HPC Server will determine whether the jobs should run on the on-premises cluster, on cycle-scavenged desktops, or in Windows Azure.
The HPC Server team also completed its first petaFLOP supercomputing run using the Tsubame 2.0 cluster at the Tokyo Institute of Technology, joining an exclusive club of fewer than ten petaFLOP-scale supercomputers in the world. Windows HPC Server achieved 1.127 petaFLOPS on 1,296 compute nodes, demonstrating that Windows can scale to the most demanding supercomputing workloads. While this is quite an achievement, our goal is to bring HPC to the technical computing mainstream, providing the tools, platforms, and services that support the computational needs of an increasingly broad group of scientists, engineers, and researchers. The petaFLOP run demonstrates scale, and the cloud provides the greatest opportunity for the democratization of high performance computing.
You can learn more here.