October 5th, 2021

GPU accelerated ML workflows broadly available in the Windows Subsystem for Linux

Clarke Rahrig
Senior Product Manager

Support for GPU accelerated machine learning (ML) training within the Windows Subsystem for Linux (WSL) is now broadly available with the release of Windows 11. Over the past year, our engineering teams have listened to feedback and co-engineered with AMD, Intel, and NVIDIA to enable GPU access within WSL in support of data scientists, ML engineers, and developers.

Read on for more details about using WSL with your existing ML workflows.

NVIDIA CUDA support for existing ML workflows in WSL

Since the public preview launch, one of our key focuses has been ensuring that professional ML workloads are fully supported in WSL. This includes supporting the popular Linux-based tools, libraries, and frameworks used in the everyday workflow of a data scientist or ML engineer. Many of these workflows take advantage of GPU acceleration through NVIDIA CUDA.

Over the past year of co-engineering, NVIDIA has steadily improved the CUDA experience within WSL. If you are interested in learning more about NVIDIA's specific investments in WSL, take a look at their latest WSL-focused blog post.

“Modern AI and data science workloads rely on GPU-accelerated computing. Bringing CUDA and the Windows Subsystem for Linux to the data science community makes GPU accelerated computing for machine learning now even more accessible to Windows customers.” – Chris Lamb, VP of Computing Software Platforms, NVIDIA

Want to see how WSL, NVIDIA CUDA, VS Code, Docker Containers, and Azure all come together to train a machine learning model to play a retro video game? Check out Craig Loewen’s end-to-end walkthrough from a recent VS Code Livestream.

ML training across GPU vendors with DirectML in WSL

DirectML is a hardware-agnostic ML library from the DirectX family that enables GPU accelerated ML training and inferencing on any DirectX 12 capable GPU. By coupling DirectML as a backend to TensorFlow, we are opening the opportunity for a larger set of Windows customers to take advantage of GPU accelerated ML training.
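To make this concrete, here is a minimal sketch of how you might confirm that TensorFlow can see a DirectML device once the tensorflow-directml package is installed. The "DML" device type shown here is an assumption based on the package's documentation rather than a detail covered in this post.

# Minimal sketch: list the devices TensorFlow-DirectML exposes.
# Assumes the tensorflow-directml package (built on TensorFlow 1.15)
# is installed; DirectML-capable GPUs are assumed to appear with the
# "DML" device type, per the package's documentation.
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    print(device.name, device.device_type)
# Expected output includes the CPU device plus an entry such as
# "/device:DML:0" on a DirectX 12 capable GPU.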

“AMD looks forward to supporting even more student and professional scenarios in machine learning that can fully leverage GPU acceleration on AMD hardware in Windows and within the Windows Subsystem for Linux.” – Andrej Zdravkovic, Senior Vice President Software Development, AMD

Through close collaboration with AMD, Intel, and NVIDIA, TensorFlow-DirectML is now generally available, supporting a wide range of workloads from different ML application domains.

YOLOv3 sample model training in WSL using TensorFlow-DirectML

The package installs easily from PyPI using “pip install tensorflow-directml” and works with existing TensorFlow model training scripts. Check out Microsoft Docs for more details on setting up TensorFlow-DirectML in WSL.
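For illustration, the sketch below shows the kind of existing training script that is expected to run unchanged. It uses the standard tf.keras API from the TensorFlow 1.15 code base that tensorflow-directml builds on, with synthetic data standing in for a real dataset; no DirectML-specific code is assumed to be required.

# A minimal, self-contained tf.keras training script. With
# tensorflow-directml installed it is assumed to run as-is, with
# supported operators placed on a DirectML-capable GPU automatically.
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 1,000 samples of 32 features, 10 classes.
x_train = np.random.rand(1000, 32).astype(np.float32)
y_train = np.random.randint(0, 10, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# No code changes are needed to target DirectML; device placement is
# handled by the framework.
model.fit(x_train, y_train, epochs=2, batch_size=64)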

“Thanks to co-engineering with Microsoft, Intel is thrilled that even more Windows customers can benefit from GPU acceleration for their machine learning workflows within the Windows Subsystem for Linux.” – Lisa Pearce, Intel’s Vice President and General Manager, Visual Compute Group

Try WSL for your ML training workflow

Set up WSL for your existing ML workflow by first making sure you have the latest driver from AMD, Intel, or NVIDIA, depending on the GPU in your system. Then set up NVIDIA CUDA in WSL or TensorFlow-DirectML based on your needs.
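As a quick sanity check once everything is installed (a hedged sketch, not part of the official setup steps), the snippet below assumes a CUDA-enabled TensorFlow 2.x build inside WSL and simply confirms that the GPU is visible. Users of tensorflow-directml would instead see “DML” devices, as shown earlier.

# Hedged sanity check: confirm that TensorFlow inside WSL can see the GPU.
# Assumes a CUDA-enabled TensorFlow 2.x build and an up-to-date GPU driver
# on the Windows host.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("Found {} GPU(s): {}".format(len(gpus), gpus))
else:
    print("No GPU visible; check the host GPU driver and WSL setup.")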

Feedback during the public preview has helped us get to this point, and we welcome additional feedback through the NVIDIA Community Forum for CUDA on WSL and the TensorFlow-DirectML repo.

Support for GPU compute functionality in WSL is available in Windows 11, as well as in Windows 10, version 21H2. Windows 10, version 21H2 is currently available through the Windows Insider Program in the Release Preview channel, with broad availability coming later this calendar year.

For the latest news and future updates related to machine learning training on Windows, stay tuned to the Windows AI Platform blog!

Author

Clarke Rahrig
Senior Product Manager

Product Manager for the Windows AI developer platform, focused on hardware-accelerated ML model inferencing and training through DirectML.
