Train your machine learning models on any GPU with TensorFlow-DirectML

Clarke Rahrig

TensorFlow-DirectML improves the experience and performance of model training with GPU acceleration across the breadth of Windows devices, regardless of hardware vendor.

Over the past year we launched the TensorFlow-DirectML preview for Windows and the Windows Subsystem for Linux (WSL), worked with the TensorFlow community, open sourced the project, and added targeted support for online course work.

Today we’re excited to exit preview and announce our first generally consumable package of TensorFlow-DirectML! We encourage you to use TensorFlow-DirectML whether you’re a student learning or a professional developing machine learning models for production. Read on for more details on how we invested and where we’re headed next.

TensorFlow-DirectML is easy to use and supports many ML workloads

Setting up TensorFlow-DirectML to work with your GPU is as easy as running “pip install tensorflow-directml” in your Python environment of choice. Once TensorFlow-DirectML is installed, it works seamlessly with existing model training scripts.
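As a minimal sketch of that workflow, the commands below install the package and list the devices TensorFlow can see (assuming a Windows or WSL environment with a DirectX 12 capable GPU and a supporting driver; TensorFlow-DirectML is based on TensorFlow 1.15, so the 1.x session API is used):

```shell
# Install TensorFlow-DirectML into the current Python environment.
pip install tensorflow-directml

# Enumerate the devices TensorFlow can see; with a working setup the
# list should include a DirectML device alongside the CPU.
python -c "import tensorflow as tf; print([d.name for d in tf.compat.v1.Session().list_devices()])"
```

From there, existing training scripts run unmodified and pick up the DirectML device automatically.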

We assembled a wide range of model scripts from existing TensorFlow tutorials, online learning courses, the TensorFlow Benchmark set, and AI-Benchmark, as well as other commonly used neural networks. This model set provided a breadth of scenarios for ensuring TensorFlow-DirectML has the operator coverage and performance needed for students’ and professionals’ success.

Our team acted on customer feedback throughout the preview, improving the TensorFlow-DirectML experience on Windows and within WSL. This included optimizing specific operators, such as convolution and batch normalization, and fine-tuning GPU scheduling and memory management so TensorFlow gets the most out of DirectML. We also co-engineered with AMD, Intel, and NVIDIA to enable a hardware-accelerated training experience across the breadth of DirectX 12 capable GPUs.

SqueezeNet sample model training in WSL using TensorFlow-DirectML

We encourage you to use your existing models, but if you need examples to get started, we have a few sample models available for you. We will continue improving TensorFlow-DirectML through targeted operator support and optimizations based on feedback from the community. We also look forward to bringing these same benefits to the TensorFlow 2 codebase, including our plan to build a TensorFlow PluggableDevice plugin for DirectML.

Try out TensorFlow-DirectML today

If you have a Python environment set up, getting going with your existing TensorFlow training scripts is as simple as running “pip install tensorflow-directml”. More detailed setup instructions for Windows and WSL are available on Microsoft Docs. If you have feedback or run into issues with the package, please open an issue on the TensorFlow-DirectML GitHub repo. We would love to hear from you to make TensorFlow-DirectML even better!

For future updates on TensorFlow-DirectML, stay tuned to the Windows AI Platform blog!



  • Tristan Barcelon

    Hi Clarke, Is the version of WSL required for TensorFlow-DirectML dependent on Windows 11 or a specific build of Windows 10? If I recall correctly, apps running under WSL2 can access the host computer’s GPU only under WSLg-enabled builds, and only Windows 10 builds from 21364 onwards have WSLg enabled. Can you please clarify the dependencies required?

    • Clarke Rahrig (Microsoft employee)

      Hi Tristan, the build information you reference is accurate for WSLg support, which covers GUI app support in addition to compute support. If you’re already set up with WSLg, then TensorFlow-DirectML should also work and be able to access the GPU.

      As you call out, WSL access to the host GPU is required; that first became available in Build 20150 in combination with a supporting driver. Because TensorFlow-DirectML only needs GPU compute support, it can run all the way back to that build. More details for getting set up are here.

  • Tony Henrique

    This is very good news that will make AI more available to more people!
