DirectML at Build 2023
DirectML shared some exciting announcements at Build this year! We showcased several demos and highlighted new capabilities during the Windows AI Breakout session, Deliver AI-powered experiences across cloud and edge, with Windows.
We are also excited to announce the launch of our new product landing page! It provides the information you need to bring performant, cross-hardware AI acceleration into your app.
DirectML enables Adobe to scale to Intel’s first VPU to market
We partnered with Adobe and Intel to showcase how DirectML makes it possible for developers to integrate machine learning models into their applications to leverage next-generation hardware. Want to learn more? Check out Adobe Premiere Pro leverages DirectML on new AI Silicon for all the exciting details!
Amazing performance improvements with Olive and DirectML
Get ready to take your AI models to the next level with Olive (ONNX Live), a powerful tool for optimizing ONNX models that integrates seamlessly with ONNX Runtime and DirectML.
With Olive, you’ll be able to optimize your models like never before, thanks to its advanced techniques that incorporate cutting-edge model compression, optimization, and compilation methods. When you pair Olive’s capabilities with DirectML, you’ll get lightning-fast hardware acceleration across the entire range of Windows GPUs.
Find out more in our blog post: Optimize DirectML performance with Olive.
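As a rough sketch of what an Olive workflow involves, the config below pairs an input ONNX model with optimization passes targeting the DirectML execution provider. The field and pass names here are illustrative assumptions for this post, not copied from Olive's documentation; consult the Olive docs for the exact schema.

```python
import json

# Illustrative Olive-style workflow config (field and pass names are
# assumptions, not verbatim from Olive's docs). It describes an input
# ONNX model, the optimization passes to apply, and the execution
# provider the optimized model should target.
olive_config = {
    "input_model": {"type": "ONNXModel", "model_path": "model.onnx"},
    "passes": {
        "transformer_opt": {"type": "OrtTransformersOptimization"},
        "fp16_conversion": {"type": "OnnxFloatToFloat16"},
    },
    "engine": {"execution_providers": ["DmlExecutionProvider"]},
}

print(json.dumps(olive_config, indent=2))
```

The key idea is that you declare what you want (model, passes, target provider) and Olive searches for the best-performing optimized model for that hardware target.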
Diffusion models optimized for DirectML
Text-to-image models, like Stable Diffusion, convert natural language into remarkable images. DirectML optimizations for the Windows hardware ecosystem enhance the performance of transformer and diffusion models, including Stable Diffusion, enabling more efficient execution. The DirectML optimizations aim to empower developers to seamlessly integrate AI hardware acceleration into their applications at scale. Check out our DirectML ❤ Stable Diffusion blog post to learn more.
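A minimal sketch of what running an optimized model on DirectML looks like with ONNX Runtime: creating an inference session with the DirectML execution provider. This assumes the onnxruntime-directml package on a Windows machine with a DirectX 12 capable GPU, and the model path is a placeholder.

```python
def directml_providers():
    # Prefer DirectML; ONNX Runtime falls back to CPU for any operator
    # the DirectML provider does not support.
    return ["DmlExecutionProvider", "CPUExecutionProvider"]

def make_session(model_path: str):
    # Requires the onnxruntime-directml package (Windows, DX12 GPU).
    # The import is deferred so this sketch loads on any platform.
    import onnxruntime as ort
    return ort.InferenceSession(model_path, providers=directml_providers())

# Usage (placeholder path): session = make_session("unet.onnx")
```

Because the provider list is ordered, the same code runs on machines without a supported GPU; DirectML simply is not selected there.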
DirectML and the Hybrid Loop
We’re entering a new era of AI experiences that span the cloud and the edge. We first introduced the Hybrid Loop at Build last year, and this year we are thrilled to announce that it has become a reality. Our goal is to reduce developers’ workload and enable seamless hybrid inferencing across Azure and client devices; DirectML plays a key role in this, allowing developers to scale inferencing to GPUs today and, soon, to NPUs.
Together with Olive and ONNX Runtime, DirectML is a part of a cutting-edge hybrid platform that enables efficient deployment of AI experiences across the Windows hardware ecosystem. We can’t wait to see what you’ll create with this groundbreaking technology!
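To make the hybrid idea concrete, here is a small routing sketch: run locally on DirectML when the device supports it, otherwise fall back to a cloud endpoint, then to local CPU. The policy and function names are our own illustration, not a Microsoft API.

```python
def choose_backend(available_providers, cloud_endpoint=None):
    # Hybrid-loop style routing sketch (the policy is an assumption,
    # not an official API): prefer the device GPU via DirectML, then a
    # cloud endpoint if one is configured, then the local CPU.
    if "DmlExecutionProvider" in available_providers:
        return ("local-gpu", "DmlExecutionProvider")
    if cloud_endpoint is not None:
        return ("cloud", cloud_endpoint)
    return ("local-cpu", "CPUExecutionProvider")
```

In a real app, `available_providers` would come from ONNX Runtime's reported providers on that device, so the same application binary adapts to whatever hardware it lands on.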