November 15th, 2023

Announcing preview support for Llama 2 in DirectML

At Inspire this year we talked about how developers will be able to run Llama 2 on Windows with DirectML and the ONNX Runtime, and we’ve been hard at work making this a reality.

We now have a sample showing our progress with Llama 2 7B!

See https://github.com/microsoft/Olive/tree/main/examples/directml/llama_v2

This sample relies on first running an optimization pass on the model with Olive, a powerful optimization tool for ONNX models. Olive applies graph fusion optimizations from the ONNX Runtime and a model architecture optimized for DirectML to speed up inference times by up to 10X!
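If you want to kick off the optimization pass yourself, the sample drives it through Olive’s workflow runner. Below is a minimal sketch of invoking an Olive workflow from Python; the config file name is illustrative, and the actual sample in the repository ships its own configuration and driver script.

    # Minimal sketch: run an Olive optimization workflow from Python.
    # "llama_v2_directml_config.json" is a placeholder name; use the
    # configuration that ships with the microsoft/Olive llama_v2 sample.
    from olive.workflows import run as olive_run

    olive_run("llama_v2_directml_config.json")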

After this optimization pass, Llama 2 7B runs fast enough that you can have a conversation in real time on multiple vendors’ hardware!
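Once the optimized ONNX model is produced, it can be loaded with the ONNX Runtime’s DirectML execution provider. The sketch below only shows session creation (the model path is hypothetical; the sample handles tokenization and the generation loop for you):

    # Minimal sketch: open an Olive-optimized ONNX model with DirectML.
    # Requires the onnxruntime-directml package so DmlExecutionProvider exists.
    import onnxruntime as ort

    session = ort.InferenceSession(
        "llama_v2_7b_optimized.onnx",        # hypothetical path to the optimized model
        providers=["DmlExecutionProvider"],  # run on the GPU through DirectML
    )
    print(session.get_providers())           # confirm DirectML is being used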

We’ve also built a little UI to make it easy to see the optimized model in action.

Thank you to our hardware partners who helped make this happen. For more on how Llama 2 lights up on our partners’ hardware with DirectML, see their individual announcements.

We’re excited about this milestone, but this is only a first peek – stay tuned for future enhancements to support even larger models, fine-tuning, and lower-precision data types.

Getting Started

Requesting Llama 2 access

To run the Olive optimization pass in our sample, first request access to the Llama 2 weights from Meta.
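If the sample pulls the gated weights from the Hugging Face Hub (check the sample’s README for the exact download flow), you can authenticate with the account that was granted access before running the optimization pass. A small sketch, assuming the huggingface_hub package:

    # Sketch: authenticate to the Hugging Face Hub with an approved account.
    # Replace the placeholder token with your own; never hard-code a real token.
    from huggingface_hub import login

    login(token="hf_...")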

Drivers

We recommend upgrading to the latest drivers for the best performance. A quick way to confirm the DirectML execution provider is available appears after the list below.

  • AMD has released optimized graphics drivers supporting AMD RDNA™ 3 devices, including AMD Radeon™ RX 7900 Series graphics cards. Download Adrenalin Edition™ 23.11.1 or newer (https://www.amd.com/en/support)
  • Intel has released optimized graphics drivers supporting Intel Arc A-Series graphics cards. Download the latest drivers here
  • NVIDIA: Users of NVIDIA GeForce RTX 20, 30, and 40 Series GPUs can see these improvements first hand in GeForce Game Ready Driver 546.01
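After installing the latest drivers and the onnxruntime-directml package, a quick sanity check (a sketch, not part of the sample) confirms that the DirectML execution provider is visible to ONNX Runtime:

    # Sketch: verify that ONNX Runtime can see the DirectML execution provider.
    import onnxruntime as ort

    print("DmlExecutionProvider" in ort.get_available_providers())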
Category
Windows AI

Author

Jacques van Rhyn
Senior Program Manager on the DirectML Team at Microsoft.
