Faster responses, a lower chance of rate limiting, and 10% off premium requests for paid users: auto picks the best available model for each request based on current capacity and performance. With auto, you don’t need to choose a specific model; Copilot automatically selects the best one for your task. Auto model selection in Chat is rolling out in preview to all GitHub Copilot users.
How auto model selection works
Auto selects the best model to give you optimal performance and reduce the likelihood of rate limits. It chooses among GPT-5, GPT-5 mini, GPT-4.1, Sonnet 4.5, Haiku 4.5, and other models, unless your organization has disabled access to them. Once auto picks a model, it uses that same model for the entire chat session. This behavior will change in upcoming iterations as we introduce model selection based on task complexity.
For paid users, we currently rely primarily on Claude Sonnet 4.5 as the model powering auto.
When using auto model selection, Visual Studio applies a variable model multiplier based on the automatically selected model. If you are a paid user, auto applies a 10% request discount. For example, if auto selects Sonnet 4.5, the request counts as 0.9x of a premium request. You can see which model and model multiplier were used by hovering over the chat response.
If you are a paid user and run out of premium requests, auto will always choose a 0x model (for example, GPT-4.1), so you can continue using auto without interruption.
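The accounting described above can be sketched as a small example. This is an illustrative model only, assuming the multipliers mentioned in this post (Sonnet 4.5 at 1x, GPT-4.1 at 0x); the function names and the table of multipliers are hypothetical, not Copilot’s actual API.

```python
# Illustrative sketch of auto's premium-request accounting,
# based on the examples in this post. Names and values are assumptions.

# Base premium-request multipliers per model (0x models consume no quota).
BASE_MULTIPLIER = {
    "Claude Sonnet 4.5": 1.0,  # 1x premium request, per the example above
    "GPT-4.1": 0.0,            # 0x model, per the example above
}

AUTO_DISCOUNT = 0.9  # paid users get 10% off requests routed through auto


def effective_multiplier(model: str, paid_user: bool) -> float:
    """Premium requests consumed by one auto-routed request."""
    base = BASE_MULTIPLIER[model]
    return base * AUTO_DISCOUNT if paid_user else base


def pick_model(preferred: str, premium_remaining: float) -> str:
    """Fall back to a 0x model once the premium quota is exhausted."""
    if premium_remaining <= 0:
        return "GPT-4.1"  # always available at 0x
    return preferred
```

So a paid user whose request is routed to Sonnet 4.5 is charged 0.9 premium requests, and once the quota hits zero, auto keeps working by routing to a 0x model.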
What’s next
Our long-term vision is to make auto the best model choice for most users. To achieve this, here’s what we plan next:
- Dynamically switch between small and large models based on the task, giving you the right balance of performance and efficiency while saving on requests
- Add more language models to auto
- Let users on a free plan take advantage of the latest models through auto
- Improve the model dropdown to make it clearer which models and discounts are applied
Thanks 😊
