June 17th, 2025

Codex Azure OpenAI Integration: Fast & Secure Code Development

Govind Kamtamneni
Technical Director, Global Black Belt

Introduction

You can now enjoy the same Codex experience in the CLI or VS Code with Azure OpenAI support. We’ve contributed five pull requests upstream to make this possible, giving you the familiar Codex functionality from ChatGPT while running securely on Azure.

We are collaborating with OpenAI on additional updates and integrations, so stay tuned. Meanwhile, follow the steps below to get Codex running on your Azure subscription.

OpenAI’s Codex CLI is the same coding agent that powers ChatGPT’s Codex. You can now run it entirely on Azure infrastructure, which keeps your data inside your compliance boundary and gives you enterprise-grade security, private networking, role-based access control, and predictable cost management.

Codex is more than a chat-with-your-code experience: it is an asynchronous coding agent that can be triggered from your terminal, VS Code, or a GitHub Actions runner, automatically opening pull requests, refactoring files, and writing tests with the credentials of your Azure OpenAI deployment. Explore deploying it with Azure OpenAI for use cases such as language translation, data-to-code, and legacy migration, as detailed in the original Introducing Codex blog post.

After you are up and running, visit aifoundry.app (and asyncloom.com) to configure your repository for Codex and to browse a growing catalog of other Azure-hosted coding agents, including GitHub Copilot Coding Agent, Cognition Devin, SRE Agent, and more.

Prerequisites

  • An active Azure subscription with access to Azure OpenAI.
  • Contributor permissions in Azure AI Foundry.
  • Homebrew or npm for installing the CLI, or VS Code with the Codex extension.
| Requirement | Details |
| --- | --- |
| Operating systems | macOS 12+, Ubuntu 20.04+/Debian 10+, or Windows 11 via WSL2 |
| Git (optional, recommended) | 2.23+ for built-in PR helpers |
| RAM | 4 GB minimum (8 GB recommended) |

Step 1 – Deploy a Codex model in Azure AI Foundry

  1. Go to Azure AI Foundry at ai.azure.com and create a new project.
  2. Select a reasoning model such as codex-mini, gpt-5, or gpt-5-mini.
  3. Click Deploy, choose a name, and wait about two minutes.
  4. Copy the Endpoint URL and API key.

Step 2 – Install and Run the Codex CLI

```shell
brew install codex
codex --version  # verify installation
```

| Command | Purpose |
| --- | --- |
| `codex` | Interactive TUI |
| `codex "…"` | Start the interactive TUI with an initial prompt |
| `codex exec "…"` | Non-interactive "automation mode" |

Step 3a – Configure ~/.codex/config.toml

Create a TOML configuration file to tell Codex how to connect to Azure and to other tools. The example below configures the Azure OpenAI provider and, optionally, an MCP server for Playwright.

```toml
# Set the default model and provider
model = "gpt-5"
model_provider = "azure"
preferred_auth_method = "apikey"

# Configure the Azure provider
[model_providers.azure]
name = "Azure"
# Make sure you set the appropriate subdomain for this URL.
base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
model_reasoning_effort = "high"

# Example of configuring an MCP server for Playwright
[mcp_servers.playwright]
command = "npx"
args = ["@playwright/mcp@latest"]
```

Then export the API key you copied in Step 1 so Codex can authenticate:

```shell
# Linux, macOS, or WSL
export AZURE_OPENAI_API_KEY="<your-api-key>"
```
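
Putting these settings together, the effective request URL is the `base_url` plus the wire-API path plus the `query_params`. Here is a small Python sketch of that composition (the `/responses` path segment is an assumption based on `wire_api = "responses"`, not something documented above):

```python
from urllib.parse import urlencode

def azure_request_url(base_url, path, query_params):
    """Join a configured base_url, a wire-API path, and query parameters."""
    return f"{base_url.rstrip('/')}/{path.lstrip('/')}?{urlencode(query_params)}"

# Values taken from the config.toml above (the resource name is a placeholder)
url = azure_request_url(
    "https://YOUR_PROJECT_NAME.openai.azure.com/openai",
    "responses",
    {"api-version": "2025-04-01-preview"},
)
```

If the CLI reports `404 Not Found`, composing the URL this way with your real resource name is a quick way to spot a wrong subdomain or a missing `/openai` suffix.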

Step 3b – Use Codex from the VS Code extension

You can also access Codex directly inside Visual Studio Code using the OpenAI Codex extension.

  1. Install the extension from the VS Code Marketplace: OpenAI Codex.
  2. When using Azure OpenAI (not ChatGPT), you must apply this temporary workaround. First, ensure your configuration file has this entry:

    ```toml
    # ~/.codex/config.toml
    preferred_auth_method = "apikey"
    ```

  3. Export both API keys in your terminal, then launch VS Code from that same terminal (the extension must inherit these environment variables, so launching VS Code from the dock or Start menu will not work):

    ```shell
    # macOS/Linux/WSL
    export OPENAI_API_KEY="your-azure-api-key-here"
    export AZURE_OPENAI_API_KEY="your-azure-api-key-here"
    code .  # Launch VS Code from the terminal
    ```
  4. With the environment and configuration set, use the extension’s chat or inline actions to ask Codex to generate, refactor, or explain code. It will route through your Azure OpenAI deployment as configured (ensure you are not signed in to ChatGPT).
Note (temporary): This environment-variable workaround is required due to an open bug affecting the extension with the Azure OpenAI flow. See the maintainers’ update. OpenAI is working on a fix so that this works via config.toml and settings.json, without requiring the environment-variable workaround above.

Step 4 – Give Codex Memory with AGENTS.md

You can give Codex extra instructions and guidance using AGENTS.md files. Codex looks for AGENTS.md files in the following places and merges them top-down, giving it context about your personal preferences, project-specific details, and the current task:

  • ~/.codex/AGENTS.md – personal global guidance.
  • AGENTS.md at your repository’s root – shared project notes.
  • AGENTS.md in the current working directory – sub-folder/feature specifics.
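
The top-down merge can be pictured as simple concatenation: global guidance first, the most specific file last. The Python sketch below illustrates the precedence only; it is not Codex’s actual implementation, and the repository paths are placeholders:

```python
from pathlib import Path

def merge_agents_files(paths):
    """Concatenate whichever AGENTS.md files exist, in order:
    global guidance first, the most specific file last."""
    return "\n\n".join(p.read_text() for p in paths if p.is_file())

# Lookup order mirrored from the list above (repo paths are placeholders)
lookup = [
    Path.home() / ".codex" / "AGENTS.md",     # personal global guidance
    Path("/path/to/repo/AGENTS.md"),          # repository root
    Path("/path/to/repo/feature/AGENTS.md"),  # current working directory
]
merged_context = merge_agents_files(lookup)
```

Missing files are simply skipped, so you only need the AGENTS.md levels that are useful to you.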

For example, to help Codex understand how to write code for Azure AI Foundry Agents, you could create an AGENTS.md in your project root with the following content, derived from the Azure AI Agents SDK documentation:

# Instructions for working with Azure AI Foundry Agents

You are an expert in the Azure AI Agents client library for Python.

## Key Concepts

- **Client Initialization**: Always start by creating an `AIProjectClient` or `AgentsClient`. The recommended way is via `AIProjectClient`.
- **Authentication**: Use `DefaultAzureCredential` from `azure.identity`.
- **Agent Creation**: Use `agents_client.create_agent()`. Key parameters are `model`, `name`, and `instructions`.
- **Tools**: Agents use tools to perform actions like file search, code interpretation, or function calls.
  - To use tools, they must be passed to `create_agent` via the `tools` and `tool_resources` parameters or a `toolset`.
  - Example: `file_search_tool = FileSearchTool(vector_store_ids=[...])`
  - Example: `code_interpreter = CodeInterpreterTool(file_ids=[...])`
  - Example: `functions = FunctionTool(user_functions)`

## Example: Creating a basic agent

```python
import os
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# 1. Create Project Client
project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

# 2. Get Agents Client
with project_client:
    agents_client = project_client.agents

    # 3. Create Agent
    agent = agents_client.create_agent(
        model=os.environ["MODEL_DEPLOYMENT_NAME"],
        name="my-helpful-agent",
        instructions="You are a helpful agent that can answer questions.",
    )
    print(f"Created agent with ID: {agent.id}")
```

Step 5 – Explore with your coding agent

```shell
codex "write a python script to create an Azure AI Agent with file search capabilities"
```

Other prompts to try:

  • "generate a unit test for src/utils/date.ts"
  • "refactor this agent to use the Code Interpreter tool instead"

Step 6 – Run Codex in GitHub Actions


Codex can execute as part of your CI pipeline. Store your API key in the repository’s secret store as AZURE_OPENAI_KEY and add a job like this to automatically update your changelog before a release:

```yaml
jobs:
  update_changelog:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Update changelog via Codex
        run: |
          npm install -g @openai/codex
          export AZURE_OPENAI_API_KEY="${{ secrets.AZURE_OPENAI_KEY }}"
          codex -p azure exec --full-auto "update CHANGELOG for next release"
```

Step 7 – Explore more agents with AsyncLoom

Visit aifoundry.app or asyncloom.com to:

  • Browse detailed docs and benchmarks for other Azure-hosted agents.
  • Create repo-ready configuration guides with one click.
  • Experiment with GitHub Copilot Coding Agent, Cognition Devin, SRE Agent, and others.

Troubleshooting

 

| Symptom | Fix |
| --- | --- |
| `401 Unauthorized` or `403 Forbidden` | Export your `AZURE_OPENAI_API_KEY` environment variable correctly, and confirm that your key has project/deployment access. |
| `ENOTFOUND`, DNS error, or `404 Not Found` | Verify that `base_url` in `config.toml` uses your resource name and the correct domain, e.g. `base_url = "https://<your-resource>.openai.azure.com/openai"`. |
| CLI ignores Azure settings | Open `~/.codex/config.toml` and ensure that `model_provider = "azure"` is set, that the `[model_providers.azure]` section exists, and that `env_key = "AZURE_OPENAI_API_KEY"` matches your exported variable. |
| Warning that "codex-mini" is not in the list of available models for provider "azure" | This is a known issue, and we have a PR in progress to address it. Ignore the warning and proceed with your codex-cli tasks. |
| VS Code extension not using the Azure key | Apply the temporary workaround from Step 3b: set `preferred_auth_method = "apikey"` in `config.toml` and export both `OPENAI_API_KEY` and `AZURE_OPENAI_API_KEY` with the same Azure key. |

Conclusion

In just a few minutes you can connect an AI coding agent to your Azure tenant, keep intellectual property secure, and accelerate software delivery. Combine the Codex CLI, Codex in VS Code, and GitHub Actions to build a flexible AI-powered engineering workflow. Give it a try and share what you create.


Questions or feedback? Drop a comment below


Comments
  • Reza Alirezaei · Edited

    I get this error when configuring the MS Learn MCP... I am on Windows (no WSL), and the codex-mini part works just fine.

    🖐  MCP client for `server-name` failed to start: program not found

    "npx -y mcp-remote https://learn.microsoft.com/api/mcp" runs the MCP server successfully, and npx is available in both .cmd and .ps1 format in the following directory: C:\Program Files\nodejs\npx.cmd

    Here is my TOML config:

    model = "codex-mini"
    model_provider = "azure"

    # Configure the Azure provider
    [model_providers.azure]
    name = "Azure"
    # Make sure you set the appropriate subdomain for this URL.
    base_url = "https://XYZ.openai.azure.com/openai"
    env_key = "XYZ"
    query_params = { api-version = "2025-04-01-preview" }
    wire_api =...

  • Renze Yu · Microsoft employee · 2 weeks ago

    The video embedding is broken: “Video unavailable. Playback on other websites has been disabled by the video owner”

  • Doug Stanley

    Great stuff Govind. I’m on a .us (gov) instance of Azure, and the best model I have access to is o3-mini currently. Having difficulties, and want to make sure it will work. Proceeding with the build from source with rust, but it would be nice to know if this is blind alley (o3-mini) before going to far ahead. Thanks much

    • Govind Kamtamneni · Microsoft employee, Author · Edited

      Thank you, Doug. You don’t have to build from source anymore. Just look at my example `~/.codex/config.toml` above and update your model name to `o3-mini`. Let me know how it goes!

  • Sebastian Ignacio Gonzalez Franco

    It fosters a culture of experimentation and continuous learning. The value lies not only in the tool itself, but in how the team adopts it, experiments with it, and discovers new ways to apply it, which in turn drives collaboration and internal innovation.

    • Govind Kamtamneni · Microsoft employee, Author

      Thank you for your thoughtful insight! I completely agree – tools like Codex Coding Agent deliver the most value when teams embrace a mindset of experimentation and continuous learning. It’s not just about what the technology can do, but how we explore new ways to apply it together. That spirit of discovery is what truly drives collaboration and internal innovation. Appreciate you highlighting that!

  • Peter Lee (CSA) · Microsoft employee

    Hi Govind

    Great article—thank you for sharing it!

    I ran into an issue when executing codex -p azure and received an error message. Here are my current settings. Do you happen to know what might be causing this? I am using WSL.

    plee@plee:~$ codex --version
    codex-cli 0.4.0
    plee@plee:~$ cat .codex/config.json
    {
      "model": "codex-mini",
      "provider": "azure",
      "providers": {
        "azure": {
          "name": "AzureOpenAI",
          "baseURL": "https://fine-tuning99.openai.azure.com/openai/",
          "envKey": "AZURE_OPENAI_API_KEY"
        }
      },
    ...

    • Peter Lee (CSA) · Microsoft employee

      fixed the issue and want to share it in here.
      cat ~/.codex/config.toml:

      profile = "azure"
      
      [model_providers.azure-ai-foundry]
      name = "azure"
      base_url = "https://fine-tuning99.openai.azure.com/openai/"
      env_key = "AZURE_OPENAI_API_KEY"
      query_params = { api-version = "2025-04-01-preview" }
      wire_api = "responses"
      
      [profiles.azure]
      model = "codex-mini"
      model_provider = "azure-ai-foundry"
      approval_policy = "on-failure"

      Afterwards you have to run this in your terminal:
      export AZURE_OPENAI_API_KEY=""

      Then run:
      codex "hi"

      Codex Version: codex-cli 0.5.0

      For the GPT-4.1 model, use api-version = "2025-03-01-preview"

  • Andres da Silva Santos

    I followed the tutorial, but I’m encountering an error when trying to use codex-mini:

    “OpenAI rejected the request. Error details: Status: 400, Code: OperationNotSupported, Type: unknown, Message: 400 The chatCompletion operation does not work with the specified model, codex-mini. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.. Please verify your settings and try again.”

    Additionally, I noticed that after deploying codex-mini on AI Foundry, the URI field appears to be empty.

    • Govind Kamtamneni · Microsoft employee, Author

      Hi Andres, Thanks for giving it a shot. When you test again, please use o4-mini or o3 from Azure OpenAI.
      I’ve updated the blog to list the new PRs – two were merged today, including the codex-mini update that adds response-API support. OpenAI is slated to publish a new npm release tomorrow. Once you update codex from the npm package, codex-mini should work as well.

      In the meantime, o4-mini or o3 should let you proceed

  • Vhanakaware, Vishal (DI SW PLM ME PRD IN DEV1) · Edited

    Thanks for sharing, Govind. I have been trying the exact same steps for the past few days but am not able to successfully configure the Azure endpoint and key with the Codex CLI. I am getting: "OpenAI rejected the request. Error details: Status: 404, Code: 404, Type: unknown, Message: 404 Resource not found. Please verify your settings and try again."

    I am trying to use the o4-mini model. I couldn't find the codex model in Azure AI Foundry.

    I was able to run codex cli using my openAI account but want to run it with Azure and it is not working. Thanks in advance.

    • Govind Kamtamneni · Microsoft employee, Author

      Thanks for giving it a try, Anoop.

      - Make sure your `~/.codex/config.json` matches the sample in the post, and that you are launching with `codex -p azure`.
      - Confirm that the `AZURE_OPENAI_API_KEY` environment variable is set.
      - If you’re on Windows, test inside WSL.

      There’s a known bug on the main branch (PRs #1324 and #1321 are pending approval). Until those are merged, set OPENAI_API_KEY to the same value as AZURE_OPENAI_API_KEY. After the fix lands, this extra step won’t be necessary.

      • Vhanakaware, Vishal (DI SW PLM ME PRD IN DEV1)

        Thanks, Govind. I verified the config.json and other details but still it is not working getting same error of resource not found. Thanks anyway. Maybe will wait till the issue is merged in the main branch. Thanks.

        • Govind Kamtamneni · Microsoft employee, Author

          Sorry to hear that. Please try using this URL pattern: https://<your-resource>.cognitiveservices.azure.com/openai – note the "cognitiveservices" segment. Depending on how you configured AI Foundry or your Azure OpenAI resource, the URI may vary. I've updated the troubleshooting section of the blog to reflect this. Let me know if it works. Meanwhile, the two PRs were just merged, so codex-mini should be functioning soon.

          • Chandan R

            Waiting for codex-mini to work; o4-mini is working perfectly. I am getting the following:
            system
            ⚠️ OpenAI rejected the request. Error details: Status: 400, Code: OperationNotSupported, Type: unknown, Message: 400 The chatCompletion
            operation does not work with the specified model, codex-mini. Please choose different model and try again. You can learn more about which
            models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.. Please verify your settings and try again.