June 17th, 2025

Securely Turbo‑Charge Your Software Delivery with Codex Coding Agent on Azure OpenAI

Govind Kamtamneni
Technical Director, Global Black Belt

Introduction

We have contributed four pull requests to add Azure OpenAI support to Codex, letting you enjoy the same Codex experience as in ChatGPT while running securely on Azure.

We are collaborating with OpenAI on additional updates and integrations, so stay tuned. Meanwhile, follow the steps below to get Codex running on your Azure subscription.

Pending OpenAI's NPM release

Until OpenAI publishes a new release from main, please build the CLI from source.

OpenAI’s Codex CLI is the same coding agent that powers Codex in ChatGPT. You can now run this coding agent entirely on Azure infrastructure, which keeps your data inside your compliance boundary and gives you the advantages of enterprise-grade security, private networking, role-based access control, and predictable cost management.

Codex is more than a chat-with-your-code assistant: it is an asynchronous coding agent that can be triggered from your terminal or from a GitHub Actions runner, automatically opening pull requests, refactoring files, and writing tests with the credentials of your Azure OpenAI deployment. Explore deploying it with Azure OpenAI for use cases such as language translation, data-to-code, and legacy migration, as detailed in the original Introducing Codex blog post.

After you are up and running, visit gitagu.com to configure your repository for Codex and to browse a growing catalog of other Azure-hosted coding agents, including GitHub Copilot Coding Agent, Cognition Devin, SRE Agent, and more.

 

Prerequisites

  • An active Azure subscription with access to Azure OpenAI.
  • Contributor permissions in Azure AI Foundry.
  • macOS, Linux, or Windows 11 plus WSL 2 (Codex is trained on Unix-style shells).
  • Node 18+ and npm for installing the CLI.

Step 1 – Deploy a Codex model in Azure AI Foundry

  1. Go to Azure AI Foundry (ai.azure.com) and create a new project.
  2. Select a reasoning model such as codex-mini, o4-mini, or o3.
  3. Click Deploy, choose a name, and wait about two minutes.
  4. Copy the Endpoint URL and generate an API key.
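If you prefer scripting over the portal, the same deployment can be created with the Azure CLI. This is a sketch, not a definitive recipe: the resource group, resource name, and model version below are placeholders, and the SKU must be one your subscription actually supports.

```shell
# Create the model deployment (all <...> values are placeholders).
az cognitiveservices account deployment create \
  --resource-group "<your-resource-group>" \
  --name "<your-aoai-resource>" \
  --deployment-name "codex-mini" \
  --model-name "codex-mini" \
  --model-version "<model-version>" \
  --model-format "OpenAI" \
  --sku-name "GlobalStandard" \
  --sku-capacity 1

# Retrieve the endpoint URL and an API key afterwards.
az cognitiveservices account show \
  --resource-group "<your-resource-group>" --name "<your-aoai-resource>" \
  --query "properties.endpoint" --output tsv
az cognitiveservices account keys list \
  --resource-group "<your-resource-group>" --name "<your-aoai-resource>" \
  --query "key1" --output tsv
```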

Step 2 – Install the Codex CLI

npm install -g @openai/codex
export NODE_NO_WARNINGS=1 # Optional - to reduce verbosity
codex --version # verify installation

Step 3 – Configure ~/.codex/config.json

{
  "model": "codex-mini",
  "provider": "azure",
  "providers": {
    "azure": {
      "name": "AzureOpenAI",
      "baseURL": "https://<your-resource>.cognitiveservices.azure.com/openai",
      "envKey": "AZURE_OPENAI_API_KEY"
    }
  },
  "history": {
    "maxSize": 1000,
    "saveHistory": true,
    "sensitivePatterns": []
  }
}
# Linux, macOS, or WSL 
export AZURE_OPENAI_API_KEY="<your-api-key>"
# Workaround for a current Codex bug – also set:
export OPENAI_API_KEY="$AZURE_OPENAI_API_KEY"
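Before launching the CLI, you can sanity-check the key and endpoint with a direct REST call. The snippet below is a sketch: the resource name and api-version are placeholders, and it probes an o4-mini deployment via chat completions, which is the simplest endpoint to test.

```shell
# Placeholder values – substitute your own resource and deployment names.
RESOURCE="<your-resource>"
DEPLOYMENT="o4-mini"
API_VERSION="2025-01-01-preview"   # assumption: use any recent api-version your resource accepts

curl -sS "https://${RESOURCE}.cognitiveservices.azure.com/openai/deployments/${DEPLOYMENT}/chat/completions?api-version=${API_VERSION}" \
  -H "api-key: ${AZURE_OPENAI_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Reply with the word pong."}]}'
# A JSON completion (rather than a 401 or 404) confirms the key and base URL are correct.
```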

Step 4 – Explore with your coding agent

codex -p azure
  • # generate a unit test for src/utils/date.ts
  • # convert this Java class to Python

Step 5 – Run Codex in GitHub Actions


Codex can execute as part of your CI pipeline. Store your API key in the repository’s secret store as AZURE_OPENAI_API_KEY and add a job like:

jobs:
  codex_refactor:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Codex agent
        run: |
          npm install -g @openai/codex
          export AZURE_OPENAI_API_KEY=${{ secrets.AZURE_OPENAI_API_KEY }}
          export OPENAI_API_KEY=${{ secrets.AZURE_OPENAI_API_KEY }}
          codex -p azure "# refactor the authentication module for clarity"
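The secret itself can be created from a terminal with the GitHub CLI, assuming `gh` is installed and authenticated for the repository:

```shell
# Store the Azure OpenAI key as a repository secret named AZURE_OPENAI_API_KEY.
gh secret set AZURE_OPENAI_API_KEY --body "$AZURE_OPENAI_API_KEY"

# Verify it is listed (secret values are never displayed).
gh secret list
```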

Step 6 – Explore more agents with Gitagu

  • Browse detailed docs and benchmarks for other Azure-hosted agents.
  • Create repo-ready configuration guides with one click.
  • Experiment with GitHub Copilot Coding Agent, Cognition Devin, SRE Agent, and others.

Troubleshooting

 

Symptom: 401 Unauthorized or 403 Forbidden
Fix: Export both AZURE_OPENAI_API_KEY and (until the bug is resolved) OPENAI_API_KEY with the same value. Confirm that your key has project/deployment access.

Symptom: ENOTFOUND, DNS error, or 404 Not Found
Fix: Verify that baseURL in config.json uses your resource name and the correct domain, e.g. https://<resource>.openai.azure.com/openai. Some tenants require the .cognitiveservices domain (example shared by a reader: https://ecano-project1rp-resource.cognitiveservices.azure.com/openai).

Symptom: The CLI says “no provider found” or ignores Azure settings
Fix: Open ~/.codex/config.json and ensure:

  • "provider": "azure"
  • "envKey": "AZURE_OPENAI_API_KEY" matches your exported variable
  • The correct model and baseURL (try either …openai.azure.com or …cognitiveservices.azure.com, as noted above) are present

Symptom: Codex warns that “codex-mini” is not in the list of available models for provider “azure”
Fix: This is a known issue, and we have a PR in progress to address it. Ignore the warning and proceed with codex-cli tasks.
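When in doubt, a quick way to see which settings the CLI will actually pick up is to print them directly. The sketch below uses only grep and printenv, and assumes the config lives at the default ~/.codex/config.json path:

```shell
# Show the Azure-related keys Codex reads from its config file.
CONFIG="$HOME/.codex/config.json"
grep -E '"(provider|baseURL|envKey)"' "$CONFIG"

# Confirm the matching environment variables are actually exported.
for v in AZURE_OPENAI_API_KEY OPENAI_API_KEY; do
  if [ -n "$(printenv "$v")" ]; then echo "$v is set"; else echo "$v is NOT set"; fi
done
```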

Conclusion

In just a few minutes you can connect an AI coding agent to your Azure tenant, keep intellectual property secure, and accelerate software delivery. Combine Codex CLI, GitHub Actions, and Gitagu’s agent catalog to build a flexible AI-powered engineering workflow. Give it a try and share what you create.


Questions or feedback? Drop a comment below.

Author

Govind Kamtamneni
Technical Director, Global Black Belt

8 comments


  • Peter Lee (CSA)Microsoft employee

    Hi Govind

    Great article—thank you for sharing it!

    I ran into an issue when executing codex -p azure and received an error message. Here are my current settings—do you happen to know what might be causing this? i am using WSL.

    plee@plee:~$ codex --version
    codex-cli 0.4.0
    plee@plee:~$ cat .codex/config.json
    {
      "model": "codex-mini",
      "provider": "azure",
      "providers": {
        "azure": {
          "name": "AzureOpenAI",
          "baseURL": "https://fine-tuning99.openai.azure.com/openai/",
          "envKey": "AZURE_OPENAI_API_KEY"
        }
      },
    ...

  • Andres da Silva Santos

    I followed the tutorial, but I’m encountering an error when trying to use codex-mini:

    “OpenAI rejected the request. Error details: Status: 400, Code: OperationNotSupported, Type: unknown, Message: 400 The chatCompletion operation does not work with the specified model, codex-mini. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.. Please verify your settings and try again.”

    Additionally, I noticed that after deploying codex-mini on AI Foundry, the URI field appears to be empty.

    • Govind KamtamneniMicrosoft employee Author

      Hi Andres, Thanks for giving it a shot. When you test again, please use o4-mini or o3 from Azure OpenAI.
      I’ve updated the blog to list the new PRs – two were merged today, including the codex-mini update that adds response-API support. OpenAI is slated to publish a new npm release tomorrow. Once you update codex from the npm package, codex-mini should work as well.

      In the meantime, o4-mini or o3 should let you proceed

  • Vhanakaware, Vishal (DI SW PLM ME PRD IN DEV1) · Edited

    Thanks for sharing, Govind. I am trying exact same steps from past few days but not able to successfully configure Azure Endpoint and key with the codex cli. I am getting “OpenAI rejected the request. Error details: Status: 404, Code: 404, Type: unknown, Message: 404 Resource not found. Please verify your settings and try again.”

    I am trying to use o4-mini model. I couldn’t find the codex model in Azure foundry.

    I was able to run codex cli using my openAI account but want to run it with Azure and it is not working. Thanks in advance.

    • Govind KamtamneniMicrosoft employee Author

      Thanks for giving it a try, Anoop.

      - Make sure your `~/.codex/config.json` matches the sample in the post, and that you are launching with `codex -p azure`.
      - Confirm that the `AZURE_OPENAI_API_KEY` environment variable is set.
      - If you’re on Windows, test inside WSL.

      There’s a known bug on the main branch (PRs #1324 and #1321 are pending approval). Until those are merged, set OPENAI_API_KEY to the same value as AZURE_OPENAI_API_KEY. After the fix lands, this extra step won’t be necessary.

      • Vhanakaware, Vishal (DI SW PLM ME PRD IN DEV1)

        Thanks, Govind. I verified the config.json and other details but still it is not working getting same error of resource not found. Thanks anyway. Maybe will wait till the issue is merged in the main branch. Thanks.

        • Govind KamtamneniMicrosoft employee Author

          Sorry to hear that. Please try using this URL pattern: https://<resource>.cognitiveservices.azure.com/openai – note the “cognitiveservices” segment. Depending on how you configured AI Foundry or your Azure OpenAI resource, the URI may vary. I’ve updated the troubleshooting section of the blog to reflect this. Let me know if it works. Meanwhile, the two PRs were just merged, so codex-mini should be functioning soon.

          • Chandan R

            waiting for codex-mini to work. o4-mini is working perfectly. getting the following:
            system
            ⚠️ OpenAI rejected the request. Error details: Status: 400, Code: OperationNotSupported, Type: unknown, Message: 400 The chatCompletion
            operation does not work with the specified model, codex-mini. Please choose different model and try again. You can learn more about which
            models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.. Please verify your settings and try again.