May 3rd, 2026

Removing The Monkey Work of Migration

Git-Ape

Removing the monkey work of migration: in this post we show how Git-Ape analyses an AWS deployment repo and generates an Azure-native replacement, with design critique built in.

This post walks through a real migration workflow: start with an AWS deployment repo and end with an Azure deployment repo. The goal isn’t a 1:1 syntax conversion. It’s intent extraction and architecture remapping—agents that read what your deployment does, propose an Azure-native equivalent, and generate deployment-ready artefacts. Along the way, a critique step flags design issues early, before you ship them.

 

Related reading: Platform engineering for the agentic AI era (Microsoft DevBlogs)

Related reading: Putting agentic platform engineering to the test (Microsoft DevBlogs)

TL;DR

  • Git-Ape analysed an AWS Terraform repo, extracted the real deployment intent, and mapped it to an Azure-native design.
  • A critique step caught two blockers early (don’t build on startup; don’t mirror S3 with Blob if CI deploys artefacts).
  • Output: Bicep + GitHub Actions that deploys a Next.js app to Azure App Service (Linux, Node 20), with Managed Identity and App Insights/Log Analytics.

Note: This walkthrough stops at repo and artefact generation. Git-Ape can also help with onboarding (OIDC/RBAC), quality gates (security, what-if, cost), and controlled deployments, but those steps aren’t shown here.

Prerequisites

You’ll need access to an Azure subscription, AWS credentials for the source environment (read-only is enough for analysis), and GitHub access to read/write repos. In the walkthrough, Git-Ape validates your local CLI tooling and active sign-ins up front (see Step 1).

The scenario: migrate an AWS Terraform deployment to Azure

Input (AWS): contoso-migration — Terraform that deploys the Contoso Outdoors web app on AWS with VPC/networking, EC2 (Ubuntu 22.04), an Application Load Balancer, S3, and IAM; the app runs via PM2 and exposes an ALB URL.

Output (Azure): contoso-azure — an Azure-native equivalent using Bicep + GitHub Actions to deploy to Azure App Service (Linux, Node.js 20 LTS) with Managed Identity, health checks, and monitoring via Application Insights/Log Analytics.

What happened (step by step)

Below is the sequence Git-Ape followed—from prerequisites through to generated Azure artefacts.

Step 1: Validate tooling and sign-in state

Invoking @git-ape triggered a prerequisite check that validated all required CLI tools and active auth sessions:

  • az — found 2.84.0 (minimum 2.50)
  • aws — found 2.34.30 (minimum 2.0)
  • gh — found 2.45.0 (minimum 2.0)
  • jq — found 1.7 (minimum 1.6)
  • git — found 2.43.0 (any version)

Auth sessions confirmed: Azure subscription active, AWS credentials valid, GitHub CLI authenticated.
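If you want to run the same check by hand before invoking the agent, a minimal sketch with the standard CLIs looks like this (nothing here is Git-Ape-specific; it simply confirms the tools and sessions listed above):

```bash
#!/usr/bin/env bash
# Rough manual equivalent of the prerequisite check: confirm each CLI exists
# and that the Azure, AWS, and GitHub sessions are active.
set -euo pipefail

for tool in az aws gh jq git; do
  command -v "$tool" >/dev/null || { echo "missing: $tool"; exit 1; }
done

az version --query '"azure-cli"' -o tsv                 # Azure CLI version
aws --version                                           # AWS CLI version
gh --version | head -n 1                                # GitHub CLI version
jq --version
git --version

az account show --query name -o tsv                     # active Azure subscription
aws sts get-caller-identity --query Arn --output text   # valid AWS credentials
gh auth status                                          # GitHub CLI authentication
```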

Step 2: Extract intent from the AWS repo

Git-Ape read the source repo via the GitHub API: every Terraform file, the user_data.sh bootstrap script, and all documentation. From this, it extracted deployment intent:

  • Runtime: Next.js on Node.js 20, built with npm ci && npm run build, run via PM2 in cluster mode on port 3000
  • Compute: EC2 t3.micro (Ubuntu 22.04) with a 30GB gp3 root volume
  • Ingress: Application Load Balancer (HTTP port 80 → EC2 port 3000), public
  • Artefacts: App tarball stored in S3, downloaded to EC2 on boot via user_data.sh
  • Identity: IAM role allowing EC2 to read from S3
  • Networking: VPC (10.0.0.0/16) with two subnets across two AZs, internet gateway, security groups (ports 22, 80, 443, 3000 open to 0.0.0.0/0)
  • Monitoring: None
  • Estimated cost: ~$34/month
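The source bootstrap script isn't reproduced in this post, but the boot-time pattern the intent extraction surfaced (pull a tarball from S3, build on the instance, start PM2 on port 3000) has roughly this shape. This is an illustrative sketch; the bucket and paths are placeholders, not the real repo's values:

```bash
#!/usr/bin/env bash
# Illustrative sketch of the EC2 user_data pattern described above, not the
# actual script from contoso-migration. Bucket and paths are placeholders.
set -euo pipefail

APP_BUCKET="example-artifact-bucket"          # placeholder S3 bucket
APP_DIR=/opt/contoso-outdoors

mkdir -p "$APP_DIR"
aws s3 cp "s3://${APP_BUCKET}/app.tar.gz" /tmp/app.tar.gz   # artefact pulled at boot
tar -xzf /tmp/app.tar.gz -C "$APP_DIR"

cd "$APP_DIR"
npm ci && npm run build                       # build runs on the instance, at startup
PORT=3000 pm2 start npm --name contoso-outdoors -- start   # PM2-managed process (cluster mode in the original)
```

Everything after the download happens at instance boot, which is exactly what the critique step flags in Step 4.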

Step 3: Map services to an Azure-native design

Based on the extracted intent, Git-Ape proposed an Azure mapping focused on lowest cost:

  • EC2 + user_data.sh + PM2 → App Service (B1, Linux, Node 20): no server patching or SSH; platform-managed runtime
  • Application Load Balancer (HTTP) → App Service built-in load balancer + HTTPS-only: simpler ingress and improved security defaults
  • IAM roles/policies → system-assigned Managed Identity: no stored credentials; Azure RBAC
  • Terraform apply from a laptop → GitHub Actions (OIDC) + Bicep: repeatable deployments with an audit trail
  • No monitoring → Application Insights + Log Analytics: first-class telemetry and troubleshooting

Step 4: Critique the design before you generate code

Before generating any code, Git-Ape consulted its rubber-duck critique agent for an independent design review. This caught two blocking issues that significantly improved the final output:

Blocking fix 1 — Deployment model: The initial plan replicated the AWS pattern (download tarball from storage, run npm ci && npm run build on startup). The critique flagged this as a production anti-pattern: slow cold starts, nondeterministic deployments, broken scale-out, and build failures becoming runtime outages.

Fix: build in CI (GitHub Actions), deploy a ready-to-run artefact via zip deploy.
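In CLI terms the corrected pattern is: build once in the pipeline, package the output, and push a ready-to-run zip to App Service. A minimal sketch, with placeholder resource group and app names:

```bash
#!/usr/bin/env bash
# Build once in CI, then zip-deploy the pre-built artefact. Names are placeholders.
set -euo pipefail

RG="rg-contoso-azure"            # placeholder resource group
APP="contoso-azure-web"          # placeholder Web App name

npm ci && npm run build                          # build happens here, not at app startup
zip -r app.zip .next public package.json package-lock.json node_modules

az webapp deploy \
  --resource-group "$RG" \
  --name "$APP" \
  --src-path app.zip \
  --type zip                                     # zip deploy of the ready-to-run artefact
```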

Blocking fix 2 — Storage layer unnecessary: The original plan included Azure Blob Storage to mirror S3 artefact storage. The critique pointed out this added cost and complexity with no benefit when deploying via CI.

Fix: drop Blob Storage entirely.

Other improvements adopted: drop PM2 (App Service manages processes), add Application Insights from day one, enforce HTTPS-only + TLS 1.2 + disable FTP.
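Those hardening defaults also have one-line CLI equivalents, which is handy for verifying an existing app outside the template (names below are placeholders):

```bash
# Apply or verify the same hardening on an existing Web App; placeholder names.
az webapp update --resource-group rg-contoso-azure --name contoso-azure-web \
  --https-only true                                     # redirect all HTTP traffic to HTTPS
az webapp config set --resource-group rg-contoso-azure --name contoso-azure-web \
  --min-tls-version 1.2 --ftps-state Disabled           # enforce TLS 1.2 and disable FTP/FTPS
```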

Step 5: Generate IaC

Git-Ape started by generating Terraform with the AzureRM provider. A human-in-the-loop review changed the instruction to “write the most efficient code; it doesn’t have to be Terraform.” That switch landed on Bicep (Azure’s native IaC), producing a single ~80-line template instead of 200+ lines of Terraform.
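One practical upside of that switch: validation and preview run straight through the az CLI, with no provider plugins or state backend to manage. A minimal sketch against the generated template (the resource group name is a placeholder; add --parameters if the template declares required ones):

```bash
# Compile and preview the generated template before deploying anything.
az bicep build --file infra/main.bicep            # compile to ARM JSON; catches syntax errors early
az deployment group what-if \
  --resource-group rg-contoso-azure \
  --template-file infra/main.bicep                # show what would change, without changing it
```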

Step 6: Create the target repo and generate deployment artefacts

Once the target shape was agreed, Git-Ape generated the Azure repo and pushed it to GitHub:

  • Created the GitHub repo contoso-azure via gh CLI
  • Cloned the source repo, pushed history to the new repo
  • Generated infra/main.bicep: App Service Plan (B1 Linux), Web App (Node 20 LTS, HTTPS-only, TLS 1.2, FTP disabled, managed identity, health checks, always-on), Application Insights, Log Analytics
  • Generated .github/workflows/deploy.yml: checkout → setup Node → npm ci → npm run build → Azure login (OIDC) → create RG → deploy Bicep → zip deploy → health check
  • Updated README.md and DEPLOYMENT_GUIDE.md for Azure
  • Removed all AWS files (main.tf, variables.tf, outputs.tf, terraform.tf, user_data.sh, etc.)
  • Committed and pushed to main
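The repo bootstrapping part of that list is plain gh and git. A rough equivalent of the first two steps, with the organisation name and source URL as placeholders:

```bash
# Create the target repo and carry the history over; org name and URLs are placeholders.
gh repo create my-org/contoso-azure --private           # new target repo via the gh CLI

git clone https://github.com/my-org/contoso-migration.git contoso-azure
cd contoso-azure
git remote set-url origin https://github.com/my-org/contoso-azure.git
git push -u origin main                                 # source history pushed to the new repo
```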

What improves with the generated Azure design

This is an architecture-level comparison derived from the generated configuration.

  • Cost (estimate): AWS ~$34/month (EC2 $8 + ALB $15 + S3 $1 + bandwidth $10); Azure ~$13/month (App Service B1 $13, monitoring on the free tier)
  • Security: AWS was HTTP-only with SSH open to 0.0.0.0/0 and IAM keys; Azure is HTTPS-only with TLS 1.2, FTP disabled, and OIDC + Managed Identity
  • Operations: AWS needed SSH + PM2 babysitting and manual terraform apply; Azure is push-to-main deploys with health checks and App Insights
  • Auditability: AWS kept Terraform state on a laptop; Azure keeps Bicep + Actions in Git, with deployment logs in GitHub

What Git-Ape could also add for production environments

The full Git-Ape workflow includes additional capabilities that were not used in this run:

  • Onboarding: OIDC federation setup, RBAC role assignments, GitHub environment creation (see the sketch after this list)
  • Security gate: automated security analysis of generated templates with blocking/pass verdicts
  • Preflight validation: ARM what-if analysis before deployment
  • Cost estimation: Azure Pricing API queries for per-resource cost breakdown
  • Deployment execution: az deployment create with progress monitoring
  • Integration testing: post-deployment health checks and resource verification
  • Human approval gates: explicit confirmation before any Azure changes
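For the onboarding item specifically, OIDC federation and RBAC assignment come down to a handful of az commands. A hedged sketch; the app display name, repo, subscription ID, and resource group below are all placeholders:

```bash
# Federate a GitHub repo with an Entra app registration and scope its access.
# Every name and ID below is a placeholder.
APP_ID=$(az ad app create --display-name contoso-azure-deploy --query appId -o tsv)
az ad sp create --id "$APP_ID"

az ad app federated-credential create --id "$APP_ID" --parameters '{
  "name": "contoso-azure-main",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:my-org/contoso-azure:ref:refs/heads/main",
  "audiences": ["api://AzureADTokenExchange"]
}'

az role assignment create \
  --assignee "$APP_ID" \
  --role Contributor \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-contoso-azure"   # keep the scope tight
```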

The control framework on offer

To prevent “AI freestyle migrations” in production, Git-Ape supports a phased control framework:

  • Phase 0 — Pre-req validation: confirm tooling and auth (exercised ✅)
  • Phase 1 — Intake & intent extraction: read source repo, extract deployment intent (exercised ✅)
  • Phase 2 — Repo onboarding: configure OIDC, RBAC, GitHub environments (available, not exercised)
  • Phase 3 — Target blueprint: define Azure mapping, get design critique (exercised ✅)
  • Phase 4 — Generate artefacts: produce Bicep, workflows, docs (exercised ✅)
  • Phase 5 — Quality gates: security, preflight, cost checks (available, not exercised)
  • Phase 6 — Human approval + deploy + validate (available, not exercised)

Human-in-the-loop controls

Git-Ape supports two review models:

Option A: Local “generate then review” — generate artefacts locally, review diffs in your editor, run local validations, commit only artefacts that pass. This is what was demonstrated.

Option B: Pull request as the control plane — create a branch, open a PR with generated artefacts, require reviewers, run CI checks (template lint, security scans, preflight), use protected GitHub environments for staged deployment approval.
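In gh terms, Option B is just a branch and a pull request that your existing controls gate. A minimal sketch (branch name and PR text are illustrative):

```bash
# Option B: land generated artefacts through a reviewed pull request.
git checkout -b migrate/azure-bootstrap
git add infra/ .github/workflows/ README.md DEPLOYMENT_GUIDE.md
git commit -m "Add generated Azure infrastructure and deployment workflow"
git push -u origin migrate/azure-bootstrap

gh pr create \
  --title "Azure migration: generated Bicep + deployment workflow" \
  --body "Generated by Git-Ape. Review identity, ingress, and environment gates before merging."
# Branch protection, required reviewers, and protected GitHub environments then
# decide when this change is allowed to reach production.
```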

What to watch for

  • Over-permissioned deployments: keep RBAC scopes tight; require PR review for role assignments
  • Hidden runtime assumptions: user_data scripts encode tribal knowledge; ensure Azure runtime has equivalent environment variables, build steps, and health probes
  • Network exposure drift: confirm inbound rules match intent (the AWS setup had SSH open to the world — the Azure version closes this)
  • SKU creep: require cost checks for plan sizes and monitoring retention
  • One-click production risk: protect GitHub environments so merges don’t automatically push to prod without approval

Wrap-up

This example shows the shape of an agent-assisted migration: Git-Ape analysed an AWS repo, extracted deployment intent, proposed an Azure architecture, incorporated design critique that caught two blocking issues, and generated a deployment-ready Azure repo with Bicep infrastructure and CI/CD pipeline.

The real story isn’t “one IaC method over another.” It’s the combination of intent extraction from existing infrastructure code, architecture remapping to Azure-native services, critique-driven correction that improves the design before any code is written, and the generation of cleaner, more secure deployment assets than the original.

The win here is a safer, more deterministic starting point for “what I can run on Azure,” plus a clear path for layering on onboarding, quality gates, and controlled deployment when you’re ready to take it to production.

Next steps

  • Run the generated workflow in a dev subscription, then confirm the app responds to the health check and emits telemetry to Application Insights.
  • Decide how you want to handle environment separation (dev/test/prod) and protect deployments with GitHub environments and approvals.
  • Harden identity and networking based on your real requirements (RBAC scope, secret storage, private endpoints/VNet integration if needed).
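A quick way to do that first check from a terminal, once the workflow has run (the app and resource group names are placeholders; use the values from your deployment outputs):

```bash
# Verify the deployed app responds and is emitting logs; placeholder names.
APP="contoso-azure-web"
RG="rg-contoso-azure"

curl -fsS "https://${APP}.azurewebsites.net/" >/dev/null && echo "health check OK"

az webapp log tail --resource-group "$RG" --name "$APP"    # stream live application logs
# For request telemetry, open the linked Application Insights resource and
# check the Live Metrics and Failures views.
```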

If you try this: treat the first generated Azure design as a draft. Review it like any other change, especially identity, inbound exposure, and the gates that control promotion to production.

Go to Intelligent Cloud Deployment Agents | Git-Ape to find out more and get started.

 

Authors

Arnaud Lheureux
Chief Developer Advisor, Asia

Arnaud is Chief Developer Advisor at Microsoft in Asia.

Suzanne Daniels
Chief Developer Advisor, EMEA

Suzanne is Chief Developer Advisor at Microsoft in EMEA.
