February 5th, 2026

Minimal GitOps for Edge Applications with Azure IoT Operations and Azure DevOps

Maho Pacheco
Sr. Software Engineer

Introduction

Building and operating edge applications is a team sport, especially when you’re working with Azure IoT Operations (AIO), custom Rust services, and a mix of cloud and on-prem environments. Over the past year, our team set out to create a minimal GitOps workflow that’s simple enough for small pilots, but robust enough to scale to production.

This post shares our approach, the architectural decisions behind it, and practical templates you can use today. All pipeline YAMLs and scripts are available in our public repo.

Why Minimal GitOps?

Edge solutions are complex by nature: multiple services, hardware dependencies, and frequent updates. In Kubernetes-based environments—especially those running on industrial edge devices with limited connectivity and resource constraints—you can’t afford fragile or ad-hoc deployments. It’s not uncommon for a cluster on a plant floor to lose WAN connectivity for hours or even days, or for devices to operate with tight CPU and memory limits. We needed a workflow that:

  • Decouples application and infrastructure changes
  • Promotes changes safely across environments (Dev → QA → Prod)
  • Tracks versions and dependencies with clarity
  • Automates container builds, tagging, and deployments

Our answer: a multi-repo, GitOps-driven release model powered by Azure DevOps pipelines.

Note: In our scenario, Azure DevOps was a hard dependency because it was the primary CI/CD platform used by our customer. If you’re using GitHub Actions workflows instead, you might consider Kalypso as a GitOps alternative for similar use cases.

The Strategy: Multi-Repo, Version-Locked, Automated

We adopted a strategy inspired by the current Azure IoT Operations deployment model, GitOps best practices, and our previous project experience:

  • Application repos: Each Rust service, connector, or UI lives in its own repo. CI pipelines build, test, and publish container images.
  • Infra/GitOps repo: Holds Kubernetes overlays, kustomize bases, secrets, and a central versions.yaml lock file.
  • Tooling/scripts repo: Contains helper scripts for tagging, version updates, and environment promotion.

Every time a component is updated, its pipeline opens a PR to the GitOps repo, bumping the image tag for the target environment. Promotion is explicit and gated—no surprises. This separation ensures that industrial edge deployments remain predictable, even when clusters are remote, connectivity is intermittent, and hardware profiles vary widely. For example, a wind farm site might only sync once a day due to limited satellite bandwidth—having deterministic, version-locked manifests makes that safe.
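
To make this concrete, here is a minimal sketch of what the central versions.yaml lock file could look like. The component names match the images used later in this post; the structure and tag values are illustrative, not copied from the actual repo.

# versions.yaml — single source of truth for image tags per environment (illustrative sketch)
environments:
  devtest:
    heuristic: v1.4.2
    http-connector: v0.9.1
    pruning-service: v2.0.0
    media-capture-service: v1.1.0-rc20260205
  qa:
    heuristic: v1.4.1
    http-connector: v0.9.1
    pruning-service: v1.7.3
    media-capture-service: v1.0.4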

Pipeline Templates: Build, Tag, Push, and Promote

Let’s get practical. The heart of our GitOps workflow is a set of pipeline templates and scripts that keep everything humming along—from code commit to deployment. We designed these to be minimal, reusable, and easy for any team member to pick up.

Build and Push Rust Containers:

Whenever a Rust service or connector is updated, the pipeline springs into action. It builds the container image, tags it using semantic versioning (straight from the code’s version file or git tags), and pushes it to our Azure Container Registry. This step is all about consistency—every image is versioned, traceable, and ready for deployment. In Kubernetes-heavy edge environments, this consistency is what keeps multi-node clusters aligned, even if sites are physically far apart or running on low-powered devices.
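
As a rough sketch (not the exact template from our repo), a reusable build-containers.yaml template matching the parameters used later in this post might look something like this; the one-directory-per-component layout and the Cargo.toml parsing are assumptions:

# templates/build-containers.yaml (illustrative sketch, not the repo's actual template)
parameters:
- name: jobId
  type: string
- name: containerRegistry
  type: string
- name: acrUsername
  type: string
- name: acrPassword
  type: string
- name: imageNames
  type: object

jobs:
- job: ${{ parameters.jobId }}
  displayName: Build and push Rust container images
  steps:
  - ${{ each image in parameters.imageNames }}:
    - script: |
        set -euo pipefail
        # Read the crate version from the component's Cargo.toml (assumes one directory per component)
        VERSION=$(grep -m1 '^version' "${{ image }}/Cargo.toml" | cut -d '"' -f2)
        echo "${{ parameters.acrPassword }}" | docker login "${{ parameters.containerRegistry }}" \
          --username "${{ parameters.acrUsername }}" --password-stdin
        docker build -t "${{ parameters.containerRegistry }}/${{ image }}:v${VERSION}" "${{ image }}"
        docker push "${{ parameters.containerRegistry }}/${{ image }}:v${VERSION}"
      displayName: Build and push ${{ image }}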

Tag Rust Components:

Tagging is how we keep releases organized and auditable. A simple script scans each Rust component, reads its version, and creates a git tag in the format component-name/version. This makes it easy to see what’s running where, and to roll back if needed—critical if you’re dealing with a factory cluster that can’t tolerate downtime longer than a few minutes.
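
The real logic lives in the tooling repo's tagging script; the step below is only a sketch of the core loop, assuming one directory per Rust component, each with its own Cargo.toml:

steps:
- script: |
    set -euo pipefail
    # For every component, read the crate version and create a tag like heuristic/v1.4.2,
    # skipping components that are already tagged at that version.
    for manifest in */Cargo.toml; do
      component=$(dirname "$manifest")
      version=$(grep -m1 '^version' "$manifest" | cut -d '"' -f2)
      tag="${component}/v${version}"
      if ! git rev-parse -q --verify "refs/tags/${tag}" >/dev/null; then
        git tag -a "$tag" -m "Release ${component} v${version}"
        git push origin "$tag"
      fi
    done
  displayName: Tag Rust components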

Update Versions in GitOps Repo:

Once a new image lands in the registry, another script takes over. It updates the Kubernetes deployment files in our GitOps repository, bumping the image tag to the latest version. This keeps our environments in sync with the most recent builds—no manual edits, no guesswork. For clusters with flaky backhaul links, this ensures the next time they pull updates, they get the correct, locked version.
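
Mechanically, the update can be as simple as cloning the GitOps repo and editing the overlay for the target environment. The step below is a sketch, not the actual script: the repo, overlay path, and the ORG, PROJECT, and NEW_TAG variables are illustrative, and it assumes the kustomize CLI is available on the agent.

steps:
- script: |
    set -euo pipefail
    # Clone the GitOps repo with the pipeline's OAuth token, then bump one image tag
    # in the devtest overlay and push a branch for review.
    git clone -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" \
      "https://dev.azure.com/$(ORG)/$(PROJECT)/_git/gitops-devtest-qa"
    cd gitops-devtest-qa
    git checkout -b "bump/heuristic-$(NEW_TAG)"
    (cd overlays/devtest && kustomize edit set image "heuristic=$(ACR_NAME).azurecr.io/heuristic:$(NEW_TAG)")
    git -c user.name="GitOps Pipeline" -c user.email="gitops-pipeline@example.com" \
      commit -am "Bump heuristic to $(NEW_TAG) in devtest"
    git push origin "bump/heuristic-$(NEW_TAG)"
  displayName: Update devtest image tag in GitOps repo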

Create GitOps Environment PR:

Promotion is explicit and transparent. When a new version is ready for an environment (say, QA or production), the pipeline opens a pull request to update the deployment manifests. This gives the team a chance to review, approve, and track every change before it goes live.
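
One way to automate the PR itself is the Azure DevOps CLI. The step below is a sketch under a few assumptions: the azure-devops CLI extension is installed on the agent, and the branch, repo, and ORG/PROJECT/NEW_TAG variables are illustrative.

steps:
- script: |
    set -euo pipefail
    # Open a pull request in the GitOps repo so the promotion is reviewed before it goes live.
    az repos pr create \
      --organization "https://dev.azure.com/$(ORG)" \
      --project "$(PROJECT)" \
      --repository gitops-devtest-qa \
      --source-branch "bump/heuristic-$(NEW_TAG)" \
      --target-branch main \
      --title "Promote heuristic $(NEW_TAG) to devtest" \
      --description "Automated image tag bump from CI."
  displayName: Open GitOps promotion PR
  env:
    AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)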

The beauty of this approach? It’s modular. Each step is automated but reviewable, and every change is version-locked. The result is a workflow that’s both minimal and robust—perfect for industrial edge deployments where downtime is costly, connectivity may be scarce, and auditability is a must.

Versioning and Promotion: SemVer All the Way

Let’s talk about versioning. Keeping things clear and predictable is key for edge deployments where Kubernetes orchestrates workloads across constrained or remote nodes. We stick to Semantic Versioning (SemVer) for everything:

  • MAJOR: Breaking changes (think API overhauls or schema updates)
  • MINOR: New features that don’t break anything
  • PATCH: Bug fixes and minor adjustments

Our image tags use the format vX.Y.Z for standard releases, or vX.Y.Z-rc<BuildId> for release candidates—helpful when you need to trigger a new build with the same version, such as testing a candidate before finalizing. This approach makes it easy to track exactly what’s running in each environment. Tagging is handled by the tag-components.rust.sh script, which reads version definitions from Cargo.toml.
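
For illustration, here is a minimal sketch of how a pipeline step could derive that tag from Cargo.toml and the build ID; the branch-based switch between release and release-candidate tags is an assumption, not necessarily how tag-components.rust.sh decides.

steps:
- script: |
    set -euo pipefail
    # Read the crate version and append -rc<BuildId> for release-candidate builds.
    VERSION=$(grep -m1 '^version' Cargo.toml | cut -d '"' -f2)
    if [ "$(Build.SourceBranchName)" = "main" ]; then
      TAG="v${VERSION}"
    else
      TAG="v${VERSION}-rc$(Build.BuildId)"
    fi
    echo "Computed image tag: ${TAG}"
    echo "##vso[task.setvariable variable=imageTag]${TAG}"
  displayName: Compute SemVer image tag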

Promotion is intentional and controlled. Pipelines use scripts to update the lock file and overlays, then open a PR for review before anything reaches production. These tools keep releases traceable and environments in sync—even when a cluster at a remote oil rig or vessel is running in isolation for weeks at a time.

Scaling Up: Separation, Audit, and Flexibility

As your solution matures, scaling your GitOps workflow becomes essential for maintainability and security. Here are some practical strategies:

  • Environment Separation: Split your GitOps repositories by environment—such as dev, qa, and prod. This separation helps isolate changes, reduces risk, and simplifies access management, which is particularly valuable when clusters run on distributed industrial devices with limited local capacity.
  • Access Controls: Tighten permissions for production environments to ensure only authorized users can promote changes. Use Azure DevOps security groups and branch protection rules to enforce this.
  • Inventory Management: Maintain a living inventory of all repositories and components. This makes it easier to track dependencies, audit changes, and onboard new team members.

Our scripts and pipeline templates support these practices. To implement environment separation, simply create distinct pipelines for each environment using the same reusable template. For example:

trigger: none
pr: none

resources:
  pipelines:
  - pipeline: ci-deploy-containers-to-acr
    source: ci-deploy-containers-to-acr
    trigger: true

jobs:
- template: templates/create-gitops-env-pr.yml
  parameters:
    jobId: 'update_devtest'
    displayName: 'Update devtest image tags and open PR'
    envName: 'devtest'
    repoName: 'gitops-devtest-qa'
    targetBranch: 'main'
    acrName: $(ACR_NAME)
    acrResourceGroup: $(ACR_RESOURCE_GROUP)
    azureServiceConnection: $(AZURE_SERVICE_CONNECTION)

- template: templates/create-gitops-env-pr.yml
  parameters:
    jobId: 'update_qa'
    displayName: 'Update qa image tags and open PR'
    envName: 'qa'
    repoName: 'gitops-devtest-qa'
    targetBranch: 'main'
    acrName: $(ACR_NAME)
    acrResourceGroup: $(ACR_RESOURCE_GROUP)
    azureServiceConnection: $(AZURE_SERVICE_CONNECTION)

Note that your Azure Container Registries may also differ across environments, so you can apply the same separation to your CI pipelines. For example, for Dev/Test:

trigger:
- main

stages:
- stage: BuildContainers
  displayName: Build and Push Containers
  jobs:
  - template: templates/build-containers.yaml
    parameters:
      jobId: 'devtest_acr'
      containerRegistry: myacr.azurecr.io
      acrUsername: $(ACR_DEVTEST_USERNAME)
      acrPassword: $(ACR_DEVTEST_PASSWORD)
      imageNames:
      - heuristic
      - http-connector
      - pruning-service
      - media-capture-service

This approach keeps your environments isolated, makes auditing straightforward, and gives you the flexibility to scale as your edge solution grows. For industrial edge clusters—where physical access is rare, bandwidth is precious, and recovery windows are tight—this level of separation is what keeps operations resilient.

Try It Yourself

All templates and scripts are available in our public repo.

You’ll find:

  • Pipeline YAMLs for building and deploying Rust services
  • Scripts for tagging Rust components and updating image versions
  • Documentation and ADRs for governance

Final Thoughts

Our customer had little experience with Kubernetes or GitOps, so simplicity was essential. The goal of minimal GitOps wasn’t to cut corners, but to build a foundation that’s easy to understand, operate, and scale. By combining Azure IoT Operations, Azure DevOps, and a clear repo strategy, you can deliver edge solutions with confidence—even in environments where connectivity is intermittent, resources are constrained, and clusters are far from the datacenter floor. Whether it’s a remote plant floor that goes offline for 48 hours or a shipping vessel running with limited compute, a minimal GitOps approach keeps operations predictable and secure.

Questions or feedback? Open an issue or PR in the repo—we’d love to hear from you.

Author

Maho Pacheco
Sr. Software Engineer