Announcing General Availability of YAML CD features in Azure Pipelines

Roopesh Nair


We’re excited to announce the general availability of the Azure Pipelines YAML CD features. We now offer a unified YAML experience to configure each of your pipelines to do CI, CD, or CI and CD together.

Releases vs. YAML pipelines

Azure Pipelines supports continuous integration (CI) and continuous delivery (CD) to test, build and ship your code to any target – repeatedly and consistently. You accomplish this by defining a pipeline. CD pipelines can be authored using the YAML syntax or through the visual user interface (Releases).

You can create and configure release pipelines in the web portal with the visual user interface editor (Releases). A release pipeline consumes the artifacts produced by your CI pipelines and deploys them to your targets. Alternatively, you can define the CI/CD workflow in a YAML file – azure-pipelines.yml. The pipeline file is versioned with your code and follows the same branching structure, so changes are validated through code reviews in pull requests and branch build policies. Most importantly, the changes live in version control with the rest of the codebase, making issues easy to track down.

Highlights

This release introduces several new CD features that are available to all organizations using multi-stage YAML pipelines. Some of the highlights include:

Multi-stage YAML pipelines (for CI and CD)

Stages are the major divisions in a pipeline: “build app”, “run tests”, and “deploy to prod” are good examples of stages. They are logical boundaries in your pipeline at which you can pause the pipeline and perform various checks. Stages can be arranged into a dependency graph, for example, “run the Dev stage before QA”.
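A minimal sketch of such a pipeline (stage and job names here are illustrative, not prescribed):

```yaml
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: Build
  jobs:
  - job: BuildApp
    steps:
    - script: echo "Building the app"

- stage: Dev
  dependsOn: Build          # Dev runs only after Build succeeds
  jobs:
  - job: DeployDev
    steps:
    - script: echo "Deploying to Dev"

- stage: QA
  dependsOn: Dev            # run the Dev stage before QA
  jobs:
  - job: DeployQA
    steps:
    - script: echo "Deploying to QA"
```

Because stages form a dependency graph, independent stages (say, two test stages that both depend on Build) run in parallel by default.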


Resources in YAML pipelines

A resource defines the type of artifact used by a pipeline. Resources provide full traceability of the artifacts consumed in your pipeline, including the artifact version, associated commits, and work items. Most importantly, you can fully automate the DevOps workflow by subscribing to trigger events on your resources. The supported resource types are pipelines, builds, repositories, containers, and packages.

resources:
  pipelines: [ pipeline ]  
  builds: [ build ]
  repositories: [ repository ]
  containers: [ container ]
  packages: [ package ]
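For example, a pipeline resource can consume a CI pipeline’s artifacts and trigger this pipeline when that CI run completes (the alias and pipeline name below are illustrative):

```yaml
resources:
  pipelines:
  - pipeline: myAppCI        # alias used to reference this resource elsewhere
    source: MyApp-CI         # name of the CI pipeline in the project
    trigger:
      branches:
        include:
        - main               # run this pipeline when CI completes on main
```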

Environments and deployment strategies

An environment is a collection of resources that can be targeted for deployment from a pipeline. Environments can include Kubernetes clusters, Azure web apps, virtual machines, and databases. Typical examples of environment names are Dev, Test, QA, Staging, and Production.
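Deployment jobs target an environment and apply a deployment strategy. A minimal runOnce sketch (the environment name is illustrative):

```yaml
jobs:
- deployment: DeployWeb
  environment: Production    # approvals and checks on this environment gate the job
  pool:
    vmImage: 'ubuntu-latest'
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "Deploying build $(Build.BuildId)"
```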


Kubernetes and Virtual Machine resources in environment

Kubernetes resource view provides the status of objects within the mapped Kubernetes namespace. It overlays pipeline traceability on top, so you can trace back from a Kubernetes object to the pipeline and to the commit. Virtual machine resource view lets you add VMs hosted on any cloud or on-premises, so you can roll out application updates and track deployments to those machines.


Approvals and checks on resources

Pipelines rely on resources such as environments, service connections, agent pools, and library items. Checks enable the resource owner to control whether a stage in a pipeline can consume the resource. As an owner, you define the checks that must pass before a stage consuming the resource can start. For example, a manual approval check on an environment ensures that deployment happens only after the reviewers have signed off.


Review apps for collaboration

Review apps work by deploying every pull request from your Git repository to a dynamically created resource in an environment. The team can see the changes in the PR, as well as how they work with other dependent services, before they’re merged into the main branch. As a result, you can shift code quality checks left and improve productivity.
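As a sketch of the pattern (the environment and resource names here are hypothetical), a review-app deployment job maps each pull request to its own resource in the environment:

```yaml
jobs:
- deployment: DeployPullRequest
  pool:
    vmImage: 'ubuntu-latest'
  environment:
    name: smarthotel-dev                                      # hypothetical environment
    resourceName: review-$(System.PullRequest.PullRequestId)  # one resource per PR
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "Deploying the PR to its own review resource"
```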


Refreshed UX for service connections

We have improved the service connection user experience and enabled configuring approvals and checks on service connections. For approvals, we follow segregation of roles between resource owners and developers: any pipeline run that uses a connection with checks enabled will pause until the checks pass.
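For illustration, a step that consumes a service connection (the connection name below is hypothetical); if checks are enabled on that connection, the stage pauses until they are satisfied:

```yaml
steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-azure-connection'  # hypothetical service connection name
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: az account show
```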


Thank you

Lastly, we want to thank all our users who have adopted Azure Pipelines. If you’re new to Azure Pipelines, get started for free on our website. Learn what’s new and what you can do with Azure Pipelines. As always, we’d love to hear your feedback and comments. Feel free to comment on this post or tweet at @AzureDevOps with your thoughts.

52 comments


  • Janusz Nowak

    Nice to see multi-stage YAML pipelines (for CI and CD) reach GA. Any plans to allow choosing the version of pipeline resources when scheduling a release in YAML, or an option to use variables to select resource versions? Also missing: the option to set the environment for a deployment at a higher level than the job.

  • Keith Drew

    Good to see all the improvements and to see this reach GA. However, until it supports deployment queue settings (similar to legacy Releases), we cannot move to this.

    • Ben Duguid

      You can use the Library to create groups of variables, and then load those at different points in the YAML – the pipeline as a whole has a variables section that you can load a library set into, and then each stage can also load in variable groups scoped as you would expect.

      The editing experience isn’t anywhere near as good as the scoped variables on a release pipeline, though, with lots of copy and paste and chances for errors 🙁
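      The pattern described above looks roughly like this (the variable group names are illustrative):

```yaml
variables:                   # pipeline-level: visible to every stage
- group: shared-settings     # hypothetical variable group from the Library

stages:
- stage: Dev
  variables:                 # stage-scoped: visible only within Dev
  - group: dev-settings      # hypothetical Dev-only group
  jobs:
  - job: Deploy
    steps:
    - script: echo "Uses variables from shared-settings and dev-settings"
```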

    • ConsulTent Support

      There are actually three places where variables may be stored now:
      1. Directly in the Pipeline yaml code.
      2. In the Variables section when you edit the pipeline in your browser. This is a neat feature that lets you override variables (if the override option is set) per manual run.
      3. The Library. You will want to use this if you have secret keys, or if you want to link with Azure Key Vault.

      Using a combination of all 3 allows a lot of flexibility.

  • Michael Taylor

    I assume this still does not support the one feature that everyone who uses “classic” releases is still waiting for – the ability to deploy to different environments using a single pipeline but not automated. Note that this is not the same as a manual approval and never has been. The team seems to keep lumping them together. The YAML pipelines require that all the stages either get run or opt out. In other words we build, deploy to Dev, maybe run tests, deploy to QA, maybe have a manual approval, and then deploy to Prod. But the pipeline is “in progress” until all this completes. For companies that CD to Dev every day but only send a handful of builds to QA maybe once a sprint, and to Prod every couple of sprints, you end up with either failed pipelines or multiple active ones.

    Is the highly requested feature coming, being able to use YAML pipelines and later (days, weeks, even months) deploy to other stages from the same release like we can in classic? Or are we stuck using classic until MS realizes we don’t deploy the way you do?

    • ConsulTent Support

      This can be accomplished using multiple pipelines. Why would you want pipelines stacking up in a partially finished state?
      Pipelines can feed to other pipelines. Have another pipeline for approvals.

      • Daniel Schroeder

        I totally agree with Michael. This is the single feature that is keeping us from moving towards YAML for deployments. Sure, it can be accomplished via approvals, but that means constantly getting bombarded with emails every day for every PR that gets merged in, when really we only want to push one version to Prod each sprint. Using approvals is a very clunky workaround. I just want every PR merged in to get pushed to Dev and Int automatically, but want to manually push a button to go to RC and Prod, without having to tell everyone in the company to set up special email rules to send all approval emails to their trash, effectively making the approval notifications worthless.

      • Michael Taylor

        @ConsulTent Support, this is in no way solved by multiple pipelines; in fact, that would make it worse. With classic release I can go to any of the 80+ pipelines that I manage and see at a glance that release 1 of Prod A is currently in QA whereas release 2 of Prod A is in Dev. I cannot remotely imagine how that would look if I had a separate pipeline for each of my stages. Release pipelines are nothing more than a glorified bag of artifacts with deployment instructions. There is no reason why you would ever treat a deployment of the same artifacts into another environment as a brand-new release, at least in companies I’ve been at.

        The other thing is that the pipeline is only partially finished. A release isn’t official until it gets deployed to Prod. There can be any number of releases to Dev, and likely fewer releases to QA, that ultimately culminate in a Prod release. At any point in time a QA release could be promoted to Prod, while multiple other releases sit in other stages. The release isn’t finished, but the stage is. Multiple pipelines doesn’t solve the underlying issue; it just multiplies the number of pipelines you’re dealing with.

        The ask from the community has always been to have the same ability as classic release in terms of deploying to stages on demand. Until YAML releases can do this, there is no reason for anyone to complicate their existing, working releases with a hacky approach. If multiple pipelines work for you, then great, but it is not the solution to the problem.

        • James May

          @Michael: Thank you for posting this – it has saved me a heap of time figuring this out myself!

          @Roopesh (author): it would be awesome if you included a link in the post to the roadmap/future plans, especially around feature parity. I’m really looking forward to when I can finally get my release definitions into git.

  • Gavin Barron

    Is there planned work to map environments to stages for work item tracking, like we get with classic, so that we can see at a glance where a bug/story/task has been deployed from within the card?