Using containerized services in your pipeline

Matt Cooper

Azure Pipelines has supported container jobs for a while now. You craft a container with exactly the versions of exactly the tools you need, and we’ll run your pipeline steps inside that container. Recently we expanded our container support to include service containers: additional helper containers accessible to your pipeline.

Service containers let you define the set of services you need available, containerize them, and then have them automatically available while your pipeline is running. Azure Pipelines manages the starting up and tearing down of the containers, so you don’t have to think about cleaning them up or resetting their state. In many cases, you don’t even have to build and maintain the containers – Docker Hub has many popular options ready to go. You simply point your app at the services, typically by name, and everything else is taken care of.

So how does it work? You tell Azure Pipelines what containers to pull, what to call them, and what ports & volumes to map. The pipelines agent manages everything else. Let’s look at two examples showing how to use the feature.

Basic use of service containers

First, suppose you need a memory cache and a proxy server for your integration tests. How do you make sure those servers get reset to a clean state each time you build your app? With service containers, of course:

resources:
  containers:
  - container: my_container
    image: ubuntu:16.04
  - container: nginx
    image: nginx
  - container: redis
    image: redis

pool:
  vmImage: 'ubuntu-16.04'

container: my_container

services:
  nginx: nginx
  redis: redis

steps:
- script: |
    apt update
    apt install -y curl
    curl nginx
    apt install -y redis-tools
    redis-cli -h redis ping

When the pipeline runs, Azure Pipelines pulls three images: Ubuntu 16.04 to run the build tasks in, nginx for a proxy server, and Redis for a cache server. The agent spins up all three containers and networks them together. Since everything is running on the same container network, you can access the services by hostname: that’s what the curl nginx and redis-cli -h redis ping lines are doing. Of course, in your app you’d do more than just ping the services – you’d configure your app to use them. When the job is complete, all three containers will be spun down.
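In practice, “configure your app” usually means handing these hostnames to your code. Here’s a minimal sketch of what such a step might look like; the environment variable names and run-tests.sh are hypothetical stand-ins for your own test harness:

steps:
- script: ./run-tests.sh        # hypothetical test entry point
  env:
    CACHE_HOST: redis           # resolves on the job's container network
    PROXY_URL: http://nginx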

Combining service containers with a matrix of jobs

Suppose you’re building an app that supports multiple different database backends. How do you easily test against each database, without maintaining a bunch of infrastructure or installing multiple server runtimes? You can use a matrix with service containers, like this:

resources:
  containers:
  - container: my_container
    image: ubuntu:16.04
  - container: pg11
    image: postgres:11
  - container: pg10
    image: postgres:10

pool:
  vmImage: 'ubuntu-16.04'

strategy:
  matrix:
    postgres11:
      postgresService: pg11
    postgres10:
      postgresService: pg10

container: my_container

services:
  postgres: $[ variables['postgresService'] ]

steps:
- script: |
    apt update
    apt install -y postgresql-client
    psql --host=postgres --username=postgres --command="SELECT 1;"

In this case, the listed steps will be duplicated into two jobs, one against Postgres 10 and the other against Postgres 11.
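Each matrix variable also becomes an ordinary pipeline variable inside the job, so a step can log (or branch on) which backend it’s testing against. For example:

steps:
- script: echo "Testing against $(postgresService)"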

Service containers work with non-container jobs, where tasks are running directly on the host. They also support advanced scenarios such as defining your own port and volume mappings; see the documentation for more details. Like container jobs, service containers are available in YAML-based pipelines. [Edit, 2019-01-24: this feature has not finished rolling out everywhere on Azure DevOps. If it doesn’t work for you, wait a few days and give it another try. Sorry for the confusion!]
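As an example of those advanced scenarios: when your steps run directly on the host instead of inside a job container, you can publish a service’s ports to the host and reach it on localhost. A minimal sketch, assuming the hosted Ubuntu agent (the host volume path is just an illustration):

resources:
  containers:
  - container: redis
    image: redis
    ports:
    - 6379:6379                # host:container, so host-based steps can connect
    volumes:
    - /srv/redis-data:/data    # hypothetical host path mounted into the container

pool:
  vmImage: 'ubuntu-16.04'

services:
  redis: redis

steps:
- script: |
    sudo apt update
    sudo apt install -y redis-tools
    redis-cli -h localhost -p 6379 ping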

6 comments

  • Jaap-Willem van den Berg

    Hi,
    I’m trying to use the service containers in combination with Azure Container Registry.
    I have created a ‘Service Connection’ for ‘Docker Registry’ with the name ‘myacr’. I have set Authorization for the build script to use this Service Connection.
    I keep getting the error:
    ##[error]Value cannot be null.Parameter name: registryServer
    ##[debug]System.ArgumentNullException: Value cannot be null. 
    Am I doing something wrong or is this a known error?
    My yaml:
    resources:
      repositories:
      - repository: self
      containers:
      - container: test
        image: mysomething.azurecr.io/myimage
        endpoint: myacr

    pool:
      vmImage: 'ubuntu-16.04'

    services:
      test: test

  • Luigi Grilli

    What if you need to run a service in a Linux container but your job/build on a Windows host. Is it possible? Example: I want to build on VS2019 but run an Elasticsearch service container, which is only officially available for Linux. From what I can understand, this is not possible. Are you planning to support such a scenario?

    • Matt Cooper (Microsoft employee)

      Our hosted Windows agents (at least up through VS2017 — I haven’t tested this on VS2019) don’t run Linux containers, but you absolutely could on a self-hosted agent.

  • Weihan Li

    Is there a complete example for using containerized services?
    I’m trying to use it on the Linux build agent `ubuntu-16.04`, with pipeline config as follows:
    azure-pipelines.yml
    but got two errors:

    /azure-pipelines.yml (Line: 15, Col: 3): Unexpected value 'resources'
    /azure-pipelines.yml (Line: 23, Col: 7): A sequence was not expected

  • Ronald Suharta

    How do you integrate with AWS ECR as a service container in an Azure pipeline? I’ve set up the following, but I’m not sure how to call aws ecr get-login, as I’m getting “no basic auth credentials”.

    resources:
      containers:
      - container: mysqlserverdb
        image: 1915X606XXXX.dkr.ecr.ap-southeast-2.amazonaws.com/mysqlserverdb:latest
        options: --name mydb
        env:
          ACCEPT_EULA: Y
          SA_PASSWORD: Pass123!

    services:
      mysqlserverdb: mysqlserverdb

  • Jorrit Stutterheim

    Hi Matt,

    Thanks for your article!

    I am still trying to wrap my head around the use of service containers. I am in a unique situation where I run almost all jobs on private agents. For one of the required steps, a Fortify scan, I have made a Docker image with a local Fortify installation.

    Now I am trying to run the build on one Docker image containing the build tools, and then run the Fortify CLI scan on a separate image, containing Fortify. The downside at this moment is that I am using 2 jobs to accomplish this.

    Do you think it is possible to do command line steps on the second image within the same job?

    Thanks and kind regards, Jorrit
