{"id":52955,"date":"2019-02-21T08:52:32","date_gmt":"2019-02-21T16:52:32","guid":{"rendered":"https:\/\/blogs.msdn.microsoft.com\/devops\/?p=47755"},"modified":"2019-04-03T08:56:29","modified_gmt":"2019-04-03T16:56:29","slug":"cross-platform-container-builds-with-azure-pipelines","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/devops\/cross-platform-container-builds-with-azure-pipelines\/","title":{"rendered":"Cross-Platform Container Builds with Azure Pipelines"},"content":{"rendered":"<blockquote style=\"padding: 10px 15px;background-color: #eeeeee\">\n<p style=\"margin:0\">\n    This is a follow-up to Matt Cooper&#8217;s earlier blog post, &#8220;<a href=\"https:\/\/devblogs.microsoft.com\/devops\/using-containerized-services-in-your-pipeline\/\">Using containerized services in your pipeline<\/a>&#8220;. If you haven&#8217;t yet, I encourage you to read that post to understand the new `container` syntax in the pipeline definition.\n  <\/p>\n<\/blockquote>\n<p>As a program manager for Azure DevOps, I spend a lot of time speaking with customers about their DevOps practices. In a recent meeting, a development team was excited about <a href=\"https:\/\/azure.microsoft.com\/en-us\/services\/devops\/pipelines\/\">Azure Pipelines<\/a> and our Linux build agents that we manage in Azure, but they needed to build their application on CentOS instead of Ubuntu.<\/p>\n<p>Like text editors, whitespace and the careful placement of curly braces, Linux distributions can be hotly debated among engineers. But one of the great things about Azure Pipelines is that you don\u2019t need to rely on our choice of Linux distribution. You can just bring your own \u2013 using containers. It\u2019s easy to create a Docker image that has the exact distribution that you want to run your builds on. Want to build on an older LTS version of Ubuntu like Trusty? No problem. Want to run the very latest RHEL or CentOS? 
That\u2019s great, too.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/wp-content\/uploads\/sites\/6\/2019\/05\/image126.png\" alt=\"\" width=\"225\" style=\"padding: 5px 0 15px 25px\" align=\"right\" \/><\/p>\n<p>Of course, the choice of distribution isn\u2019t just a personal preference: there\u2019s usually a solid technical reason for wanting a CI build on a particular platform. Often you want to perform your build on a system that\u2019s identical \u2014 or nearly so \u2014 to the system you\u2019re deploying to. And since Azure Pipelines offers a single Linux-based platform \u2013 Ubuntu 16.04 LTS (the LTS stands for Long-Term Support) \u2013 this might seem like a problem if you want to build on a different distribution, like CentOS.<\/p>\n<p>Thankfully, it\u2019s easy to run your build in a CentOS container. And even better than building in a container with the base distribution, you can provide your own container that has the exact versions of the dependencies that you want, so there\u2019s no initial step of running apt-get or yum to install your packages.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/wp-content\/uploads\/sites\/6\/2019\/05\/image127.png\" alt=\"\" align=\"left\" width=\"300\" style=\"padding: 5px 20px 20px 0\" \/><\/p>\n<p>I&#8217;m a maintainer of the <a href=\"https:\/\/libgit2.org\/\">libgit2<\/a> project and we recently moved over to Azure Pipelines using container builds. The project decided to adopt containers so that we could build on Ubuntu 14.04 (\u201cTrusty\u201d), which is libgit2\u2019s oldest supported platform.<\/p>\n<p>But what if we wanted to build libgit2 on a different distribution? Let\u2019s walk through how to use Azure Pipelines to build this project on the latest CentOS image instead.<\/p>\n<h3>Creating an Image<\/h3>\n<p>The first thing we need to do is create an image that has our dependencies installed. 
That begins, of course, with the creation of the <code>Dockerfile<\/code>.<\/p>\n<p>The Dockerfile starts with a base Linux distribution image and adds in our dependencies, in much the same way that we used to do on each CI build.<\/p>\n<pre><code>FROM centos:7\nRUN yum install -y git cmake gcc make \\\n    openssl-devel libssh2-devel openssh-server \\\n    git-daemon java-1.8.0-openjdk-headless\n<\/code><\/pre>\n<p>Once the Dockerfile is created, we can build our image:<\/p>\n<p><code>docker build -t ethomson\/libgit2-centos:latest .<\/code><\/p>\n<p>And then push it up to Docker Hub:<\/p>\n<p><code>docker push ethomson\/libgit2-centos:latest<\/code><\/p>\n<p>Finally, for maintenance and repeatability, we <a href=\"https:\/\/github.com\/libgit2\/docker-build\">check these <code>Dockerfile<\/code>s in<\/a> to a repository once we&#8217;ve created them.<\/p>\n<h3>Testing the Image<\/h3>\n<p>One of my favorite things about using containers in my CI build is that I can also use the same containers in my <em>local<\/em> builds. That way I can make sure that my container is set up exactly how I want it before I push it up to Docker Hub or start my first build with Azure Pipelines.<\/p>\n<p>This keeps the inner loop very tight when you&#8217;re preparing your CI system: since everything&#8217;s in a container, you can get things working on your local machine without experimenting on the CI system. So there&#8217;s no time spent provisioning a VM and no time spent downloading a git repository; it&#8217;s all ready to go locally.<\/p>\n<p>The other great thing is that everything\u2019s installed and running within the container. If you have test applications then they stay isolated. In my example, the libgit2 tests will optionally start up a git server and an SSH server. 
I\u2019m <em>much<\/em> happier running those in a container than on my actual development box \u2013 and I\u2019m lucky enough to work in a company where I&#8217;m actually able to start these on my local machine. For developers working in environments with stricter controls on machine-level changes, containers provide a fantastic solution.<\/p>\n<p>And with Docker Desktop, you can do this even if you&#8217;re running macOS or Windows on your development box and building in a Linux container.<\/p>\n<p>To run our build locally:<\/p>\n<pre><code>docker run \\\n    -v $(pwd):\/src \\\n    -v $(pwd)\/build:\/build \\\n    -e BUILD_SOURCESDIRECTORY=\/src \\\n    -e BUILD_BINARIESDIRECTORY=\/build \\\n    -w \/build \\\n    ethomson\/libgit2-centos:latest \\\n    \/src\/ci\/build.sh\n<\/code><\/pre>\n<p>What we&#8217;ve done here is map the current directory &#8211; our git repository &#8211; to <code>\/src<\/code> in the container, and a subdirectory called build to the <code>\/build<\/code> directory in the container.<\/p>\n<p>We&#8217;ve also set two environment variables, <code>BUILD_SOURCESDIRECTORY<\/code> and <code>BUILD_BINARIESDIRECTORY<\/code>. This isn&#8217;t strictly necessary, but it&#8217;s useful since these are the variables used by the Azure Pipelines build agent. This means you can share your build scripts between a bare-metal Azure Pipelines build agent and a container without any changes.<\/p>\n<h3>CI\/CD in Azure Pipelines<\/h3>\n<p>One of the nice features of Azure Pipelines is that you get an actual virtual machine, which means that you can run your own Docker images as part of the CI\/CD pipeline.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/msdnshared.blob.core.windows.net\/media\/2018\/11\/image128.png\" alt=\"\" align=\"right\" width=\"350\" style=\"padding: 5px 0 10px 20px\" \/><\/p>\n<p>You can imagine this container as a bit of abstraction over the pipeline. 
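<\/p>\n<p>Because every step sees the same mapped directories and the same <code>BUILD_SOURCESDIRECTORY<\/code> and <code>BUILD_BINARIESDIRECTORY<\/code> variables, the build script itself can stay simple. Here\u2019s a minimal sketch of what such a script might look like &#8211; illustrative only, not libgit2\u2019s actual <code>ci\/build.sh<\/code>:<\/p>\n<pre><code>#!\/bin\/sh\n# Sketch only: honor the agent's variables when they're set, and\n# fall back to sensible defaults for local runs outside Azure Pipelines.\nSOURCE_DIR=${BUILD_SOURCESDIRECTORY:-$(pwd)}\nBINARY_DIR=${BUILD_BINARIESDIRECTORY:-$(pwd)\/build}\nmkdir -p \"$BINARY_DIR\"\ncd \"$BINARY_DIR\"\n# Configure and build out-of-tree if a CMake project is present.\nif [ -f \"$SOURCE_DIR\/CMakeLists.txt\" ]; then\n    cmake \"$SOURCE_DIR\"\n    cmake --build .\nfi\n<\/code><\/pre>\n<p>Run on a hosted agent, the variables win; run locally (or via the <code>docker run<\/code> above), the fallbacks do. Either way the script doesn\u2019t change.<\/p>\n<p>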
Azure Pipelines will orchestrate multiple container invocations, each picking up where the other left off. Instead of simply invoking the compiler to take your source and build it, you run the compiler inside the container. The container is isolated, but since you have your source and binary directories mapped, you capture the output of the build to use in the next stage.<\/p>\n<p>You can make this as coarse- or as fine-grained as you&#8217;d like. For libgit2, we have a script that does our build and one that runs our tests. The build script uses cmake within the mapped source directory to discover the container&#8217;s environment, configure the build, and then run it. The binaries &#8211; our library and the tests &#8211; will be put in the mapped output directory.<\/p>\n<p>The next step of our build runs the tests, again inside the container. The source and binary directories are mapped just like before, so the test step can pick up where the build step left off. In the example of libgit2, the test script will start some applications that we&#8217;ve pre-installed in the container (some network servers that the tests will communicate with) and then run the test applications that we compiled in the build step.<\/p>\n<p>libgit2\u2019s test framework writes a report in JUnit-style XML, which is a common feature in test frameworks, and a feature that Azure Pipelines has native support for. 
In the next step of the build process, we simply publish that XML so Azure Pipelines can analyze it and display the test results.<\/p>\n<p>Thus, the libgit2 build configuration looks like this:<\/p>\n<pre><code>resources:\n  containers:\n  - container: centos\n    image: ethomson\/libgit2-centos:latest\n\npool:\n  vmImage: 'Ubuntu 16.04'\n\ncontainer: centos\n\nsteps:\n- script: $(Build.SourcesDirectory)\/ci\/build.sh\n  displayName: Build\n  workingDirectory: $(Build.BinariesDirectory)\n- script: $(Build.SourcesDirectory)\/ci\/test.sh\n  displayName: Test\n  workingDirectory: $(Build.BinariesDirectory)\n- task: PublishTestResults@2\n  displayName: Publish Test Results\n  condition: succeededOrFailed()\n  inputs:\n    testResultsFiles: 'results_*.xml'\n    mergeTestResults: true\n<\/code><\/pre>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/wp-content\/uploads\/sites\/6\/2019\/05\/image129.png\" alt=\"\" align=\"right\" width=\"300\" style=\"padding: 5px 0 10px 20px\" \/><\/p>\n<p>I can check that file right into my repository &#8211; if I name it <code>azure-pipelines.yml<\/code> and put it at the root of my repo, then Azure Pipelines will detect it during setup and streamline my configuration.<\/p>\n<p>This happens when I set up Azure Pipelines for the first time through the <a href=\"https:\/\/github.com\/marketplace\/azure-pipelines\">GitHub Marketplace<\/a>. 
Or if I&#8217;m already an Azure DevOps user, when I set up a new Pipelines Build and select my repository.<\/p>\n<h3>Success!\u2026?<\/h3>\n<p>I was excited to queue my first build inside a CentOS container but just as quickly dismayed: as soon as it finished, I saw that two of my tests had failed.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/wp-content\/uploads\/sites\/6\/2019\/05\/image130.png\" alt=\"Tests Failed\" width=\"1100\" height=\"115\" class=\"alignnone size-full wp-image-47805\" \/><\/p>\n<p>But that dismay evaporated quickly and gave way to interest: although nobody wants to see red in their test runs, a failure <em>should<\/em> be indicative of a problem that needs to be fixed.<\/p>\n<p>Once I started investigating, I realized that these were the SSH tests that were failing. And they were only failing when trying to connect to GitHub. It turns out that the version of the SSH library included in CentOS 7 is &#8211; well, it&#8217;s a bit old. It&#8217;s old enough that it only supports older ciphers that <a href=\"https:\/\/githubengineering.com\/crypto-deprecation-notice\/\">GitHub has disabled<\/a>. I&#8217;d need to build libgit2 against a newer version of libssh2.<\/p>\n<p>At that point, I updated my <code>Dockerfile<\/code> to download the newest libssh2, build it, and install it:<\/p>\n<pre><code>FROM centos:7\nRUN yum install -y git cmake gcc make openssl-devel openssh-server \\\n    git-daemon java-1.8.0-openjdk-headless\nWORKDIR \"\/tmp\"\nRUN curl https:\/\/www.libssh2.org\/download\/libssh2-1.8.0.tar.gz \\\n    -o libssh2-1.8.0.tar.gz\nRUN tar xvf libssh2-1.8.0.tar.gz\nWORKDIR \"\/tmp\/libssh2-1.8.0\"\nRUN .\/configure\nRUN make\nRUN make install\nENV PKG_CONFIG_PATH \/usr\/local\/lib\/pkgconfig\n<\/code><\/pre>\n<p>Then I built the new Docker image and pushed it up to Docker Hub. Once it was uploaded, I queued my new build. 
And now all our tests succeed.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/wp-content\/uploads\/sites\/6\/2019\/05\/success.png\" alt=\"Successful Build\" width=\"2008\" height=\"330\" class=\"alignnone size-full wp-image-48345\" \/><\/p>\n<p>This is a wonderful illustration of why it&#8217;s so important to my project to build on a variety of systems: not only do we have confidence that we work correctly on many platforms, we also understand the problems that our users might run into and how they can work around those problems.<\/p>\n<p>Success.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This is a follow-up to Matt Cooper&#8217;s earlier blog post, &#8220;Using containerized services in your pipeline&#8220;. If you haven&#8217;t yet, I encourage you to read that post to understand the new `container` syntax in the pipeline definition. As a program manager for Azure DevOps, I spend a lot of time speaking with customers about their [&hellip;]<\/p>\n","protected":false},"author":233,"featured_media":55826,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[226],"tags":[],"class_list":["post-52955","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ci"],"acf":[],"blog_post_summary":"<p>This is a follow-up to Matt Cooper&#8217;s earlier blog post, &#8220;Using containerized services in your pipeline&#8220;. If you haven&#8217;t yet, I encourage you to read that post to understand the new `container` syntax in the pipeline definition. 
As a program manager for Azure DevOps, I spend a lot of time speaking with customers about their [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/devops\/wp-json\/wp\/v2\/posts\/52955","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/devops\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/devops\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/devops\/wp-json\/wp\/v2\/users\/233"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/devops\/wp-json\/wp\/v2\/comments?post=52955"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/devops\/wp-json\/wp\/v2\/posts\/52955\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/devops\/wp-json\/wp\/v2\/media\/55826"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/devops\/wp-json\/wp\/v2\/media?parent=52955"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/devops\/wp-json\/wp\/v2\/categories?post=52955"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/devops\/wp-json\/wp\/v2\/tags?post=52955"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}