{"id":14601,"date":"2023-04-13T00:00:00","date_gmt":"2023-04-13T07:00:00","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/cse\/?p=14601"},"modified":"2023-06-19T11:26:18","modified_gmt":"2023-06-19T18:26:18","slug":"deploy-production-ready-nodejs-application-in-azure","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/ise\/deploy-production-ready-nodejs-application-in-azure\/","title":{"rendered":"How to Deploy a Production-Ready Node.js Application in Azure"},"content":{"rendered":"<h2>Introduction\/Context<\/h2>\n<p>In this blog post, we will explore how to Dockerize a Node.js application for production using Docker. Specifically, we will focus on practices for TypeScript code. In addition to containerizing the Node.js application, we will further showcase how to automate the deployment process using an Azure Pipeline after the Dockerization.<\/p>\n<p>Before deploying the application to production, developers or the engineering team will typically write and test code in the development environment. This stage is sometimes called the &#8220;development stage&#8221; within a multi-stage process. During the development stage, engineers have more permissions and access to the system, allowing them to create and test new features. However, when the code is pushed to staging or production stages, we need to consider whether the initial approach will work in a permissions-restricted environment and whether the entire code base will be used in those stages.<\/p>\n<p>What we faced initially when we built the Node.js solution was that we developed a non-production-ready solution in the development stage. We were still running everything with dev dependencies, TypeScript was not transpiled, and the files were not optimized for production. 
This blog will present how we solved the issues mentioned above and successfully deployed a production-ready solution.<\/p>\n<h2>Creating a Dockerfile<\/h2>\n<p>A Dockerfile is a text document that contains all the commands and instructions required to build a Docker image. Each instruction corresponds to a command executed during the image build process, and the result of each instruction is a new layer in the image. In this example, the Dockerfile will create a new image that launches a Node.js TypeScript application.<\/p>\n<h3>Step 1: Create a Dockerfile with a Base Image for Building the App<\/h3>\n<p>To create a Dockerfile for our Node.js application, we will start with a base image that contains the Node.js runtime. We can use the <a href=\"https:\/\/hub.docker.com\/_\/node\">official Node.js Docker image<\/a> from Docker Hub as our base image.<\/p>\n<pre><code class=\"language-dockerfile\">FROM node:19-alpine AS prod-build <\/code><\/pre>\n<p>This instruction sets the base image for our Dockerfile to Node.js version 19 running on Alpine Linux and names this build stage <code>prod-build<\/code>.<\/p>\n<h3>Step 2: Copy the Application Files and Install Dependencies<\/h3>\n<p>Next, we need to copy our Node.js application files to the Docker image and install its dependencies. We will do this using the <code>COPY<\/code> and <code>RUN<\/code> instructions.  <\/p>\n<pre><code class=\"language-dockerfile\">ENV PATH \/Sample\/node_modules\/.bin:$PATH \r\n\r\nWORKDIR \/Sample \r\n\r\nCOPY . .\/ \r\n\r\nRUN npm ci --production <\/code><\/pre>\n<p>The <code>ENV PATH \/Sample\/node_modules\/.bin:$PATH<\/code> instruction ensures that the executables created during the npm or yarn build processes can be found.<\/p>\n<p>The <code>WORKDIR<\/code> instruction sets the working directory for subsequent instructions in the Dockerfile. In this case, we are setting it to <code>\/Sample<\/code>.  <\/p>\n<p>The <code>COPY<\/code> instruction copies the application files into the Docker image.  
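<\/p>\n<p>Because <code>COPY . .\/<\/code> copies everything in the build context into the image, it is good practice to keep a <code>.dockerignore<\/code> file next to the Dockerfile so that local artifacts are excluded. A minimal sketch (the entries are typical examples, not taken from the original project):<\/p>\n<pre><code class=\"language-text\">node_modules\r\ndist\r\n.git\r\n.env<\/code><\/pre>\n<p>Anything listed here never enters the build context, which also speeds up the build. 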
<\/p>\n<p>The <code>RUN<\/code> instruction runs <code>npm ci --production<\/code> to install only the non-dev (production) dependencies. This is crucial: in production we only need the code that is necessary for our solution, which keeps the image lightweight and efficient.  <\/p>\n<h3>Step 3: Build the Application<\/h3>\n<pre><code class=\"language-dockerfile\">RUN npm run build <\/code><\/pre>\n<p>If you are building your Node.js application with TypeScript, you will need to compile your TypeScript code into JavaScript. In our setup, this step generates a single JavaScript file in the \/dist folder, which is much more lightweight and efficient.<\/p>\n<h3>Step 4: Create an Image for Production Use<\/h3>\n<pre><code class=\"language-dockerfile\">FROM node:19-alpine AS prod-run <\/code><\/pre>\n<p>This line starts a new image, separate from the previous steps. The reason for creating another image is to ensure the production environment contains only the necessary code and stays as lightweight as possible. Docker lets us copy only the files we need from the build image into this new image.<\/p>\n<h3>Step 5: Copy the Compiled File and Set the Permissions and Environment<\/h3>\n<pre><code class=\"language-dockerfile\">COPY --chown=node:node --from=prod-build \/Sample\/dist \/Sample\/ \r\n\r\nWORKDIR \/Sample\r\n\r\nENV NODE_ENV production <\/code><\/pre>\n<p>We copy the compiled TypeScript output, which is now a single JavaScript file, into our production image. The <code>--chown=node:node<\/code> option sets the file ownership to the user <code>node<\/code> (by default, files are owned by the root user), so that this user, which we switch to in a later step, has permission to read and write the file. The <code>--from=prod-build<\/code> option retrieves the file from the previous image that we used to build the TypeScript code. 
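<\/p>\n<p>The <code>\/Sample\/dist<\/code> path above comes from the TypeScript compiler configuration. As a reference point, a <code>tsconfig.json<\/code> along these lines (the exact options are our assumption; the original project's configuration is not shown) would emit the compiled JavaScript into <code>dist<\/code>:<\/p>\n<pre><code class=\"language-json\">{\r\n  \"compilerOptions\": {\r\n    \"target\": \"ES2020\",\r\n    \"module\": \"CommonJS\",\r\n    \"outDir\": \"dist\",\r\n    \"strict\": true\r\n  },\r\n  \"include\": [\"src\"]\r\n}<\/code><\/pre>\n<p>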
We then set the working directory to \/Sample and set the node environment to production.<\/p>\n<h3>Step 6: Expose the Port<\/h3>\n<pre><code class=\"language-dockerfile\">EXPOSE 3001 <\/code><\/pre>\n<p>We need to expose the port that our Node.js application is listening on. We will use the <code>EXPOSE<\/code> instruction to do this. Note that <code>EXPOSE<\/code> only documents that the container listens on port 3001; the port still has to be published (for example with <code>-p 3001:3001<\/code>) when the container is run.  <\/p>\n<h3>Step 7: Set User<\/h3>\n<pre><code class=\"language-dockerfile\">USER node <\/code><\/pre>\n<p>Set the user to <code>node<\/code> instead of <code>root<\/code>. It is safer to run a container as a non-root user. The user <code>node<\/code> already exists in the official Node.js Alpine image.<\/p>\n<h3>Step 8: Execute<\/h3>\n<pre><code class=\"language-dockerfile\">CMD [\"node\", \"index.js\"] <\/code><\/pre>\n<p>The <code>CMD<\/code> instruction sets the command that will be run when the Docker container is started. In this case, it runs <code>node index.js<\/code> to start our Node.js application.  <\/p>\n<p>Putting it all together, our complete Dockerfile looks like this:<br \/>\n<strong>Sample Dockerfile<\/strong><\/p>\n<pre><code class=\"language-dockerfile\">FROM node:19-alpine AS prod-build \r\n\r\nENV PATH \/Sample\/node_modules\/.bin:$PATH \r\n\r\nWORKDIR \/Sample \r\n\r\nCOPY . .\/ \r\n\r\nRUN npm ci --production \r\n\r\nRUN npm run build \r\n\r\nFROM node:19-alpine AS prod-run\r\n\r\nCOPY --chown=node:node --from=prod-build \/Sample\/dist \/Sample\/ \r\n\r\nWORKDIR \/Sample \r\n\r\nENV NODE_ENV production \r\n\r\nEXPOSE 3001 \r\n\r\nUSER node \r\n\r\nCMD [\"node\", \"index.js\"]<\/code><\/pre>\n<h3>In addition<\/h3>\n<p>If you want to run multiple containers, you can use Docker Compose. 
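<\/p>\n<p>Before moving on to the Compose example, it may help to see what <code>CMD [\"node\", \"index.js\"]<\/code> actually starts. The application code is not part of this post, so the following <code>src\/index.ts<\/code> is only a hypothetical minimal entry point that matches the exposed port:<\/p>\n<pre><code class=\"language-typescript\">import http from \"http\";\r\n\r\n\/\/ Port matches the EXPOSE 3001 instruction above\r\nconst port = Number(process.env.PORT ?? 3001);\r\n\r\n\/\/ A tiny health payload, kept as a pure function so it is easy to test\r\nexport function healthPayload(): { status: string } {\r\n  return { status: \"ok\" };\r\n}\r\n\r\nexport const server = http.createServer((req, res) => {\r\n  res.writeHead(200, { \"Content-Type\": \"application\/json\" });\r\n  res.end(JSON.stringify(healthPayload()));\r\n});\r\n\r\n\/\/ The Dockerfile sets NODE_ENV=production, so the server starts listening inside the container\r\nif (process.env.NODE_ENV === \"production\") {\r\n  server.listen(port);\r\n}<\/code><\/pre>\n<p>After <code>npm run build<\/code>, this compiles to <code>dist\/index.js<\/code>, the file the production image runs. 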
Here is an example of a Docker Compose file:\n<strong>Sample docker-compose.yml<\/strong><\/p>\n<pre><code class=\"language-yaml\">version: '3' \r\n\r\nservices:\r\n  app:\r\n    build:\r\n      context: ..\/ \r\n      dockerfile: sample\/Dockerfile \r\n    ports:\r\n      - \"3000:3000\" \r\n    env_file:\r\n      - .env \r\n  func:\r\n    build:\r\n      context: ..\/ \r\n      dockerfile: sample\/docker\/azure-functions\/Dockerfile\r\n    ports:\r\n      - \"8080:80\" \r\n    env_file:\r\n      - .env<\/code><\/pre>\n<h2>Automate the Deployment Using an Azure Pipeline<\/h2>\n<p>Dockerizing the Node.js application is just the first step towards automating the deployment process. Using Azure Pipelines is a good way to further streamline deployment. In this section, we provide an example Continuous Deployment (CD) pipeline and template that build our Dockerfile with an Azure Pipeline.<\/p>\n<p>Below is the YAML file for an Azure Pipeline that continuously (on every change to the dev branch) builds the Docker image and pushes it to a container registry:<\/p>\n<pre><code class=\"language-yaml\">trigger: \r\n- dev\r\n\r\npr: \r\n- none \r\n\r\nresources: \r\n- repo: self \r\n\r\nparameters: \r\n- name: environment\r\n  displayName: 'Deploy to Environment' \r\n  default: 'Prod' \r\n\r\nvariables:\r\n- ${{ if eq(parameters['environment'], 'Prod') }}: \r\n  - group: Prod \r\n- name: vmImageName \r\n  value: 'sample-agent' \r\n- name: tag \r\n  value: '$(Build.BuildId)'\r\n\r\nstages: \r\n- stage: BuildAndPush \r\n  displayName: Build and Push \r\n  jobs: \r\n  - job: Build \r\n    displayName: Build \r\n    pool: '$(vmImageName)' \r\n    steps: \r\n    - template: build-push-docker-image.yml \r\n      parameters: \r\n        dockerFilePath: '$(Build.SourcesDirectory)\/Sample\/Dockerfile'\r\n        serviceName: 'Sample Api' \r\n        repository: $(DOCKER_IMAGE_REPOSITORY) \r\n        containerRegistry: '$(DOCKER_REGISTRY_SERVICE_CONNECTION)' \r\n        tag: $(tag) 
<\/code><\/pre>\n<h3>Template for Building and Pushing Docker Image<\/h3>\n<p>Filename: build-push-docker-image.yml<\/p>\n<pre><code class=\"language-yaml\">parameters: \r\n- name: dockerFilePath \r\n  type: string \r\n- name: serviceName \r\n  type: string\r\n- name: repository \r\n  type: string\r\n- name: containerRegistry\r\n  type: string\r\n- name: tag\r\n  type: string\r\nsteps: \r\n  - task: Docker@2 \r\n    displayName: 'DockerBuild: Build ${{ parameters.serviceName }} docker image' \r\n    inputs: \r\n      command: build \r\n      repository: '${{ parameters.repository }}' \r\n      dockerfile: '${{ parameters.dockerFilePath }}' \r\n      containerRegistry: '${{ parameters.containerRegistry }}' \r\n      tags: '${{ parameters.tag }}' \r\n\r\n  - task: Docker@2 \r\n    displayName: 'DockerPush: Push ${{ parameters.serviceName }} docker image to ACR' \r\n    inputs: \r\n      command: push \r\n      repository: '${{ parameters.repository }}' \r\n      containerRegistry: '${{ parameters.containerRegistry }}' \r\n      tags: '${{ parameters.tag }}' <\/code><\/pre>\n<h2>What have we learned from this?<\/h2>\n<p>During our exploration of deploying a production-ready Node.js application, we learned the difference between a production and development deployment for TypeScript\/Node. With the production deployment, we want to keep the solution as lightweight as possible and use a non-root user. To achieve this, we set up two images in the Dockerfile.<\/p>\n<p>The first image was used to build and transpile the TypeScript code into a single index.js file with only the dependencies needed for production. In the second image, we copied only the transpiled index.js file, so that we can keep the solution&#8217;s code as lightweight as possible. 
Additionally, we set the file ownership to the user called <code>node<\/code>, so that we can switch the user from <code>root<\/code> to this specific node user and avoid the wide-ranging permissions of the root user.<\/p>\n<h2>Conclusion<\/h2>\n<p>When building solutions, engineers will test out a lot of code and features to create and optimize their solutions. However, they often forget to clean up unnecessary code or dependencies. If DevOps engineers are not aware of these issues, they may deploy the code into the next stage, such as Staging, or even Production. To make sure that we only deploy the necessary source code, we install the packages with the production filter and copy only the transpiled JavaScript file into the final production image.<\/p>\n<p>Lastly, many thanks to team Mercury for the opportunity to work on these production deployment tasks. This blog was written by Jeff Ding and Sean Miller, with the support of their manager Etienne Margraff.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Step-by-step guide on how to deploy node.js application in production using docker and Azure pipeline<\/p>\n","protected":false},"author":115507,"featured_media":14606,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[15,1],"tags":[3390,3322,156,3389,3391,296,3302],"class_list":["post-14601","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-containers","category-cse","tag-container","tag-deployment","tag-docker","tag-nodejs","tag-pipeline","tag-production","tag-typescript"],"acf":[],"blog_post_summary":"<p>Step-by-step guide on how to deploy node.js application in production using docker and Azure 
pipeline<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts\/14601","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/users\/115507"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/comments?post=14601"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts\/14601\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/media\/14606"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/media?parent=14601"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/categories?post=14601"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/tags?post=14601"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}