How to Deploy a Production-Ready Node.js Application in Azure

Jeff Ding

Sean Miller (CSE)

Introduction/Context

In this blog post, we will explore how to Dockerize a Node.js application for production, with a focus on practices for TypeScript code. In addition to containerizing the Node.js application, we will show how to automate the deployment process with an Azure Pipeline once the application is Dockerized.

Before deploying the application to production, developers or the engineering team will typically write and test code in the development environment. This stage is sometimes called the “development stage” within a multi-stage process. During the development stage, engineers have more permissions and access to the system, allowing them to create and test new features. However, when the code is promoted to the staging or production stages, we need to consider whether the initial approach will still work in a permissions-restricted environment and whether the entire code base is actually needed there.

What we faced initially when we built the Node.js solution was that our development-stage solution was not production-ready. We were still running everything with dev dependencies, the TypeScript code was not transpiled, and the files were not optimized for production. This blog presents how we solved those issues and successfully deployed a production-ready solution.

Creating a Dockerfile

A Dockerfile is a text document that contains all the commands and instructions required to build a Docker image. Each instruction corresponds to a command executed during the image build process, and the result of each instruction is a new layer in the image. In this example, the Dockerfile will create an image that launches a Node.js TypeScript application.
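
As context for the rest of the walkthrough, here is a hypothetical, minimal src/index.ts for the kind of application being containerized. The file name, port, and response are illustrative, and we assume the build step shown later bundles this file into dist/index.js:

import * as http from "http";

// Default to port 3001, the port the Dockerfile exposes later
const port = process.env.PORT ? Number(process.env.PORT) : 3001;

// A minimal HTTP server so the container has something to serve
const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ status: "ok" }));
});

server.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});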

Step 1: Create a Dockerfile with a Base Image for Building the App

To create a Dockerfile for our Node.js application, we will start with a base image that contains the Node.js runtime. We can use the official Node.js Docker image from Docker Hub as our base image.

FROM node:19-alpine AS prod-build

This instruction sets the base image for our Dockerfile to Node.js version 19 running on Alpine Linux.

Step 2: Copy the Application Files and Install Dependencies

Next, we need to copy our Node.js application files into the Docker image and install its dependencies. We will do this using the COPY and RUN instructions.

ENV PATH /Sample/node_modules/.bin:$PATH 

WORKDIR /Sample 

COPY . ./ 

RUN npm ci --production

The ENV PATH /Sample/node_modules/.bin:$PATH instruction ensures that executables installed into node_modules/.bin by npm or yarn can be found and run directly in later instructions.

The WORKDIR instruction sets the working directory for subsequent instructions in the Dockerfile. In this case, we are setting it to /Sample.

The COPY instruction copies the application files from the build context into the Docker image.

The RUN instruction runs npm ci --production to install the application dependencies, skipping the dev dependencies. This part is crucial: in production we only need the code required to run the solution, which keeps the image lightweight and efficient.
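
To make this concrete, here is a hypothetical package.json layout (package names and versions are illustrative). npm ci --production installs only what is listed under dependencies and skips devDependencies, so tools needed only for development or testing belong in devDependencies. Note that because this Dockerfile installs with --production before running npm run build, the bundler invoked by the build script (esbuild in this sketch) is kept under dependencies so that it is still available at build time:

{
  "name": "sample",
  "version": "1.0.0",
  "scripts": {
    "build": "esbuild src/index.ts --bundle --platform=node --outfile=dist/index.js",
    "start": "node dist/index.js"
  },
  "dependencies": {
    "esbuild": "^0.17.19"
  },
  "devDependencies": {
    "eslint": "^8.40.0",
    "jest": "^29.5.0",
    "typescript": "^5.0.4"
  }
}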

Step 3: Build the Application

RUN npm run build 

If you are building your Node.js application with TypeScript, you will need to compile (and, in our case, bundle) the TypeScript code into JavaScript. In our setup this step generates a single JavaScript file in the /dist folder, which keeps the deployed artifact lightweight and efficient.
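
The exact build tooling is up to you. As a hedged sketch, a minimal tsconfig.json that type-checks the code under src and targets a Node.js runtime could look like the following; if the single-file output in /dist comes from a bundler such as esbuild, tsc can be used purely for type checking:

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "moduleResolution": "node",
    "outDir": "dist",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}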

Step 4: Create an Image for Production Use

FROM node:19-alpine AS prod-run

This line starts a new stage of the build, separate from the previous steps (a multi-stage build). The reason for creating another image is to ensure the production environment contains only the necessary code and stays as lightweight as possible. Docker lets us copy only the files we need from the build stage into this new image.

Step 5: Copy the Compiled File and Set the Permissions and Environment

COPY --chown=node:node --from=prod-build /Sample/dist /Sample/ 

WORKDIR /Sample

ENV NODE_ENV production 

We copy the compiled TypeScript output, which is now a single JavaScript file, into our production image. The --chown=node:node flag sets the ownership of the copied files to the node user (used in a later step), so that user can read and write them; by default, files copied into an image are owned by root. The --from=prod-build flag retrieves the files from the earlier stage where we built the TypeScript code. We then set the working directory to /Sample and set the Node environment to production.

Step 6: Expose the Port

EXPOSE 3001 

We need to expose the port that our Node.js application is listening on, which we do with the EXPOSE instruction. EXPOSE documents that the container listens on port 3001; to reach it from outside, the port still needs to be published when the container is run (for example with docker run -p).

Step 7: Set User

USER node 

This sets the user to node instead of root; it is safer to run a container as a non-root user. The node user already exists in the official Node.js Alpine base image.

Step 8: Execute

CMD ["node", "index.js"] 

The CMD instruction sets the command that will be run when the Docker container is started. In this case, it runs node index.js to start our Node.js application.

Putting it all together, our complete Dockerfile looks like this:
Sample Dockerfile

FROM node:19-alpine AS prod-build

ENV PATH /Sample/node_modules/.bin:$PATH 

WORKDIR /Sample 

COPY . ./ 

RUN npm ci --production 

RUN npm run build 

FROM node:19-alpine AS prod-run

COPY --chown=node:node --from=prod-build /Sample/dist /Sample/ 

WORKDIR /Sample 

ENV NODE_ENV production 

EXPOSE 3001 

USER node 

CMD ["node", "index.js"]

Running Multiple Containers with Docker Compose

If you want to run multiple containers, you can use Docker Compose. Here is an example Docker Compose file:
Sample docker-compose.yml

version: '3' 

services:
  app:
    build:
      context: ../ 
      dockerfile: sample/Dockerfile 
    ports:
      - "3000:3000" 
    env_file:
      - .env 
  func:
    build:
      context: ../ 
      dockerfile: sample/docker/azure-functions/Dockerfile
    ports:
      - "8080:80" 
    env_file:
      - .env
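
To build and start both services defined above, you can run Docker Compose from the directory containing the docker-compose.yml file:

# Build (or rebuild) the images and start both containers
docker compose up --build

# Or start them in the background (detached)
docker compose up --build -d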

Automate the Deployment Using Azure Pipelines

Dockerizing the Node.js application is just the first step towards automating the deployment process. Using Azure Pipelines is a good way to further streamline that process. In this section, we will provide an example Continuous Deployment (CD) pipeline and template that build the Docker image from our Dockerfile using an Azure Pipeline.

Below is the YAML file for an Azure Pipeline that builds the Docker image on every change to the dev branch and pushes it to a container registry:

trigger: 
- dev

pr: 
- none 

resources: 
- repo: self 

parameters: 
- name: environment
  displayName: 'Deploy to Environment'
  type: string
  default: 'Prod'

variables:
- ${{ if eq(parameters['environment'], 'Prod') }}: 
  - group: Prod 
- name: vmImageName 
  value: 'sample-agent' 
- name: tag 
  value: '$(Build.BuildId)'

stages: 
- stage: BuildAndPush 
  displayName: Build and Push 
  jobs: 
  - job: Build 
    displayName: Build 
    pool: '$(vmImageName)' 
    steps: 
    - template: build-push-docker-image.yml 
      parameters: 
        dockerFilePath: '$(Build.SourcesDirectory)/Sample/Dockerfile'
        serviceName: 'Sample Api' 
        repository: $(DOCKER_IMAGE_REPOSITORY) 
        containerRegistry: '$(DOCKER_REGISTRY_SERVICE_CONNECTION)' 
        tag: $(tag) 

Template for Building and Pushing Docker Image

Filename: build-push-docker-image.yml

parameters: 
- name: dockerFilePath 
  type: string 
- name: serviceName 
  type: string
- name: repository 
  type: string
- name: containerRegistry
  type: string
- name: tag
  type: string
steps: 
  - task: Docker@2 
    displayName: 'DockerBuild: Build ${{ parameters.serviceName }} docker image' 
    inputs: 
      command: build 
      repository: '${{ parameters.repository }}' 
      dockerfile: '${{ parameters.dockerFilePath }}' 
      containerRegistry: '${{ parameters.containerRegistry }}' 
      tags: '${{ parameters.tag }}' 

  - task: Docker@2 
    displayName: 'DockerPush: Push ${{ parameters.serviceName }} docker image to ACR' 
    inputs: 
      command: push 
      repository: '${{ parameters.repository }}' 
      containerRegistry: '${{ parameters.containerRegistry }}' 
      tags: '${{ parameters.tag }}' 

What have we learned from this?

During our exploration of deploying a production-ready Node.js application, we learned the difference between a production and development deployment for TypeScript/Node. With the production deployment, we want to keep the solution as lightweight as possible and use a non-root user. To achieve this, we set up two images in the Dockerfile.

The first image was used to build and transpile the TypeScript code into a single index.js file, with only the dependencies needed for production installed. In the second image, we copied only the transpiled index.js file, so that the solution’s code stays as lightweight as possible. Additionally, we set the file ownership to the user called node, so that we can switch from root to this specific node user and avoid the broad permissions of the root user.

Conclusion

When building solutions, engineers will test out a lot of code and features to create and optimize their solutions. However, they often forget to clean up unnecessary code or dependencies. If DevOps engineers are not aware of these issues, they may deploy that code to the next stage, such as Staging or even Production. To make sure we only deploy the necessary source code, we install the packages with the production filter and copy only the transpiled JavaScript output into the final production image.

Lastly, many thanks to team Mercury for the opportunity to work on these production deployment tasks. This blog was written by Jeff Ding and Sean Miller, with the support of our manager, Etienne Margraff.
