October 25th, 2018

How to Build a K8s HTTP API for Helm and Serve Micro-services Using a Single IP

Image credit: Pexels.com

Background

Axonize is a global provider of an IoT orchestration platform that automates IoT deployments, cutting the process down from months to days. Their platform consists of Data Gateways, which can accept even non-standard IoT sensor input, reformat the payload, and then deliver the data to Azure IoT Hub or virtually any other cloud destination.

Axonize had already been working with Microsoft to deploy their micro-services solution on Azure Kubernetes Service (AKS), but wanted to go a step further and be able to deploy hundreds of Data Gateway applications to their Kubernetes cluster directly from their website.

To accomplish this, the Microsoft Commercial Software Engineering (CSE) team was tasked with helping Axonize scale their Data Gateways using a micro-services architecture, deploy them on AKS, and expose the Data Gateways to the internet through a single public IP. To do this, our team had to find a way for web applications running in a Kubernetes cluster to programmatically install Helm charts and expose them through a single public-facing IP.

Our design had to support the following requirements:

  • Automating the deployment of hundreds of Data Gateway applications, and potentially many more.
  • Making the installed gateways publicly accessible in an automated way, without allocating a new public IP per gateway.
  • Running the processes from within the cluster, where the admin website panel is installed. The admin panel triggered the gateway deployment process upon user interaction.

Challenges

The first challenge was to find a simple but all-inclusive way to manage this flow of events. To do this, our team chose to implement a simple web server which, upon request, could initiate a Kubernetes application installation, find an unused port, and expose the newly installed application via that port.

Since each Kubernetes application to be deployed had its own versioning, and possibly multiple YAML files for different deployment types, the natural choice for our team was Helm. Helm was created to simplify Kubernetes application management by decoupling YAML files from their values and by allowing versioning of the deployment files. These YAML templates are packaged as charts and come with a rich CLI. Separating the values from the YAML files allows for easy automation.

Getting Helm to work inside the Kubernetes cluster as a Dockerized image was challenging, since Helm was originally designed to be a client-side utility which communicates with a remote Tiller server. As such, we had to think about the right way to incorporate Helm into the web server, how to grant it the necessary RBAC permissions, and how to design the API. We decided to incorporate Helm as an executable within our Docker image, rather than take the more common approach of interacting with Tiller via gRPC. Doing so leverages the work already done in the Helm client, which implements the Tiller API. This is also important because the next major release, Helm v3.0, will not include Tiller, making the gRPC approach impossible.
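
To illustrate the idea (a minimal sketch rather than the project's actual code; the runHelm helper name and timeout are ours), the web server can simply spawn the bundled Helm binary and capture its output:

// Run the bundled Helm binary and capture its output.
// Assumes a helm executable is available on the container's PATH.
const { execFile } = require('child_process');

function runHelm(args) {
  return new Promise((resolve, reject) => {
    execFile('helm', args, { timeout: 60000 }, (err, stdout, stderr) => {
      if (err) {
        return reject(new Error(`helm ${args.join(' ')} failed: ${stderr || err.message}`));
      }
      resolve(stdout);
    });
  });
}

// Example usage (Helm v2 client): runHelm(['version', '--client']).then(console.log);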

In order to have multiple apps deployed at the same time, sharing a single IP but each using a different port, we decided to use the NGINX Ingress Controller, a commonly used ingress controller that supports network layer 4 (transport). This is important because many IoT sensors do not support higher-level protocols such as HTTP. In addition, NGINX has an official Helm chart with strong community support, which makes setup and management much easier.

And lastly, after an app was installed, we had to find a simple way to choose a free external port. Since Kubernetes doesn't natively support finding a free port, our team implemented a simple port-allocator which automatically searches for an available port on the ingress controllers.
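
As a simplified sketch of that idea (the port range and helper name below are illustrative, not the project's actual implementation), the allocator only needs the list of ports already exposed on the chosen ingress controller:

// Pick the first port in a configured range that is not already
// exposed on the chosen ingress controller.
function findFreePort(usedPorts, rangeStart = 10000, rangeEnd = 11000) {
  const used = new Set(usedPorts);
  for (let port = rangeStart; port <= rangeEnd; port++) {
    if (!used.has(port)) {
      return port;
    }
  }
  throw new Error('No free port available in the configured range');
}

// Example: ports 10000 and 10001 are taken, so 10002 is returned.
console.log(findFreePort([10000, 10001]));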

Considered Alternatives

Our team considered using Brigade.sh to manage the flow of events; however, at the time of writing, Brigade.sh was not ready for production use (alpha version). Therefore, we chose to implement the flow ourselves.

We also considered Traefik as an ingress controller (instead of NGINX). However, at the time of writing, Traefik only supported the HTTP protocol, making it unsuitable for IoT devices, which use a plethora of protocols such as TCP, UDP, MQTT and more.

The Solution

Eventually, our solution consisted of two parts:

  1. “Helm as a service”: A Helm binary wrapped and deployed as a Node.js Express server, which allows developers to manage Helm charts from inside the cluster using a simple REST API.
  2. “Expose as a service”: An Express server with port-allocation functionality, which exposes installed Helm charts to the internet via a single IP.

Altogether, we were able to automatically deploy Helm charts and dynamically create a public endpoint in the form of IP:Port.

Creating a RESTful API as a wrapper for Helm binary

When designing the micro-services based architecture, our team decided that it would be advantageous if client resources were separated, meaning each new client who registered with the system would be given their own gateway, making it easier to protect resources and allowing for greater scaling flexibility. Deploying the different resources manually per client is straightforward using Helm, as a simple ‘helm install’ command is all it takes to deploy a new gateway to the cluster.

Managing the gateways manually, however, is not a scalable solution and is much harder to automate. As such, we decided to leverage Helm and run it within the cluster behind a RESTful endpoint. To do this, we created a Node.js Express server which receives client commands with a JSON payload, prepares the relevant Helm command, and runs it immediately.
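
For illustration, a bare-bones version of such a server might look like the sketch below; the /install path and response shape are assumptions for the example rather than the project's exact API (see the repository for the real one).

// Minimal Express server: accepts a JSON payload describing a chart
// and shells out to the Helm binary bundled in the image (Helm v2 CLI).
const express = require('express');
const { execFile } = require('child_process');

const app = express();
app.use(express.json());

app.post('/install', (req, res) => {
  const { chartName, releaseName } = req.body;
  if (!chartName) {
    return res.status(400).json({ error: 'chartName is required' });
  }
  const args = ['install', chartName];
  if (releaseName) {
    args.push('--name', releaseName); // Helm v2 flag for naming a release
  }
  execFile('helm', args, (err, stdout, stderr) => {
    if (err) {
      return res.status(500).json({ error: stderr || err.message });
    }
    res.json({ output: stdout });
  });
});

app.listen(4000, () => console.log('Helm API listening on port 4000'));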

Example 1: JSON command to install the NGINX ingress controller

{
  "chartName":"stable/nginx-ingress",
  "releaseName":"mynginx1"
}

Example 2: JSON command to install a private chart

{
  "chartName":"sampleApp",
  "releaseName":"sampleApp1",
  "privateChartsRepo": "https://raw.githubusercontent.com/username/helm_repo/master/index.yaml"
}

Example 3: Alternatively, using custom values (to be passed with helm --set) and letting Helm choose a release name

{
  "chartName":"stable/rabbitmq",
  "values": {
       "rabbitmq.username" : "admin" ,
       "rabbitmq.password" : "secretpassword",
       "rabbitmq.erlangCookie": "secretcookie"
    }
}
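
To give a feel for how a payload like Example 3 maps onto the CLI (a sketch under our own naming, not the project's exact code), the values object can be flattened into a single --set argument:

// Translate the JSON payload into Helm v2 CLI arguments.
// A payload without a releaseName lets Helm generate one itself.
function buildInstallArgs({ chartName, releaseName, values }) {
  const args = ['install', chartName];
  if (releaseName) {
    args.push('--name', releaseName);
  }
  if (values) {
    const pairs = Object.entries(values).map(([key, val]) => `${key}=${val}`);
    args.push('--set', pairs.join(','));
  }
  return args;
}

// Example 3 above would produce:
// ['install', 'stable/rabbitmq', '--set',
//  'rabbitmq.username=admin,rabbitmq.password=secretpassword,rabbitmq.erlangCookie=secretcookie']
console.log(buildInstallArgs({
  chartName: 'stable/rabbitmq',
  values: {
    'rabbitmq.username': 'admin',
    'rabbitmq.password': 'secretpassword',
    'rabbitmq.erlangCookie': 'secretcookie'
  }
}));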

By leveraging Helm and running it as a containerized application within the cluster, our team was able to automate the process and gain easier control over the cluster, harnessing Helm features such as code packaging and versioning of the deployment files.

Exposing an app to the internet automatically using a single IP

A publicly exposed app is an app with a public IP and port, open for inbound (and outbound) traffic.

Exposing a service to the internet using the LoadBalancer service type provisions a new public IP and assigns it to the service. If you have just a few services, this is the right course of action. However, if you need to support a larger number of public endpoints, you should consider using some sort of ingress controller.

Using an ingress controller has many benefits, among them the ability to route inbound traffic via a single IP to different services. This proved very useful in our use case, where dozens or even hundreds of services were installed.

Since we also needed to support the TCP and UDP protocols, the easiest option was the NGINX ingress controller. Configuring this type of controller manually is simply done by specifying the desired external port.

However, when doing so, there is no option to have the controller automatically select an available port, unless you make your service a LoadBalancer, which was not suitable for this project. Therefore, we created a simple Port-Manager, accessible via an HTTP API, which finds an available port on a given load balancer (ingress controller) so that it can later be used to create an ingress rule.
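
For context, the NGINX ingress controller routes raw TCP traffic through a dedicated ConfigMap whose keys are external ports and whose values are "namespace/service:port" strings. A tiny helper for producing such an entry could look like this (a sketch only; updating the ConfigMap in the cluster is omitted, and the gateway name is made up):

// Build one entry for the NGINX ingress controller's tcp-services ConfigMap.
// Keys are external ports, values are "<namespace>/<service>:<servicePort>".
function buildTcpConfigMapEntry(externalPort, namespace, serviceName, servicePort) {
  return { [String(externalPort)]: `${namespace}/${serviceName}:${servicePort}` };
}

// Example: expose default/mygateway's port 1883 on external port 10001.
// => { '10001': 'default/mygateway:1883' }
console.log(buildTcpConfigMapEntry(10001, 'default', 'mygateway', 1883));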

Exposing the application using the ingress controller takes two steps:

  1. Choosing an available port, using Port-Manager.
  2. Configuring the ingress controller, using Ingress-Manager.

Ingress rules are configured within the ingress controller, and there may be several ingress controllers within the cluster. As part of this process, the Port-Manager tries to find a free port on an ingress controller. If there are multiple ingress controllers, one of them is selected by:

  1. Specifying the namespace in which the ingress controllers were deployed. The namespace is set in an environment variable called LoadBalancerNamespace.
  2. Adding the appingress label to each ingress controller you want the Port-Manager to consider, for better granularity. The value of the label is set in an environment variable called IngressLabel.

Once all the applicable ingress controllers have been found, the system will perform one of the following actions:

  1. Pick a random controller and find a free port.

-or-

  2. Pick the controller specified in the HTTP request by setting the lbip parameter in the query string.

For example: http://<k8s-deployer-url>/getport?lbip=1.2.3.4
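
The selection logic itself can be sketched roughly as follows, assuming the candidate controllers have already been fetched and filtered by the LoadBalancerNamespace and IngressLabel settings (the object shape with an externalIp field is our assumption, not the project's actual data model):

// Given the candidate ingress controllers, pick one: either the controller
// whose external IP matches the lbip query parameter, or a random one
// when no lbip was supplied.
function selectIngressController(controllers, lbip) {
  if (controllers.length === 0) {
    throw new Error('No ingress controllers found');
  }
  if (lbip) {
    const match = controllers.find(c => c.externalIp === lbip);
    if (!match) {
      throw new Error(`No ingress controller with IP ${lbip}`);
    }
    return match;
  }
  return controllers[Math.floor(Math.random() * controllers.length)];
}

// Example: selectIngressController([{ name: 'nginx-1', externalIp: '1.2.3.4' }], '1.2.3.4');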

Dockerizing Helm

a) Granting permissions – Managing cluster applications requires specific permissions which are not granted by default. This means that our Helm client container had to be given those permissions in order to control the cluster.

To do this, we first prepared the stage for Tiller (Helm’s server-side component) and created a service account named ‘tiller’:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
automountServiceAccountToken: true

We then bound this to the cluster-admin role, so it would be able to control the cluster:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

We then installed Tiller on the cluster with the following command:

helm init --service-account tiller

A Helm client running in a pod needs certain privileges to be able to talk to a Tiller instance. Specifically, the Helm client needed to be able to create pods, forward ports and list pods in the namespace where Tiller was running (so it could find Tiller). It also needed permissions to install apps and extensions. Our complete configuration can be found here (for more information regarding Helm and RBAC, click here).

We then deployed our web server and specified the helm service account (note that ‘helm’ is the default value used by our chart and can be changed by explicitly specifying a different one when installing the chart):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-http-deployer
spec:
  replicas: 1
  template:
    ...
    spec:
      serviceAccountName: helm
      ...

b) Supporting Helm inside a container – We needed to include the Helm binary itself, together with Node and npm. We added the following snippet to our Dockerfile to achieve this:

# Note: the latest version of kubectl may be found at:
# https://aur.archlinux.org/packages/kubectl-bin/
ARG KUBE_LATEST_VERSION="v1.10.2"
# Note: the latest version of helm may be found at:
# https://github.com/kubernetes/helm/releases
ARG HELM_VERSION="v2.10.0"

ENV HELM_HOME="/usr/local/bin/"
ENV HELM_BINARY="/usr/local/bin/helm"
RUN mkdir /usr/local/bin/plugins
RUN apk add --no-cache ca-certificates bash \
    && wget -q https://storage.googleapis.com/kubernetes-release/release/${KUBE_LATEST_VERSION}/bin/linux/amd64/kubectl -O /usr/local/bin/kubectl \
    && chmod +x /usr/local/bin/kubectl \
    && wget -q http://storage.googleapis.com/kubernetes-helm/helm-${HELM_VERSION}-linux-amd64.tar.gz -O - | tar -xzO linux-amd64/helm > /usr/local/bin/helm \
    && chmod +x /usr/local/bin/helm

This snippet downloaded the specified Helm binary and placed it in a pre-configured location with appropriate execute permission.

c) When we executed the install command using Helm, the response was not formatted as JSON or in any other machine-readable format. This made it hard to parse the response programmatically and therefore broke the automation process. To overcome this, we created a json helm plugin, which converts the output to JSON.
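
The wrapper can then parse the plugin's output directly instead of scraping free-form text; a defensive sketch of that step (the exact plugin invocation is documented in the repository and not reproduced here):

// Parse Helm output emitted by the json plugin, falling back to the raw
// text so a non-JSON response still reaches the caller instead of crashing.
function parseHelmOutput(stdout) {
  try {
    return JSON.parse(stdout);
  } catch (e) {
    return { raw: stdout };
  }
}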

Installing our solution

Installing our solution with the default configuration was fairly simple. For instructions on how to follow the same process, please refer to our installation how-to.

Once the service is running, you can access it internally at http://k8s-deployer.default.svc.cluster.local or externally via the allocated public IP, on port 4000. For complete and up-to-date instructions, please refer to the GitHub repository.

The solution can also be installed in a non-default namespace in the cluster and supports more advanced configurations, such as installing charts from private Helm repositories and passing additional arguments to override a chart’s defaults. To learn more about this process, please refer to the project’s GitHub page and the API documentation.

Code sample

The sample app in the repo shows how to install a service and expose it externally. Here are notable commands to inspect:

// perform helm install
var installResponse = await requestPostAsync(Paths.HelmInstall, { form: { chartName: "<some-chart-name>" } });

// create a rule to expose the new service externally
var ingressResponse = await requestGetAsync(Paths.SetIngressRule, { serviceName: installResponse.serviceName, servicePort: <some port> });

return `Your new service: ${ingressResponse.releaseName}, is publicly accessible on ${ingressResponse.ip}:${ingressResponse.port}`;

Conclusion

Our team was pleased with the outcome of the project, and we hope to partner with Axonize again in the future. During the project, we created an ‘out of the box’ solution providing a set of internal web services which abstract and simplify the process of dynamically deploying new apps to Kubernetes and exposing them to the internet automatically. The solution is easy to set up using chart values and environment variables, and it allows users to automate the process of deploying apps to a Kubernetes cluster and making them publicly available via a single IP.

We hope that our solution, when integrated into Axonize’s environment, will allow users to dramatically reduce the overhead involved in onboarding new clients, and allow for much greater scalability.

The outlined solution can also help in other scenarios where a single IP is required to serve multiple micro-services, and in cases where new micro-services need to be deployed programmatically and automatically. It could also serve as an enabler for any kind of Kubernetes environment with a strong emphasis on automation and on efficiently utilizing resources such as IP addresses.

If you are working on a project where a single IP is required to serve multiple micro-services, where micro-services need to be deployed automatically, or within a Kubernetes environment with a strong emphasis on automation, please feel free to use our code, which can be found in our GitHub repository.

For those interested in our API processes, please check out the following API resources.
