A Survey of Kubernetes Features in Azure
App Dev Manager Jason Venema explores support and functionality for Kubernetes offered across a range of Azure services.
So, have you heard about Kubernetes? I bet you have. It’s no secret that Kubernetes has become the de facto container orchestrator for companies that develop software and want to maintain highly available applications for their customers. And let’s be honest, these days that includes pretty much every company that writes any sort of software.
Alright, so we all know that Kubernetes is taking over the world. For those of you who may have been sitting on the sidelines for a while, biding your time until the inevitable freight train that is Kubernetes comes to run you over, let me help demystify some of the tools and services available in Azure that make Kubernetes much easier to use than it ever has been before.
You may be wondering what it’s like to actually develop something that will run on Kubernetes. Maybe you’ve heard that Kubernetes can be complex to set up and operate, and you’re wondering whether it’s also complex to develop and deploy your applications. And what about event driven auto-scaling – is that even possible in Kubernetes?
In this post, I’m going to provide an overview of some of the features and services available in Azure that make developing applications for Kubernetes a relative breeze. I’ll be giving an overview of the following features:
- Azure Kubernetes Service (AKS)
- Azure Container Instances (ACI)
- Kubernetes-based Event Driven Autoscaling (KEDA)
- Azure Dev Spaces
- Azure Pipelines Kubernetes CI/CD
Azure Kubernetes Service (AKS)
Alright, admittedly this one is kind of obvious. If you want to deploy a containerized application to Kubernetes, you’re clearly going to need a Kubernetes cluster. And if you want to deploy your own Kubernetes cluster, then Azure Kubernetes Service is by far the easiest way to do it. If you want to do it the hip way (you do), then it’s a simple matter of navigating to the Azure portal, opening up a Bash Cloud Shell and typing:
az aks create --resource-group myResourceGroup --name myAKSCluster
That’s the simplest possible way to create a cluster, though to be honest you might want to pass a few other parameters to control things like the number of agent nodes, which add-ons to install, and so on. Oh, and if you want to create your cluster using the portal, that’s perfectly alright, too (no shame!).
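For example, a slightly more realistic version of that command might look like the sketch below. The resource group, cluster name, and region are placeholder values, and the exact flags you need will vary:

```shell
# Placeholder names -- substitute your own
RESOURCE_GROUP=myResourceGroup
CLUSTER_NAME=myAKSCluster

# Create a resource group to hold the cluster
az group create --name "$RESOURCE_GROUP" --location eastus

# Create a 3-node cluster with the Azure Monitor add-on enabled
az aks create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$CLUSTER_NAME" \
  --node-count 3 \
  --enable-addons monitoring \
  --generate-ssh-keys
```

The `--generate-ssh-keys` flag is handy for demos because it creates SSH key files for the nodes if you don’t already have them.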
In addition to making it simple to deploy a cluster (a major feat in itself), AKS brings a lot of other benefits to the table. One key benefit is that although AKS consists of both master and agent nodes (servers), you only pay for the agent nodes. Microsoft covers the cost of the masters, and also takes care of managing them behind the scenes so you don’t have to think about them. And don’t forget about Azure AD integration, integrated logging and monitoring using Azure Monitor, auto-scaling, managed upgrades of the agent nodes, storage volume support, virtual network support and ingress with HTTP application routing… to name just a few.
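Once the cluster exists, connecting to it is one more command. A quick sketch, assuming the same placeholder names as the create command above:

```shell
RESOURCE_GROUP=myResourceGroup
CLUSTER_NAME=myAKSCluster

# Merge the cluster's credentials into your local kubeconfig
az aks get-credentials --resource-group "$RESOURCE_GROUP" --name "$CLUSTER_NAME"

# List the nodes. Notice that only agent nodes appear -- the masters
# are managed (and paid for) by Microsoft behind the scenes.
kubectl get nodes
```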
Azure Container Instances (ACI)
You’re probably convinced by now that AKS is pretty great. I would agree. For organizations that want to run their production applications on top of a fully featured container orchestrator, AKS is definitely the way to go. Sometimes, though, you just need a place to deploy a container and let it run for a few minutes. Maybe you’re just doing some testing, or running a batch job that only takes a few minutes to complete. For those cases, setting up a full-blown AKS cluster would be overkill.
This is where Azure Container Instances (ACI) comes in. Simply point ACI at the container image you want to run, give it a DNS name, tell it which ports you need to expose and you’re done. ACI will pull down your container image and run it until you tell it to stop. You can see this in action by running a single CLI command from the Azure Cloud Shell:
az container create --resource-group myResourceGroup --name mycontainer --image mcr.microsoft.com/azuredocs/aci-helloworld --dns-name-label aci-demo --ports 80
The great thing about this is that within seconds (yes, seconds) your container instance will be up and running. This is a world away from VMs, where it would take minutes just to start the VM, let alone fire up and begin using your application. Best of all, ACI bills for usage by the second, so you truly pay only for what you use. If you just need your container to run for a few minutes, you only pay for a few minutes of runtime!
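After the create command finishes, a few more one-liners cover the rest of the lifecycle. A sketch using the same placeholder names as the example above:

```shell
RESOURCE_GROUP=myResourceGroup
CONTAINER_NAME=mycontainer

# Look up the public FQDN that the --dns-name-label flag assigned
az container show --resource-group "$RESOURCE_GROUP" --name "$CONTAINER_NAME" \
  --query ipAddress.fqdn --output tsv

# Tail the container's logs
az container logs --resource-group "$RESOURCE_GROUP" --name "$CONTAINER_NAME"

# Delete the instance when you're done -- per-second billing stops here
az container delete --resource-group "$RESOURCE_GROUP" --name "$CONTAINER_NAME" --yes
```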
Azure Dev Spaces
Now that we know how to create our cluster with AKS, let’s get back to talking about the developer experience. You’re probably convinced by now that it’s really easy to run and orchestrate containers in AKS. You might be less convinced that containerized applications are easy to create and debug in Kubernetes, though.
The reality is, if I’m deploying my applications as a set of containers, then I’m probably doing at least some amount of microservice development. Even if it’s not full-blown microservices, if you’re bothering to containerize your applications at all, then they are probably architected in such a way that the individual services can run independently of one another. This introduces some potential complexity, because you personally are probably only working on a small fraction of all the containers that make up your application. This raises the question: how can you test your containers in the context of the full application without having to deploy your own personal Kubernetes cluster, and without stepping on the rest of your team’s changes? The larger your application gets, the bigger this challenge becomes.
Fear not, though. This is exactly the problem that Azure Dev Spaces solves. By enabling Dev Spaces on your AKS development cluster, everyone in your team can share the same cluster but see only their own changes during development. This requires you to first deploy a “baseline” working version of your entire application – every container – to the AKS cluster. Then, as individual developers make changes to their portion of the application, they can deploy updated versions of their containers.
Here’s the magic part: when you deploy your container with Azure Dev Spaces enabled, each developer gets their own special URL that ensures they are hitting the code in their own updated containers, while continuing to hit the “baseline” containers for the rest of the application. This is true even if other team members have also deployed their own changes! The animated GIF on the Azure Dev Spaces documentation page is the best visual explanation of how this works.
Behind the scenes, Azure Dev Spaces creates a dedicated Kubernetes namespace for each dev space. A namespace is a way of isolating containers in Kubernetes, so it’s perfect for keeping your changes isolated from the rest of your team’s. Dev Spaces also takes care of the routing needed to ensure you only see your own changes. When you’re ready for the rest of your team to see your changes, simply update the “baseline” version with your new containers. Simple!
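At the time of writing, getting a dev space running looks roughly like this. The cluster names are placeholders, the `azds` CLI is a separate tool installed alongside `az`, and the exact flags may differ for your project:

```shell
RESOURCE_GROUP=myResourceGroup
CLUSTER_NAME=myAKSCluster

# Enable Dev Spaces on an existing AKS cluster
az aks use-dev-spaces --resource-group "$RESOURCE_GROUP" --name "$CLUSTER_NAME"

# From your service's source directory: generate the Dockerfile and
# Helm chart assets Dev Spaces needs...
azds prep --public

# ...then build, deploy, and run your service in your own dev space,
# where you get the per-developer URL described above
azds up
```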
Event Driven Kubernetes with KEDA
Many organizations have already adopted event driven application designs leveraging serverless technologies like Azure Functions, which provides a really fantastic development and debugging experience both locally and in Azure. And while it is very convenient to have a fully managed environment for running your functions, sometimes you want a little more control over the underlying infrastructure. This is where KEDA comes in.
KEDA is Kubernetes-based Event Driven Autoscaling. Let’s break that down. Azure Functions can, of course, run in Azure, but they can also run in a container on a server that has the Azure Functions runtime installed on it. That server could be, say, an agent node in a Kubernetes cluster. This is nice, because you can run your functions in a container and have full control over the underlying infrastructure. However, the missing piece is auto-scaling the number of instances. In Kubernetes, it’s possible to auto-scale when CPU or memory thresholds are reached, but oftentimes what you really want is to scale before those metrics are impacted.
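For comparison, that built-in threshold-based scaling is a kubectl one-liner. A sketch with a hypothetical deployment name:

```shell
DEPLOYMENT=myfunctionapp

# Built-in Kubernetes autoscaling: add replicas only once average CPU
# across the pods crosses 50% -- i.e., after the load has already landed
kubectl autoscale deployment "$DEPLOYMENT" --cpu-percent=50 --min=1 --max=10

# Inspect the resulting horizontal pod autoscaler
kubectl get hpa
```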
Azure Functions has a solution for this in the form of a component called the scale controller, which intelligently monitors events and proactively scales out your Functions as needed. KEDA enables you to take advantage of this scale controller to do the same thing in Kubernetes, automatically scaling your containerized Functions based on event metrics from any number of different sources. And because KEDA is extensible, new event metric sources can be added, meaning you can auto-scale based on anything you like.
Getting started with KEDA is as simple as deploying a Helm chart to your Kubernetes cluster and then deploying your functions with the Azure Functions Core Tools. Just be aware that KEDA is currently in preview, so you shouldn’t use it for your production workloads just yet.
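Those two steps sketch out roughly as follows, assuming Helm 3 syntax and placeholder names for the function app and container registry:

```shell
# Install KEDA from its official Helm chart into its own namespace
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda

# Then, from an Azure Functions project created with the Core Tools,
# deploy the function as a container that KEDA can scale to and from zero
# (the app and registry names here are placeholders)
func kubernetes deploy --name myfunctionapp --registry myregistry.azurecr.io
```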
Deploy to AKS with Azure Pipelines
As we’ve seen, creating a Kubernetes cluster and getting started developing applications is made much easier with AKS, ACI and Azure Dev Spaces. Automatically scaling your applications using event driven metrics is also made possible with KEDA. The final piece of the puzzle is automatically deploying your application to AKS. Azure Pipelines makes this incredibly easy with built-in CI/CD pipeline templates.
All you need to get started is a repository that contains your source code and a Dockerfile. Follow the documentation to create a YAML-based CI/CD pipeline in Azure Pipelines, and every time you push a code change to your repository, Azure Pipelines will build your application and push the new container image to Azure Container Registry. Once the container image is in the registry, the pipeline applies a deployment manifest to your AKS cluster, causing the new image to be pulled down and deployed. That’s all there is to it!
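Under the hood, the generated pipeline automates roughly the steps below. This is only a sketch: the registry, image, and cluster names are placeholders, and the real pipeline uses built-in Docker and Kubernetes tasks with the build ID as the image tag rather than raw commands:

```shell
ACR_NAME=myregistry
IMAGE=myapp
TAG=$(date +%s)   # the real pipeline would use the build ID here

# CI: build the image from the repo's Dockerfile and push it to ACR
az acr login --name "$ACR_NAME"
docker build -t "$ACR_NAME.azurecr.io/$IMAGE:$TAG" .
docker push "$ACR_NAME.azurecr.io/$IMAGE:$TAG"

# CD: apply the deployment manifest so AKS pulls the new image
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl apply -f manifests/deployment.yml
```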
While Kubernetes itself may be complex, getting started with it is not as hard as it might seem. Using Azure services like AKS, Azure Container Instances and Azure Dev Spaces makes setting up a cluster and testing your containers relatively simple. Azure Pipelines adds the ability to quickly set up an automated CI/CD pipeline that will deploy your new container images whenever a change is checked in to your code repository. Finally, the KEDA preview lets you experiment with custom auto-scaling of containers running in a Kubernetes cluster using the Azure Functions scale controller.
Kubernetes has experienced massive growth since its inception, and it is not losing any momentum. If you’ve been sitting on the sidelines, it’s time to jump in and get started. And there’s no better place to do that than in Azure!