How to bring a serverless flavor to Azure Kubernetes Service (AKS)?
As you might know, AKS is a Microsoft-managed Kubernetes offering: Microsoft manages the API server, and you, the cloud consumer, manage the worker nodes, which are divided into system and user node pools backed by virtual machine scale sets. But wait, where is the serverless in a story about VMs? Well, serverless is mainly characterized by the fact that compute is allocated dynamically, only when needed, which is good for your wallet because you pay only for the resources you actually consume. With that definition in mind, here are a few ways to bring serverless into your AKS clusters:
- Using virtual nodes, i.e., the Virtual Kubelet backed by Azure Container Instances, as an extension of your cluster. They are a good fit for volatile workloads (jobs, event handling, etc.): they spin up additional capacity in less than a minute (usually around 30 seconds), only when needed. I would not recommend them for long-running (days or weeks) operations.
- Using ecosystem solutions such as KEDA (Kubernetes Event-Driven Autoscaling), which lets you scale workloads dynamically from 0 to N based on actual demand. KEDA and virtual nodes play very well together.
- Leveraging serverless technologies, such as Azure Functions and Logic Apps, which can ship as containers.
- Lastly, fine-tuning each node pool’s autoscaling capabilities with the cluster autoscaler.
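To make the first bullet concrete, here is a minimal sketch of enabling the virtual nodes add-on with the Azure CLI. It assumes the cluster uses Azure CNI networking and that a dedicated subnet already exists; all resource names are placeholders.

```shell
# Enable the virtual nodes add-on on an existing AKS cluster.
# Resource group, cluster, and subnet names are placeholders.
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons virtual-node \
  --subnet-name myVirtualNodeSubnet
```

Keep in mind that pods must opt in to the virtual node, typically via a nodeSelector and a toleration for the `virtual-kubelet.io/provider` taint; pods without them keep running on your regular node pools.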
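For the KEDA bullet, a hedged sketch: the AKS-managed KEDA add-on can be enabled from the CLI, and a `ScaledObject` then drives a deployment from zero replicas. The deployment, queue, and authentication names below are hypothetical.

```shell
# Enable the AKS-managed KEDA add-on (names are placeholders).
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-keda

# Scale a hypothetical deployment on Azure Service Bus queue length.
kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-scaler
spec:
  scaleTargetRef:
    name: orders-processor        # hypothetical Deployment name
  minReplicaCount: 0              # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders
        messageCount: "5"
      authenticationRef:
        name: servicebus-auth     # hypothetical TriggerAuthentication
EOF
```

Setting `minReplicaCount: 0` is what gives you the serverless behavior: no messages, no pods, no billed compute beyond the nodes themselves (or none at all if the pods burst to virtual nodes).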
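The third bullet in practice: Azure Functions Core Tools can scaffold a project with a Dockerfile, so the function app builds and deploys to AKS like any other container. Project, registry, and image names below are made up.

```shell
# Scaffold a Functions project with a Dockerfile (Azure Functions Core Tools).
func init MyFunctionsApp --worker-runtime node --docker
cd MyFunctionsApp
func new --name HttpTrigger --template "HTTP trigger"

# Build the image; push and deploy it to the cluster like any other container.
docker build -t myregistry.azurecr.io/myfunctionsapp:v1 .
```

Pairing such a containerized function with a KEDA trigger gives you the familiar Functions scale-to-zero model, but inside your own cluster.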
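Finally, a sketch for the last bullet: enabling the cluster autoscaler on a user node pool, which (unlike system pools) may scale all the way down to zero nodes. Names are placeholders; tune the min/max bounds to your workload.

```shell
# Turn on the cluster autoscaler for a user node pool (placeholder names).
# User node pools can scale down to zero nodes; system pools cannot.
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool \
  --enable-cluster-autoscaler \
  --min-count 0 \
  --max-count 10
```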
Apply these recommendations to your clusters and you will taste some of that serverless flavor, as you will end up with more dynamic and elastic clusters.