Serverless friendliness and Azure Container Services
- Updated on 25 Jan 2023
- 3 Minutes to read
In this container tip, I will give you a consolidated view of the various Azure container services and their friendliness towards a serverless approach.
Today, Microsoft Azure has the following container offerings:
- Azure Kubernetes Services (AKS)
- Azure Container Apps (ACA)
- Azure Web App for Containers (AWAC)
- Azure Container Instances (ACI)
- Azure Red Hat OpenShift (ARO)
- Azure Service Fabric
I will park Azure Service Fabric and Azure Red Hat OpenShift because I don’t know them well enough. Satellite container services such as Azure Kubernetes Fleet Manager, Azure Container Registry, etc., are also part of Azure’s container landscape but are not considered for this evaluation, as they serve different purposes.
The score matrix below takes into account what we like about serverless, meaning:
- Cost friendliness
- Nearly zero operational overhead
- Being able to respond fast to peak workloads
Scoring is based on a scale from 0 to 10, with 10 being the best possible score. The numbered notes below qualify some of the scores:
(1) Provided you spin up ACIs, let them do their job, and destroy or stop them after use
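In practice, that pattern is a create/delete pair with the Azure CLI. A minimal sketch; the resource group, container name, and image below are placeholders:

```shell
# Create a single-shot container instance (names and image are illustrative)
az container create \
  --resource-group my-rg \
  --name batch-job-1 \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --restart-policy Never \
  --cpu 1 --memory 1.5

# Once the job has finished, delete the instance so billing stops
az container delete --resource-group my-rg --name batch-job-1 --yes
```

With `--restart-policy Never`, the container runs to completion and stays stopped; the delete is what removes the residual resource.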
(2) Provided you define scaling rules that allow you to scale down to zero when there is no work to do
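With ACA, scale-to-zero boils down to setting the minimum replica count to 0 and attaching a scale rule. A sketch with placeholder app, environment, and image names:

```shell
# Scale to zero when idle, up to 10 replicas under HTTP load
# (app, environment, and registry names are placeholders)
az containerapp create \
  --name my-app \
  --resource-group my-rg \
  --environment my-aca-env \
  --image myregistry.azurecr.io/my-app:latest \
  --min-replicas 0 \
  --max-replicas 10 \
  --scale-rule-name http-rule \
  --scale-rule-type http \
  --scale-rule-http-concurrency 50
```

With `--min-replicas 0`, you pay nothing while no requests come in; the HTTP rule adds a replica roughly every 50 concurrent requests.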
(3) AKS can become expensive quickly, but many best practices allow you to lower costs. You can buy reserved instances, use node pool autoscaling (down to zero), start/stop non-production clusters, build clusters with mixed VMs (standard & spot VMs), etc. However, a production-grade cluster will involve a dedicated system node pool with three nodes and another node pool for the user nodes, which ultimately still represents a fixed cost, no matter what runs in it.
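A few of those cost levers can be applied straight from the CLI; the cluster and node pool names below are placeholders:

```shell
# Let a user node pool autoscale down to zero nodes
az aks nodepool update \
  --resource-group my-rg --cluster-name my-aks --name userpool \
  --enable-cluster-autoscaler --min-count 0 --max-count 5

# Add a spot node pool for interruptible workloads
az aks nodepool add \
  --resource-group my-rg --cluster-name my-aks --name spotpool \
  --priority Spot --eviction-policy Delete --spot-max-price -1 \
  --enable-cluster-autoscaler --min-count 0 --max-count 3

# Stop a non-production cluster outside business hours
az aks stop --resource-group my-rg --name my-aks
```

Note that only user node pools can go to zero; the system node pool keeps running, which is exactly the fixed cost mentioned above.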
(4) AWAC is based on App Service Plans. It is less cost-friendly than ACI or ACA because it incurs a fixed cost, but the pricing tiers offer some granularity.
(5) ACIs do not have any built-in way to scale. You statically define the required compute, so you need custom logic to create extra ACIs when needed. A default limit of 100 concurrent ACIs per subscription can be extended through a support ticket.
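That custom logic often amounts to creating and deleting instances yourself, one per work item. A minimal fan-out sketch with placeholder names:

```shell
# Fan out work by hand: one ACI per work item (no built-in autoscaler)
# Resource group and image are placeholders
for i in 1 2 3; do
  az container create \
    --resource-group my-rg \
    --name worker-$i \
    --image myregistry.azurecr.io/worker:latest \
    --restart-policy Never \
    --cpu 1 --memory 1.5 \
    --no-wait
done
```

`--no-wait` returns immediately so the instances provision in parallel; your logic still has to track completion and clean up, which is the gap ACA and AKS close for you.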
(6) ACAs can scale thanks to default and advanced scaling rules. While the scaling algorithm is quite advanced, each ACA environment is limited to 20 cores (as of 11/22), which is not much. This limit can be extended through a support ticket, but the default is still low, and many folks might not pay attention to it until they hit it.
(7) AKS has node pool-level autoscaling and component-level autoscaling through HPA and KEDA. Component-level (pod/job) scaling is pretty fast. Conversely, adding new nodes to a node pool can take several minutes before the node is up & running, which is not ideal in peak workload scenarios. However, AKS can also leverage virtual kubelets, which translate into ACIs. These can be up and running within 20 to 90 seconds. Overall, AKS has many ways to handle autoscaling, but it requires extra engineering effort compared to AWAC.
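Pod-level scaling can be as simple as a standard HPA, and the virtual-node add-on provides the ACI-backed burst capacity mentioned above. A sketch with placeholder names:

```shell
# Component-level scaling: a standard HPA on a deployment
kubectl autoscale deployment worker --cpu-percent=60 --min=1 --max=20

# Enable ACI-backed virtual nodes for fast burst capacity
# (assumes a delegated subnet named virtual-node-subnet already exists)
az aks enable-addons \
  --resource-group my-rg --name my-aks \
  --addons virtual-node \
  --subnet-name virtual-node-subnet
```

Pods scheduled onto the virtual node land on ACI, which is what gets you the 20-to-90-second startup instead of waiting for a VM-backed node.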
(8) AWAC and Azure Web Apps scale very fast, especially the multi-tenant offerings, whether consumption-based or not. The only exception is App Service Environment v3, which can take up to 15 minutes to add an instance.
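For the dedicated (non-consumption) tiers, rule-based autoscale is attached to the App Service Plan via Azure Monitor. A sketch, assuming placeholder resource names:

```shell
# Attach an autoscale setting to an App Service Plan (names are placeholders)
az monitor autoscale create \
  --resource-group my-rg \
  --resource my-plan \
  --resource-type Microsoft.Web/serverfarms \
  --name plan-autoscale \
  --min-count 1 --max-count 5 --count 1
```

Metric-based rules (e.g., on CPU percentage) can then be added with `az monitor autoscale rule create` to drive the instance count between the min and max.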