Microsoft has released KEDA 1.0, a production-ready version of KEDA—an open source component for Kubernetes that provides event-driven autoscaling for containers. Scaling is based on the size of an event or message queue, like Apache Kafka or RabbitMQ.
Kubernetes already has the capacity to scale containers based on demand. However, out of the box the Horizontal Pod Autoscaler only supports autoscaling based on CPU and memory usage. Scaling on other triggers—such as the number of messages waiting in an event queue—requires a custom metrics adapter to feed those metrics to the Horizontal Pod Autoscaler.
KEDA (short for Kubernetes-based event-driven autoscaling) makes it possible to scale Kubernetes using event queue activity as a metric. KEDA works as a Kubernetes metrics server, providing information about the number of pending events to be processed in commonly used queues—Kafka, Amazon CloudWatch, Azure Event Hubs or Azure Service Bus, Prometheus, Redis Lists, NATS, Google Cloud Platform Pub/Sub, and more. It’s also possible to build your own scaler for a custom queue or message bus, or to use an external scaler.
Kubernetes Deployments are the default way KEDA scales workloads, and a deployment can be scaled all the way down to zero replicas when there is no pending demand. For long-running processing of events, you can instead use Kubernetes Jobs, or rely on container lifecycle hooks, to manage how long containers survive.
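To make the mechanism concrete, here is a sketch of a ScaledObject custom resource, which ties a Deployment to a queue trigger. The deployment name, queue name, and host reference below are hypothetical, and the schema shown follows the KEDA v1 CRD (`keda.k8s.io/v1alpha1`):

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: order-consumer-scaler
  labels:
    deploymentName: order-consumer   # required label in KEDA v1
spec:
  scaleTargetRef:
    deploymentName: order-consumer   # the Deployment KEDA scales
  minReplicaCount: 0                 # scale to zero when the queue is empty
  maxReplicaCount: 30
  triggers:
    - type: rabbitmq                 # one of KEDA's built-in scalers
      metadata:
        queueName: orders
        host: RabbitMqConnection     # env var on the target container holding the connection string
        queueLength: "20"            # target number of messages per replica
```

KEDA watches the queue named in the trigger and adjusts the Deployment's replica count—including activating it from zero when messages arrive.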
Microsoft provides Helm charts to deploy KEDA into any Kubernetes cluster, but KEDA can also be deployed manually with kubectl, and it will work in Minikube clusters as well.
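A Helm-based installation looks roughly like the following; this is a sketch using the kedacore chart repository and Helm 3 syntax, and the namespace and release names are just conventional choices:

```shell
# Add the official KEDA chart repository
helm repo add kedacore https://kedacore.github.io/charts
helm repo update

# Install KEDA into its own namespace
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
```

Once the KEDA operator and metrics server are running, ScaledObject resources applied with kubectl take effect cluster-wide.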
As KEDA is a Microsoft creation, it naturally comes with hooks for handling event processing with Azure, such as Kubernetes-deployed Azure Functions that fire on Azure Storage Queue messages. But KEDA is also intended to work with other common Kubernetes distributions, including Red Hat OpenShift and Google Kubernetes Engine, even if it doesn’t support the same breadth of native features as in Azure.