Kubernetes, in all its many forms, is a powerful tool for building distributed systems. There's one big problem, though: out of the box, it's designed to offer only resource-based scaling. If you look at its history (it grew out of Google's internal Borg service, in part as a response to AWS), that decision isn't surprising. Most of the applications and services it was designed to work with were resource-bound, working with large amounts of data and dependent on memory and CPU.
Not all distributed applications are like that. Many, especially those that work with IoT (Internet of Things) systems, need to respond rapidly to events. Here it's I/O that matters most, delivering the events and messages that trigger processes on demand. It's a model that works well with what we've come to call serverless compute. Much of the serverless model depends on rapidly spinning up new compute containers on demand, something that works well on dedicated virtual infrastructures with their own controllers but isn't particularly compatible with Kubernetes' resource-driven scaling.
Introducing KEDA: Kubernetes-based event-driven autoscaling
Microsoft and Red Hat have been collaborating on a way to add event-driven scaling to Kubernetes, announcing their open source KEDA project at Microsoft's Build conference back in May 2019. The initial KEDA code quickly gained traction, and the project recently unveiled its 1.0 release, with the intent of donating the project to the Cloud Native Computing Foundation.
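To give a flavor of how KEDA works in practice, here's a minimal sketch of a KEDA ScaledObject manifest that scales a deployment based on the length of a message queue rather than CPU or memory. The deployment name, queue name, and trigger values are hypothetical, and the exact field names may differ between KEDA releases; treat this as an illustration, not a definitive configuration.

```yaml
# Hypothetical ScaledObject: scale the "queue-consumer" deployment
# on demand, driven by queue depth instead of resource usage.
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer-scaler
  namespace: default
spec:
  scaleTargetRef:
    deploymentName: queue-consumer   # the workload KEDA will scale
  pollingInterval: 15                # seconds between trigger checks
  minReplicaCount: 0                 # scale to zero when the queue is idle
  maxReplicaCount: 20
  triggers:
    - type: azure-queue              # one of KEDA's event-source scalers
      metadata:
        queueName: orders            # hypothetical queue name
        queueLength: "5"             # target messages per replica
```

The key point is the trigger: instead of the Horizontal Pod Autoscaler's resource metrics, the scaler watches an external event source and can even scale the deployment to zero replicas when no events are pending.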