At Fairwinds, we are dedicated to empowering our open source community. Through our own tooling, we work hard to give back to this valued group, building quality open source projects for a containerized future. Working thoughtfully in the open has also allowed us to engage with people in the Kubernetes community, from hobbyists to enterprises, making the tooling stronger, more reliable, and more feature-rich. As a result, this open source software has empowered our clients and community to make better decisions at every stage of the Kubernetes life cycle.
Even so, certain challenges remain constant on the path to Kubernetes adoption and implementation, including the need to get resource requests and limits "just right" – a prerequisite for stability and efficiency in any cluster that relies on cluster autoscaling.
One of the benefits of using Kubernetes is the ability to scale infrastructure dynamically based on user demand. A cluster autoscaler increases or decreases the size of a Kubernetes cluster by adding or removing worker nodes, based on the presence of pending pods. Until now, the most common autoscaler used in Kubernetes has been the Kubernetes Cluster Autoscaler.
If you have too many resources, the Cluster Autoscaler (CA) can remove worker nodes and save money. Bringing your cluster's scale to zero is also possible, depending on your organizational needs. The CA is an industry-adopted, open source and vendor-neutral tool. It is part of the Kubernetes project, with implementations by most major Kubernetes cloud providers. Recently, the Karpenter project entered the scene with the goal of solving some of the limitations of the Cluster Autoscaler.
Before thinking about the differences between the CA and Karpenter, it's important to know a little something about Elastic Kubernetes Service (EKS), which is Amazon's implementation of Kubernetes in the AWS cloud. Clusters consist of a "control plane" and a set of machines called nodes. When a cluster is created in EKS, a control plane is provisioned, along with an Auto Scaling group; the EC2 instances in the Auto Scaling group use a predefined image, and startup scripts join the instances to the cluster. In recent months, AWS has also introduced the concept of managed node groups, where AWS manages the Auto Scaling group(s) automatically.
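For illustration, here is a minimal sketch of what that looks like when provisioning a cluster with eksctl, assuming an eksctl config file as the starting point; the cluster and node group names are hypothetical:

```yaml
# Sketch of an eksctl config: creates an EKS control plane plus a
# managed node group backed by an Auto Scaling group.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster          # hypothetical cluster name
  region: us-east-1
managedNodeGroups:
  - name: default-workers     # hypothetical node group name
    instanceType: m5.large    # every node in the group uses one predefined type
    minSize: 1
    maxSize: 10
    desiredCapacity: 3
```

Note that the node group is pinned to a single predefined instance type; that constraint is exactly what Karpenter sets out to relax.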
Folks looking for effective autoscaling will soon have more to work with. Karpenter, the open source software now licensed under the permissive Apache License 2.0, was recently announced. As software designed to work with Kubernetes clusters running in AWS, Karpenter observes the aggregate resource requests of unscheduled pods and makes decisions to launch and terminate nodes to minimize scheduling latencies and infrastructure cost.
Karpenter simplifies Kubernetes infrastructure by providing the right nodes at the right time. This "just-in-time nodes for any Kubernetes cluster" capability is valuable because it launches the ideal compute resources to handle your cluster's applications. In this way, Karpenter is designed to let you take full advantage of the cloud with fast and simple compute provisioning for K8s clusters. Karpenter also:
Improves application availability by responding quickly and automatically to changes in application load, scheduling, and resource requirements, placing new workloads onto a variety of available compute resources.
Minimizes operational overhead with a set of opinionated defaults in a single, declarative Provisioner resource that can be easily customized, with no additional configuration required (see the sketch below).
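As a sketch of what that single declarative resource can look like, here is a minimal Provisioner using the karpenter.sh/v1alpha5 API (current as of Karpenter's announcement); the values shown are illustrative, not recommendations:

```yaml
# Minimal Karpenter Provisioner: opinionated defaults plus a few overrides.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    # Constrain which capacity Karpenter may launch; anything left
    # unconstrained stays open for Karpenter to optimize.
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
  limits:
    resources:
      cpu: "1000"             # cap the total CPU this Provisioner may create
  ttlSecondsAfterEmpty: 30    # remove empty nodes after 30 seconds
```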
Once Karpenter is installed in a cluster, it observes events within the Kubernetes cluster and sends commands to the underlying cloud provider's compute service, such as Amazon EC2. This means Karpenter can observe the aggregate resource requests of unscheduled pods and launch new nodes (or terminate them) to reduce scheduling latencies and infrastructure costs.
One of the main differences with Karpenter is its ability to make API calls directly to EC2. Instead of leveraging Auto Scaling groups that require a predefined set of instance types, Karpenter can choose the optimal EC2 instance types to satisfy all of the workload's constraints. A new node can be provisioned in just 60 seconds. If you delete a node, Karpenter gracefully handles all of the node decommissioning, including cordoning and draining the node and shutting down the corresponding instance.
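To make that concrete, consider a deployment whose aggregate requests cannot fit on the existing nodes; the resulting pending pods are what Karpenter reacts to. A minimal sketch (the names are hypothetical, and the pause image is just a placeholder workload):

```yaml
# Hypothetical workload: if these replicas cannot be scheduled on
# existing nodes, Karpenter sums their requests and launches an EC2
# instance (or instances) sized to fit them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate               # hypothetical name
spec:
  replicas: 10
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: app
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
          resources:
            requests:
              cpu: "1"        # 10 replicas -> 10 vCPUs of pending demand
```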
Still confused? Check out this highly visual explanation of how Karpenter works!
When using an autoscaler like Karpenter, it becomes even more important to get your resource requests and limits right. If your requests are too high, Karpenter might add nodes that are far larger than what you actually need, which leads to higher costs. If they're too low, you might run into resource contention issues on smaller nodes.
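Concretely, these are the per-container values in question; a sketch of a pod template fragment where they are set deliberately (the numbers and names are placeholders, not recommendations):

```yaml
# Requests drive scheduling and node-sizing decisions; limits cap
# actual usage at runtime.
containers:
  - name: api                           # hypothetical container
    image: registry.example.com/api:v1  # placeholder image
    resources:
      requests:
        cpu: 250m             # reserved at scheduling time; Karpenter sizes nodes from this
        memory: 256Mi
      limits:
        cpu: 500m             # CPU usage beyond this is throttled
        memory: 512Mi         # memory usage beyond this gets the container OOM-killed
```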
Goldilocks offers a solution for tuning your resource requests and limits. As the open source project we use in Fairwinds Insights to optimize Kubernetes workloads, Goldilocks removes the guesswork from setting resource requests and limits on applications running in Kubernetes production deployments. It helps engineers identify a starting point for resource requests and limits, while ensuring applications run correctly.
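Goldilocks works on an opt-in, per-namespace basis: you label a namespace and Goldilocks starts monitoring the workloads in it. A minimal sketch (the namespace name is hypothetical):

```yaml
# Opt a namespace into Goldilocks monitoring via its enablement label.
apiVersion: v1
kind: Namespace
metadata:
  name: demo                  # hypothetical namespace
  labels:
    goldilocks.fairwinds.com/enabled: "true"
```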
To surface suggestions for resource requests on each application, Goldilocks relies on the Vertical Pod Autoscaler (VPA). It creates a VPA for each deployment in a namespace and then queries those VPAs for their recommendations. While the VPA can set resource requests itself, the Goldilocks dashboard makes it easy to review all the recommendations and make decisions based on your organization's Kubernetes environment.
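Under the hood, the VPAs that Goldilocks creates are recommendation-only. A hand-written equivalent would look roughly like this (the target names are hypothetical):

```yaml
# Recommendation-only VPA: the Recommender computes suggested requests,
# but updateMode "Off" means nothing is ever applied automatically.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: api-vpa               # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api                 # hypothetical deployment to analyze
  updatePolicy:
    updateMode: "Off"         # recommend only; never evict or mutate pods
```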
Goldilocks generates recommendations using the Recommender in the VPA. In fact, the Goldilocks open source software is based entirely on the underlying VPA project, specifically the Recommender. We find Goldilocks is a great starting point for setting your resource requests and limits. But given every environment is different, Goldilocks should not be seen as a replacement for tuning your applications to specific use cases.
Goldilocks is open source and available on GitHub. We continue to make a lot of changes, improving its ability to handle large clusters with hundreds of namespaces and VPA objects. We've also changed how Goldilocks is deployed: it now includes a VPA sub-chart that you can use to install both the VPA controller and its associated resources.
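Assuming the chart follows the usual Helm sub-chart convention, enabling that bundled VPA installation might look like the values snippet below; the exact flag name is an assumption, so check the chart's documented values:

```yaml
# Hypothetical Helm values for the goldilocks chart: enable the VPA
# sub-chart so the VPA controller and its resources are installed too.
vpa:
  enabled: true               # assumption: verify against the chart's values.yaml
```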
Many of these changes were made in collaboration with Harrison Katz at SquareSpace, based on invaluable feedback from him and the team there. We want to keep improving our open source projects, and we welcome your contributions!
Goldilocks is also part of our Fairwinds Insights platform, which provides multi-cluster visibility into your Kubernetes clusters, so you can configure your applications for scale, reliability, resource efficiency, and security. Fairwinds Insights is available to use for free. You can sign up here.