In our recent white paper, 5 Ways to Optimize Your Kubernetes Ownership, we discuss the many benefits of a robust service ownership model. Aside from empowering developers to take responsibility for the quality of their applications, Kubernetes service ownership strengthens five key enablers of business success: security, compliance, reliability, scalability, and, of course, cost optimization.
When you stop to consider how Kubernetes service ownership affects overall cost management, you have to think about configuration. Cost and configuration are inextricably linked: how Kubernetes is configured plays a major role in how much an organization spends, and misconfigurations create extraneous, entirely avoidable costs. That is why service owners must understand how much an application will ultimately cost before setting its resource allocation, and then determine whether that amount aligns with their budget.
So, how much does a Kubernetes workload really cost? The answer is not particularly straightforward. In fact, it can be extremely difficult to determine, because nodes, which are what you’re ultimately billed for, do not map neatly to the workloads you run on them.
To start, nodes are ephemeral: they can be created and destroyed as the cluster scales up and down, or replaced in the event of an upgrade or failure. To add complexity, Kubernetes bin packs workloads onto nodes based on what the scheduler judges to be the most efficient use of each node’s capacity, almost like a game of Tetris. As a result, mapping a specific workload to a specific compute instance remains challenging. Efficient bin packing is a great cost saver, but when a node’s resources are shared across many applications, there is no single obvious way to divide up its cost.
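How that split is done varies by tool, but a simple version makes the problem concrete. Below is a minimal sketch, assuming hypothetical pod names and node pricing, that pro-rates a node’s hourly cost by each pod’s CPU requests:

```python
# Minimal sketch of one common heuristic: pro-rate a node's hourly price
# across the pods scheduled on it, weighted by their CPU requests.
# All names and prices here are hypothetical; real cost tools typically
# blend CPU and memory and handle idle capacity more carefully.

NODE_COST_PER_HOUR = 0.10  # assumed on-demand price for this node
NODE_CPU_CORES = 4.0       # allocatable CPU on the node

# CPU requested by each pod bin packed onto this node, in cores
pod_cpu_requests = {
    "checkout-api": 1.0,
    "search-worker": 0.5,
    "batch-job": 2.0,
}

allocated = 0.0
for pod, cpu in pod_cpu_requests.items():
    share = cpu / NODE_CPU_CORES
    cost = share * NODE_COST_PER_HOUR
    allocated += cost
    print(f"{pod}: {share:.0%} of node -> ${cost:.4f}/hr")

# Capacity nobody requested is idle spend with no obvious owner;
# deciding who pays for that slack is part of what makes this hard.
print(f"unallocated (idle): ${NODE_COST_PER_HOUR - allocated:.4f}/hr")
```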
When teams deploy their applications to Kubernetes, they’re responsible for specifying, through resource requests and limits, exactly how much CPU and memory should be allocated to their application. This is often where mistakes happen: teams either fail to specify these settings at all, or they set them far too high.
A developer’s job is to write code and ship quickly, so when confronted with an optional piece of configuration like requests and limits, they’ll often simply omit it. Without requests, the scheduler has no idea how much capacity to reserve for a pod, so it can end up on an overloaded node where it is throttled or evicted first, leading to increased latency and even downtime. And even when developers do take the time to set memory and CPU values, they tend to allocate an overly generous amount, knowing their application will run just fine with extra resources at its disposal. From a developer’s perspective, the more compute, the better.
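For illustration, here is a minimal sketch of what explicit, right-sized settings look like, expressed with the official Kubernetes Python client; the container name, image, and values are placeholders rather than recommendations, and the same settings usually live in the resources block of a Deployment manifest:

```python
# Minimal sketch of explicit requests and limits using the official
# Kubernetes Python client (pip install kubernetes). The name, image,
# and values are placeholders, not recommendations.
from kubernetes import client

container = client.V1Container(
    name="web",
    image="registry.example.com/web:1.0",  # hypothetical image
    resources=client.V1ResourceRequirements(
        # requests: what the scheduler reserves when bin packing the pod
        requests={"cpu": "250m", "memory": "256Mi"},
        # limits: the ceiling the container may burst to before it is
        # throttled (CPU) or OOM-killed (memory)
        limits={"cpu": "500m", "memory": "512Mi"},
    ),
)
```

The usual compromise is to set requests close to observed usage, with limits as a modest ceiling, so the application stays reliable without reserving compute it never touches.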
Without Kubernetes cost controls and visibility in place, as well as a solid feedback loop to get that information in front of the development team, Kubernetes will simply honor the developer’s CPU and memory settings, and you’ll find yourself facing a large bill for cloud compute. Even though Kubernetes will do its best to “play Tetris” with your workloads by co-locating them in a way that optimizes resources, it can only do so much when teams don’t tell it how much memory and CPU they need, or ask for far too much. As in Tetris, winning requires intelligent, well-informed choices.
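That feedback loop can start simply: compare what a workload requests with what it actually uses. A minimal sketch, with hypothetical numbers standing in for metrics you would pull from something like metrics-server or Prometheus:

```python
# Minimal sketch: quantify over-provisioning by comparing a workload's
# CPU request to its observed usage. The numbers are hypothetical; in
# practice they would come from metrics-server, Prometheus, or similar.
requested_cpu_m = 2000    # CPU request, in millicores
observed_p99_cpu_m = 300  # 99th-percentile observed usage

waste_m = requested_cpu_m - observed_p99_cpu_m
print(f"over-requested CPU: {waste_m}m "
      f"({waste_m / requested_cpu_m:.0%} of the reservation)")
```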
Before Kubernetes, organizations could rely on cloud cost tools for visibility into the underlying cloud infrastructure. Kubernetes adds a new layer of resource management on top of that infrastructure, one that is effectively a black box to traditional cloud cost monitoring tools. As a result, organizations need a way “under the hood” of Kubernetes to allocate costs properly among applications, products, and teams.
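Once pod-level costs exist (for example, from the node pro-rating sketched earlier), allocation is largely a matter of rolling those numbers up by the labels teams already apply. A minimal sketch, with hypothetical pod names, label values, and dollar figures:

```python
# Minimal sketch: roll pod-level cost shares up to teams using a label.
# Pod names, the "team" label values, and costs are all hypothetical.
from collections import defaultdict

# (pod, team label, allocated $/hr from node pro-rating)
pod_costs = [
    ("checkout-api-7d9f", "payments", 0.025),
    ("checkout-api-b3m4", "payments", 0.025),
    ("search-worker-x2k1", "search", 0.013),
    ("batch-job-q8r2", "data", 0.050),
]

cost_by_team = defaultdict(float)
for _pod, team, cost in pod_costs:
    cost_by_team[team] += cost

for team, cost in sorted(cost_by_team.items()):
    print(f"team={team}: ${cost:.3f}/hr")
```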
This level of clarity into cloud resources, typically found through a cost monitoring solution, allows teams to make better decisions around the finances of Kubernetes ownership. Without it, organizations have trouble optimizing compute and workloads in a dynamic environment like Kubernetes. Multiple teams, multiple clusters and a lot of complexity translate into copious amounts of information to review and evaluate when trying to make informed, real-time business decisions.
Organizations that employ a service ownership model empower development teams to own and run their applications in production, allowing Ops to focus on building a great platform. That shift works only if teams have what they need to make effective decisions and keep implementing best practices, so Kubernetes service ownership supports efficiency and reliability by delivering the necessary feedback directly to engineering teams through automation, alerts, proactive recommendations, and toolchain integrations.
Looking for a complete Kubernetes governance platform? Fairwinds Insights is available for free. Get started today.
It’s not just about shipping faster and with less risk. Optimizing Kubernetes configuration so that every workload has the right CPU and memory requests and limits helps applications run and scale efficiently, which, in turn, avoids wasted money.
When teams can build, deploy, and run their own applications and services, they have greater autonomy and fewer hand-offs between teams. The full experience enabled by service ownership helps development teams understand more deeply both the customer impact and the operational overhead of the software they write. Service ownership greatly improves cost management and collaboration while also reducing the complexity of running Kubernetes.