Cloud infrastructure has changed so dramatically in the last decade that it's almost hard to believe. One of my first jobs in tech had a server room in a closet, and we handled everything ourselves, from the hardware and operating system to the network and even the CD drives we used to update our software. Today we hand more and more of that to the cloud providers: we don't worry about the server underneath, we don't worry about the networking, and we never touch a physical drive. Our lives are better as a result.
Interestingly, by removing ourselves from so many tangible parts of these systems, we've also become far more removed from the actual cost of running this kind of infrastructure. The cloud is to physical hardware what a credit card is to cash: it doesn't necessarily cost more, but it's much harder to tell where the money went when we're not hands-on.
Kubernetes took things to the next level. We might have provisioned workloads with automation before Kubernetes, but once the container orchestrator became the way we interact with our cloud infrastructure, everything became an even bigger black box.
The beauty of Kubernetes is that we can hand it a workload and let it worry about scaling up and down with demand. The downside is that if we misconfigure a workload before we hand it off, it's easy to experience runaway costs as Kubernetes does exactly what we (wrongly) told it to do.
Kubernetes itself costs very little to run, since it's mostly a control plane orchestrating other workloads. But every new paradigm comes with costs of its own. When resource requests and limits are set incorrectly, you either spend more than you need (workloads are overprovisioned and Kubernetes scales them more than necessary) or suffer underperformance (workloads run out of memory or become CPU-constrained). Misconfiguration can also cause workloads to be over- or under-prioritized as Kubernetes does its best to make sense of what it's been handed.
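To make that concrete, here is a minimal sketch using the official Kubernetes Python client showing where requests and limits live on a container spec. The image name and the CPU/memory values are illustrative assumptions, not recommendations, and this isn't tied to any particular Fairwinds tooling.

```python
# Minimal sketch using the official Kubernetes Python client (pip install kubernetes).
# All numbers and names below are illustrative assumptions.
from kubernetes import client

container = client.V1Container(
    name="web",
    image="example.com/web:1.0",  # hypothetical image
    resources=client.V1ResourceRequirements(
        # Requests are what the scheduler reserves for this container. Setting them
        # far above real usage means paying for idle capacity on every replica.
        requests={"cpu": "250m", "memory": "256Mi"},
        # Limits cap actual usage. Setting them too low causes CPU throttling or
        # OOM kills; omitting them entirely removes the guardrail.
        limits={"cpu": "500m", "memory": "512Mi"},
    ),
)

pod_spec = client.V1PodSpec(containers=[container])
print(pod_spec.containers[0].resources)
```

Because requests drive scheduling, and therefore how many nodes the cluster needs, overstated requests translate directly into extra nodes and extra spend even if the pods never use that capacity.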
Without good tooling for visibility into what a cluster costs, or how much an individual workload costs, it's easy for a developer (especially in an advanced service ownership environment) to wildly overprovision things, and for the platform team to lack the insight to address runaway costs until it's too late and the bill has already been driven up.
I hear stories far too often about a team finding out much later, when the cloud bill finally arrives, that a workload was misconfigured and costs have skyrocketed.
Fairwinds Insights, a Kubernetes guardrails platform, offers a single pane of glass across all your clusters so you can see which clusters and workloads are costing you the most. You can also track trends over time to see where things got out of hand and how to address them.
Fairwinds Insights is available to use for free. You can sign up here.
Great tooling enables great teams. The promise of the cloud is that your spend can actually match your need. Don't let things get wildly out of control before you take control of your Kubernetes infrastructure with Fairwinds Insights.
Watch how Clover uses Insights to control costs.