Kubernetes workload cost allocation is hard. To illustrate why, let’s start with a simplified view of the problem.
In this first example, you are running Application A on a single compute instance. It is straightforward to understand the cost of Application A because you know the cost of the instance it runs on.
In this next example, you are running a containerized version of Application A on a single compute instance. Again, it’s straightforward to calculate the cost of the instance, which is the cost of running Application A.
Once your many applications have been containerized and you are running Kubernetes clusters, you’ll be running different instances of your workloads (applications) on different nodes. In this example, one node runs Applications A and B, another runs Application B, and the final node runs instances of Applications A, B, and C. Additionally, the dynamic nature of Kubernetes scheduling means that where containerized workloads run can change at any time.
All this change makes it really hard to break down Kubernetes costs by workload, and even when you do, the breakdown won’t stay consistent. As the Kubernetes environment grows, the number of clusters increases, and node counts go up, it only gets harder to pinpoint costs.
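To make that concrete, here is a minimal Python sketch of one common allocation approach: splitting a shared node’s hourly cost among the workloads scheduled on it in proportion to their CPU requests. The price, capacity, and workload names are hypothetical, and a real allocator also has to account for memory, pods that move between nodes, and idle capacity.

```python
# Minimal sketch: split one node's hourly cost among the workloads scheduled on it,
# proportionally to their CPU requests. All numbers are hypothetical.

NODE_HOURLY_COST = 0.192   # hypothetical price of a 4 vCPU instance
NODE_CPU_CAPACITY = 4.0    # vCPUs available on the node

# CPU requested by each workload currently scheduled on this node
requests = {"app-a": 1.5, "app-b": 1.0, "app-c": 0.5}

allocated = sum(requests.values())
for workload, cpu in requests.items():
    share = cpu / NODE_CPU_CAPACITY
    print(f"{workload}: ${NODE_HOURLY_COST * share:.4f}/hr")

# Whatever isn't requested is idle capacity, which still has to be paid for
idle_share = (NODE_CPU_CAPACITY - allocated) / NODE_CPU_CAPACITY
print(f"idle: ${NODE_HOURLY_COST * idle_share:.4f}/hr")
```

Even this toy version only holds for as long as the scheduler leaves those pods where they are; the moment a pod is rescheduled, the split has to be recomputed.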
As Kubernetes usage expands within an organization, if workloads are not configured with the right resource requests and limits, spending can spiral out of control. Consider the “noisy neighbor” problem outlined in this Fairwinds blog post. Spending must therefore be monitored to avoid wasting resources, especially in a market where cost-saving measures are increasingly important.
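As a first step toward that monitoring, a short script can at least surface the workloads running without requests or limits. The sketch below uses the official Kubernetes Python client and assumes kubeconfig access to a cluster; it is a generic illustration, not part of any particular tool.

```python
# Flag containers running without CPU/memory requests or limits -- the workloads
# most likely to cause noisy-neighbor problems and unpredictable spend.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with cluster access
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for container in pod.spec.containers:
        resources = container.resources
        if not resources.requests or not resources.limits:
            print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                  f"({container.name}): missing requests and/or limits")
```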
While cloud cost monitoring tools exist (and there is a big industry around them), these traditional tools do not recognize the multiple layers of Kubernetes, the dynamic nature of the environment, or the nuances between workloads and the nodes they run on.
Most DevOps teams lack visibility into what’s actually happening within clusters, whether workloads are configured properly, and whether they are overprovisioning. Even in organizations that have adopted a FinOps model, where the finance team works alongside DevOps and developers, many stakeholders lack the know-how to diagnose problems with Kubernetes cost management.
Because Kubernetes is complex and teams lack visibility into spend, Fairwinds has spent significant time enhancing cost features within our Kubernetes governance software, Insights.
Platform engineering teams need the ability to do two things:
Allocate and showback costs in business-relevant contexts
Create engineering feedback loops to enable a culture of service ownership and cost avoidance
The enhancements to Fairwinds Insights allow platform engineering managers to use actual cloud spend and workload usage to understand historical costs incurred across multiple clusters, aggregations, and custom time periods.
Fairwinds Insights is available to use for free. You can sign up here.
Unlike traditional cost allocation solutions, Fairwinds Insights includes policy enforcement capabilities so Platform Engineers can automate metadata standards needed for cost allocation.
Some example use cases for the latest feature include:
Cumulative Workload Cost Reporting: View historical workload costs based on usage to summarize the cost incurred over a predefined period of time, such as the last day, week, or month(s).
Accurate Cost Allocation Across Teams: Allocate cumulative, historical workload usage to different teams. Cost is typically divided by workload and namespace. Use the cost allocation feature to “bill” the respective teams and break down cost by Node, Shared (e.g., a group of common system namespaces like kube-system), and Idle capacity.
Report on the Cost of a Namespace Across Multiple Clusters: Report on the overall cost of a namespace that spans multiple clusters. For example, to understand the total cost of the my-app namespace across the business, sum the cost of that namespace in both the staging and production clusters (see the sketch below).
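Conceptually, that cross-cluster roll-up is just a sum over per-cluster cost records grouped by namespace. The sketch below illustrates the idea with made-up records and dollar figures:

```python
# Minimal sketch of the "namespace across clusters" roll-up. The per-cluster cost
# records here are hypothetical stand-ins for whatever your cost tooling exports.
cost_records = [
    {"cluster": "staging",    "namespace": "my-app",      "cost": 212.40},
    {"cluster": "production", "namespace": "my-app",      "cost": 1840.75},
    {"cluster": "production", "namespace": "kube-system", "cost": 96.10},
]

namespace = "my-app"
total = sum(r["cost"] for r in cost_records if r["namespace"] == namespace)
print(f"Total cost of namespace {namespace} across all clusters: ${total:.2f}")
```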
This latest enhancement addresses these use cases and more. Users of the software can adopt a FinOps approach to Kubernetes and benefit from:
Multi-cluster: streamline workflows by leveraging Fairwinds' scalable SaaS architecture; cost data for all your clusters is calculated and aggregated in a single place
Multi-aggregation allocation: allocate costs across multiple dimensions, such as cluster, namespace, and kind
Right-sizing recommendations: identify cost savings by tuning CPU and memory requests (see the sketch after this list)
Historical reporting: report cost across custom time periods with up to 13 months of data retention
Cloud billing integration: integrate your AWS Cost and Usage Report (CUR) for accurate cost allocation using your actual cloud spend
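To illustrate the right-sizing idea above: the potential saving is roughly the gap between what a workload requests and what it actually uses (plus headroom), multiplied by the unit price of that resource. The numbers in this sketch are entirely hypothetical and are not how Insights computes its recommendations.

```python
# Hypothetical right-sizing arithmetic: estimate monthly savings from lowering a
# CPU request toward observed usage plus headroom. Prices and usage are made up.
CPU_PRICE_PER_CORE_MONTH = 35.0   # hypothetical blended $/vCPU-month

current_request_cores = 2.0       # what the workload requests today
observed_p95_usage_cores = 0.6    # 95th-percentile usage from metrics
headroom = 1.2                    # keep 20% headroom above observed usage

recommended_request = observed_p95_usage_cores * headroom
savings = (current_request_cores - recommended_request) * CPU_PRICE_PER_CORE_MONTH
print(f"Recommend requesting {recommended_request:.2f} cores "
      f"(~${savings:.2f}/month saved per replica)")
```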
To learn more about Fairwinds Insights, book a call with our team.