I have served as a Chief Technology Officer for application security and data security companies for over fifteen years, and during that time I have seen many of the challenges that companies encounter as they try to secure critical data and infrastructure. The accelerating adoption of Kubernetes and Linux containers presents many new challenges and opportunities for development, DevOps, information technology, and security teams.
As with any new technology, gaps emerge as Kubernetes adoption grows and teams learn and master the approaches needed to secure these new Kubernetes and Linux environments. Kubernetes clusters and Docker containers represent a whole new way of deploying software. Suddenly developers need to understand APIs, microservices, and workloads; there are new metrics to track and a lot of new functionality and configuration to understand. The software development lifecycle has changed, and containerized applications are increasingly the norm rather than the exception. On top of the challenges introduced by these new technologies, many traditional security frameworks and processes no longer work, or need to be significantly revised, to protect applications and data in this new realm.
As an organization, Fairwinds is focused on Kubernetes management and enablement. Today, I am immensely proud to announce the general availability of Fairwinds Insights. This first release marks the culmination of turning our years of Kubernetes configuration expertise into Kubernetes management software that helps validate configurations to improve security, reduce costs, save time, and run reliable workloads. Fairwinds Insights continues to enable successful Kubernetes deployments by protecting and optimizing mission-critical applications.
We are always making updates to Fairwinds Insights to improve the experience for our users and make it easier to have successful Kubernetes deployments. We build and leverage open source software, such as Prometheus, Polaris, Nova, and Goldilocks, in our Insights platform to make the most of Kubernetes container orchestration capabilities. And we do it in a way that enables you to leverage the public cloud providers you choose, such as AWS, GCP, and Azure, as well as virtual machines or bare metal infrastructure, so you can deploy your applications and services in the way that makes the most sense for your organization.
Fairwinds Insights was designed to solve three main challenges for Kubernetes teams: keeping workloads secure, cost-efficient, and reliable.
As the cloud-native world has continued to mature, most organizations — and the people running them and building cloud-native applications — understand the benefits of the Kubernetes ecosystem and why it is important to break down silos and minimize friction between development, security, and operations teams. While Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE) all provide simple ways to automatically deploy, scale, and manage containerized applications, dev, sec, and ops teams continue to struggle to maintain consistent, secure Kubernetes-based applications.
In part, this is because Kubernetes environments are so complex and because the ecosystem around Kubernetes continues to mature, bringing new plugins, add-ons, and flexibility. Fairwinds Insights helps organizations enforce Kubernetes best practices by automating policy enforcement. The Insights platform increases the visibility DevOps teams have into Kubernetes environments by providing a dashboard view of all Kubernetes clusters, which helps teams understand the misconfigurations that are creating security and compliance risks.
Insights also reduces the time required for vulnerability management by providing built-in vulnerability scanning and in-depth troubleshooting information. It helps teams running containers identify misconfigurations and vulnerabilities by integrating with the command line interface (CLI) tools development teams already use, such as kubectl, the default CLI for Kubernetes. Insights can also send notifications and route tickets to the person or team responsible for resolving each issue.
Kubernetes uses CPU and memory requests and limits to schedule workloads and drive auto-scaling, but many organizations struggle to figure out what those settings should be. Trying to find optimal values can eat into engineering time and result in over-provisioned cloud capacity or under-performing applications. As a result, some teams simply do not set requests or limits, or set them too high during initial testing to make sure everything will work, and then never go back to adjust the settings. The key to making sure that scaling actions work as intended is to set resource requests and limits on each workload based on metrics, so that every workload runs efficiently and reliably.
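For illustration, here is what those settings look like on a workload. This is a minimal sketch; the names and numbers are placeholders I chose, not recommendations, because the right values come from observed usage.

```yaml
# Hypothetical Deployment with explicit requests and limits.
# The values are placeholders, not recommendations.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: example/api:1.0.0
          resources:
            requests:
              cpu: 250m       # what the scheduler reserves for this container
              memory: 256Mi
            limits:
              cpu: 500m       # usage above this is throttled
              memory: 512Mi   # usage above this gets the container OOM-killed
```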
We built an open source project, Goldilocks, to help teams allocate resources and get resource usage settings right. That project is built into Insights, and it helps organizations understand their resource usage, their resource costs, and how to apply best practices around efficiency. Goldilocks uses the Kubernetes Vertical Pod Autoscaler (VPA) to evaluate the historical and current memory and CPU usage of workloads and recommend how to set resource requests and limits. Basically, Goldilocks creates a VPA for each deployment in a namespace, queries it, and displays the recommendations in a dashboard. Insights removes the guesswork by automating the recommendation process. No more trial and error: Insights includes Goldilocks so you can increase your Kubernetes cluster efficiency and reduce cloud spend.
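For readers curious about the mechanics, the object Goldilocks manages for each workload is roughly a recommendation-only VerticalPodAutoscaler like the sketch below; the target name is made up, and details may vary by VPA version.

```yaml
# Sketch of a recommendation-only VPA, similar to what Goldilocks creates
# for each workload in an enabled namespace. The target name is illustrative.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-api-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-api
  updatePolicy:
    updateMode: "Off"   # only recommend; never modify running pods
```

The VPA recommender writes lower-bound, target, and upper-bound values into the object's status, and Goldilocks surfaces those numbers as suggested requests and limits.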
Traditional monitoring tools do not provide everything required to proactively identify the changes needed to maintain reliable Kubernetes workloads. While Kubernetes makes provisioning much simpler, it takes time to learn how to optimize it, particularly if you are coming from technology that predates cloud-native applications and configuration management tools. Layering Kubernetes management tools on top of older monitoring solutions makes optimization, reliability, and scalability harder to achieve.
One way to improve Kubernetes reliability is to get configurations right. This is easier said than done. One of the challenges of cloud native architecture is understanding and truly embracing the ephemeral nature of containers and Kubernetes pods. Once you understand this ecosystem better and are familiar with provisioning and container orchestration, you can analyze your metrics to make better decisions about setting requests and limits for CPU and memory, which allows the Kubernetes scheduler to do its job. Fairwinds Insights’ built-in tooling continuously audits CI/CD pipelines and runtime environments to identify, and even correct, Kubernetes misconfigurations wherever they occur in the software development lifecycle.
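To make the shift-left half of that concrete, here is a hedged sketch of a CI job that audits Kubernetes manifests with the open source Polaris CLI before they ever reach a cluster. It uses GitHub Actions as an illustration, not the Insights CI integration itself, and the manifest path, download URL, and score threshold are assumptions.

```yaml
# Hypothetical GitHub Actions workflow that runs a Polaris audit on every pull request.
name: polaris-audit
on: [pull_request]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Polaris
        run: |
          # Download URL and asset name are assumptions; check the Polaris releases page.
          curl -sL -o polaris.tar.gz \
            https://github.com/FairwindsOps/polaris/releases/latest/download/polaris_linux_amd64.tar.gz
          tar -xzf polaris.tar.gz polaris
          sudo mv polaris /usr/local/bin/
      - name: Audit Kubernetes manifests
        run: |
          # ./deploy is an assumed path to the repo's manifests. The job fails on any
          # danger-level finding or an overall score below 90.
          polaris audit \
            --audit-path ./deploy \
            --set-exit-code-on-danger \
            --set-exit-code-below-score 90
```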
Over the last five months, we have beta tested Fairwinds Insights with dozens of customers who are actively using it for Kubernetes management. We have ensured that Insights integrates with deployment tools, supports different open source Linux distributions, and can scan container images for vulnerabilities. Insights also provides Kubernetes monitoring for diverse Kubernetes environments and leverages open source Kubernetes tools as part of our commitment to the Cloud Native Computing Foundation (CNCF) community. We have received great feedback and validation that we are solving the right problem:
Provide actionable answers to “Am I doing this right?” for platform engineers, developers, and site reliability engineers.
As companies embrace DevOps to shorten the time to market for new software and services, the reality is that the process is never as seamless as hoped. Missed steps lead to misconfigurations that cause security gaps, cost inefficiencies, and unreliable apps and services. And while this new way of building and running applications enables faster provisioning and self-service, it also requires your monitoring and observability tools and strategies to change.
Our Fairwinds Site Reliability Engineering (SRE) team consistently saw the same problems across the Kubernetes community and wanted to fix them. That is exactly why many of our open source projects were created:
Polaris, an open-source policy engine that validates and remediates Kubernetes resources
Goldilocks, a utility that helps you get resource requests and limits just right
Nova, a CLI that cross-checks the Helm releases installed in your Kubernetes clusters against their chart repositories to find outdated or deprecated charts
Pluto, a utility that locates deprecated Kubernetes API versions in Infrastructure-as-Code (IaC) repositories and Helm releases, surfacing version information the Kubernetes API server does not expose
RBAC Manager, an operator that supports declarative configuration for role-based access control (RBAC) through new custom resources that simplify authorization in Kubernetes (sketched just below this list)
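To give a flavor of that declarative approach, here is a minimal RBACDefinition sketch based on the RBAC Manager documentation; the user, namespace, and role are made up, and field names may differ slightly between releases.

```yaml
# Sketch of an RBAC Manager RBACDefinition: grant a made-up user the built-in
# "edit" ClusterRole, scoped to the "web" namespace only.
apiVersion: rbacmanager.reactiveops.io/v1beta1
kind: RBACDefinition
metadata:
  name: web-team-access
rbacBindings:
  - name: web-developers
    subjects:
      - kind: User
        name: jane@example.com
    roleBindings:
      - clusterRole: edit
        namespace: web
```

RBAC Manager reconciles this definition into the underlying RoleBindings, so removing an entry here removes the corresponding access.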
We maintain these open source projects and invite collaboration to improve them and add functionality as needs change and Kubernetes continues to mature. These tools are all part of the value that Insights delivers, and we combine them with other open source tools, such as Trivy, an open source vulnerability scanner, and Prometheus, an open source monitoring solution that provides metrics and alerting. Insights uses all the data collected to provide a Kubernetes dashboard of aggregate CPU and memory usage.
Fairwinds Insights deploys easily using a YAML file and runs across the entire development lifecycle. It includes integrations with some of the most ubiquitous solutions today, such as:
A Slack integration that notifies users about critical changes to their Kubernetes clusters
A Datadog integration that feeds data from Insights into Datadog, creating an event whenever an Action Item is discovered or fixed so teams can correlate issues with attempted fixes
A PagerDuty integration that creates incidents from Action Items using automation rules
A Jira integration that lets users create Jira tickets from Action Items manually or through automation rules
An Azure DevOps integration that lets users create Work Items from Action Items
By combining our expertise with trusted open source tools, Fairwinds Insights identifies the steps missed at the handoff between development, security, and operations and uses automation to put guardrails in place that keep Kubernetes running smoothly and efficiently. With Fairwinds Insights, platform engineers, DevOps teams, developers, and site reliability engineers now have a Kubernetes management platform that helps them improve their cloud native workflows in a secure, efficient, and reliable way.