The idea behind Kubernetes policies is that you will be more successful if you put guardrails in place for your development teams to ensure that they are adhering to Kubernetes best practices. Creating policies helps you make sure that your developers are not doing anything in Kubernetes that is insecure, inefficient, or unreliable. Once you have decided on your policies, you may want to look at open source Kubernetes policy engines to make sure that your developers are deploying everything according to your organization's standards.
An open source policy engine allows you to enforce all of your policies at a high level across your organization. Policy engines can run in a passive mode where you can audit your environment or audit your infrastructure as code to see how compliant you are. You can see how many of these policies you have adopted, and which teams are out of compliance. You can also run policy engines in a more active mode, so you can block things that do not meet a certain level of pre-defined requirements.
A policy engine in the Kubernetes context focuses on application configuration. It can do more than that, but that is the key use case. With the advent of infrastructure as code and developers having much more influence over operations, configuration has become a core part of the development process, especially when it comes to getting applications to run on Kubernetes. A policy engine operates on a configuration manifest, whether it is written in YAML, Helm, or Terraform.
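For example, a policy engine might evaluate a plain Kubernetes Deployment manifest like the hypothetical one below; several common checks would flag the mutable image tag and the missing resource requests and limits.

```yaml
# A minimal, hypothetical Deployment manifest of the kind a policy engine evaluates.
# Common policies would flag the ":latest" tag and the missing resource requests/limits.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: registry.example.com/example-api:latest  # mutable tag; many policies warn on this
          ports:
            - containerPort: 8080
          # no resources.requests/limits and no probes -- typical findings for a policy engine
```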
Organizations are leveraging cloud and containers so that they can get to market and ship apps and services faster; Kubernetes makes it easy to run and manage containers in the cloud. A policy engine is important because it makes it much easier to manage how you are deploying to Kubernetes and to enforce policies consistently.
The teams deploying to Kubernetes are typically application teams and development teams whose primary job is to write code and features. A policy engine helps you ensure that those teams can ship reliably and consistently. It can put a feedback loop in place that tells your developers how to improve their application configuration. It can also identify a problem with your current configuration that could lead to a negative business impact, such as potential downtime. For example, information from a policy engine might indicate the need for a liveness or readiness probe or flag a compliance issue if you don't configure Kubernetes in a certain way.
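As a sketch of what that feedback points toward, the container snippet below (with illustrative image, paths, and ports) adds the liveness and readiness probes a policy engine typically checks for:

```yaml
# Illustrative container spec with the probes a policy engine typically checks for.
containers:
  - name: api
    image: registry.example.com/example-api:v1.2.3
    ports:
      - containerPort: 8080
    livenessProbe:            # restarts the container if it stops responding
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:           # keeps traffic away until the app is ready to serve
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```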
Most importantly, a policy engine can support the goal of moving fast and shipping applications faster by ensuring that your Kubernetes environment is not fragmented, with every application configured differently in production. It empowers your teams to get their apps running reliably without introducing unnecessary risk.
Kubernetes is a complex ecosystem and there is a lot to learn. You cannot expect every developer to be a K8s expert or every development team to have a Kubernetes expert on it. As you move past deployments with just one or two clusters, you need guardrails in place to help development teams make sure they are not shipping code that leads to unreliable applications, insecure deployments, or cost overruns.
Companies moving past the proof of concept stage with Kubernetes and expanding to a multi-tenant Kubernetes strategy may be moving to an environment where multiple apps or multiple teams deploy on a single cluster. Making sure that the shared resources in that cluster are not negatively impacted when individual app teams configure their workloads requires governance. You need to make sure that you have policies in place that are enforced so that the cluster operates normally and one team is not impacting another.
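One concrete form such guardrails can take is a per-namespace ResourceQuota and LimitRange that cap what any single team can consume on a shared cluster; the namespace name and numbers below are placeholders:

```yaml
# Hypothetical per-team guardrails on a shared cluster: a quota caps total consumption,
# and a LimitRange supplies defaults for workloads that omit requests and limits.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-defaults
  namespace: team-a
spec:
  limits:
    - type: Container
      default:            # applied when a container sets no limits
        cpu: 500m
        memory: 512Mi
      defaultRequest:     # applied when a container sets no requests
        cpu: 100m
        memory: 128Mi
```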
As companies pursue a multi-tenant strategy, you may also find that individual teams are at different levels of Kubernetes maturity. One team might be following practices that are different from another team's. Sometimes that is because configurations are being copied and pasted from team to team, which can propagate mistakes across different workloads. A policy engine can help you prevent those issues from the pull request phase all the way into production.
The most important thing is to find a policy engine that can grow with you as your organization matures its Kubernetes deployment. Here are three basic requirements you should keep in mind:
It must be easy to adopt, especially at the beginning.
Look for an engine that has an easy syntax to work with or some out-of-the-box policies that you can implement immediately.
Look for a policy engine that supports custom policies; you do not want to outgrow your policy engine in six months.
There are many types of policy engines, and no single one is perfect for every organization.
There is a lot that goes into choosing a good policy engine and designing a strategy for deploying policies across your organization. The first thing to consider is what you want to evaluate for. Are there security issues, reliability issues, consistency issues, cost issues? Do you need to be SOC 2 compliant or PCI DSS compliant? You might have specific policies for those use cases.
Look at which policy engines come with the policies you need already built in. Which ones will support the policies you want to apply? Once you have a good sense of the policies you want to apply, figuring out where and when they should apply is particularly important.
Do you have policies that need to be applied in production but not in your development and staging environments?
Do you want to apply these policies on infrastructure as code, but not to enforce them in every single production environment?
Do the policies need to apply to system level workloads versus application workloads?
Do you have policies that only apply to groups of resources? For example, app teams versus cluster add-ons.
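To make these scoping questions concrete, here is a minimal sketch of how an engine can express them. It uses Kyverno, one of the engines discussed later in this article; the namespace patterns and the check itself are illustrative:

```yaml
# Illustrative Kyverno policy scoped to production namespaces only,
# excluding system-level workloads in kube-system.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Enforce   # switch to Audit to report without blocking
  rules:
    - name: check-container-limits
      match:
        any:
          - resources:
              kinds:
                - Deployment
              namespaces:
                - "prod-*"           # placeholder pattern for production namespaces
      exclude:
        any:
          - resources:
              namespaces:
                - kube-system        # system-level workloads are out of scope
      validate:
        message: "CPU and memory limits are required in production."
        pattern:
          spec:
            template:
              spec:
                containers:
                  - resources:
                      limits:
                        cpu: "?*"
                        memory: "?*"
```

The same policy could run in Audit mode in staging and Enforce mode in production, which is one way to answer the environment question above.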
You also need to consider what you need in terms of integration points. Integration points give developers and DevOps engineers feedback on policy issues as early as possible in the development process. A great policy engine solution will integrate as early as the pull request and all the way through to admission control and runtime. You can even use a policy engine to rewrite files, mutating one configuration into another.
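As one sketch of a pull request integration, the workflow below runs the Polaris CLI (one of the engines discussed below) against the manifests in a repository; the install step is a placeholder, and the exact flags should be treated as illustrative of Polaris's audit options.

```yaml
# Hypothetical CI check: fail the pull request when manifests violate danger-level policies.
name: policy-check
on: [pull_request]
jobs:
  polaris-audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Polaris CLI
        # Placeholder: download the polaris binary from the project's releases page.
        run: echo "install the polaris CLI here"
      - name: Audit manifests in the repo
        run: polaris audit --audit-path ./manifests --set-exit-code-on-danger
```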
Finally, you want to think about automation. What do you want to happen when a policy is triggered? You may want to send an alert to Slack, open a Jira ticket, or page someone through PagerDuty if it is a critical vulnerability. Integration points and automation for your policy engine make it easy to reach engineers where they are already paying attention, so they can see when something is not working and take action.
There are a few well-known open source policy engines available, such as Polaris, OPA, and Kyverno. All three projects make policy more accessible to Kubernetes users. They are all well-maintained projects, and which one you choose depends on the use case and your organization.
With any policy engine you are considering, make sure you can write custom policies. Look for one that has a public policy library, because you do not want to write every single policy from scratch. Every major engine either has policies built in or has a community that has published policies to GitHub, so you can pull them into your environment.
Look for a policy engine that supports admission control. You do not want an engine that only shows what is and is not conforming to policies; you want one that can block resources at admission time. A mutating admission controller, which can modify resources at admission time so they conform to your policies, is a good additional feature to look for.
Polaris makes it easy to understand the issues in your cluster. It provides a dashboard that multiple teams can review to see open issues. It also allows you to shift as far left as possible and automate fixes to the underlying YAML. Very few people realize that a handful of common policies and best practices need to be enforced across every organization, tuned to the needs of the business. Polaris preloads baseline policies for the best practices that teams most often get wrong, which results in Kubernetes deployments that are reliable, secure, and cost efficient. Polaris is part of the Fairwinds Insights platform, which has a free tier available for smaller clusters.
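As a sketch of how those baseline policies can be tuned, a Polaris configuration file lets you raise or lower the severity of individual checks; the check names below come from Polaris's built-in set, and the severity choices are illustrative:

```yaml
# Illustrative Polaris configuration: tune the severity of built-in checks.
# Severities are ignore, warning, or danger.
checks:
  cpuRequestsMissing: danger        # block workloads with no CPU requests
  memoryRequestsMissing: danger
  livenessProbeMissing: warning     # surface, but do not block
  readinessProbeMissing: warning
  runAsRootAllowed: danger
  tagNotSpecified: danger           # images must not rely on an implicit or latest tag
```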
Open Policy Agent (OPA) is a broad policy engine that is not specific to Kubernetes; it can be used with Kubernetes, Envoy, Terraform, Kafka, SQL, and Linux. It offers policy-based control for cloud native environments. Policies for OPA are written in its declarative policy language, Rego. OPA is a graduated Cloud Native Computing Foundation (CNCF) project. OPA is included in the Fairwinds Insights platform because it is a standard that people like to leverage.
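When OPA is used for Kubernetes admission control, commonly through the OPA Gatekeeper project, Rego policies are packaged inside a ConstraintTemplate. The sketch below is adapted from the widely cited required-labels example and is illustrative rather than production-ready:

```yaml
# Illustrative Gatekeeper ConstraintTemplate: a Rego rule requiring specified labels.
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        # Report a violation when any required label is missing from the object.
        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("missing required labels: %v", [missing])
        }
```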
Kyverno is specific to Kubernetes and is a CNCF incubating project. In Kyverno, policies are managed as Kubernetes resources, which means you can use tools such as kubectl, git, and kustomize to manage them.
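Because Kyverno policies are ordinary Kubernetes resources, a mutation policy like the hypothetical one below is applied the same way as any other manifest, for example with kubectl apply; it adds a default team label to Pods that omit one, which is the kind of admission-time mutation described earlier:

```yaml
# Hypothetical Kyverno mutation policy: add a default label to Pods that omit it.
# Applied like any other resource, e.g. `kubectl apply -f add-default-team-label.yaml`.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-team-label
spec:
  rules:
    - name: default-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              +(team): unassigned   # the "+( )" anchor adds the label only if it is not already set
```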
Organizations need a policy engine that is easy to use, that they can get up and running quickly, and that can be applied at various stages of Kubernetes maturity. When choosing a policy engine for your organization, look for one that can run in a shift-left context, so you can scan for policy violations at the earliest phases of development. Policy engines can help your organization apply and conform to Kubernetes best practices, so your deployments are secure, cost efficient, and reliable.
Watch the webinar “Best Open Source Policy Engines for Kubernetes” to learn more about different policy engines, the policies you may want to start with, and much more.