Securing Kubernetes workloads is a critical aspect of improving your overall cluster security. The goal is to ensure that your containers run with minimal privileges: just enough to take the actions necessary to function, but no more. The most common container privilege issues include allowing privilege escalation, running containers as root, and not using read-only file systems whenever possible. In this post, I’m going to talk with you about the Insights Action Item “Privilege escalation should not be allowed.” We'll discuss what that means and the steps you need to take to mitigate this Kubernetes security misconfiguration.
If you’re working on hardening your Kubernetes environment, you need to spend some time investigating which containers allow privilege escalation and then make changes to prevent it so you can limit the impact of a container compromise. I’m going to use Fairwinds Insights, which allows you to see exactly which workloads in your clusters have privilege escalation allowed. If you want to try it yourself, you can walk through it in our sandbox environment or sign up for the free tier, which is available for environments up to 20 nodes, two clusters, and one repo.
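If you prefer to start from the command line, you can get a rough list of offending containers straight from the Kubernetes API. This is a sketch that assumes kubectl and jq are installed and that you have read access to the cluster:

```shell
# Print namespace/pod/container for every container that does not
# explicitly set allowPrivilegeEscalation: false.
kubectl get pods --all-namespaces -o json | jq -r '
  .items[]
  | .metadata.namespace as $ns
  | .metadata.name as $pod
  | .spec.containers[]
  | select(.securityContext.allowPrivilegeEscalation != false)
  | "\($ns)/\($pod)/\(.name)"'
```

The jq filter flags containers where the field is missing as well as those where it is explicitly set to true, since an unset field still allows escalation by default.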
In the Insights user interface, I selected two containers running in my test cluster that fall under this action item. For you, that will look something like this:
As you can see, Insights identifies this as a High severity Action Item in the Security category. You can also see which cluster it was identified in, as well as the name, namespace, and container.
If you click on one of the items listed under Title, a window opens to provide more information. You can see that there is a detailed description of the Action Item and a link to the Kubernetes docs for reference.
Below that, Insights displays the remediation steps and examples.
The description for this Action Item explains what the allowPrivilegeEscalation setting does: it controls whether a process can gain more privileges than its parent process. In this case, it controls whether the container can spawn new processes that have more privileges than the container itself. The mitigation steps for this particular Action Item are fairly straightforward.
As you can see in the example above, we need to add an allowPrivilegeEscalation: false entry to our security context. In your own environment, you will need to edit your source code using whatever methods you already use to populate and create the manifests that you deploy to your clusters. In my example video I used a test environment, so my YAML file is local.
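If your manifests live in a repo, a quick way to spot files that never set this field at all is a recursive grep. This is only a rough pre-deployment filter (the manifests/ directory here is an assumption; point it at wherever your YAML lives), and a file can mention the field without setting it to false, so review each hit:

```shell
# List manifest files that never mention allowPrivilegeEscalation.
# grep -L prints only the files WITHOUT a match; -r recurses.
grep -rL "allowPrivilegeEscalation" manifests/
```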
To make the change, I need to edit the awesome-pod YAML on my local machine. Mitigating this security vulnerability is as simple as adding a securityContext map and setting the key allowPrivilegeEscalation to false. It should look something like this:
apiVersion: v1
kind: Pod
metadata:
  name: awesome-pod
spec:
  containers:
  - name: main
    image: alpine
    command: ["/bin/sleep", "999999"]
    securityContext:
      allowPrivilegeEscalation: false
Next, I need to save the file. In the video, I run k apply -f awesome-pod.yaml to deploy the pod. (I have kubectl aliased to k on my system.)
Make sure you test your change to ensure nothing breaks, because your workload may currently rely on its ability to escalate privileges. If you take a look at the awesome-pod in my video, you can see that it is now running with a securityContext, and one of the values in that security context is allowPrivilegeEscalation: false.
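If you are not following along with the video, you can confirm the same thing from the command line. This assumes access to the cluster; awesome-pod is the example pod from this post:

```shell
# Read back the live object and print the field we just set.
kubectl get pod awesome-pod \
  -o jsonpath='{.spec.containers[0].securityContext.allowPrivilegeEscalation}'
```

If the change applied cleanly, this prints false.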
You can also check Fairwinds Insights again: if you search for “Privilege escalation should not be allowed,” you will see that Insights no longer identifies this as an issue in that workload configuration.
It’s easy for security misconfigurations and vulnerabilities to slip into production environments in Kubernetes. Unfortunately, even minor misconfigurations can create significant security holes if you don’t find and address them. Because these Kubernetes configurations aren’t enabled by default, it’s important for you to explicitly set them. Here’s a full list of K8s configurations that are important to check to address security concerns.
According to the 2023 Kubernetes Benchmark Report, the number of workloads open to privilege escalation has increased over the past year. In 2021, 42% of organizations locked down the majority of workloads; in 2022, that number dropped to 10%. To avoid these types of misconfigurations, it’s important to regularly scan Kubernetes clusters to make sure they are properly configured. Using an Admission Controller, such as Polaris, can help developers prevent misconfigurations, particularly when it is part of the CI/CD process and scans containers for misconfigurations before they reach production.
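As a sketch of what that CI step can look like, the open source Polaris CLI can audit a directory of manifests and fail the build when it finds dangerous settings. The manifests/ path is an example, and you should check the Polaris documentation for the flags your version supports:

```shell
# Audit local manifests and exit non-zero if any danger-level check
# (including the privilege escalation check) fails, failing the CI job.
polaris audit --audit-path ./manifests --set-exit-code-on-danger
```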
Fairwinds Insights operationalizes Polaris checks by providing the findings and keeping a historical record of the results across all your clusters. It also offers clear, actionable remediation guidance. Using the Insights platform, you can track and prioritize security, efficiency, and reliability issues, collaborate across teams, and apply Kubernetes best practices automatically as applications move from development to production.
Watch this short video to learn how you can check your Kubernetes configuration for privilege escalation.