With the adoption of containers, software packaging is increasingly shifting left, which means (depending on your organization) that developers are taking on responsibility for the containerization of applications. Developers may also be responsible for some parts of Kubernetes configuration. As that process shifts left, developers need support to make the right decisions for the organization in order to run Kubernetes securely and efficiently.
Many companies are adopting cloud native technologies to improve speed to market. For businesses seeking to compete in today's marketplace, it’s important to ship new features and meet customer needs where they are — and increasingly those needs are being met through software.
For all the benefits gained from cloud native technologies, moving to containers and Kubernetes doesn’t come without potential challenges. According to a recent Cloud Native Computing Foundation (CNCF) survey, there are three key challenges that typically emerge during this type of transformation.
Source: CNCF Survey 2020
Tied for first place are complexity and the cultural change involved in moving to cloud native technologies. These types of changes often mean changing the development process and potentially shifting some of that responsibility to different teams, forcing Dev engineers to learn new concepts and Ops engineers to adapt to an “everything as code” mindset.
The third challenge is related to security considerations with cloud native technologies. We're dealing with new concepts and technical considerations that change how you think about security, especially when you run containers and Kubernetes technology in the cloud, or if you’re using it in a multi-cloud or hybrid cloud scenario. The complexity around all of this causes security teams to take a step back to really understand the new threat landscapes with cloud native technology.
Security needs to be a partner with dev and DevOps, so security teams not only have to come up to speed on the new changes, they also have to get visibility into where those risks may be. New types of questions emerge around the container technology itself, such as understanding which known vulnerabilities (Common Vulnerabilities and Exposures, or CVEs) are present in those containers and understanding the ways Kubernetes can be configured to be insecure, unreliable, or inefficient.
Moving to Kubernetes and containers introduces a lot of new decision points; last year, an article highlighted that 69% of reported Kubernetes incidents were actually related to misconfigurations. To successfully deliver products to market, you need to have a collaborative environment to resolve misconfiguration issues quickly. Remember: everything in Kubernetes is configuration-driven and security is not built in by default.
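To make the “not secure by default” point concrete, consider the settings below — none of them are applied automatically, so each is a configuration decision someone on the team has to make. This is an illustrative sketch; the names and image are placeholders, not a prescribed baseline.

```yaml
# Hypothetical Pod spec showing security settings that Kubernetes
# does NOT apply by default — omitting them is a misconfiguration risk.
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # placeholder name
spec:
  containers:
    - name: app
      image: registry.example.com/example-app:1.0.0   # placeholder image
      securityContext:
        runAsNonRoot: true                # containers run as root unless told otherwise
        allowPrivilegeEscalation: false   # block setuid-style privilege gains
        readOnlyRootFilesystem: true      # prevent writes to the container filesystem
```

Each of these fields is opt-in, which is why policy checks (discussed below) are a common way to catch their absence before deployment.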
Organizational complexity is another important factor that comes into play. There are different personas involved along the way, and they each have different questions that have to be answered, so let's put ourselves in their shoes:
In these environments, you need to build processes and put guardrails in place in order to meet the needs of these different personas.
For all these teams, configurations are a consideration as they seek to build and deliver applications and services to market. What kind of technical implications impact security and efficiency for organizations moving to containers and Kubernetes? There are a few different layers in the stack where you need to look out for misconfigurations.
You can help prevent common misconfigurations from being deployed by using policy and governance. Implement policy to check for security misconfigurations, such as vulnerabilities in underlying Kubernetes clusters and add-ons. It’s important to scan and monitor the infrastructure constantly to find and patch new vulnerabilities as necessary. Policies and governance can also help you with cost optimization by ensuring the efficiency of your resource usage, for example, checking CPU and memory settings to make sure that your applications have enough compute resources, but aren’t consuming more than necessary.
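The CPU and memory checks mentioned above map directly to the requests and limits on each container. As a sketch (the values here are placeholders, not recommendations):

```yaml
# Illustrative container resource settings. Requests are what the
# scheduler reserves; limits are the ceiling at runtime.
resources:
  requests:
    cpu: 250m        # guaranteed CPU share for scheduling decisions
    memory: 256Mi    # guaranteed memory for scheduling decisions
  limits:
    cpu: 500m        # container is throttled above this
    memory: 512Mi    # container is OOM-killed above this
```

Requests set too high waste money on idle capacity; limits set too low cause throttling or OOM kills — which is why a policy that simply checks these fields are set (and sane) pays for itself quickly.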
When you create guardrails that prevent mistakes from being pushed to production, you can also give feedback at the right times to the developers and service owners who are making these decisions about configuration. A few examples of ways you can use policies to create guardrails include only allowing images from trusted repositories, ensuring CPU and memory requests are set, and requiring health probes. There are different ways to implement policy and governance and make your policies stick, and your choice may depend on the size of your organization, the maturity level of your Kubernetes environment, and other considerations. Regardless of how you proceed, you’ll need visibility across teams and clusters and a way to effectively and consistently manage policies in order to run Kubernetes securely and efficiently.
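One common way to encode guardrails like these is with a policy engine such as Kyverno (OPA Gatekeeper is a frequent alternative). The sketch below assumes Kyverno is installed in the cluster; the policy name and registry are hypothetical, and it covers two of the example guardrails — trusted image sources and required resource requests.

```yaml
# Hypothetical Kyverno ClusterPolicy enforcing two guardrails:
# images from a trusted registry, and CPU/memory requests set.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: baseline-guardrails        # placeholder name
spec:
  validationFailureAction: Enforce # reject non-compliant Pods at admission
  rules:
    - name: trusted-registry-and-requests
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must come from the trusted registry and set CPU/memory requests."
        pattern:
          spec:
            containers:
              - image: "registry.example.com/*"  # placeholder trusted registry
                resources:
                  requests:
                    cpu: "?*"      # any non-empty value
                    memory: "?*"
```

Running such a policy in audit mode first, then switching to enforcement, is a common way to introduce guardrails without blocking teams on day one.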