Kubernetes is more than just a technical tool; it's a critical element of modern IT infrastructure that supports containerized applications at scale. However, who "owns" Kubernetes within an organization isn’t always a simple question. Responsibility for different aspects of Kubernetes is typically distributed among several teams, depending on the size of the organization. This shared ownership helps ensure that Kubernetes clusters operate efficiently and securely and stay aligned with business objectives, as long as those teams are clear about their roles and responsibilities and communicate well. So what are the specific teams involved, what are their roles, and what skills do they need to manage Kubernetes environments effectively?
Kubernetes is famously complex, and as more organizations deploy production workloads on K8s, the applications and services delivered on it become vital to business success. Here are the key teams involved in managing Kubernetes and the essential contributions they make to keep it running smoothly:
Kubernetes is a critical way for DevOps teams to ensure agility, scalability, and resilience in their operations. Its automation capabilities streamline container orchestration and increase productivity. By integrating Kubernetes into their CI/CD pipelines, DevOps teams can automate deployments and monitor ongoing operations more easily, in part by using infrastructure as code to control environments, configurations, and application deployments.
DevOps teams typically have a high degree of proficiency in automation tools, such as CircleCI, Jenkins, Terraform, and Argo CD; scripting languages; and Kubernetes-specific configuration management tools, including Helm and Kustomize.
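As a rough illustration of how these tools fit together, a Kustomize overlay can layer environment-specific settings on top of shared base manifests checked into the repo. This is a minimal sketch; the directory layout, namespace, and image tag below are hypothetical, not prescriptive.

```yaml
# overlays/production/kustomization.yaml -- a minimal Kustomize sketch;
# the base path, namespace, labels, and image tag are illustrative placeholders
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Reuse the shared base manifests kept elsewhere in the repo
resources:
  - ../../base

# Apply production-only settings on top of the base
namespace: production
commonLabels:
  environment: production

# Pin the image tag promoted by the CI/CD pipeline
images:
  - name: example-app
    newTag: "1.4.2"
```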
SREs are responsible for ensuring that the Kubernetes infrastructure is highly available, reliable, scalable, and performant. In practice, that means monitoring, performance tuning, and incident response.
SREs monitor resource utilization to identify bottlenecks and fine-tune resource allocation, and they analyze application performance metrics and work with DevOps teams to improve application performance. SREs also establish alerting systems so they are aware of any issues with Kubernetes clusters or the applications running on them and can diagnose and resolve problems quickly.
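What that alerting looks like depends on the monitoring stack. If the cluster runs the Prometheus Operator with kube-state-metrics (an assumption; many stacks exist), an SRE might define a rule along these lines to catch pods stuck in a crash loop. The thresholds and names here are illustrative.

```yaml
# A sketch of a Prometheus Operator alert rule; assumes the Prometheus Operator
# and kube-state-metrics are installed, and the threshold values are illustrative
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: workload-health
  namespace: monitoring
spec:
  groups:
    - name: workload-health
      rules:
        - alert: PodCrashLooping
          # Fire when a container has restarted repeatedly over the last 15 minutes
          expr: increase(kube_pod_container_status_restarts_total[15m]) > 3
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is restarting frequently"
```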
Security teams face unique considerations in Kubernetes environments because of their dynamic nature and distributed architecture. Kubernetes is not secure by default, so traditional security teams continue to play a critical role in establishing security best practices, such as defining and managing access controls, implementing secure container image practices, setting up network policies, and ensuring that a vulnerability scanning tool is in place. They must understand the core security concepts of Kubernetes, including container image security, pod security policies, network policies, and Kubernetes API security.
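As one small example of access control, a namespaced Role and RoleBinding can limit a team to read-only access on workloads. This is a sketch only; the namespace and group name are hypothetical placeholders for whatever your identity provider supplies.

```yaml
# A minimal RBAC sketch granting read-only access to workloads in one namespace;
# the namespace and group name are placeholders for illustration
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workload-viewer
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments", "replicasets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-viewers
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers   # hypothetical identity-provider group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: workload-viewer
  apiGroup: rbac.authorization.k8s.io
```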
Security teams can also conduct threat modeling exercises to identify potential vulnerabilities in the K8s environment and work with DevOps and SRE teams to assess risks and implement security controls to mitigate them. They’ll also need to understand compliance requirements for containerized environments and implement controls to meet those standards and pass security audits.
Kubernetes offers built-in networking functionality, but it still requires configuration. The networking team can collaborate with the DevOps and SRE teams to design a secure and efficient network topology for the Kubernetes cluster. This might involve defining pod network ranges, service types (ClusterIP, NodePort, LoadBalancer), and ingress configurations for external traffic access.
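To make those choices concrete, the sketch below exposes a hypothetical web application internally with a ClusterIP Service and externally through an Ingress. The hostname, ports, and ingress class are assumptions and depend on what controller your cluster runs.

```yaml
# Internal Service plus external Ingress for a hypothetical "web" app;
# the hostname, ingress class, and ports are illustrative
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx   # assumes an NGINX ingress controller is installed
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```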
Kubernetes also may operate on top of existing network infrastructure, so the networking team needs to ensure proper communication between the cluster and the physical network. This may include setting up network policies, firewall rules, and virtual network overlays (CNI plugins) to manage traffic flow within the cluster and to external resources.

The networking team can help ensure that all Kubernetes networking aspects, from ingress controllers to service meshes, are optimally configured. To do this, they’ll need an understanding of Kubernetes networking architecture, including DNS for services, load balancing, and network troubleshooting. Depending on how your organization is structured, the networking team may define network policies to restrict pod communication, implement network segmentation to isolate workloads, and secure ingress configurations to control external access to the cluster, or it may work with the security team to ensure these best practices are implemented. The team may also help with monitoring and performance optimization, making recommendations for network configuration adjustments or changes in resource allocation.
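A network policy of the kind mentioned above might, for example, restrict pod-to-pod traffic so that only frontend pods can reach the backend. This is a sketch: the labels and port are hypothetical, and enforcement depends on running a CNI plugin that supports network policies (such as Calico or Cilium).

```yaml
# A NetworkPolicy sketch that only allows frontend pods to reach backend pods
# on port 8080; labels are placeholders and a policy-enforcing CNI is assumed
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend-only
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```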
When deploying apps and services in Kubernetes environments, developers must design and build applications for that environment, focusing on aspects such as microservices architecture and containers. Devs will need to build containerized applications, writing Dockerfiles that define the environment and dependencies their application needs to run in a container. They’ll also need a basic understanding of core Kubernetes concepts, including pods, deployments, services, and ingress for exposing applications externally.
They’ll also need to write YAML manifests that describe the deployments, services, and other resources required to run their application within the cluster; Kubernetes uses these manifests to configure and manage the application. Dev teams also collaborate with DevOps to integrate code changes into the CI/CD pipeline. Developers can debug their applications using logs and K8s tools (such as kubectl) to identify and fix issues, and they can implement logging strategies to help SREs and other teams troubleshoot. Dev teams should always be aware of security best practices for containerized applications, including following secure coding guidelines, minimizing privileges within containers, and using trusted container image registries.
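A minimal Deployment manifest for a hypothetical service might look like the sketch below. The image name, replica count, and resource values are illustrative only, and real applications usually pair this with a Service and other resources.

```yaml
# A minimal Deployment sketch for a hypothetical application;
# the image, replica count, and resource requests/limits are illustrative only
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
  labels:
    app: orders-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          image: registry.example.com/orders-api:1.0.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```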
Kubernetes is great at managing containerized applications, but in ephemeral environments, persistent data requires special consideration. Data management teams collaborate with DevOps and SRE teams to choose appropriate storage solutions for different data needs, such as selecting persistent storage options, including hostPath, network-attached storage (NAS), or cloud-based storage, depending on the application's requirements for performance, scalability, and cost. Data management teams may also help configure storage classes within Kubernetes to define different storage options for devs.
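For example, a storage class like the sketch below could expose a faster, SSD-backed tier to developers. The provisioner and parameters here assume GKE's persistent disk CSI driver and will differ on other platforms.

```yaml
# A StorageClass sketch exposing an SSD-backed tier; the provisioner and
# parameters assume the GKE persistent disk CSI driver and vary by platform
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```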
This team needs to understand persistent volume management, because Kubernetes uses Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) to manage persistent data. Data management teams may provision PVs based on storage class definitions, monitor their health, and work with developers to ensure that applications are accessing persistent storage using PVCs.
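A developer would then request storage from that class with a PVC along these lines; the claim name, size, and the hypothetical "fast-ssd" class from the earlier sketch are illustrative.

```yaml
# A PersistentVolumeClaim sketch requesting storage from the hypothetical
# "fast-ssd" class defined above; size and access mode are illustrative
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi
```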
In addition, a lot of data teams use tools outside of the cluster (such as Google BigQuery or Dataproc) for data analytics and processing. So they'll also be responsible for configuring and running pipelines between Kubernetes workloads and these other services.
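As a rough sketch of such a pipeline, a scheduled job inside the cluster might push data to an external warehouse on a nightly cadence. The container image, schedule, and credentials secret below are entirely hypothetical placeholders.

```yaml
# A CronJob sketch for a nightly data export to an external warehouse;
# the image and secret name are hypothetical placeholders, not real artifacts
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-warehouse-export
spec:
  schedule: "0 2 * * *"          # run every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: export
              image: registry.example.com/warehouse-export:1.0.0
              envFrom:
                - secretRef:
                    name: warehouse-credentials   # hypothetical secret holding service credentials
```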
This team will also implement data backup and recovery strategies for any persistent data stored in the K8s cluster while aligning with data security best practices, such as encrypting data at rest and in transit, restricting access to PVs based on role-based access control (RBAC) principles, and implementing auditing mechanisms to track data access.
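Backups often build on the CSI snapshot API when the storage driver supports it. A one-off snapshot request might look like this sketch, assuming the external snapshotter CRDs are installed; the snapshot class and PVC name are placeholders.

```yaml
# A VolumeSnapshot sketch; assumes the CSI driver supports snapshots and the
# snapshot CRDs are installed -- the class and PVC names are placeholders
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: orders-db-data-backup
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: orders-db-data
```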
Understanding the diverse roles involved in Kubernetes management can help organizations assess and improve collaboration and efficiency across these teams. Here are a few other ways organizations can make sure they’re running K8s as efficiently as possible:
Owning Kubernetes in an organization means more than just adopting the technology. To make the most of all that K8s offers, you need multiple teams to contribute their specialized skills and knowledge. A collaborative environment enables organizations to align Kubernetes operations with strategic business outcomes.
Need help getting started with, managing, or maintaining your Kubernetes clusters? Fairwinds provides Managed Kubernetes-as-a-Service, a people-led service that allows you to accelerate your time to market and deploy applications on production-grade Kubernetes.