As Kubernetes continues to mature, so do the tools we use to manage it. In this blog post, we'll explore the process of upgrading from Kubernetes Operations (kOps) to Amazon Elastic Kubernetes Service (EKS), focusing on the technical aspects and considerations involved.
Currently, many organizations are running Kubernetes on Kubernetes Operations (more familiarly called kOps). However, Kubernetes itself is continually being updated. One example of a recent significant change is the complete removal of Canal CNI (Container Network Interface) support in Kubernetes 1.28. This change presents a challenge because it requires teams either to perform a live in-place migration to a new CNI or to build new clusters with a replacement CNI and then move their workloads over. As teams approach this upgrade, it's important to consider the different options, plan for a smooth transition, and choose a platform that simplifies future upgrades.
kOps was a popular choice for many organizations that were early Kubernetes adopters, because it made initial cluster setup and ongoing cluster management straightforward. Since kOps was first deployed, however, many managed Kubernetes services have emerged that make it easier to get the most out of Kubernetes without all the heavy lifting.
While the CNI change is the immediate catalyst for this transition for many organizations, moving to EKS offers several benefits, including faster and easier cluster upgrades, a control plane managed and secured by AWS, and workload identity management via IAM Roles for Service Accounts (IRSA).
Some organizations may be concerned that changing CNIs will be disruptive, with a higher mean time to resolution (MTTR) for any issues that come up. In this case, however, it's safe to build new clusters with the replacement CNI, because you can keep everything running smoothly in kOps until you're sure the new EKS clusters (and CNI) are working as intended.
Here's a more detailed look at the migration process from kOps to EKS. We recommend using Infrastructure as Code (IaC), such as Terraform, to ensure consistent configuration across environments. This also allows you to catch anything that drifts from your IaC due to manual changes made via the user interface (UI) or otherwise. The guide below assumes you are using the AWS EKS Terraform module for cluster creation.
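As a point of reference, here is a minimal, hedged sketch of calling the community EKS module (terraform-aws-modules/eks/aws); the version pin and variable names are assumptions to adapt to your environment.

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0" # assumption: pin to the major version you have validated

  cluster_name    = "my-eks-cluster"
  cluster_version = "1.31"

  # assumption: the VPC and subnets are defined elsewhere in your IaC
  vpc_id     = var.vpc_id
  subnet_ids = var.subnet_ids
}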
The Terraform sample code block below shows the kind of resource the AWS EKS module generates; it would create an EKS cluster named my-eks-cluster running Kubernetes version 1.31, associated with a specific IAM role, deployed into the specified subnets, and with comprehensive control plane logging enabled.
resource "aws_eks_cluster" "main" {
name = "my-eks-cluster"
role_arn = aws_iam_role.eks_cluster.arn
version = "1.31"
…
vpc_config {
subnet_ids = var.subnet_ids
}
…
# Enable control plane logging
enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
}
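EKS delivers these control plane logs to CloudWatch under /aws/eks/&lt;cluster-name&gt;/cluster. If you want to control log retention, one hedged option is to pre-create that log group yourself; the 30-day retention below is purely an assumption.

# EKS writes control plane logs to this well-known log group name.
# Pre-creating it lets Terraform manage retention; 30 days is an
# assumption, not a recommendation.
resource "aws_cloudwatch_log_group" "eks_control_plane" {
  name              = "/aws/eks/my-eks-cluster/cluster"
  retention_in_days = 30
}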
Define node groups with appropriate instance types and scaling configurations.
The Terraform sample code block below shows the resource the AWS EKS module generates to create an EKS managed node group named default-01 in our my-eks-cluster Kubernetes cluster, with minimum size, maximum size, and instance types defined. You can also configure other details, such as attached disks and image type; see the sketch after this block.
resource "aws_eks_node_group" "example" {
cluster_name = aws_eks_cluster.main.name
node_group_name = "default-01"
…
scaling_config {
desired_size = 1
max_size = 10
min_size = 1
}
…
update_config {
max_unavailable = 1
}
instance_types = [“m5a.large”]
}
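For those optional settings, a hedged sketch follows; ami_type and disk_size are real aws_eks_node_group arguments, but the role, subnets, and values shown are placeholders.

# A sketch of a node group that also sets image type and disk size.
# node_role_arn and subnet_ids are placeholders for resources defined
# elsewhere in your IaC.
resource "aws_eks_node_group" "with_disk" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "default-02"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.subnet_ids

  ami_type  = "AL2023_x86_64_STANDARD" # Amazon Linux 2023 node image
  disk_size = 100                      # GiB; value is illustrative

  scaling_config {
    desired_size = 1
    max_size     = 10
    min_size     = 1
  }

  instance_types = ["m5a.large"]
}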
Deploy a compatible CNI. The AWS VPC CNI is the default, but you may want to consider alternatives, such as Calico, depending on your organization's requirements.
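If you do opt for Calico, one hedged approach is to install it with the Terraform Helm provider; the chart coordinates below follow Calico's published Helm instructions, but verify them against the Calico version you intend to run.

# Installs the Calico operator chart; assumes the helm provider is
# already configured to talk to the new EKS cluster.
resource "helm_release" "calico" {
  name             = "calico"
  repository       = "https://docs.tigera.io/calico/charts"
  chart            = "tigera-operator"
  namespace        = "tigera-operator"
  create_namespace = true
}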
The Terraform sample code block below shows the resource that deploys the AWS VPC CNI plugin, which runs on every node in the cluster. The plugin is responsible for IP address management and network interface configuration for pods in the cluster.
resource "aws_eks_addon" "vpc_cni" {
cluster_name = aws_eks_cluster.main.name
addon_name = "vpc-cni"
addon_version = "v1.19.2-eksbuild.1"
resolve_conflicts_on_update = "PRESERVE"
}
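The addon resource also accepts configuration overrides via configuration_values. As a hedged variant of the block above, this enables prefix delegation to increase the number of pods per node; whether you want that depends on your workload density, which we're assuming here.

# Same addon as above, extended with a configuration override that
# turns on prefix delegation in the VPC CNI.
resource "aws_eks_addon" "vpc_cni" {
  cluster_name                = aws_eks_cluster.main.name
  addon_name                  = "vpc-cni"
  addon_version               = "v1.19.2-eksbuild.1"
  resolve_conflicts_on_update = "PRESERVE"

  configuration_values = jsonencode({
    env = {
      ENABLE_PREFIX_DELEGATION = "true"
    }
  })
}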
Configure IRSA for workloads that require AWS IAM permissions.
This sample annotation associates the ServiceAccount with an AWS IAM role. When a pod uses this ServiceAccount, it can assume the specified IAM role and inherit its permissions. This allows pods to securely access AWS services without needing to manage AWS credentials within the pod or use instance-level IAM roles. It's a more granular and secure way to manage permissions for containerized applications running in EKS.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/IAM_ROLE_NAME
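On the AWS side, the referenced IAM role must trust the cluster's OIDC identity provider. A hedged Terraform sketch follows; it assumes an aws_iam_openid_connect_provider.eks resource already exists for the cluster, and the default namespace and names shown are illustrative.

# Trust policy that lets only the my-service-account ServiceAccount in
# the default namespace assume this role via the cluster's OIDC provider.
data "aws_iam_policy_document" "irsa_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.eks.arn] # assumption: defined elsewhere
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_eks_cluster.main.identity[0].oidc[0].issuer, "https://", "")}:sub"
      values   = ["system:serviceaccount:default:my-service-account"]
    }
  }
}

resource "aws_iam_role" "irsa" {
  name               = "my-service-account-role" # illustrative name
  assume_role_policy = data.aws_iam_policy_document.irsa_trust.json
}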
Once all workloads are successfully migrated and validated, plan for the decommissioning of the old kOps cluster.
Migrating from kOps to EKS is not an insignificant undertaking, but it offers numerous benefits, including faster and easier cluster upgrades, increased security (the EKS control plane is managed by AWS and is not directly accessible to users), and workload identity management via IRSA. By following this technical guide and carefully planning each step, organizations can ensure a smooth transition from kOps to EKS, setting themselves up for improved Kubernetes operations over the long term.
If you’d like to move from kOps to EKS, but don’t have bandwidth for the project, Fairwinds can help. Fairwinds has extensive experience with this migration, and can make the move simple for your team.