It’s hard to believe, but Kubernetes, our favorite container orchestration tool, turned ten this year! It feels like just yesterday that it was an internal project at Google spinning up its first pod, and now it's at the heart of cloud native architecture. In the spirit of our many years managing, optimizing, and deploying Kubernetes for our clients, let's celebrate the 10th anniversary the best way we know how: with a song (sung to the tune of "Come On Eileen" by Kevin Rowland & Dexys Midnight Runners).
We wrote and performed it together! I'm singing and playing electronic drums, Robert Brennan (our former VP of Product Development) is on guitar, James DeSouza (Senior Software Engineer) is on recorder, and Brian Bensky (Site Reliability Engineer) is on bass.
Picture this: it's the middle of the night, and your SRE team is snoozing peacefully. Suddenly, the PagerDuty alert goes off, interrupting a good night’s sleep yet again. Poor old SREs! They wake up, stumble over to their desks, and get to work troubleshooting, still half asleep despite a mug of strong coffee. Meanwhile, it feels like half of their containers have decided to play dead in dramatic fashion, sending applications into a tailspin and leaving end users frustrated and confused. In the post-mortem (‘cause who doesn’t love a little root cause analysis?), everyone's trying to determine what's to blame. If you’ve ever been on-call, this probably sounds familiar because, let's be honest, who hasn’t had a bad tech day (or night)?
Trying to find a way to make it all easier, someone decides to set up EKS. "Oh, it'll be a breeze," they say. "It's just Kubernetes," they say. But instead it takes everything they've got just to figure out how it works! EKS is full of features and integrates seamlessly with the Amazon ecosystem, but there are still a lot of knobs to turn to get a cluster to production-ready status. Figuring out Amazon EKS may make your head hurt, but it’s worth it for that scalability and availability!
Kubernetes promised us a dream. Containers that heal themselves? Automation and scheduling so everything always runs on time? Sounds like a breeze! But let’s be real. Learning Kubernetes is like learning a whole new tech language, one where "pod" doesn’t mean a group of whales (despite the Docker logo) and "nodes" take on a whole new meaning.
CTOs and their teams were beaten down and buried under a mountain of technical debt, resigned to the stress and instability of trying to build in-house automations to autoscale, manage failover, and enable load balancing. With the advent of Kubernetes, they embraced the chaos of ephemeral environments, and we all came out on the other side with K8s clusters that could run forever, thanks to our orchestration skills and capabilities.
Kubernetes, now you're full-grown, and you've shown us the sheer power of proper scheduling and controllers that handle everything. You've grown up so fast, running our workloads like a pro, surrounded by an awesome cloud native community. But everything continues to change, so keep an eye on your APIs and add-ons when you upgrade because it’s all just so much to keep up with!
Kubernetes, it’s a dream, handling everything from controllers to nodes, making it all seem like we’re just deploying seamlessly. Containers crash, but in pods, they come back, right? We run into trouble sometimes – containers in CrashLoopBackOff, resource contention – but you’ve made our lives easier, enabled us to deploy our workloads more smoothly, and made our failures... well, a lot more recoverable. Thanks for ten years of enabling us to orchestrate our containers! And here's to many more years of scaling, scheduling, and even some midnight PagerDuty alerts.
Happy 10th Anniversary, Kubernetes! We’re looking forward to the next decade of innovation, headaches, and a dash of orchestration magic.