A long time ago, before Kubernetes, I was managing an early-stage security product built entirely in the cloud. We used a SaaS business model to differentiate ourselves from legacy, on-premises incumbents.
Today, many companies choose the cloud “by default” for their new products. But back then, it wasn’t an obvious choice. We chose the cloud based on the promise of lower upfront costs, increased security, and scalability, all critical to our product strategy. Our company was new to cloud technologies, so there was a lot of learning about best practices and patterns along the way.
The first year of the product was a fun ride. We had 10x’d our usage and found product/market fit within very large, global enterprise accounts. The SaaS value proposition was working, but our minimally viable infrastructure was hitting its limits. Some of the design decisions we made for our first 10 customers didn’t hold true for the next 100. We initially leveraged PaaS services and some home-grown automation to get the software to market, but supporting sales and our ambitious growth plans meant allocating a significant portion of our roadmap to infrastructure investments.
We faced the task of balancing the feature demands of our early-adopter customers against the internal need to scale reliably and efficiently. A reliable and scalable infrastructure was a basic expectation of our customers, and running efficient workloads was necessary for maintaining high gross margins. Simply put, we could not avoid these infrastructure investments. But as product owner, I wanted to “solve” this problem as quickly and inexpensively as possible so our engineering team could focus on the new features that made us unique. Execution risk was our main risk factor for getting to market and staying competitive.
As a product executive, does the above story sound familiar? How many times have you faced this challenge in your career?
When it comes to software, everything takes longer and requires more resources than you originally plan for. Learning best practices is often done through trial and error, in ways you cannot easily predict or forecast. The question is: does that learning process ultimately give you an unfair advantage in the market? That is, can you learn something new and unique that delights your customers and differentiates your product?
Companies are facing similar questions as they move to Kubernetes. Kubernetes is the next paradigm shift for cloud infrastructure, enabling businesses to embrace cloud-native application development. While Kubernetes provides enormous flexibility, the technology is still very new and the learning curve is steep.
Today, spinning up a Kubernetes cluster is increasingly automated. Technically, you can get your application running on Kubernetes in fewer and fewer steps. This is a positive thing for the community and the technology, but many teams underestimate the gap between a proof-of-concept and production-grade infrastructure.
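To make that gap concrete: a proof-of-concept often stops at something like `kubectl create deployment app --image=...`, while a production-grade Deployment layers on redundancy, resource governance, and health checks. The sketch below is illustrative only; the names, image, port, and values are hypothetical, not from any specific product.

```yaml
# Illustrative sketch of production-grade settings a quick PoC usually omits.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                  # hypothetical application name
spec:
  replicas: 3                        # redundancy, rather than a single pod
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.4.2   # pinned tag, not :latest
          resources:
            requests:                # informs scheduling and capacity planning
              cpu: 250m
              memory: 256Mi
            limits:                  # caps runaway workloads, protects neighbors
              cpu: "1"
              memory: 512Mi
          readinessProbe:            # keeps unready pods out of the load balancer
            httpGet:
              path: /healthz
              port: 8080
          livenessProbe:             # restarts hung containers automatically
            httpGet:
              path: /healthz
              port: 8080
```

Each of these fields represents an operational decision someone on your team has to make, test, and maintain, which is exactly where roadmap time quietly disappears.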
Managed Kubernetes services like GKE on Google Cloud, EKS on AWS, and AKS on Microsoft Azure are important building blocks for production-grade Kubernetes infrastructure. But even though the word “managed” appears in the name of these services, product executives should understand these aren’t the “whole product” solutions you’re probably expecting.
As highlighted by Gartner, many companies leveraging cloud services underestimate the scope of the shared responsibility model. Your team can quickly absorb roadmap time working on things like:
I wish a managed Kubernetes service like Fairwinds Managed Kubernetes had existed when I was managing that early-stage security product. Managed Kubernetes services are a cost-effective way to reduce execution risk so your organization can become successful with Kubernetes, containers, and continuous deployment best practices.