It somehow never quite works out that way. Inexplicably, workloads become tied to particular providers in particular locations. The math of "save 20¢ an hour on compute" falls flat when the statement finishes with "...and spend $2,000 moving data to that provider so those slightly cheaper containers have something to chew on." Data gravity means that where your data lives is invariably where the rest of your infrastructure is based.
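To put numbers on that, here's a back-of-the-envelope sketch. The hourly savings, egress rate, and dataset size below are illustrative assumptions, not any provider's actual pricing:

```python
# Back-of-the-envelope break-even math for a provider migration.
# All figures are illustrative assumptions, not real price lists.

hourly_savings = 0.20   # $/hour saved on the cheaper provider's compute
egress_rate = 0.09      # $/GB to move data out of the current provider
dataset_gb = 22_000     # size of the data the workload needs, in GB

migration_cost = dataset_gb * egress_rate             # one-time egress bill
hours_to_break_even = migration_cost / hourly_savings

print(f"One-time migration cost: ${migration_cost:,.0f}")
print(f"Hours of savings to break even: {hours_to_break_even:,.0f}")
# ~$1,980 in egress; roughly 9,900 hours (over a year of runtime)
# before the cheaper compute pays for the move.
```

A year-plus break-even horizon, and that's before counting the engineering time the migration itself burns.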
Circa 2012, maintaining a layer of cloud agnosticism made sense. It was not at all clear who the major players were going to be, what the economics would look like in the future, or how painful data migrations would become. Over half a decade later, we have answers to a lot of those questions. Without exception, every environment I've had the privilege of working with that had attained provider agnosticism did so at tremendous cost-- whether in actual dollars, technical overhead, or operational complexity. And those ongoing costs pale in comparison to the amount of work it takes to actually pull off a provider migration down the road when the need arises-- which it rarely does! For all of the lip service paid to being able to migrate from one cloud provider to another, remarkably few companies have actually done so-- and the ones that have find their stories shouted from the rooftops by the receiving provider.
If your infrastructure design principles mandate being able to deploy workloads to multiple providers, you're limited to the lowest common denominator of features across those providers. This isn't entirely a bad thing-- every provider of note offers a VM instance, a load balancer, a managed database, and (if AWS will go ahead and ship what they've announced!) a Kubernetes offering. That gets you most of where you need to be, provided that your application conforms to "traditional" architectural models.
What you're giving up is the differentiated services that the providers are racing to deploy. Google's Cloud Spanner has no equal from other providers; if you need an ACID-compliant relational database that spans the globe, you can use Spanner, or you can go ahead and build your own. "That doesn't sound hard. I could build that in a weekend!" Yes, Hacker News. I see you.
The growing suites of serverless technologies are still tightly tied to the cloud providers that built them. The events that invoke functions, how those functions are written, and the constraints around them (language selection, resource limits, concurrency options) all differ from provider to provider; multi-cloud serverless is still something of a myth, though Fairwinds is making strides in this area as we speak.
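As a small illustration of "how those functions are written" differing, here's a minimal sketch of the same trivial handler on AWS Lambda versus Google Cloud Functions. Python runtimes are assumed, and the function names and event shapes are illustrative:

```python
# AWS Lambda (Python runtime): the handler receives an event dict and a
# context object; the event's shape depends on whichever service invoked it.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}


# Google Cloud Functions (Python runtime, HTTP trigger): the handler receives
# a Flask request object instead, and returns anything Flask can render.
def hello_http(request):
    name = (request.get_json(silent=True) or {}).get("name", "world")
    return f"Hello, {name}!"
```

Same few lines of business logic, two incompatible entry points-- and that's before you get to deployment tooling, permissions models, and event sources.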
Lastly, the machine learning tooling that rides on top of your data lakes needs to be close to that data. If you're building your own (stop that immediately!), you're redoing a lot of work that the providers have put into making those systems as accessible as they are; is that where you want to spend your innovation energy? Training models on different hardware offerings from different providers is just about the worst use of expensive engineering time that I can imagine; don't start down this path!
In 2018, there isn't a clear "avoid at all costs" cloud provider. GCP, AWS, Azure-- they're all respectable choices that nobody would blame you for making. They offer broad suites of services, they understand how businesses of varying scale work, and they probably won't go out of business before this article is published. There's no mistake in picking one of them.
To my mind, the mistake lies in trying to pick them all.