Who gets to play in the cloud?
From killer apps to the long tail of enterprise workloads, and how the cloud divide produces competing futures for the AI age
I recently attended a workshop on gov-tech relations in the age of digital capitalism, where a paper I co-authored about the cloud computing sector was circulated as part of a discussion on the political economy of AI (for the most comprehensive reading list on this topic that I’ve seen, check out Henry Farrell’s AI syllabus).
My paper, which I’ll share more about in a future post, discusses the rise of Microsoft, Amazon, and Google as the three cloud giants underpinning the AI revolution. One question that kept coming up during the workshop was why it was these three, and not, for instance, Meta or Apple, that had pursued building a public cloud. I didn’t have an immediate answer at the workshop, but I have been thinking about the question since, and that’s what this short post will try to explain.
What I’ll also argue in this post is that, in the AI era, tech players that do not have their own cloud platform (e.g., Meta, or AI labs like OpenAI) represent a markedly different kind of tech firm than those that do. In fact, I think the two groups embody starkly different visions of how AI will diffuse and who the winners and losers will be—I’ll return to this point in the second half of this post.
Why did Amazon and not Meta or Apple become cloud giants?
Today, we take for granted that a small handful of firms—Amazon, Microsoft, and Google, trailed by Oracle and IBM—dominate the US cloud market. But fifteen years ago, as the cloud business model was just beginning to mature, it was far from clear which companies would prevail. A quick look at this shortlist reveals two key prerequisites for entering the cloud business: deep pockets and established enterprise relationships. Cloud infrastructure is capital-intensive, demanding massive upfront investment in servers, networking equipment, and data centers. During the brutal price wars of the early 2010s (which I wrote about here), only firms with vast cash reserves could afford to compete; the rest were quickly driven out.
Equally important was an existing base of enterprise customers. Microsoft’s decades of experience selling IT management tools like Windows Server and productivity software like Office gave it a built-in distribution network and credibility among corporate clients. Google, though better known for consumer products, was also aggressively expanding its Workspace suite—Gmail, Docs, Sheets, and Calendar—into enterprise offerings. Oracle and IBM, meanwhile, were already synonymous with enterprise software and database systems.
So while Meta and Apple were among the most cash-rich companies of the 2010s, neither had meaningful experience serving enterprise clients. Apple never viewed other businesses as part of its customer base; its strategy was to dominate the consumer market as a tastemaker and premium hardware producer. Even as Macs gained popularity among individual users, the corporate computing world remained firmly tied to Windows. Meta, for its part, repeatedly tried to break into the enterprise market throughout the 2010s—with workplace collaboration tools, internal communication platforms, and enterprise versions of its social network—but each effort fizzled. Eventually, the company abandoned the idea altogether. This reveals an important point about the cloud computing business: its success depends not on access to millions (even billions) of individual users but on landing the top (ideally Fortune 500) companies as customers.
If being cash-rich and deeply embedded in enterprise networks were the key ingredients for success, then why was it Amazon—an online retailer that was neither cash-rich nor enterprise-oriented at the time of AWS’s founding—that pioneered and now dominates the cloud computing industry? Part of the explanation lies in Amazon’s retail DNA. Unlike Microsoft or Google, which built their businesses on high-margin software and advertising, Amazon came from a world of razor-thin profits and relentless scale. Retail had trained the company to operate on low margins and high volume—to make money not by maximizing profit per unit, but by driving operational efficiency and selling vast quantities.
That discipline—obsessing over cost control, logistics, and customer satisfaction—translated naturally into running a lean, scalable cloud operation. Selling books might yield only a few cents of profit per transaction; selling virtual servers, by comparison, looked like an upgrade: a capital-intensive but still higher-margin business. This is in part why it was Amazon—not Google or Microsoft—that pioneered the cloud. For Google and Microsoft, the cloud business looked comparatively less attractive from the outset: why chase a lower-margin segment when your core products are already cash cows? Only after Amazon had established the market and demonstrated its potential did the cash-rich, enterprise-oriented giants begin to pile in.
It’s worth noting, as a brief aside, that China operates quite differently in this realm. Huawei and Alibaba started their cloud businesses early on (just a couple of years after Amazon pioneered the whole sector), yet new players keep showing up to rival them: JD.com entered in 2020, ByteDance in 2021. The reason, echoing Amazon’s own trajectory, is that Chinese tech firms have a far higher tolerance for low margins. And because the competitive environment is so cutthroat, selling compute at slim or even zero margins is considered an acceptable business venture. In other words, unlike their US counterparts, which tend to discipline themselves around “core competencies” and high-margin lines of business, Chinese firms routinely expand downstream, upstream, or even into entirely unfamiliar sectors, as long as there is money to be made.
Divergent AI diffusion trajectories: enterprise workloads vs killer app
Meta’s and Apple’s decisions not to enter the cloud business were made well before the AI boom. Had they anticipated the rise of LLMs and the compute intensity of AI models, they might have chosen to build public cloud services in preparation for this new era. Even today, Apple continues to hold back from the AI capex race, so it is effectively out of the running as a major player. Meta, by contrast, even without a cloud business, has kept up with the hyperscalers in buying NVIDIA GPUs and building powerful data centers across the country. Still, the window of opportunity for building a cloud platform with a robust ecosystem has closed, and this has major downstream effects on how each of these firms will position itself in the AI era.
Specifically, whether cloud computing sits within a firm’s arsenal will determine how it sees AI diffusion happening: whether diffusion can happen broadly and heterogeneously, or whether the firm must instead bet on a “killer app.” For Meta, without a cloud through which to monetize downstream usage, the burden of producing such a killer app is far greater, since it has no other way to make good on its infrastructure costs. So its best bet is to use AI to further entrench its position in its existing product lines and extract even more from its already captive user base. Newer AI labs—OpenAI, Anthropic, and xAI—face a similar pressure as they race to build flagship applications that can be widely used.
Cloud hyperscalers, by contrast, do not need a killer app at all. By providing the infrastructure—and, along with it, the vast ecosystem of cloud services—through which the rest of the economy adopts AI, they can offload the burden of experimentation to their customers and partners. Microsoft, for example, does not care whether it invents the defining AI application of the decade—so long as whoever does builds it on Azure. So rather than competing only at the application layer, Microsoft’s key value-add is providing the scaffolding that empowers its vast ecosystem of SaaS companies to build AI into their own products.
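To make the scaffolding point concrete, here is a minimal sketch of what it can look like when a SaaS company builds an AI feature on a hyperscaler’s platform instead of running its own models: the application code reduces to a thin call against a cloud-hosted model endpoint. This uses the Azure OpenAI flavor of the openai Python SDK purely as an illustration; the endpoint URL, deployment name, and environment variable are hypothetical placeholders, not details of any real integration.

```python
# Minimal sketch: a SaaS feature riding on a hyperscaler-hosted model.
# The endpoint, deployment name, and env var below are hypothetical placeholders.
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # hypothetical Azure OpenAI resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],                  # hypothetical env var holding the key
    api_version="2024-02-01",
)


def summarize_ticket(ticket_text: str) -> str:
    """Summarize a customer-support ticket using a model deployment hosted on the cloud platform."""
    response = client.chat.completions.create(
        model="support-summarizer",  # hypothetical deployment name configured in Azure
        messages=[
            {"role": "system", "content": "Summarize the support ticket in two sentences."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_ticket("Customer reports that CSV exports have been timing out since Tuesday."))
```

The specifics matter less than the division of labor the sketch illustrates: the hyperscaler carries the capital cost and hosts the models, while the long tail of companies supplies the idiosyncratic, domain-specific use cases.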
Cloud-first companies might, in fact, even be hostile to the idea of a single killer app. A single dominant AI application would concentrate value at the application layer, compressing the diversity of downstream demand and reducing their leverage as cloud providers. Such a future would risk turning cloud providers into commodity infrastructure rather than strategic platforms on which innovation can happen. As such, cloud firms are much more likely to push a model of AI diffusion in which thousands of enterprises integrate AI in idiosyncratic, business- or industry-specific ways—each requiring custom workloads, proprietary models, and a mix of different cloud services.
In short, the biggest divide in today’s AI landscape is between two fundamentally different diffusion strategies: betting everything on a killer app or quietly embedding AI across the long tail of enterprise workloads. Those choices will produce very different industrial structures—and very different winners.



This is a brilliant articulation of the structural differences between cloud incumbents and pure-play AI firms. Your point about Amazon's retail DNA introducing it to low-margin, high-volume operations really resonates as the critical differentiator. The real insight isn't just historical, though: it illuminates why we're seeing such divergent investment patterns today, where hyperscalers can afford to lose on models while still winning on infrastructure, whereas OpenAI/Anthropic cannot. This framework explains the dynamics we're observing across the entire stack.
I would make the argument that data centers are already commoditized (compute is pretty fungible, whether it comes from hyperscalers or neoclouds), and that the value comes from the platforms that hyperscalers operate. If you take this view, both the killer-app approach and the quiet embedding of AI into enterprise workloads lead back to reinforcing existing hyperscaler platforms.