When AI Eats Software
The hidden risk of the SaaS sell-off and why software might be worth protecting.
We’re at another peak in the AI hype cycle. If the hype of the past few years was a rising tide that lifted all (tech) boats, what’s different now is that the technology, for the first time since it debuted in 2022, seems to be dividing the industry into clear winners and losers. And, as of the start of this year, software—the category as a whole—has been branded the AI economy’s first clear loser.
Over $800 billion in software stocks has been wiped from the market. Salesforce’s share price dropped over 30% from its peak this year. Atlassian’s price halved. Even Microsoft—with one of the most diversified portfolios and a major AI player itself—is down 15%. All this despite many of the hardest-hit software companies posting strong earnings throughout 2025.
This sell-off is shocking because software has been, for the past few decades, a key input of American dynamism. It has powered productivity gains across industries, from logistics to finance to retail to healthcare, creating some of the country’s most valuable firms as well as empowering some of the most powerful legacy players across the economy. Entire sectors have reorganized production around the software revolution. And it has become a central feature of white-collar work as a whole.
The problem with software, however, is that it is also the domain in which AI is most straightforwardly competent. With the release of Claude Cowork, Google’s Antigravity, and most recently OpenAI’s Codex, the power of AI-generated code has been put on full display, stoking investor fears that these advances will make software much cheaper to build.
Software has historically been expensive because coding skills have long been in short supply. So when software firms aggregate that labor, build a product once, and sell it many times over, they spread development costs across thousands of customers willing to pay a hefty price for the convenience of not having to build it themselves. If AI tools make it significantly cheaper to write and maintain code, they may undermine the economics of this business model.
From here, several related outcomes are plausible. First, large enterprise customers may decide it is cheaper to build “good-enough” in-house solutions rather than rely on pricey vendors for their software needs. Second, software makers, facing new cost pressures, might engage in a race to the bottom, cutting costs and selling their software on the cheap. Third, a new industry of vibe-coding consultancies may emerge, specializing in budget solutions for individual clients rather than general-purpose software. These scenarios aren’t mutually exclusive. But in each case, dedicated software makers will lose their leverage, and the ecosystem they collectively sustain will begin to thin and erode.
I won’t speculate more on what the future will look like. But what I plan to do in this post is explain what it would mean for AI to eat our software ecosystem, and why we may actually want to prevent that from happening.
The Positive Externalities of an Ecosystem
The obvious purpose of an ecosystem of dedicated software makers is to supply buyers with a vast portfolio of software to use. For a single company with a particular software need, the diversity offered by the ecosystem can seem superfluous and may well be replaced by a half-competent in-house vibe-coder. But an ecosystem has many positive externalities beyond providing clients with basic software.
One externality comes from the fact that software ecosystems comprise different products that can integrate with each other, so their combined utility is greater than the sum of their parts. These products will often share standards and technical conventions, allowing members of the ecosystem to specialize deeply while remaining compatible with one another. Improvements in one corner of the ecosystem can even ripple outward and enable advances elsewhere. A better database architecture can make new analytic tools possible. A breakthrough in distributed computing may reshape how we design high-performance computing workloads.
Another externality is that the software-dedicated members of the ecosystem will also invest in shared infrastructure. Open source is the obvious example of this dynamic. Projects like PostgreSQL (database), Spark (big data processing), or Kubernetes (infrastructure management) are actively supported by software companies because, even though no single firm can fully capture the returns from maintaining them, these shared foundations expand the overall software market and provide a stable substrate on which each company can build differentiated products.
In fact, part of the reason AI is so good at coding is that it can draw from the vast stock of open source software that has accumulated within the ecosystem over decades. A vibe-coder can quickly develop a working application because their coding agents draw on existing frameworks, mature libraries, and best practices. Ask an AI to spin up a webapp and it will default to tools like React and Django—the products of sustained collaboration and collective investment. Without an ecosystem of software-dedicated firms, software progress will slow and will, past a certain inflection point, hamper AI coding progress with it.
That is not to say that all software companies are valuable contributors to the ecosystem. Many do not contribute to open source and are free riders on these positive externalities. But the problem with AI eating software is that it will eat the field of software engineering along with it.
Imagine a world where software-dedicated firms no longer exist and all software is created on demand by in-house vibe-coders. Building software then becomes an exercise in reusing the innovations of the past rather than creating them anew, because the incentives for advancing the field will also have vanished. A piece of software that works for only one firm is relatively easy to build. Building a system that can operate across industries, regulatory environments, and millions of users is not. And it is precisely the pressure to build highly scalable, general-purpose software—with near-perfect uptime, stringent security, and impeccable performance standards—that drives technical advances and novel innovations in software.
Consider a small regional bank that decides to build its own software solution in-house. If the software meets the basic needs of the small bank, the project is a success, and the incentives stop there. The bank, which is itself hardly interested in advancing the frontier of software engineering, does not need to design for 10X growth and extreme reliability. It does not need to architect for thousands of software integrations or defend against the full spectrum of security attacks. Yet without these pressures, there is also no need to produce truly exceptional software that pushes the frontier forward.
Software is a Discipline, Not a Cost-Saving
The AGI boosters would offer a counter: software innovation in the age of AI does not require an ecosystem of dedicated software makers. With the recursive self-improvements it can make on its own, code generation will be so powerful that it could conjure entire frameworks or generate novel algorithms without the prompter even needing to know about it. Perhaps so, in some more distant future. But that is far from the capabilities we have today.
In the meantime, what is to be done? How do we make sure that AI doesn’t flood the economy with mediocre webapps at the expense of dismantling the ecosystem (and with it the discipline of software) needed for advancing software innovation?
As I’ve suggested, one possible future is that enterprise customers start building in-house solutions rather than buying from software-dedicated firms. Software-dedicated firms, for their part, may start selling their products on the cheap to remain an attractive option. In this scenario, AI is used primarily as a cost-cutting function—something to replace engineers, drive down wages, and churn out code faster. The aggregate outcome is the thinning of the discipline of software engineering and, in the long run, a blow to software innovation.
But AI itself is not necessarily the danger. Even with AI central to software’s future, another outcome is possible. What’s key is how firms choose to deploy it: whether AI is used to deepen engineering expertise or to discipline engineers and drive down costs. The latter path is of course much easier and a surer guarantee of short-term profits. That also means avoiding it will require a deliberate choice.
For that, firms on both sides of the market have a role to play. Enterprise customers should resist the temptation to retreat into cheap, in-house builds simply because AI makes them feasible. Software companies, for their part, should not panic and cut technical staff in the name of cost savings but should instead continue to invest in their engineers. And even as those engineers use AI coding tools, the goal should be to deepen their expertise and tackle ever-harder software problems. AI should, in other words, raise the ceiling of engineering, not simply lower the cost of doing it.
As Marc Andreessen predicted, software has indeed eaten the world. But software’s spread was never automatic. Just as Moore’s Law depended on generations of engineers and firms relentlessly pushing the limits of chip design and production, the ubiquity of software rests on the cumulative work of hundreds of thousands of engineers who developed new algorithms, best practices, and even programming languages—each contributing to software becoming more scalable, more reliable, and more useful year after year. If software’s capabilities had frozen in 2011—the year of Andreessen’s prediction—it would not have eaten much of anything.
In the end, whether AI compresses the software industry into a thinner, cheaper version of itself or pushes it into a new era of ambition is not technologically determined—it is a political choice. It will require employers to keep investing in engineering staff and investors to value long-term capability over short-term profits. It will also require policymakers to put in place the guardrails that incentivize behaviors that keep the ecosystem (and the discipline) intact. None of it is easy, but the choice is ours.