It Would Be Good if the AI Bubble Burst

As a software engineer, it is impossible to ignore the strangeness of the current moment. On one hand, the new generation of language models is genuinely useful. I use these tools myself to get a head start on boilerplate code, to brainstorm solutions to tricky logic problems, or to rephrase documentation. They are a legitimate, if incremental, step forward in developer productivity. On the other hand, a bizarre and unsustainable culture has sprung up around this technology. It has become less a field of engineering and more a speculative gold rush, complete with a quasi-religious fervor that is entirely disconnected from what these tools can actually do. The hype has reached a fever pitch, and from where I stand, a collapse of this bubble would be the best thing that could happen to the future of software.

The core of the problem is the mythology. You have chief executives and venture capitalists acting like high priests, delivering sermons from conference stages about the imminent arrival of this ill-defined "Artificial General Intelligence". They speak of needing seven trillion dollars to build out the world's chip industry, a figure so fantastically large it ceases to have meaning. This is not engineering anymore; it is a new religion, promising salvation or apocalypse, and it has convinced the market to suspend all disbelief. For those of us who actually work with the models daily, this narrative feels like a fantasy. We see their limitations up close: their propensity for confident nonsense, their inability to reason abstractly, their brittleness when pushed outside their training data. They are incredibly sophisticated mimics, not nascent minds. This chasm between the messianic marketing and the mundane reality of a probabilistic text generator is the empty space inflating the bubble.

This all serves to justify a bonfire of money that is truly mind-boggling. A handful of tech giants will spend over three hundred billion dollars this year on this one pursuit, a sum roughly equal to Portugal's entire annual GDP. It is a staggering waste, a profound misallocation of resources on a global scale. We see reports that a titan like Amazon is getting back maybe twenty cents for every dollar it pours into this hole, while the cost to train the next frontier model balloons past a billion dollars. Think of all the real, tangible problems that talent and capital could be solving, from cybersecurity to medical diagnostics to logistics, while the brightest minds instead chase marginal gains in a chatbot's ability to write a sonnet. This is not innovation; it is an arms race fueled by speculation, where the goal is to build a bigger model, not necessarily a better or more efficient product.

Beyond the financial waste, the technical progress itself is showing signs of hitting a wall. Anyone who uses these tools seriously knows the dirty secret: they are not improving at the breakneck pace they once were. We are spending exponentially more on energy, data, and hardware for improvements that are barely noticeable and register more as "vibes" than as measurable capability. The jump from GPT-2 to GPT-3 was a revolution; the jump from GPT-4 to its successors feels like a minor revision, yet it costs orders of magnitude more to achieve. The industry is banking on the paradigm of "scaling laws," the belief that simply making models bigger will unlock new capabilities, but a growing consensus among researchers is that this path has its limits. We are building taller and taller towers on the same shaky foundation, ignoring the fact that what we likely need is a new architectural blueprint altogether. The market, however, is pricing these companies as if revolutionary breakthroughs are guaranteed every quarter, creating a valuation structure so top-heavy and precarious it rivals the dot-com era's worst excesses.
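
For readers who want that diminishing-returns argument made concrete, here is a back-of-the-envelope sketch in Python. It uses the power-law form of the published "Chinchilla" scaling law (Hoffmann et al., 2022); the exponent is that paper's fitted estimate for model size, and the arithmetic, not the exact constant, is the point:

    # Under a power-law scaling law, loss above an irreducible floor
    # falls as N**(-alpha), where N is the parameter count.
    # Hoffmann et al. (2022) fit alpha ~= 0.34 for model size.
    alpha = 0.34

    # How much bigger must a model get to halve the reducible loss?
    # Solve (k * N)**(-alpha) = 0.5 * N**(-alpha)  =>  k = 2**(1/alpha)
    k = 2 ** (1 / alpha)
    print(f"one halving of the loss gap: ~{k:.1f}x more parameters")  # ~7.7x

    # Successive halvings compound multiplicatively: the second one
    # costs another ~7.7x on top of the first, roughly 60x in total,
    # with data and compute budgets blowing up alongside.
    print(f"two halvings: ~{k ** 2:.0f}x more parameters")            # ~59x

Every increment that the marketing sells as a revolution is, on this curve, bought at a multiplicative premium. That premium is the wall.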

This is why a collapse of the AI bubble, a great and painful popping sound heard around the world, would be the best thing to happen to actual AI development. We have seen this movie before with the dot-com crash. When the speculative money vanished in 2000, the internet did not die; the "zombiecorns" with no business models did. What was left behind was a treasure trove of valuable infrastructure, from fiber-optic lines to data centers, all suddenly available at fire-sale prices. That newly affordable foundation is what allowed the real, sustainable internet giants to emerge. A burst AI bubble would do the same. The massive server farms built by the hyperscalers would not disappear. The open-source models, the software frameworks, and the research papers would all still be there. The wreckage of the boom would become the building materials for a more sober and sustainable future.

The post-bubble world would reward a different kind of engineer and a different kind of company. The goal would shift from building God in a box to solving a specific problem for a specific customer. With the cost of entry lowered by the infrastructure fire sale, smaller, nimbler teams could compete. We are already seeing hints of this, with reports of Chinese firms developing highly competent models for a tiny fraction of what American giants are spending. A crash would accelerate this trend, forcing a focus on efficiency and cleverness over brute financial force. Innovation would become decentralized. Instead of one monolithic quest for AGI, you would see a thousand varieties of applied AI flourish, each tailored to a real-world need with a real business model. The industry would have to wean itself off the venture capital drip and learn to build products that people actually pay for because they provide tangible value, not because they promise a sci-fi future.

In the end, a collapse is not an ending but a purification. It would strip away the hype, the religion, and the unsustainable economics that currently define the field. It would be a painful reckoning for the speculators, but a liberation for the builders. The immense intellectual and capital resources currently funneled into one narrow, speculative tunnel would be released to flow into countless other productive streams. The technology, which is genuinely if modestly useful, would finally be freed from the burden of delivering impossible returns. It could simply be a tool, and we could get back to the real, unglamorous, but ultimately more rewarding work of using it to build better things.