What is AI actually worth?
The artificial intelligence boom is the latest in a long line of market bubbles, but it is a particularly strange one: a bubble inflated by a potent mixture of managerial fantasy, quasi-religious fervor, and a torrent of capital with nowhere else to go. While proponents and industry thought leaders herald a new economic dawn rivaling the industrial revolution, closer inspection reveals an industry built on shaky technological foundations, propped up by unsustainable spending, and fueled by a narrative that borders on the absurd. The current trillion-dollar valuations are not a reflection of imminent revolution but a testament to the financial world's infinite capacity for self-deception, and they promise an inevitable and brutal correction.
At the heart of the hype lies a technical reality that is far less impressive than the marketing material suggests. The scaling of large language models, the very engine of this new wave, appears to be hitting a wall of diminishing returns. Barring a fundamental architectural breakthrough, we are simply throwing more data and more computation at models for increasingly marginal gains. The entire process of benchmarking has devolved into a subjective beauty contest, where defining "progress" is a challenge in itself. Outside of tasks with binary answers, determining which model is superior is a matter of vibes and opinion, a game of whack-a-mole in which improvements in one domain create deficiencies in another. The metrics are less science than sophisticated hand-waving, designed to produce impressive charts for product launches and investor decks rather than a true measure of capability.
This technical ambiguity provides fertile ground for a seductive fantasy that has completely captured the American professional managerial class. Sold a bill of goods by an army of management consultants, these non-technical executives envision a near future where the messy, unpredictable, and costly business of human labor is automated away. They dream of a corporate utopia run by a small cadre of elite directors, where profits soar unburdened by salaries, healthcare, or labor disputes. This is, of course, a fiction, but it is a powerful one for managers who view their employees as expendable cogs in a machine. This delusion is the primary driver of corporate AI spending, a massive R&D slush fund dedicated to the proposition that workers can and should be replaced.
The result of this managerial pipe dream is a tidal wave of corporate "innovation" projects that are profoundly disconnected from the actual work of the company. The corporate mandate is to "have an AI strategy," which in practice means building internal chatbots and RAG systems for data retrieval. These projects, often several degrees removed from any core business activity, deliver little tangible value but are plausible enough to be sold internally as progress. Some recent studies put the failure rate of these projects at roughly 95 percent. It is a perfect execution of the politician's syllogism: something must be done, this is something, therefore we must do it. Companies are burning through capital not to solve real problems, but to be seen as participating in the trend, creating expensive Potemkin villages of artificial intelligence to appease their boards and shareholders.
Then come the management consultants, the high priests of corporate anxiety, arriving with the solemnity of undertakers to monetize the C-suite's deepest insecurities. Armed with glossy slide decks and a proprietary vocabulary of synergistic nonsense, they sell a seductive fantasy whispered into the ears of executives who view their workforce as a costly liability. Their gospel preaches a boardroom utopia where the messy, expensive, and occasionally unionizing problem of human labor can be neatly excised from the balance sheet, replaced by a flawless, uncomplaining algorithm. They expertly exploit the profound technological illiteracy of their clients, presenting AI as a magical black box that will transmute operational costs into pure profit. The result is a flood of multi-million-dollar "transformation" roadmaps whose primary strategic function is to guarantee a lucrative, multi-year follow-on engagement for the consultants themselves. They are not advisors but architects of plausible deniability, providing executives with a beautifully formatted, jargon-filled permission slip to light a bonfire of shareholder money in the name of innovation.
This corporate delusion is amplified and sanctified by the bizarre cultural terrarium of the Bay Area, where the AI labs function as monasteries for a new rationalist cult. Its acolytes, a socially anxious cohort of the terminally online, believe they have replaced messy human intuition with pure logic, yet have merely swapped out old gods for a silicon one. They signal their allegiance through a coded lexicon of Bayesian priors and expected utility, a verbal handshake that confirms they are among the enlightened few, forever detached from the lived experience of anyone who has ever touched grass. This is not merely a subculture; it is a full-blown eschatological movement for intelligent people who do not realize their own subconscious religious impulse has led them to cosplay as prophets. Self-described "super-forecasters" issue prophecies from their blogs, breathlessly predicting the arrival of a Machine God by 2027, a digital rapture that will solve all problems for the worthy. If you wish to gaze into the abyss of their thought process, you need only read their scripture, a prose so dense with self-importance and abstract nonsense that it acts as a cognitive barrier to the uninitiated.
    So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. Of course this would be unjust, but is the kind of unjust thing that is oh-so-very utilitarian. It is a concrete example of how falling for the just world fallacy might backfire on a person with respect to existential risk, especially against people who were implicitly or explicitly expecting some reward for their efforts in the future. And even if you only think that the probability of this happening is 1%, note that the probability of a CEV doing this to a random person who would casually brush off talk of existential risks as "nonsense" is essentially zero.
If that passage reads like the unholy union of a paranoid sci-fi novel and a first-year philosophy student's bong-rip, congratulations, you are sane. This is the intellectual backbone of the movement: absurd, labyrinthine thought experiments, ripped from the pages of pulp fiction, taken with the deadly seriousness of divine revelation. It is, in essence, nerd Scientology for the Bay Area, a perfect trifecta of one part tax-optimization, one part cynical in-group signaling for career advancement, and one part the wide-eyed fervor of true believers who genuinely think they are building God in a server farm.
When one examines the actual applications emerging from this frenzy, the picture becomes even more bleak. Outside of text generation, the most prominent use cases are either unprofitable, a net negative for society, or both. AI image generation is a fascinating toy, but it is an expensive, easily commoditized novelty with no clear path to profitability. The viral video generators producing surreal clips of talking animals and historical figures are an amusing distraction, but they are an engine for creating digital slop, not economic value. The only entity making real money from this "slop economy" is Nvidia, which happily sells the digital shovels for this fool's gold rush. The most likely outcome is a future where social media is choked by an endless stream of algorithmically generated content, strangling the very platforms that host it. And maybe we should celebrate that!
Even the most legitimate application, AI-powered coding assistants, is a financially dubious proposition. While these tools can genuinely augment programmer productivity, the companies providing them are deeply unprofitable. They are also highly dependent on a small number of model providers (namely Anthropic, whose models are genuinely much better than the others at coding tasks) and constrained by the same scaling limitations plaguing the entire field. The technology may well eliminate a swath of entry-level programming jobs, but the dream of a fully automated software engineer, capable of complex, creative, and system-level thinking, remains a distant pipe dream. Like so much in the AI space, a kernel of a good idea has been inflated into a revolutionary fantasy that the underlying technology simply cannot support.
The entire edifice is held aloft by a geyser of capital so vast and indiscriminate it makes a firehose look like a precision instrument. The American venture economy is sitting on mountains of "dry powder," billions in undeployed capital aching with the impatient, frantic energy of a cornered rat. With the smoking craters of crypto and the metaverse still fresh in memory, the investment class has stampeded toward the last plausible growth story, driven by the moronic mantra of TINA: there is no alternative. So-called strategic allocation is really a desperate flight to the only mirage in the desert. But if you are running money professionally, sitting in cash is not an option, so while the music is playing you have to dance.
This initial flood of nervous money then attracts the final stage of the financial food chain: the gullible leviathans. Here come the SoftBanks and the gulf state sovereign wealth funds, the largest and most dim-witted pools of money on Earth, ready to blast their petrodollars and printed money at anything that glows with the faint promise of "the future." Their due diligence process has the intellectual rigor of a slot machine, a mix of geopolitical posturing and pure, unadulterated FOMO. This creates a magnificent, self-licking ice cream cone of a bubble. The capital class funds the unsustainable cash burn of AI labs, which generates hype, which attracts the sovereign wealth funds for a later, larger round at an insane valuation, which validates the VCs' initial bet. The goal is not to build a profitable business but to keep the music playing long enough to pass the flaming bag of promissory notes to the next, even wealthier, idiot. The supposedly decentralized investor class is more like a herd of lemmings, engaging in a spectacular display of malinvestment that would make a Soviet central planner blush.
None of this is to say that all of AI is malinvestment or fraud. Legitimate and fascinating research is being done, and the Nobel-recognized breakthroughs in protein folding demonstrate the field's genuine potential. Probabilistic text models have real, if niche, applications. But they are not reliable, consistent, or accurate enough to perform the vast majority of knowledge economy tasks, and there is no breakthrough on the horizon that promises to give them these properties. The industry is filled with people role-playing that such a breakthrough is just around the corner, either because they are incentivized to believe it or because they are members of the religious movement, for whom belief in an AI singularity provides a psychological escape from the mundane precarity of modern existence. The reality is that artificial intelligence is a real and lasting industry that will inevitably be part of the future, but one whose true market capitalization is measured in the tens of billions, not trillions. The vast sums currently being allocated are fuel for a bonfire, and when the flames die down, all that will be left is the sobering ash of a market that finally, and painfully, came to its senses. And hopefully it doesn't take a chunk of the S&P 500 with it.