AGI Discourse as Kayfabe
The discourse around artificial intelligence, particularly AGI and the singularity, has become a carefully orchestrated performance. Like American professional wrestling, it is an elaborate show in which everyone plays a designated role in service of a shared narrative. The parallels to wrestling's concept of "kayfabe" - the practice of presenting a staged narrative as real - are striking. The AI world has its own narrative structure and cast of recurring characters: prophetic tech visionaries cosplaying as heroes, concerned ethicists and doomer critics cast as villains, and cheerleading investors hyping the drama from the sidelines. While there is real technology and innovation underneath it all, the public narrative has evolved into something closer to theater than serious discussion - an insufferable melodrama in which maintaining the illusion matters more than the reality.
In the internal world of this scifi-wrestling narrative, the optimistic technologists play the role of Faces, promising AGI is just around the corner if we just believe hard enough (and keep the funding flowing). Opposing them are the Heels, the designated villains who warn of AI risk and existential threat, conveniently driving more attention and urgency to the field. The Authority Figures - policy makers and tech leaders - make dramatic proclamations about regulation while carefully maintaining the status quo. Meanwhile, the Managers - VCs and investors - hype up their AI champions while stoking rivalries between competing approaches.
Just like wrestlers who maintain character even outside the ring, many AI personalities seem permanently locked into their designated roles in the narrative. The true believers must always believe, the critics must always prophesy apocalyptic doom, and everyone must maintain the shared fiction that transformative AGI is perpetually one to two years away and that amazing technology is already here - when what we have actually built falls far short of the story being told.
This kayfabe serves everyone's interests: startups get funding, VCs get deal flow, media gets engagement, critics get attention, policy makers get to look proactive, and the audience gets an exciting story about humanity's future. The fascinating part is that the actual truth or feasibility of the claims hardly matters - what matters is maintaining the shared narrative. Like wrestling fans who are "smart marks" - aware it's staged but choosing to believe anyway - the AI community has largely decided that the collective fiction is more valuable than reality. The story of impending AGI, with all its hopes and fears, has become a self-sustaining mythology that generates its own gravity, pulling in resources and attention regardless of technical fundamentals.
Just as wrestling occasionally "breaks kayfabe" to acknowledge reality, there are moments when the AI hype facade cracks. A demo fails spectacularly. A promised capability proves hollow. A deadline passes without delivery. But these moments are quickly absorbed into the larger narrative - after all, the show must go on. None of this is to say that AI technology is entirely unreal or unimportant. There is actual technical work happening in labs and companies, but its significance and maturity are greatly exaggerated, wrapped in layers of storytelling that often overshadow the underlying reality.
Perhaps most telling is the conspicuous absence of any serious effort to discuss this elaborate performance. Those with the technical acumen to definitively puncture the narrative are often the same ones whose careers, funding, or reputations depend on its perpetuation. Meanwhile, the informed observers - be they academics, engineers, or industry veterans - maintain a knowing silence, recognizing that the emperor's sartorial choices are best left undiscussed. There exists an unspoken understanding among the cognoscenti: the performance benefits all, and those who truly understand the field's limitations can quietly profit from the shared delusion while the true believers provide the necessary evangelical fervor to keep the entire enterprise afloat.
Baudrillard would appreciate how perfectly AI discourse exemplifies his concept of the simulacrum - where the map precedes the territory and the simulation becomes more real than what it simulates. We find ourselves in a desert of the real, where breathless AGI predictions and existential risk narratives constitute a hyperreal spectacle that has long since severed its tether to technical truth. The audience, like Baudrillard's consumers, no longer distinguishes between the sign and the signified - the kayfabe has become more meaningful than the underlying technology it purports to represent.
This spectacle of AI progress has become a self-referential system where images mediate our relationship with actual technology. We no longer interact with AI developments directly, but through a carefully curated stream of prophecies, pronouncements, and proclamations. The real has been replaced by its representation, and these representations have taken on a life of their own, accumulating into a pseudo-world that demands to be viewed and consumed.
In this ecosystem, relationships between people and technology are increasingly mediated by spectacles, creating a passive acceptance of what appears rather than what is. The AI industry has become the art of turning techno-prophecy into commodified images that serve primarily to perpetuate themselves and the economic interests behind them. Each dramatic pronouncement about AGI represents another layer in this consensual hallucination - a simulacrum of innovation that exists primarily to perpetuate itself. In this carnival of hyperreality, as in wrestling, the spectacle is the point - and that's precisely how the ringmasters prefer it.