AGI Discourse as Kayfabe

The AI industry leaders who talk about Artificial General Intelligence are not sincere; they are engaging in a kind of performative theater that serves their own interests. But it is a type of theater that goes far beyond mere marketing. Fittingly, both French philosophers and American pro-wrestlers have precise terms for this phenomenon: Jean Baudrillard called it simulacra, while wrestling insiders call it kayfabe. A simulacrum is a representation so detached from reality that it becomes a copy without an original, ultimately supplanting the reality it once depicted. Kayfabe is the deliberate staging of fiction as authentic. Both concepts reveal how narratives can function as carefully constructed spectacles in which participants knowingly play roles, prioritizing the unreal over the real.

The discourse around artificial intelligence has become what Baudrillard would recognize as a perfect simulacrum, a copy without an original, where the representation has entirely supplanted the reality it ostensibly depicts. Like professional wrestling's kayfabe, the industry practice of maintaining fictional narratives as real, AI discourse operates as an elaborate performance where participants knowingly play roles that benefit everyone involved.

None of this is to deny that real work occurs in these companies. The models do some genuinely interesting and useful things. But these actual developments exist in an increasingly tenuous relationship to their representation in the public sphere. The territory, real AI research with genuine but limited capabilities, has been almost entirely obscured by the map, a hyperreal landscape of predictions, narratives, and branded futures.

The definitional vacuum around AGI creates a unique performative space, a stage where the very absence of clear meaning becomes the mechanism that sustains the narrative. This void isn't merely an obstacle to progress; it serves as the essential backdrop against which the performance unfolds. AI researchers can gesture toward building AGI precisely because nobody knows exactly what AGI would look like or how to recognize it. The uncertainty and ambiguity aren't bugs in the system; they're features that enable conceptual arbitrage: the ability to shift meanings and goalposts while maintaining the illusion of coherent progress. The emptiness at the center becomes the space where imagination and spectacle can flourish unencumbered by concrete definitions or measurable outcomes.

Just as wrestling fans become "smart marks"—aware the performance is staged but choosing to believe anyway—the AI community has largely decided that the collective fiction is more valuable than technical reality. This is Baudrillard's third order of simulacra made manifest: the sign no longer masks or denotes a reality, but becomes its own hyperreality. AGI predictions, existential risk narratives, and breathless announcements of "breakthroughs" constitute a self-referential system where the simulation precedes and ultimately replaces any underlying technological substance. We inhabit a space where the maps have consumed the territory.

Like wrestling, AI's kayfabe requires a stable cast performing predictable roles. The Faces, prophetic technologists and startup founders, promise transformative AGI is perpetually eighteen months away, their vague-posting on Twitter carefully calibrated to suggest profound insights while revealing nothing concrete. The Heels, AI safety advocates and doom-sayers, warn of existential risks, their apocalyptic pronouncements generating the very urgency that justifies massive investment in the field they critique. The Authority Figures, policy makers and tech platform leaders, make theatrical proclamations about regulation while ensuring the fundamental power structures remain untouched.

Baudrillard argued that in a hypermediated world, simulations often precede and shape reality, rather than merely representing it. This inversion is starkly evident in AI discourse. The narrative of imminent AGI (not necessarily current capabilities) increasingly dictates research priorities, funding allocations, and talent acquisition. Startups chase trending buzzwords for investment, while researchers frame their work in terms that garner attention, making the simulation the engine of perceived progress.

This phenomenon aligns with Baudrillard's concept of the "precession of simulacra," where the map (the model, the representation) precedes and dictates the territory (reality). Consequently, AI capabilities are often evaluated not by their actual functions but by their perceived role in the grand narrative of AGI progression. A new language model, for instance, might be hailed as a "step toward AGI" less for its specific, demonstrable abilities than for the "vibes" it generates, because the prevailing narrative demands such an iteration. The show must go on; the spectacle must be maintained at any cost.

The effectiveness of this system stems from its widespread utility; it's less a conscious deception and more a self-reinforcing framework shaping discourse, thought, and funding. Venture capitalists secure deal flow by investing in "the future." Media outlets gain engagement through sensational narratives of utopia or apocalypse. Researchers obtain grants by aligning their work with the AGI storyline. Even critics benefit, gaining platforms by highlighting existential risks that, in turn, call for more AI safety research.

Crucially, the individuals with the technical expertise to debunk the prevailing narrative are often those whose careers are intertwined with its perpetuation. This fosters what could be termed an "epistemic conspiracy": not a mustache-twirling, deliberate, coordinated plot, but a self-organizing silence in which the emperor's sartorial choices are best left undiscussed by all who depend on his continued reign.

We witness the transformation of genuine activity into passive spectatorship. AI development is increasingly experienced not through direct technical engagement but through consuming a carefully curated stream of announcements, demonstrations, and pronouncements. The technology itself becomes secondary to its representation; the spectacle becomes the primary reality.

When the performance occasionally breaks kayfabe, when a demo fails, a deadline passes unmet, or promised capabilities prove hollow, these moments are quickly absorbed into the larger narrative. Failure becomes "learning," delays become "responsible development," and limitations become "opportunities for future research." The show must go on because too many livelihoods depend on its continuation.

Particularly revealing is the deliberate cultivation of mystique through strategic ambiguity. Vague social media posts, cryptic conference presentations, and manifestos rich in metaphor but sparse on specifics point beyond mere communication failure to intentional obfuscation. This shift signals participation in distinct language games, where the operational rules and objectives of communication adapt to the context. In the particular language game shaping AGI discourse, the emphasis appears to move away from goals like precise truth-seeking or empirical validation. Instead, the primary objective becomes the active maintenance and reinforcement of a compelling narrative, effectively prioritizing evocative storytelling and atmospheric effect over verifiable substance.

Indeed, a certain level of absurdity seems intentional. When industry figures tweet about 'machine gods,' researchers claim 'emergent capabilities' without clear definitions, or founders invoke 'sparks of consciousness' lacking operational criteria, they engage in performative mystification. The aim isn't necessarily clear communication but fostering an atmosphere where boundless possibilities seem perpetually imminent.

The discourse around AGI represents Baudrillard's theory fully realized: a simulacrum that has entirely consumed its referent, a map that has replaced the territory so completely that we can no longer distinguish between representation and reality. The spectacle of AI progress has become more meaningful than any actual technological development it purports to represent.

Much like professional wrestling's audience, participants in the AGI discourse exhibit a spectrum of belief. Some are akin to die-hard fans, genuinely convinced by the narratives of imminent existential stakes; they wholeheartedly drink the Kool-Aid and embrace the kayfabe as literal reality. Others engage more cynically or strategically. For this latter group, participation functions as a form of sophisticated in-group signaling: by subtly acknowledging the performative nature of the hype, perhaps with a knowing nod to its exaggerated claims, they demonstrate they are "in on it," using their shared understanding of the spectacle as a currency within their circles, all while outwardly playing their prescribed roles.

The result is an ecosystem in which our relationship with technology is increasingly mediated by spectacle, fostering passive acceptance of appearances over substance. The AI industry excels at transforming techno-prophecy into commodified images that sustain themselves and the economic interests underpinning them. Each dramatic AGI pronouncement adds another layer to this consensual hallucination. Ultimately, in this carnival of hyperreality, much as in professional wrestling, the spectacle itself is the main event—a state of affairs the ringmasters seem keen to maintain.