The AI Backlash is to be Expected

The rising tide of skepticism, criticism, and outright hostility towards artificial intelligence, particularly noticeable in left-leaning circles, should come as no surprise to anyone. While the potential applications of AI are still being researched and debated, the context in which it is emerging, the nature of its proponents, the audacity of their claims, and the ethical shadows surrounding its development have created a perfect storm for backlash. This reaction, while sometimes prone to overstatement, is a natural and largely predictable consequence of the AI industry's own actions and the broader technological landscape of the past decade. It stems not merely from Luddite fear, but from a well-earned distrust cultivated by recent history and the specific characteristics of the current AI boom.

Firstly, the ground for skepticism towards grand technological promises has been thoroughly salted over the last ten years. We have witnessed a parade of innovations heralded as transformative, only to reveal themselves as predicated on shaky foundations, riddled with outright fraud, or laden with deeply damaging externalities. Social media platforms, once envisioned as tools for connection and democracy, have become engines of polarization, misinformation, mental health crises, and even genocide. Cryptocurrencies, pitched as decentralized financial liberation, devolved into speculative bubbles rife with scams and environmental concerns. The much-hyped metaverse remains a clunky, sparsely populated digital ghost town. The gig economy, sold as entrepreneurial freedom, often translated into precarious labor with eroded worker protections. And hovering over all tech hype are the cautionary tales of Theranos and WeWork, stark reminders of how easily venture capital enthusiasm and bold claims can mask malfeasance. Against this backdrop, the emergence of AI, accompanied by similarly breathless predictions of revolutionizing everything, inevitably triggers societal antibodies wary of the next over-promised, under-delivered, and potentially harmful technological wave. The tech industry has undeniably done some horrible, downright evil things over the last decade; the benefit of the doubt has been squandered by its predecessors, leaving AI to face a far more critical reception that is entirely deserved.

Adding significantly to this distrust are the often extreme and unsettling philosophical and political viewpoints espoused by many leading figures within the AI development sphere, particularly in Silicon Valley. It is jarring for the general public to learn that individuals wielding influence over this much-hyped technology harbor beliefs far outside the Overton window. Many prominent AI proponents openly subscribe to longtermist or effective altruist philosophies that can lead to startling conclusions, prioritizing hypothetical future generations or digital consciousness over the pressing needs of currently living humans. There exists a genuine, albeit niche, belief among some tech elites that humanity's destiny lies in creating artificial successors—superintelligent AIs or posthumans—effectively passing the torch of existence to non-biological entities. This eschatological vision, viewing AI not just as a tool but as the next stage of evolution or even a path to digital godhood, is deeply alienating and frankly alarming to those grounded in humanist values and concerned with present-day social and ecological challenges. Such pronouncements make the entire enterprise feel less like responsible innovation and more like a sci-fi fantasy cult pursuing a bizarre, unrelatable endgame.

This sense of alienation is further compounded by the political ideologies prevalent among many tech and AI figureheads, often fused with a potent strain of self-aggrandizement. While Silicon Valley is not monolithic, a strong current of libertarian, anarcho-capitalist, and anti-democratic sentiment runs through its influential core. This isn't merely abstract political theory; some prominent figures appear to conceive of themselves as Nietzschean supermen or Randian heroes—exceptional individuals operating beyond conventional ethical constraints, destined to forge progress through sheer will and intellect, unburdened by the concerns or consent of the masses. Consequently, their championing of radical deregulation, minimal government, and market-based solutions for nearly all societal problems clashes sharply not just with mainstream political thought but with the egalitarian premises of democratic society. When figures shaping tech's future express disdain for democratic processes as inefficient obstacles, champion elitist notions of governance, or advocate for economic systems that exacerbate inequality, it naturally breeds suspicion, appearing as the logical extension of a worldview that holds certain individuals as inherently superior and unbound by traditional social contracts. The perception arises that AI is being developed to further concentrate power and wealth in the hands of a techno-elite who see themselves as operating outside, and perhaps above, the democratic and egalitarian principles valued by civil society. This perceived anti-democratic, anti-egalitarian streak, amplified by an air of self-appointed exceptionalism, makes the industry profoundly unlikeable and untrustworthy.

The sheer hyperbole and often prima facie absurdity of the claims made by AI evangelists also contribute significantly to the backlash. Statements about "building god," "curing all diseases," achieving immortality, or ushering in an era of unimaginable abundance sound less like realistic technological roadmaps and more like messianic prophecies. Crucially, there is no clear, demonstrable path from the current state of AI—largely sophisticated pattern-matching engines like LLMs—to these grandiose outcomes. This disconnect fuels comparisons to previous manias, amplified by the quasi-religious fervor surrounding the technology. When faced with claims that seem unmoored from scientific reality, coupled with the unsettling philosophies mentioned earlier, it is natural for observers to suspect either profound delusion or deliberate misrepresentation on a massive scale. The industry's reliance on such extreme rhetoric makes it difficult to take its more modest, potentially realistic claims seriously, poisoning the well of public trust.

Furthermore, the very foundation of many current AI models is built upon practices that invite ethical and legal challenges, feeding directly into the backlash. The training of large AI models has involved the ingestion of vast amounts of data scraped from the internet, including copyrighted text, images, and code, often without the explicit consent or compensation of the original creators. While the legal definition of "fair use" is still being contested in the courts, the perception among artists, writers, and many others is one of profound unfairness—large corporations profiting from the wholesale appropriation of creative work. This perception of intellectual property theft is particularly galling when paired with simultaneous pronouncements about AI's potential to displace workers in precisely those creative fields. The narrative becomes one of an industry harvesting the collective work of humanity without permission, only to turn around and automate the jobs of those whose work it used. This combination of perceived IP theft, data exploitation, and the threat of job displacement is a potent recipe for generating resentment and opposition.

Beyond the ethical and philosophical objections, the current AI boom exhibits worrying hallmarks of a classic economic bubble, adding another potent ingredient to the cauldron of public skepticism. The sheer volume of capital flooding into AI—evidenced by Nvidia's soaring valuation, frantic investment rounds for startups promising the next breakthrough, and massive infrastructure spending by tech giants—mirrors the speculative manias of the past, from dot-com exuberance to the pre-2008 housing market. There is a growing concern that much of this represents significant malinvestment, with capital chasing hype and future promises rather than demonstrable, sustainable business models or clear paths to profitability for many ventures built on current AI capabilities. Given the scale of these investments and the central role of major corporations and pension funds, the potential deflation of this bubble carries non-trivial systemic risk. A significant downturn in AI-related valuations or a wave of startup failures could ripple through interconnected markets, impacting everything from cloud computing providers to the wider stock market, reinforcing the perception that AI, like previous tech waves, might bring not just disruption but also dangerous market instability fuelled by unrealistic expectations. This speculative frenzy, detached from the immediate economic realities of many applications, only serves to heighten suspicion and contributes to the sense that the AI project is driven more by financial engineering than by economically relevant innovation.

Crucially, beyond the sociological and political dimensions of the backlash, there lies a substantive argument that the negative externalities of current AI technologies may genuinely outweigh their demonstrated benefits, potentially justifying far stricter controls or even pauses in deployment. The significant environmental footprint associated with training and running massive AI models, demanding vast amounts of energy and resources, cannot be ignored in an age acutely aware of climate change. Simultaneously, these systems threaten to profoundly exacerbate the post-truth epistemic crisis, flooding digital spaces with synthetically generated content—often described as "not even wrong" slop—that further erodes shared reality and makes discerning reliable information incredibly difficult. The potential for AI to power hyper-personalized, automated anti-democratic disinformation campaigns at unprecedented scale is a palpable threat. Added to this is the fundamental issue of unreliability, manifested in the persistent problem of "hallucinations," where models confidently invent false information, presenting potentially existential risks to businesses or individuals who might rely on their outputs for critical decisions. When weighing these substantial and already visible harms against the currently realized, dependable use cases—which, while real, often fall short of the revolutionary hype—a strong case can be made that the negative externalities presently dominate the equation, lacking sufficient compensatory value to rationalize the mounting societal and environmental costs.

However, it is also true that the backlash, like many reactions, can sometimes manifest as an overcorrection. While the skepticism is understandable, claims that current AI technologies have "no uses" or are purely hype ignore the lived experiences of many. There exists a significant segment of the modern economy engaged in what anthropologist David Graeber termed "bullshit jobs"—roles in middle management, PR, corporate consulting, content marketing, and administration that often involve generating reports, presentations, and communications that feel devoid of intrinsic value; essentially, producing human-generated "slop." For individuals in these roles, AI tools can demonstrably streamline the production of this output, automating tedious tasks like drafting emails, summarizing documents, or generating boilerplate content. One might morally object to the existence of these jobs or the value of their output, but that is distinct from denying the reality that AI is being adopted to make their execution more efficient. Similarly, the widespread uptake of AI coding assistants for generating boilerplate code, suggesting solutions, and speeding up development workflows is undeniable among programmers. One must also recognize that the many researchers in academia working on this technology are not labouring under some false consciousness. As the 18th-century philosopher David Hume noted, we must be careful not to confuse an "ought" (these uses are trivial, undesirable, or symptomatic of a flawed economy) with an "is" (these uses exist and people are adopting them). While the advent of transformers will almost certainly not herald the arrival of artificial general intelligence (whatever that means) or solve world hunger, and the long-term profitability of most of these businesses remains uncertain, they do represent real, existing use cases. Denying their existence entirely weakens the credibility of the broader critique.
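To make the "streamlining" claim concrete, here is a minimal sketch of the sort of tool being adopted in these roles: a few lines wrapping a commercial LLM API to summarize a document. It is illustrative only, assuming the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment; the model name and input file are hypothetical placeholders, not recommendations.

```python
# Minimal sketch of LLM-assisted document summarization, assuming the
# `openai` package and an OpenAI-compatible endpoint; the model name
# and input file below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(document: str, max_words: int = 150) -> str:
    """Request a short summary of `document` from a chat model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model works
        messages=[
            {
                "role": "system",
                "content": f"Summarize the user's text in at most {max_words} words.",
            },
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("quarterly_report.txt") as f:  # hypothetical input file
        print(summarize(f.read()))
```

Whether the resulting summaries are worth producing is exactly the "ought" question; that scripts of roughly this shape are being written and run every day is the "is."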

It is also crucial to acknowledge the environment in which much of this backlash manifests and intensifies: online social spaces. Platforms like Bluesky, LinkedIn, and various forums are notoriously poor venues for nuanced discussion of complex topics. Instead, the dynamics of engagement optimization often reward hyperbole and hyper-partisanship. Strongly worded, pithy, maximally critical (or maximally boosterish) takes generate more likes, reposts, and replies—both from an in-group eager to signal agreement and from an out-group provoked into flame wars, which ironically also boosts visibility. Consequently, the discourse surrounding AI in these spaces frequently flattens complex issues into binary conflicts, devoid of caveats or acknowledgements of conflicting truths. This environment actively discourages rational debate, which is precisely why I no longer engage with these spaces. Given these dynamics, it is entirely predictable that many online communities will maintain a staunchly anti-AI posture for the foreseeable future.

As a counter-critique, however, there is often a noticeable gap between impassioned rhetoric and the articulation of concrete, legally precise policy solutions. While the posturing against AI's perceived dangers is loud, the detailed "how" of addressing these concerns frequently remains vague. For instance, does the opposition translate into a desire to outright ban the development or use of certain types of neural networks? Such a proposal faces immense practical hurdles and likely insurmountable First Amendment challenges, given that algorithms are essentially just math. Conversely, proposing substantial fines and robust enforcement mechanisms against companies deploying AI trained on copyrighted works without consent or licensing is a far more legally tractable, though still complex, avenue. The real policy debate needs to engage with further concrete possibilities: should there be mandatory public disclosure and auditing requirements for the datasets used to train large commercial AI models, to address bias and IP concerns? Could regulations establish clear liability frameworks, determining who is legally responsible when autonomous systems cause harm? Perhaps policy should focus on economic mitigation, such as imposing targeted taxes on automation that demonstrably displaces human labour, with revenues dedicated to the social safety net or worker retraining programs. These are the kinds of specific, actionable proposals needed to move the conversation beyond aimless, generalized outrage towards concrete bills that could be enshrined in law.

I won't tell you what to think about AI; as with complex ethical questions like eating meat, carbon footprints, and social media use, you have to make up your own mind. But I will say that the backlash against artificial intelligence, especially within circles sensitive to economic inequality and corporate overreach, is not a random or irrational phenomenon. It is an entirely predictable consequence of a decade scarred by tech disappointments and malfeasance, amplified by the unsettling ideologies and political stances of AI's leading figures. The industry's penchant for reality-bending hyperbole, coupled with legitimate concerns about intellectual property, data privacy, and job security, has created an environment where distrust is, and should be, the default position. While some critiques overstate the case by denying any utility to current AI tools, the core reasons for the backlash are deeply rooted in the industry's own presentation and practices. By cultivating an image characterized by unlikeable leaders, bizarre eschatological pronouncements, anti-democratic leanings, and ethically questionable methods, the AI industry has largely engineered its own public relations problem. It should not wonder why it faces skepticism and hostility; it is, in many ways, a backlash of its own making.