OpenAI’s Sam Altman, one of the key figures driving the development and acceptance of artificial intelligence (Florian Gaertner/picture alliance / photothek.de/Newscom)

In the last days of 2025, among tech commenters in rarefied corners of X, a new consensus emerged: AGI was here. By AGI, they meant artificial general intelligence—a term of art usually taken to denote computer programs that can match or exceed human capabilities at most economically valuable tasks. The immediate cause for the excitement was a series of updates to Anthropic’s AI coding tool, Claude Code, allowing it to complete complex programming tasks far more reliably, and with far less human supervision, than before. Taken merely as another productivity boost for software engineers, this would not be especially noteworthy. But understood as a step-change improvement in a general system capable of performing almost any kind of computer-mediated work, it looked like a watershed: the automation not merely of coding as a specialized skill but potentially of any work performed with a computer.

An Anthropic engineer posted that Claude Code was, itself, written almost entirely by a previous version of Claude Code. Its human overseers focused on “foundational architectural and product decisions,” while the AI implemented the solutions. Even the most in-the-weeds engineers on his team hardly wrote their own code anymore; they instead directed a team of agents. The long-foretold transformation of work seemed to have arrived. So, too, perhaps, had the man–computer symbiosis first described over 60 years ago by J. C. R. Licklider. “I have to constantly model the mind of [AI agents] living inside my laptop, and in doing so I become more like them. . . . I think in context windows. I become a cyborg, a hive mind of human and clauds,” wrote one anonymous tech poster on X.

And yet as excitement about the singularity grew in and around San Francisco, legislators nationwide began to push back against AI. On the left, Senator Bernie Sanders called for a moratorium on the construction of AI-powering data centers, to “ensure that the benefits of these technologies work for all of us, not just the wealthiest people on Earth.” On the right, Florida Governor Ron DeSantis proposed a bill that would let local communities veto the construction of data centers, and James Fishback, a candidate running to succeed him, made opposition to “data centers that jack up our energy bills” a core tenet of his campaign platform. In California, progressive politicians endorsed a wealth-tax ballot initiative that threatened to fracture the state’s entire AI-startup ecosystem.

This hostility is increasingly shared by the general public. Recent polling finds that a growing share of Americans believe that AI will negatively affect society, whether by reducing jobs, spreading deepfake content, or outright eliminating humanity.

What should we make of this contrast? It is hardly surprising that prevailing opinion among San Francisco’s technological vanguard is out of step with politicians and the broader public, but the degree of dissonance should give us pause. Here are two irreconcilable views of a new technology: one claims that it will radically but positively transform the world; the other maintains that it will despoil and, potentially, destroy it. The first animates a frantic and well-capitalized innovation arms race; the second, an increasingly organized bipartisan reactionary political movement.

One obvious conclusion is that these two positions are on a collision course. We should expect more legislation to restrict AI and potentially even violent resistance to the technology. The recent protest that scrapped plans for a data center in New Brunswick, New Jersey, is likely just the first in a series of similar efforts organized by a coming wave of technophobic activist groups. Still, while cooler heads may yet prevail, it is useful to ask why the anti-AI fervor is so strong. (See “The Surprising Heart of the Data-Center Boom.”)

The shared conviction underlying these efforts is that the tech industry—and AI specifically—takes much more than it gives: that it extracts land and other resources in exchange for insufficiently useful technology, and that this disparity must be offset by taxes or outright bans on development. This characterization of the industry is not new. At the highest level, it is just a special case of long-running suspicion of all wealthy industrialists.

One of the most articulate proponents of the AI extractivist narrative is Karen Hao, a former Wall Street Journal correspondent whose recent book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI argues that leading AI companies, especially OpenAI, are building a “modern-day colonial world order” predicated on extracting natural resources and exploiting underpaid labor. To make her case, Hao interviewed hundreds of current and former employees at OpenAI, Anthropic, Meta, Google DeepMind, and Microsoft; traveled to Kenya and Chile to speak with data annotators and environmental activists; and drew on what she describes as “an extensive trove” of internal OpenAI correspondence. The result is peculiar: an extraordinarily detailed history of the company, interspersed with reflections on “data colonialism,” the treatment of former Google researcher Timnit Gebru, and America’s legacy of slavery.

On the scale of scientific history, “artificial intelligence” is a relatively recent term. It dates to 1956, when 20 researchers gathered at Dartmouth College for a summer workshop devoted to “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” John McCarthy, one of the organizers, coined the phrase “artificial intelligence” partly to distinguish the project from narrower fields like automata theory and cybernetics and partly because he believed that the boldness of the term would attract the most ambitious researchers.

The branding decision worked. The Dartmouth workshop, now widely regarded as the founding moment of artificial intelligence, brought together researchers who would make seminal contributions to the field, including Claude Shannon and Marvin Minsky. Over the following decades, AI split into two broad camps: the symbolists believed that intelligence would emerge from explicitly coding symbolic representations of knowledge into machines; the connectionists argued that it would come from building systems that could learn from data.

Symbolism dominated through the 1980s, though a small group of connectionists—led by Geoffrey Hinton and, later, his protégé Ilya Sutskever—continued refining neural networks. For years, their work attracted little commercial interest. That changed in the late 2000s, when Hinton and his graduate students demonstrated that neural networks could dramatically improve speech recognition, translation, and image classification, areas of obvious value to firms like IBM, Microsoft, and Google. Google soon hired Hinton, Sutskever, and Alex Krizhevsky to bolster its AI research, applying neural networks to products such as Google Translate and Gmail’s autocomplete and to more ambitious efforts, including its self-driving car project, Waymo.

In one sense, this is a familiar story of commercial interests shaping the direction of scientific research. Hao, however, sees something darker: a calculated effort by tech giants to entrench what Shoshana Zuboff has called “surveillance capitalism,” the commodification of personal data for corporate gain.

Hao didn’t begin her career as an AI skeptic. She became disenchanted with the industry in 2019, when she read an investigation reporting that facial-recognition systems had been trained on millions of Flickr images without users’ consent. “I began to notice how the aggressive push to collect more training data was leading to pervasive surveillance not just in the digital world but the physical one as well,” she writes. Later, after speaking with an activist in Johannesburg who claimed that facial-recognition cameras were “restricting the movements of black people,” Hao concluded that “the very revolution promising to bring everyone a better future was instead, for people on the margins of society, reviving the darkest remnants of the past.”

The rapid growth of AI has led legislators nationwide to begin pushing back, criticizing the technology’s energy demands and potential effects on the vulnerable. (Patrick Assalé/Alamy Stock Photo)

This conviction colored her subsequent reporting on OpenAI. Though many other “frontier” companies were racing to build state-of-the-art AI, Hao saw OpenAI as the product of distinct advantages: early access to a deep-pocketed billionaire; a “unique ideological bent”; and its CEO Sam Altman’s “singular drive, network, and fundraising talent.”

The deep-pocketed billionaire was Elon Musk, whom Altman persuaded in 2015 to finance his ambitious bet by emphasizing the danger of Google’s control over superintelligence. The unique ideological bent was toward artificial general intelligence, rather than narrower systems designed to perform a limited set of tasks. And Altman’s singular drive was, by most accounts, indisputable. “Sam is the most ambitious person on the planet,” a former OpenAI employee told Hao.

Through sheer force of will, Altman secured Musk’s pledge of the first billion dollars needed to launch the company, recruited what were arguably the most talented engineer and AI scientist of their generation—Greg Brockman and Ilya Sutskever, respectively—and raised enough additional capital to let his team work on foundational research for years before releasing a public product.

The results were remarkable. In rapid succession, OpenAI advanced computer image generation, identified the “scaling laws” showing that AI systems improve predictably with more data and computing power, and released a series of ever more capable large language models (LLMs), culminating in ChatGPT—the fastest-growing consumer app in history.

Yet despite his gifts for fundraising and recruitment, Altman proved a polarizing leader. The company split into two informal camps: an applied team and a safety team. The applied team, led by Brockman, focused on commercializing research for the broadest possible user base. The safety team, led by Dario Amodei, concentrated on mitigating the “existential harms” that could arise from rogue AI. Amodei eventually led an internal revolt and departed, along with several colleagues, to found Anthropic. Sutskever—who, Hao reports, once burned a wooden effigy at a company retreat to dramatize what OpenAI should do to its AI if it proved “actually lying and deceitful”—helped lead the infamous, swiftly reversed ouster of Altman in 2023. He left shortly thereafter to launch his own venture, Safe Superintelligence.

In Hao’s telling, this furious quest to build machine intelligence came at the expense of the vulnerable at nearly every turn. The data used to train early versions of ChatGPT were annotated largely by workers in countries such as Venezuela and Kenya, who were paid low wages by American standards. Early facial-recognition systems sometimes struggled to identify darker-skinned individuals, prompting accusations of racist “algorithmic bias” from former Google researcher Gebru. (Hao dwells on Gebru’s activism at Google but does not mention the company’s overcorrected “woke AI” model, Gemini, whose image generator in early 2024 notoriously refused to produce images of white faces.) Meantime, the data centers powering today’s AI models consume vast quantities of electricity and water—resources that, Hao reports, are in some cases drawn from “under-resourced” communities in Latin America.

The point about water consumption has since come into question. Hao issued a correction after a reader pointed out that the documents she relied on misstated the water use of a data center in Chile by a factor of 1,000. And last year, Microsoft unveiled new data-center designs that consume zero water for cooling.

The electricity point is more substantial. Even the most enthusiastic proponents of AI would concede that America’s current energy grid will strain under the electricity demands of superintelligence. Anthropic recently released an energy report warning that running frontier AI models could soon require more than double New York City’s peak electricity demand. With sufficient political will, we could build our way out of this problem—whether on the ground or, as Musk recently suggested, in space, where power is far more abundant.

I suspect the divide between Hao and the AI industrialists runs deeper than concerns about worker pay or water use in rural Chile. Fundamentally, the extractivist posture is focused on minimizing—and, ideally, eliminating—the harms that result from new technologies. The technologist posture, by contrast, tends to focus on the benefits of new technology (or, in some cases, on the harms that would follow from not building it).

Extractivists typically have the rhetorical upper hand. They can point to concrete harms and interview sympathetic subjects with stories of victimization at the hands of faceless corporations. Technologists, on the other hand, can point to AI’s benefits in the aggregate (e.g., universal tutoring) or to transformational individual use cases (a diagnosis identified by an LLM that had eluded doctors)—but to many audiences, this pales beside a well-rendered story of exploitation. The heads of frontier labs also do no credit to their cause when they muse about an imminent, AI-driven disappearance of millions of white-collar jobs.

The technologist’s most powerful technique against the extractivists is the use of counterfactuals. If we had chosen not to develop AI systems that detect early-stage cancer better than non-AI-assisted radiologists, would that omission be a moral harm? How would that harm compare with the moral cost of hiring inexpensive workers to annotate vast data sets, some of which include graphic depictions of violence?

In fairness to Hao, she often makes a subtler claim: that AI’s development was path-dependent and that alternative, smaller-scale approaches to building the technology were crowded out by the dominant approach of major labs. This may be true on a narrow time scale, but many companies are now trying to commercialize alternatives to LLMs. Ultimately, we should expect the most performant approach to win.

Or perhaps we will curtail the technology entirely. Calls for moratoriums on data centers—or for local vetoes on AI infrastructure, punitive taxation of AI firms, and aggressive federal constraints on model development—are no longer fringe demands. They are becoming bipartisan talking points that, if adopted wholesale, could slow or even stop AI’s development.

The real question, then, is not whether AI has costs—it plainly does—but whether a politics organized primarily around preventing harm can coexist with a civilization that still seeks to build. In the debate over data-center construction, we are indeed deciding how much electricity to allocate or how much water to conserve. But we are also deciding whether the risks of progress outweigh the risks of stagnation, and whether we are still the sort of society willing to find out.
