On March 13, the European Union passed the world’s first comprehensive law for regulating artificial intelligence. Many are touting the EU’s Artificial Intelligence Act as a landmark win for ensuring “safe” and “human-centric” AI, and efforts are now afoot to promote the framework as a model for other nations. 

The new law establishes a “risk-based approach” for regulating the deployment of AI systems within four tiers of risk: minimal, limited, high, and unacceptable. This framework seems sensible enough at first glance, until one takes a closer look at what falls into these categories. In fact, while the law is expansive in scope and aggressive in application, it does surprisingly little to address the most catastrophic forms of AI risk and focuses instead on the so-called equity issues of bias and discrimination.  

“High risk” use cases, for instance, include the use of AI in areas like credit scoring, education, résumé filtering, and human resources, lumping together mundane applications that could amplify “discriminatory outcomes” with sectors, such as medical devices and critical infrastructure, where a flawed AI system could be literally fatal. 

Under the EU AI Act, these supposedly “high-risk” systems will now be subject to strict obligations before they are allowed on the market, ranging from the disclosure of datasets and detailed usage documentation to the adoption of formal “quality management” systems. While most providers will have the ability to self-certify, the legal risk from noncompliance ensures these burdens are real. A subset of “high-risk” use cases will even require the sort of premarket approval that governments typically reserve for untested new drugs. Such intense requirements could be justified for, say, generative AI models capable of designing biological weapons—not job-recruitment software. 

At the heart of the law is the EU’s confusion about AI as a technology category. Algorithms and machine learning are nothing new and are in most cases indistinguishable from basic software or statistics. Whether an employer discriminates with the aid of machine learning or out of personal prejudice makes no difference. For use cases that risk perpetuating bias, existing anti-discrimination laws should suffice. 

Some of this confusion can be attributed to the EU AI Act’s origins, which date to well before the launch of ChatGPT and the subsequent acceleration of AI capabilities. While the law has gone through many revisions since its initial drafting four years ago, its basic approach reflects thinking from those early days. What makes the current wave of AI different is scale. When regulation ignores that critical dimension, its definition of AI is all but guaranteed to be ridiculously overbroad.  

EU members heard these criticisms in the waning months of debate and, to their credit, added a section dealing with “general purpose AI.” By this, they mean powerful AI models like ChatGPT that exhibit general language, vision, and reasoning capabilities and may soon approach the status of artificial general intelligence (AGI). To access the European market, developers of these AI generalists (also called “foundation models”) will need to comply with the EU’s copyright directive, submit a detailed summary of the content used in training the model, and draw up technical documentation of its capabilities. The largest such models—around the size of GPT-4 and up—are considered “systemic” and are required to undertake additional precautions, such as adversarial testing and the adoption of cybersecurity protections. 

Applying special scrutiny to the developers of AGI-like systems is the most reasonable and well-targeted provision of the EU AI Act. The provision echoes the White House’s Executive Order on AI and entails an approach that the U.S. would be wise to codify into law, as well. Unfortunately, the makers of even the most well-aligned general-purpose AIs can still expect to face enormous challenges complying with the rest of the EU AI Act’s dubious “high risk” provisions. 

While many narrow AI applications exist for scoring résumés and so forth, generalist systems like ChatGPT can do that and much more. How general-purpose models behave is highly sensitive to how users prompt them to behave, implying that there is no single, canonical way to measure a model’s bias in any given “high risk” use case—and that’s assuming “bias” in this context is even well defined.

The EU AI Act is deaf to these issues of technical feasibility. Instead, it simply directs the makers of general-purpose AIs to “cooperate with such high-risk AI system providers to enable the latter’s compliance.” How this is supposed to work in practice is anyone’s guess, at least until the European Commission issues its policy guidance. Nevertheless, with fines of up to €35 million or 7 percent of a company’s global annual revenue for failure to comply, no one should be surprised when U.S. developers start to delay the release of their latest models in Europe, if they release them there at all. 

As a result, the passage of the EU AI Act may even undermine AI safety by hindering the rapid and iterative deployment of the defensive forms of AI needed for institutional adaptation. Worse still, in the EU’s attempts to export its framework abroad (including through its digital envoy in California), it risks conflating its blunderbuss approach to AI regulation with AI safety itself, potentially polarizing the politics of AI in the U.S., where it’s most important to get regulation right. 

A smarter law would narrowly regulate for truly catastrophic risks and impose oversight on AGI labs, postponing more comprehensive forms of regulation for a time when the fog has cleared. Nevertheless, the problems with the EU AI Act run much deeper than any single law. They are symptomatic of the EU’s habit of regulating as if we have reached the End of History—a fabled time when the basic structures of society have reached their final equilibrium, leaving technocrats to humdrum tasks like squeezing efficiencies out of harmonized standards for energy-saving tea kettles and the like. If the last four years of AI progress tell us anything, it’s that history is far from over. 



City Journal is a publication of the Manhattan Institute for Policy Research (MI), a leading free-market think tank.
