By now, you’ve no doubt heard that Google released an image generator, as part of its Gemini AI chatbot, that often refused to depict white people. Users would ask for pictures of “Vikings” and get pictures of black and Asian berserkers. And so on. The permutations were endless. (My favorite was a laudably diverse depiction of the English Civil War.) When Google shut the image-maker off, public attention shifted to how Gemini’s text responses, too, are remarkably politically correct.

There are some weighty ironies here. The AI “safety” experts had raised alarm about “implicit bias” in AI, only for Google to release an almost parodically racist chatbot. Analysts had warned that we must beat China to AI supremacy, and now we see an American AI subject to Chinese Communist Party–style thought control.

Google’s immediate mission is to regain trust. (If its chatbot is this slanted, some users will ask, what about its search results?) Presumably these are tense days at the company.

You might suppose that conservatives should be upset as well. But the Gemini fiasco could bring some welcome news.

First, Google will likely correct course to some degree. This is good. The company employs many of the world’s best AI researchers, who are working on projects that could benefit everyone, such as AI geared toward developing groundbreaking new pharmaceuticals. We should be glad that Europeans continue to rely on thriving American tech companies, not the other way around. (Hostility to innovation is part of what makes Europeans so poor, relative to Americans.) If the Gemini debacle shakes Google into curbing some of its internal woke excesses, all the better.

Second, Google’s having rushed out a half-baked AI product illustrates Joseph Schumpeter’s timeless wisdom about “creative destruction.” As users slowly embrace AI-generated answers to their queries instead of relying on traditional searches, Google will either reinvent itself or fall behind more nimble competitors. Either way, society benefits. Google, in other words, is scrambling precisely because it is under threat.

Third, Gemini’s botched rollout makes it slightly less probable that the government will strangle AI in its cradle. The market can punish a company that force-feeds users images of female NFL players, but the government is liable to lock that approach in as the default.

While the “anti-white chatbot” affair will pass, fights over AI outputs will continue. An AI’s responses can be improved, but they can’t be “fixed.” When the topic is sensitive or value-laden, an AI’s answers will always disappoint or offend someone. Consequently, no company will succeed at building the one AI chatbot. These products will continue to proliferate, with distinct ones catering to specific needs, tastes, and worldviews. This raises the possibility of balkanization, the demise of shared reality, and rising civil strife—a danger we should take seriously.

Here again, though, there is a positive angle worth considering. AI could, somewhat paradoxically, serve both tradition and progress. It could enable parents to teach children at home, in keeping with their social or religious values. (Picture an AI tutor devoted to math, chemistry, and Augustine, Aquinas, and Dante.) This trend could allow distinct communities to prosper—a thousand semi-isolated (but AI-empowered) ideological villages blooming. Over time, heterodox thinking could grow. Which could lead to something truly interesting: an explosion of ideas that are genuinely, startlingly new.
