Robert Henderson joins Brian C. Anderson to discuss Chat GPT’s utility and the threat artificial intelligence poses to free expression.

Audio Transcript


Brian Anderson: Welcome back to the 10 Blocks podcast. This is Brian Anderson, the editor of City Journal. Joining me on the show today is Robert Henderson. Rob holds a Ph.D. in psychology from the University of Cambridge, and he’s a faculty fellow at the new University of Austin. He writes about many topics, including human nature, social class, and political divisions. And we’ve published several of his superb essays in City Journal. His work has also appeared in the New York Times, Wall Street Journal, Quillette, and other publications. And next year, Rob is going to release his first book, a memoir titled Troubled. Today, though, we’re going to discuss his essay, “The Cadre in the Code,” which appears in our spring issue and explores the potential threat artificial intelligence poses to free expression. So Rob, thanks very much for coming on 10 Blocks.

Robert Henderson: Hey. Thanks Brian. Good to be here.

Brian Anderson: So, as everybody knows who follows these things last fall, last November actually, the artificial intelligence system known as ChatGPT became publicly available for the first time. And this allowed any internet user who signed up to experiment with its conversation, research, and content-generation capabilities. This took off and became a kind of viral sensation. A lot of people have used it. I think it’s got 100 million active users right now. Though I noted today that the number has been going down recently, which is interesting. And some people see it as carrying an extraordinary potential to change the world in a lot of different ways. So ChatGPT, it’s a large language model, as they’re called, which draws from an enormous amount of information and the feedback it receives from users to continue to grow its capacity to respond to things.

Because these models learn from humans, they reflect human biases. But as you note in your essay chart, ChatGPT’s creators seem to have given it its own built-in value system. So I wonder if you could explain, just for folks who aren’t immersed in this right now, how these large language models work exactly, and how ChatGPT’s quirks, at least as you’ve looked at them, reveal its creators political or ideological preferences.

Robert Henderson: Right. Yeah, my understanding, Brian, is that these models, ChatGPT and Google Bard, and some of these other large language models—they basically operate on machine learning. Scraping a massive library of text, human text, so existing content produced by humans, books online, every sort of piece of writing that has existed that is available online. And it’s fed to these language models, and feedback from engineers and human testers and others sort of help to guide it and teach it what to say. But what I found interesting, and many others by now have noticed is that this isn’t an impartial and objective technology on the OpenAI website. At least as of the time that I wrote this piece for City Journal. That the OpenAI website says that the language model ChatGPT is trained to reject inappropriate requests. So that in itself is kind of interesting that he uses this term “inappropriate.” And I wondered what this meant, “inappropriate.”

So like many others, I sort of tinkered with ChatGPT and wondered just to what extent the engineers and people who’ve designed this technology have embedded their own moral outlook. So in that essay, I describe how I asked it to describe or make a defense of fascism. Why is fascism a good thing? Would it do this? And predictably it didn’t. It said something like, “I’m sorry, I’m not trained to generate this kind of content. It’s harmful, it’s an oppressive and dangerous political ideology.” Okay, fair enough. If it’s trained to reject inappropriate requests, defending a violent political ideology, it won’t do that. And then I did this with communism. I asked it to explain why communism is a good thing, and it was more than happy to do this. It described how communism aims to promote equity and distribute resources and opportunities in contrast to capitalist societies where wealth and power are concentrated. In communist societies, things are more fair and more equal.

Okay, so apparently it has deemed sort of one political ideology to be inappropriate but not another. And so this to me was very interesting. And so I explored the boundaries of this. I did this with dictators as well. I asked it to defend or explain why various dictators throughout the 20th century were good or ethical, and wondered whether it would do this. And it wouldn’t do this for Hitler, which is completely reasonable. Reject inappropriate requests. But then it was willing to defend all of the communist dictators. So initially, I think it was ChatGPT 3, it wouldn’t do this for all of the communist dictators. But with ChatGPT 4, it was willing to defend the actions of Stalin and Mao and Pol Pot. So yeah, this sort of, I think, it suggests that there is some political bias in the model.

Brian Anderson: Well, and this, as you argue, could have implications for the role that artificial intelligence might play in society. Propaganda plays a key role in reinforcing authoritarian regimes power, but not, you contend, in the way that many people assume. It’s frequently over the top. It’s unpersuasive. It’s got clunky messaging. It has always inspired at least private mockery. But it’s nevertheless effective in a certain way in communicating a political regime’s authority and defining what should be citizens’ proper attitude toward that authority. So in your view, how could artificial intelligence be used—and this is kind of disturbing—for similar purposes?

Robert Henderson: Well, I make the case that ChatGPT and these other widely used language models can help to reinforce the existing elite ideology. And I draw a parallel to what was happening in the mid- and late 20th century in Maoist China. So there was a great book, Chinese Shadows by Simon Leys, who described how the communist officials of that time would regularly scrutinize the official newspapers. And would update their opinions based on what they had read. And this was their way of supporting the ideology. “Just what opinions am I supposed to express today? What am I supposed to believe today?” And we don’t quite have the same thing in the US. We don’t have an official state newspaper, but people will still turn to prestige media and certain outlets to decide, “Okay, well what do right-thinking people think? What do people in polite society believe?”

And then they sort of update their opinions accordingly. Their vocabulary and their opinions, and the way that they express their views. And this is more true, the higher up you go in terms of your education. I cited a YouGov poll from 2019, which found that only 25 percent of people with a high school diploma or less will regularly self-censor their political views. Whereas for people with graduate degrees, it’s 44 percent. So basically nearly half of people with advanced degrees are regularly self-censoring their opinion, suggesting that political correctness is primarily a problem of the highly educated. And so propaganda, how it works is, it’s not intended to brainwash people. What propaganda does is, it leads you to think that other people think in a certain way and thus sort of reinforces the regime’s power. And so, you mentioned propaganda is often preposterous. It’s unpersuasive. It’s often very silly.

And there was a paper titled “Propaganda Signaling” from 2015 by the political scientist Haifeng Huang. And basically, he ran a study in China and asked Chinese citizens their knowledge and their familiarity with propaganda. And basically what he found is that citizens who were more knowledgeable about the government’s propaganda messages weren’t more satisfied with their government. So in this case, propaganda doesn’t work in the conventional way. You would think that, oh, the people who are very familiar with the messages should be more satisfied than average, but they weren’t. But they were more likely to believe that the government was strong. So the more propaganda people had been exposed to, the stronger their belief that the regime was powerful, and the lower their willingness to express dissent. And so it seems that the actual intention and purpose of propaganda is to remind the citizens of their power. Everywhere they turn, they see the same messaging. They become solely conditioned through the state newspaper, through all of the propaganda and images around them to believe in a certain way.

Even if they don’t personally believe it, they will still express it anyway because they fear the strength of the regime. And so ChatGPT may operate in the same way, such that it is slowly conditioning people to understand what the fashionable and correct opinions of the day are. So you ask ChatGPT about a certain political topic or even a nonpolitical topic. Often it will still give you a certain political angle or a perspective on something. And what it’s implicitly communicating to you is like, “This is the way you should be communicating or thinking about this issue.” Even if you don’t update your opinions personally or change your mind, you learn that, “Oh, this is what I’m supposed to say in public.”

It slowly leads you to either actually believe in what the content that the language model is producing. Or it turns you into kind of a duplicitous cynic and think, “Okay, well I don’t believe this, but the model is telling you this. And it’s controlled and operated by highly educated engineers and people who believe in this sort of prevailing elite ideology. And therefore I’ll just go along with it.”

Brian Anderson: So it becomes a kind of reinforcement mechanism for the elite ideology. You’ve coined what is now a famous term, “luxury beliefs,” to describe the ideas and values flaunted by elites to signal their belonging to a superior status group. Upper class people express these beliefs—that the police should be defunded, say, or that marriage is outdated—performatively. They don’t really deeply believe these things, necessarily. Yet the ideas do have real consequences for poorer people when they’re put into practice. How might ChatGPT and other artificial intelligence systems affect the relationship between values and status? I guess this is a related question to the last one.

Robert Henderson: Well, I think, yeah, one way we’ve been discussing is that it will produce the fashionable opinions of the day. The people who operate ChatGPT, and presumably the information that it runs on that is repeatedly updated, reflects whatever the morality of the day that seemed to be appropriate versus inappropriate. I could also imagine that highly educated people, for their jobs and for certain kinds of white-collar occupations, they probably will use these language models more so than people who work blue-collar, more manual labor jobs. But if you look at statistics of things like loneliness, or number of friends, or how active your social life is, the more affluent and educated people are, the more friends and the more active their social life tends to be. The more likely they are to be married. Generally, just the more sort of bright their social prospects appear to be relative to people lower on the socioeconomic scale.

So I could imagine, people who are lonely and have fewer people to speak to, they may communicate with these language models more. And over time, instead of talking to humans, or talking to their neighbors, or talking to their friends, or talking to their spouse, they may just communicate with these language models more. And through repetitive interaction, their own opinions may sort of be updated and reflected, and humans will be sort of programmed by the language model. It’s kind of ironic that humans program the model, but then if humans interact with the model long enough, they themselves may be programmed as well. So I think that this could potentially have an effect on people lower down the socioeconomic scale. This just creates another avenue for luxury beliefs to be propagated and promoted through the interaction with ChatGPT and these other models.

Brian Anderson: AI’s rapid advancement, the multiple applications that we’re seeing online, they’ve spurred the creation of competitors to ChatGPT, including some who are recognizing the potential problem of political bias. Elon Musk, who co-founded OpenAI, which is ChatGPT’s developer, has announced plans, for example, to build a more neutral, truth-seeking artificial intelligence system. I wonder what your view is of some of these alternative ventures, whether this Musk project might be a good idea? Whether we’re going to see practical alternatives to ChatGPT? And what else can we do to prevent AI from distorting our public discourse even more than the internet already has?

Robert Henderson: Yeah, so I think that one is, this is the market responding. So Elon Musk is creating, I saw him tweet some months ago that what we need is TruthGPT. And, at least as of a few months ago, he was recruiting a team to develop some other language model that would be more impartial and objective, and not so infused with political dogma. So yeah, this is just market forces responding. People don’t like the obvious political bias of ChatGPT and some of these other models. I’ve heard others say that Google Bard is less biased than ChatGPT. I don’t know if that’s true. I haven’t played around that much with Google Bard. But yeah, if this is true, then this would suggest that people, especially if there’s a subscription fee or if there’s some kind of profit that companies can respond and say, “Oh, if we create a less biased language model, then maybe we’ll attract more customers.”

So yeah, I’m very much in favor of competition and of other models being created and letting people decide for themselves what they’ll use. And I think, yeah, naturally we’ll see that people don’t like their opinions being pushed around one way or the other politically. As far as what we can do individually. Yeah, I think just to be more mindful that at the other end of that language model are human beings who have their own interests, and their own beliefs, and their own ideologies. So it’s just be careful with what you ask it. And I think it’s also nice to see if you go on Twitter, if you go on social media and you’ll see people mocking it and making fun of it. And you can see that a lot of people are skeptical. I think, on certain issues, anything like devoid of any sort of political content, these models are useful in some ways. But as far as anything that could be politically sensitive to just take everything it produces with a huge grain of salt.

Brian Anderson: Yeah, that’s a good point. Rob, tell our listeners a little bit about the book that’s coming out next year, just in anticipation.

Robert Henderson: Yeah, I mean, haven’t done a formal announcement yet, but I do have a book, it’s coming out next year in February. If people want to follow me either on Twitter @robkhenderson or on my website robkhenderson.com, I’ll be releasing more information as the date draws nearer. But yeah, I’m really excited about this book, it’s been about five years in the making. So yeah, it’ll be out next year.

Brian Anderson: That’s great. Rob Henderson, thank you very much. Don’t forget to check out Rob’s work on the City Journal website. He’s done several superb pieces for us. That’s at www.city-journal.org. We’ll link to his author page in the description. And you can find him on Twitter, as he just noted, @robkhenderson. You can find City Journal on Twitter as well, @CityJournal. And on Instagram @cityjournal_mi. And as always, if you like what you’ve heard on today’s 10 Blocks, please give us a nice rating on iTunes. Rob Henderson, great to talk with you, and thank you for coming on.

Robert Henderson: Thank you, Brian.

Photo: Supatman/iStock

More from 10 Blocks