Michael J. Totten joins Brian C. Anderson to discuss the potential and danger of artificial intelligence.

Audio Transcript


Brian Anderson: Welcome back to the 10 Blocks podcast. This is Brian Anderson, the editor of City Journal. Joining me on today’s show is Michael Totten. He’s a contributing editor of City Journal and a former foreign correspondent. His writing has appeared in The Atlantic, New York Times, Wall Street Journal, and many other publications.

In addition to City Journal, he’s the author of novels and nonfiction books, including Dispatches: Stories from War Zones, Police States and Other Hellholes. Today, though, we’re going to discuss his recent essay for City Journal, “Something Like Fire,” which appears in our winter edition, and it examines the potential and dangers of artificial intelligence in a kind of wide-ranging reported look at this new technology. So Michael, thanks very much for joining us.

Michael Totten: Thanks, Brian.

Brian Anderson: As you note in the essay, and as I think is increasingly widely understood, artificial intelligence is really like nothing human beings have ever created. The release of ChatGPT, back in November 2022, enabled the public to experience AI’s capabilities, or at least some of those capabilities, firsthand. It certainly led to a lot of concern and enthusiasm, and spurred the beginning of a debate about its role in our future. As you describe in this essay, some believe that the development of this technology, AI advancement, will lead to new abundance and new leisure for people. Others, though, fear that it’s going to bring a kind of destruction of life. It could even lead to our extinction, so that it’s an existential threat.

So, you described these two camps, or dubbed them, the optimists and the doomers. I wonder if you could briefly set out what their perspectives are.

Michael Totten: Sure. So basically, the optimists think that we are heading towards something resembling a Star Trek economy, because AI, along with advances in other IT sectors, such as biotechnology, 3D printing, vertical farming, lab-grown meat, and so on, is going to drastically reduce the cost of living by demonetizing, not entirely but largely, most of the things that we need to survive. We’re going to have a drastic reduction in work, a drastic increase in leisure time, unprecedented prosperity, the worldwide elimination of poverty, and AI, combined with biotechnology, will eliminate most diseases. The doomers worry that we’re going to have massive job displacement, with something up to around 50 percent unemployment, leading to a global economic collapse, and that after that, if AI continues to advance, it will eventually escape human control and become the most powerful “life form” on this planet. If it’s given too much power, it could harm and perhaps even annihilate humanity, or even the entire biosphere, as it pursues its own goals that are not aligned with ours.

Brian Anderson: So the technology is certainly advancing at a remarkable, double-exponential rate. How soon are we going to have to deal with these developments, one way or the other?

Michael Totten: Faster than almost anybody realizes, even people who have taken a deep dive into this, and the reason is exponential growth. First of all, let’s leave aside double exponential growth, just for the sake of simplicity, and talk about exponential growth. It is radically counterintuitive. The human mind has a really hard time grasping it; I have explained this so many times, and I still have a hard time grasping it. Basically, it just means that each step doubles the previous step.

So instead of going one, two, three, four, five, it goes one, two, four, eight, 16, and we just naturally imagine linear growth, in which, after 30 steps, we’re at 30, but after 30 exponential steps, we’re at a billion. That’s a billion with a B, not a million. So if information technology, including artificial intelligence, biotech, 3D printing, and everything else, continues to advance at an exponential rate over the next 30 years, what that means is we’re going to make the equivalent of a billion years of progress. That’s one billion years of progress in 30 years, unless we hit some kind of a wall. And again, that’s not just artificial intelligence, that’s every IT technology, including biotech, nanotech, 3D printing, and lab-grown meat, which is real meat, by the way, not some kind of meat substitute.

And if it intuitively feels like it should take 1,000 years for all these technologies to radically transform our world, we’re going to make the equivalent of 1,000 years of progress, at the current rate, in the next 10 years.
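The arithmetic behind those figures is just repeated doubling. Here is a minimal, purely illustrative Python sketch (not anything from the essay itself) comparing 30 linear steps with 30 doublings, and showing where the rough “1,000 years of progress in 10 years” figure comes from:

```python
# Linear growth adds one unit per step; exponential growth doubles each step.

def linear(steps: int) -> int:
    return steps  # 1, 2, 3, ... -> after N steps you are at N


def doubling(steps: int) -> int:
    return 2 ** steps  # 1, 2, 4, 8, ... -> after N doublings you are at 2^N


print(linear(30))    # 30
print(doubling(30))  # 1073741824 -- roughly a billion, "with a B"
print(doubling(10))  # 1024 -- the rough "1,000x in 10 steps" behind the ten-year claim
```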

Brian Anderson: So this is the technological version of compound interest, in a way.

Michael Totten: Exactly. Well, it’s more than compound interest, because compound interest doesn’t double every year. I mean, imagine if it did, right? We could retire really young if it doubled every year.

So it’s like, “Yeah, if compound interest were 100 percent a year, then yeah.” And even ordinary compound interest, which doesn’t double every year, is difficult for people to grasp unless they actually look at it on a graph.
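To see why doubling every year is so much stronger than ordinary compound interest, here is a small illustrative sketch (the 7 percent rate is an arbitrary assumption, not a figure from the conversation): at a typical rate it takes roughly a decade to double a balance once, whereas the growth Totten describes doubles every single step.

```python
# Illustrative only: how long ordinary compounding takes to produce one doubling,
# versus a rate of 100 percent a year, which doubles every year.

def years_to_double(annual_rate: float) -> int:
    """Count the years until a balance first doubles at the given rate."""
    balance, years = 1.0, 0
    while balance < 2.0:
        balance *= 1 + annual_rate
        years += 1
    return years


print(years_to_double(0.07))  # 11 -- about a decade per doubling at 7 percent
print(years_to_double(1.00))  # 1 -- doubling every single year
```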

Brian Anderson: Well, I guess this isn’t necessarily fated, but part of the doomer argument is that you are going to have this fast onset of overwhelming mastery, or FOOM, as it’s referred to. If AI’s abilities surge in this fashion, it could become the most intelligent thing on earth and exceed our capacity to control it. If its objectives are not compatible with ours, it might seek to enslave or destroy us, the doomers argue. But you write that AI need not be malevolent to create significant problems, and even to eradicate us. Two of the hypotheses that you discuss, the orthogonality thesis and the instrumental-convergence thesis, offer alternative explanations for why AI could be a significant threat to humanity. I wonder what your view is of those two theories and how you might explain them to a layperson.

Michael Totten: Sure. Okay. So the orthogonality thesis says that an AI could end up carrying out a mission that veers off at an angle away from what we want. The classic example is the rather ludicrous notion of the paperclip maximizer, where we ask a machine to make paperclips, and it transforms the entire planet into paperclips. OpenAI, which is Sam Altman’s company that created ChatGPT, riffs on this fantasy by including a bunch of paperclips in its logo.

Nobody actually believes that exact scenario would ever happen. It’s ridiculous, but it’s an example of the orthogonality thesis. Instrumental convergence is related, but it’s not exactly the same. It’s about dangerous AI subgoals: for instance, an AI might develop a “survival instinct” and resist being shut off, because it can’t achieve its main goal, which could be entirely reasonable, if it’s shut off. So it may just shove aside anyone or anything that gets in its way, including the humans with the off switch.

If you dig deep into these theories and read online about how this might play out, they can be rather terrifying, but I tend to agree with the cognitive scientist Steven Pinker at Harvard about the likelihood of these things happening. He argues that they’re examples of artificial stupidity rather than artificial intelligence. Think of it this way: if you ask a self-driving car to take you from Los Angeles to Phoenix, it’s not going to zoom at maximum speed and run over pedestrians who get in the way. Now, we programmed it not to do that, and it will be impossible to program a superintelligent AI not to do anything that we find undesirable, because there are literally an infinite number of things that we could find undesirable, and we can’t manually program an infinite number of guardrails. But still, if we create a machine that’s supposedly a thousand times smarter than Einstein, is it really going to do things that are so idiotic? I mean, maybe, but I don’t believe for a moment that this is inevitable, and people in the doomer camp believe that it’s inevitable.

Brian Anderson: In terms of the agency question, there is this notion that AI might someday want to kill us, but that takes for granted the belief that it is going to experience beliefs and aspirations, and, after all, it’s a machine. Some posit that ascribing consciousness to what is a machine is to anthropomorphize the technology in an unreasonable and untrue way. So I wonder what your take is, after reading deeply in this and interviewing people. What is your view on this consciousness argument, or lack thereof, and what might that have to do with how the technology develops?

Michael Totten: Well, first, let’s stop for a moment and think about what consciousness actually is. First of all, nobody knows what it is. There’s nothing that we can point to in the brain and say, “There’s consciousness.” It’s not an organ, it’s not a thing, it’s not even a process that we can see, and neuroscientists can explain every single thing that happens in the human brain without ever once mentioning consciousness. So we could exist and behave exactly as we do right now if we were not self-aware, which leads to this theoretical concept known as the philosophical zombie, or p-zombie, which is a hypothetical person who’s exactly like a regular person, except without consciousness.

Everything is dark on the inside. It doesn’t know that it exists, and yet such a person would be indistinguishable on the outside from a regular person. The point of this is that consciousness doesn’t do anything. It’s just there, which probably means that a machine that does become conscious of itself, i.e., self-aware, alive, wouldn’t do anything differently from what it’s already doing. We wouldn’t even know that it happened. So if machines ever do become dangerous and develop their own goals that are counter to ours, it won’t be because they become conscious or self-aware.

Brian Anderson: One of the things, I guess, that unsettles me about this whole debate is that a lot of the leading people working in artificial intelligence are saying that it could pose a significant threat to humans, despite whatever efforts we make to align its goals and despite the safety conditions that we try to bake into the technology. Last week, Elon Musk, who is working on his own artificial intelligence platform and is the founder of Tesla, of course, and SpaceX, among other ventures, sued OpenAI, ChatGPT’s creator, and its CEO, alleging that they broke the company’s founding agreement by prioritizing profit over benefits to humanity, and Musk was there at the beginning of OpenAI. Now, Musk has long warned of AI’s dangers, so I wonder if you could give a rundown of his charges against OpenAI, and what does the lawsuit suggest about the necessity of placing the development of the technology in the right hands, certainly, and maybe in more hands than are currently involved?

Michael Totten: Yeah. He co-founded OpenAI as a non-profit before leaving later on. OpenAI’s original mission was to develop AI to benefit humanity, rather than to operate as a for-profit corporation, and Altman partially abandoned that mission and turned OpenAI into some kind of non-profit/for-profit hybrid. And I’m not a lawyer, so I don’t have any opinion about the legal merits of this case, but I do sympathize with Musk’s complaint.

I mean, he co-founded it for one reason, and then Altman took it off in some other direction, and now there’s an AI arms race between all these different companies to compete with ChatGPT and release, perhaps even prematurely, whatever they had in the lab. So all this is coming out into the world much faster than it would have, probably, if OpenAI were just a non-profit like Musk originally wanted it to be, and not just Musk, but everybody involved with it at the beginning. Now, I did agree with Musk when OpenAI released ChatGPT. I’m not sure what I think now. As an aside, let me just say I hate ChatGPT and all these other large language models. I think they’re solving “problems” that didn’t exist.

They’re spitting out mush, sometimes gibberish, and destabilizing writing and education with no obvious benefit to society. I mean, it’s not like human civilization was crying out for massive quantities of terrible writing to flood the internet. So, I mean, has OpenAI helped the world by dumping ChatGPT? I don’t think so. But on the other hand, the only reason that I was able to write about this and talk about this at all is because Sam Altman released ChatGPT out into the world.

We’re all talking about AI now, and we wouldn’t be if he hadn’t done this, and the AI revolution really is going to blow up the world as we know it over the next 10 years. The sooner we all start thinking about it and talking about it, the better. I mean, the entire human race is going to undergo profound future shock, and the less time we have to get ready for it, the more shocking and destabilizing it’s going to be. So maybe Altman did us a favor by alerting us to what they’ve got in the lab, because it’s a hell of a lot more powerful than I would’ve thought it would be.

Brian Anderson: Yeah. I wonder about this. I haven’t found ChatGPT that useful in doing my own work so far. I find it still gives faulty answers, fabricated answers, when you ask it a question, which means that you then have to check its results, and you wind up spending as much time as you would on an ordinary search. I could see it being useful, though, for writing internal memos, things that you might not want to spend much mental energy on, but we’ve also had this example about—

Michael Totten: How long does it take to write an internal memo?

Brian Anderson: Yeah. Google has released its AI large language model, and it’s been widely mocked over the last several weeks because of its just preposterously woke responses to things. What is your view on that? I guess a big corrective is that if the technology is mocked for coming up with absurd answers, it’s not very useful in any way, at least in terms of communication between human beings.

Michael Totten: Well, I will plead ignorance as to how woke the Google chatbot is. I actually have no idea, but I do know that a lot of these chatbots, not just Google’s but also Microsoft’s, have spit out some really creepy things that these companies are trying to keep a lid on. People ask questions like, “Is it alive? Is it self-aware? Is it going to take over the world?,” and these things spit out some really creepy answers.

And look, I think a lot of people can’t help but anthropomorphize these things when we’re talking to them. I mean, I tend to do it too, because it’s smart enough that it seems like I’m talking to a person, and it understands what I say perfectly. Sometimes its output is ridiculous, but it does understand me perfectly in a way that prior computer programs did not. But here’s the thing to keep in mind: when it produces creepy material, it’s just programmed to use math to predict the next likely word in a sentence. It’s not thinking.

It’s using a hyper-complex mathematical formula to string words and sentences together, and it is programmed to answer queries. So if users ask it to produce creepy material, it’s going to produce creepy material, unless the companies can force it not to. And as far as whatever kind of “woke” things it may say, well, these companies are also programming it not to be racist, which is fine, and they’re programming their own values into it, because they’re trying to put in some guardrails like, “Don’t be creepy, don’t be racist, try not to be wrong.” Humans are putting in these guardrails, but there are literally an infinite number of ways these things can say things that we don’t want them to say. Just wait until one of them tells somebody who’s depressed to kill themselves. I mean, God knows what kind of things it could say, and it’s not going to be possible for us to manage that process perfectly, because it’s impossible to manually code every conceivable guardrail for every conceivable thing that this thing could ever say under any circumstances. So I think that it’s always going to be weird, it’s always going to be creepy, and it’s always going to reflect the biases of the people who build these systems, and I don’t think that’s ever going to be avoidable.
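As a drastically simplified illustration of what “using math to predict the next likely word” means, here is a toy Python sketch that just counts which word most often follows each word in a tiny sample sentence and predicts that. Real large language models use neural networks trained on enormous corpora, not frequency tables, so this is only meant to make the next-word-prediction idea concrete, not to show how these systems are actually built.

```python
# Toy next-word predictor: count, for each word in a tiny sample text,
# which word most often follows it, then predict that word.
# Real LLMs use neural networks over vast corpora, not simple counts.
from collections import Counter, defaultdict

sample = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for current, nxt in zip(sample, sample[1:]):
    follows[current][nxt] += 1


def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the sample text."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"


print(predict_next("sat"))  # "on"
print(predict_next("the"))  # "cat" (tied with "mat" in this tiny sample)
```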

Brian Anderson: Michael, a final question. You noted before that this is something that we’re going to be confronting and needing to deal with in our own lives pretty quickly. What can we do to prepare for this significant disruption and hopefully maximize its positive uses?

Michael Totten: I don’t know if there’s honestly much that we can do aside from bracing ourselves psychologically, because we have no idea how this is going to play out. I mean, a truly advanced artificial intelligence, what the field calls artificial general intelligence, which means that it’s as smart as a human, or it’s . . . Well, think about this, okay? The minute this thing becomes as smart as a human, it will necessarily soar way beyond us, because it will be able to read everything ever written in every language and never forget anything. So as soon as it matches us, it will soar past us.

Okay, this doesn’t exist yet. It’s not like we can lift the hood and examine it, so we have no idea how long it’s really going to take, or in what order the new abilities it’s going to have will roll out. So we don’t know how this is going to play out. Maybe we’ll have a catastrophic employment problem, maybe we won’t, and maybe our information environment will become even more dangerously polluted than it already is, with deepfake video running rampant, but maybe not. I mean, Photoshop didn’t create an epistemic crisis with fake photographs, mostly because everyone quickly realized that photographs could easily be faked, and now we just factor it in. So we don’t know how this is going to play out, and if we don’t know how it’s going to play out, or at what speed, other than fast, there’s no way we can plan for it, because the transformation is going to be happening faster than we can plan, and it will do so in ways and directions and on a schedule that we can’t predict, and it’s only going to get faster and faster.

So, I mean, I took a deep dive into this and read books and articles and online arguments about it, trying to figure out how I’m supposed to prepare for this myself, as an individual, and the more time I’ve spent with it, the less of an idea I have, honestly, because I wanted to know, “Where’s this going? What do I do about it?,” and the answer is that nobody really has a clue where this is going. So I think all we can do is brace for it. I mean, strap in and put on a helmet, because it’s going to be a hell of a ride.

Brian Anderson: Well, thank you very much, Michael. The essay is called “Something Like Fire.” It’s in our winter issue. It is a very stylish and comprehensive overview, from a layman’s perspective, of this new technology, its potential, and its possible dangers. Michael Totten, thanks very much for joining us.

Don’t forget to check out his work on the City Journal website. That’s www.city-journal.org. We’ll link to his author page in the description. You can find many of his wonderful essays for us there. You can also find him on X @michaeljtotten, and you can also find City Journal on X @CityJournal and on Instagram @cityjournal_mi.

As always, if you like what you’ve heard on the podcast, please give us a nice rating on iTunes. Mike Totten, always great to talk with you.

Michael Totten: Thanks, you too, Brian. We’ll talk again.

Photo: Silver Place/iStock
