TechFreedom’s Internet policy counsel and director of appellate litigation Corbin K. Barthold joins Theodore Kupfer to discuss digital authoritarianism in China, the possibility of decentralized social control in the West, and the new era of Twitter.

Audio Transcript


Teddy Kupfer: Welcome back to 10 Blocks. This is Teddy Kupfer, an associate editor of City Journal. And I'm joined on the show today by Corbin K. Barthold. Corbin is the internet policy counsel and director of appellate litigation for TechFreedom. He's written a number of pieces for CJ about the state of the tech industry and about civil liberties in a digital world. We're going to talk about those topics and his writing on them today. So thank you very much for joining, Corbin.

Corbin Barthold: Great to be here, Teddy. Thanks for having me.

Teddy Kupfer: Let's start with your article in our autumn print issue, which we ran online recently, on the Chinese social-credit system and whether something like it can happen in the United States. In China, there is this apparatus constructed by the state, the notion being to reward good citizenship and to punish bad citizenship. People's behavior is tracked. The system is still, I would say, in a developmental phase, but those who follow the rules receive rewards. You get a better ticket on a train, your dating app profile receives some benefits, you can get cheaper hotel rooms. But if you do the wrong things, if you behave badly, you will suffer. Can something like this happen in the United States? That's the question you ask. What is the answer?

Corbin Barthold: The answer is a qualified no, or at least a "I'm not too worried yet." Now as a matter of just preliminary brush clearing, it's probably a good idea to separate out, in terms of the situation in China, the apparatus for surveillance and social control in general versus an actual social-credit system. And the surveillance and control is quite far along, and maybe we can circle back to that. The social-credit score aspect is very haphazard at this point. It's still in an experimental stage where local governments are encouraged to tinker with it. Some areas have it pretty extensively, some areas barely at all. Will that become a national system where you sign in, or there is just a score that exists that is your national score? I don't know. That could actually be quite far off. With that said, should we be worried about such things here? And we should be worried.

I mean, what got me thinking about this article were actually three discrete cases. I really started thinking about this back around February. I think most of your listeners are familiar with the Canadian trucker situation, where the government of Canada temporarily de-banked citizens partaking in the trucker protest. There is a law, or a bill, I should say, floating around called the EARN IT Act that is a big threat to end end-to-end encryption here in the United States. And then, although it's not actually in the article, Texas Attorney General Ken Paxton sending civil investigative demands to Twitter, trying to make the company tell the government all about its content-moderation policies.

And what those got me thinking about, I don't really think of it as necessarily a left-right distinction. I think of it as a centralize versus decentralize arms race that is always occurring with technology. And it's also kind of a dialectic where technology is coming and we need to debate how to use it. We can't just end technological development. Are the centralizers gaining enough control that there could be a tipping point where we see something along the lines of . . . Well, to take the extreme example in Xinjiang in China where the way that you walk can track your movements, the government could check your phone at any moment to see if there's any evidence of your religious affiliation. And you're ultimately at risk of being silenced at best and sent to a reeducation camp at worst.

And when you put it in those terms, if anything, I think in America we're seeing the opposite trend. Martin Gurri, clearly a friend of City Journal, his fantastic book, The Revolt of the Public, talks about the dynamic where in America, internet and technology on balance is actually a force for decentralization. It's decentralizing our narratives. It's actually sending us all into micro realities. And if you look at the revolt in our country over masking, that's a good microcosm of the fact that we, at the moment at least, remain a very independent-minded people. We are very decentralized, we have very strong ideas, and it seems implausible at the moment that centralizing authorities could get enough power to stamp that out with something like a social-credit system.

Teddy Kupfer: An argument you hear made on the political right a lot is that yes, the internet has been a force for disunion, fragmentation, rebellion, and that ultimately, its decentralizing tendencies should help stave off this kind of top-down totalitarianism. But what about the tendency of certain decentralized private institutions to act as if they were being coordinated from above? There's this notion of a "distributed conspiracy" that's emerged in dissident-right circles. It's often overstated, but the idea basically means that even without a central administrator telling them what to do, power-wielding institutions in American life can team up, whether it's to declare support for this or that cause or to unperson somebody.

And so you can think of examples like PayPal removing one's ability to transact because they've engaged in hateful conduct, somebody getting banned from every major social media site almost at once. This tends to happen to high-profile cultural figures, usually figures who are courting controversy. It may not be regarded as a threat to your everyday person, but I wonder what you make of this sort of rebuttal. It's not necessarily rebuttal, but I wonder what you make of this idea that yes, state, top-down, social credit or social control may not be much of a threat here, but that we have something to fear from private companies.

Corbin Barthold: That is a real concern. It is also one that those who raise it, I see, often rather quickly tip into overstating, as you said. And I'll even add to the mix there and say, I've thought to myself before about the danger of quasi-government, quasi-private institutions like, say, the State Bar of California telling me that I have to swear some oath to some kind of social-justice tenet in order to keep my license, and then I would have to go Thomas More and refuse and do something else with my life than the part of my job that's legal practice. Yes, fortunately, I could probably go take the Idaho bar, and that leads me back to the fact that it's overstated. PayPal, as I mentioned in the article, we're seeing other private organizations come up and fill the space, as it were. It's nice to have a market where people can come in and provide alternative options. And that is of course the key difference.

There's the brute fact that I'm not aware of a private company being able to stop me on the sidewalk and search my smartphone the way that the Russian or the Chinese government can, but then there's also that lesser market-option factor. The concern, if I may circle to . . . there's been these jawboning cases where people complain that the government is requesting that content be taken down on, say, Twitter. And the lawsuits say, "Well, that's a First Amendment violation." And I'm not sure that's quite the right way to understand it. For it to be a First Amendment violation, there needs to be a government demand that's a threat, where the platform was thinking about doing one thing and they feel so threatened that they don't have an option but to do another thing. And that's not the problem.

The problem is that the people in the private organization and the people in the government have the same set of priors and the same set of cultural understandings, and they're on the same page to begin with. This is the problem you're talking about, where they all have a strong sense that safety is this concept that needs to be imposed on social media content moderation. That is not really, in my opinion, ultimately a political problem with political solutions. The example I would give is from when I was a kid: Prop 209, which banned racial preferences in government institutions in California. So my law school, UC Berkeley Law School, could not take race into account in admissions. And you know what happened? One year, their minority admissions dipped. And then after that one year, it went right back to the level it was at before. Because ultimately, it is very hard to impose political solutions on cultural forces.

So to the extent that that's a concern, and I've already given some reasons why it's a concern, though not nearly to the level of a state-power concern, it is a long-drawn cultural battle that needs to be waged probably across decades, where you are fighting to get new conservative elites into those elite spaces, having a voice as peers within those institutions. And I suppose the model here might be the Federalist Society, which played the long game. It's a long, drawn out thing, and that's where I see answers coming from. I certainly wouldn't see . . . I'm not confident that you can pass this law or file this lawsuit and fix that concern.

Teddy Kupfer: Understood. You alluded to this distinction between China's system of social control and the social-credit system, so let's talk about that bigger, broader, more sinister surveillance apparatus. A few weeks ago there was an apartment fire in the Xinjiang region that killed 10 people and injured more. There was this notion that but for Covid restrictions, which are still very severe or were still very severe in China, the people could have been rescued. And so protests broke out. At first they were local, limited to the area where the fire occurred, and then they spread around the country, as these things often do. Generally, the main through line was that people were protesting the extremely draconian zero-Covid restrictions that Xi Jinping has implemented since the beginning of the pandemic. Requirements for constant testing, quarantining if you get a case or if you're in contact with somebody who tested positive, check-ins with QR codes on your smartphone if you want to go anywhere, that sort of thing.

Since the protests, the Chinese authorities have loosened many of these rules. Though it's unlikely to be the end of the unrest, because it's likely that cases will soon begin to rise and deaths will increase. China has had problems vaccinating the elderly. This is not the end of the story. But I want to ask, since you're watching this story very closely: talk a bit about these events and how they add to our understanding of technology and freedom in China. Both the utility of technology to facilitate these cries for liberty and change people's lives for the better, but also how the authorities use technology, from facial recognition to mining somebody's smartphone data and their social-media profiles, to circumscribe freedom.

Corbin Barthold: When I get overly worried maybe about the spread of . . . We burn through terms so quickly, if you want to call it wokeness or cultural Marxism or whatever that would be. Illiberalism on the left, let's call it that. When I get overly worried about that, it's always salubrious to take a look over at China and really understand what oppression looks like. If you are in China, it is almost a certainty that you have an app on your phone that tracks your movements, and the government knows where you are at any given time. To fortify that system, you have to check in often in various places, where that data is collected and you are now known to have been in that place at that time. To get anywhere you need, as you mentioned, to use your QR code app. You can't take a cab ride unless you have proof of a recent negative Covid test. You can't enter most public places without the same.

As for social media, it was just announced that the Chinese government will now be monitoring the things that you like. We saw during the worst of the 2020 cultural panic that people got in trouble for liking tweets. But the government doing that is really taking it to a whole new level. Companies now have to monitor the comments under their social media. The control, if used to the maximum degree by a centralized force like the Chinese Communist Party, is staggering, and it can be used ultimately, again, to highlight the situation in Xinjiang, to send you to a reeducation camp for a year, where you'll be away from your family and you will suffer, well, a very bad situation.

It's important though also to remember that this is a spectrum. That technology is a tool; it can be used for good, it can be used for bad. The Chinese are not sitting there, most of them, in that famous "Are we the baddies?" moment. The logic on the other end is that there was this tradeoff: we are going to have a lot of control, we the CCP, but we are going to impose order and harmony in our society. The positive-use case here would be the community grid system that's being used in some of the eastern cities, where all of this big data, all of this surveillance and AI, is being used to relieve traffic congestion, or to address problems with black-market sellers setting up shop on street corners, or to get panhandlers to not harass people as they walk down the street. I live in the Bay Area, and there's certain things in San Francisco where I have to admit it would be nice if the city would take seriously implementing some of these little smart-city pieces of technology.

It's a diverse situation in the sense that that technology in China is being used for a wide variety of purposes, and so it's hard to pin down and just describe it as uniformly good or evil. I think we need to focus on the evil aspects and certainly evil in the sense of how they're incompatible with America and the American way of life, while understanding that there are nuances. And trying to understand the Chinese use case, perhaps the better to push back on the parts we don't like, but understand that just because AI is used . . . Or to use another example, just because AI is used to make the streetlights turn green when an ambulance is trying to go through, that's not dystopia.

Teddy Kupfer: Do developments in China over the last 15 years affect your view on whether information really wants to be free? As you're no doubt aware, this strain in internet activism, techno-optimism, in the 1980s and 1990s held that the internet would be a force for fragmentation but also a force for freedom. The Chinese authorities, as you outline here, have used technology to great effect to limit freedom. How might we modify the maxim that information wants to be free? Is it still the case?

Corbin Barthold: I'm reminded of the historians who still say, when asked about the consequences of the French Revolution, that it's too early to tell. Recent events in China have actually given me a boost, because I'm one of those naive people who does think that, although it's slightly corny and oversimplified, information does want to be free. And China for a while there was looking like a pretty solid counterexample. Well, it's not clear. They made a pact with their people, more or less, coming out of the Cultural Revolution and into the reopening in the '80s: we're going to give you economic progress, we're going to improve your standard of living, but keep your mouth shut and behave yourself. And right now we're seeing some very concrete examples in that country of the brittleness of top-down control, and also how top-down control can look really effective until it suddenly isn't.

Their economic growth is lagging in part because of their attempt to stamp out Covid, in part because of their attempt to control their tech companies and tell those tech companies how they should innovate. If economic growth lags and this attempt to censor tightens, it could just be a very scary, disappointing thing where the censorship works and people suffer for decades to come in a stifled society. Or it could be that we start to see more and more cracks until suddenly there's internal reform within the government. I don't know, I don't have a crystal ball. I'm rooting for freedom. I'll just be frank about that.

Teddy Kupfer: I think that's a good team to be on. I want to close by bringing things back home, to our own smartphone screens: Elon Musk and Twitter. There has been lots and lots of coverage of Musk, the richest man in the world, and his ownership of the micro-blogging service. And what Twitter does to many people, it appears to be doing to Musk, namely distracting him from work and encouraging him to troll. Nonetheless, we've learned quite a bit over the last couple weeks about how the company had been operating before he took it private, from apparently woefully inadequate info-security protocols to an ad hoc approach to circumscribing political speech via content moderation. We learned a lot about the people who used to run the firm and presided over its approach to "trust and safety," and we've seen some signs that their political inclinations affected their approach to that work.

My first question is, what do you make of all this? From the Twitter files that have been reported on by Matt Taibbi, Bari Weiss, Michael Shellenberger, on how the company handled Donald Trump's suspension and the New York Post story on Hunter Biden's laptop, to this information about how Twitter used to operate?

Corbin Barthold: To lay my cards right out on the table, I don't consider myself to be on Team Musk, and I don't consider myself to be on Team what you might call "big disinfo," to use a shorthand for the other side. I know this is impossible, I'm just one data point, but I try to call balls and strikes on this stuff as I see it. My first thing would be to say this is a psychodrama that grips a certain kind of politics watcher, and I think it is overblown by all of us, including me, who are in this world, in terms of its impact on the wider society. I will double down on that and say I think with the rise of ChatGPT and AI, we are seeing things coming down the pike in terms of speech and information that are going to be so disruptive that I do wonder if, in hindsight, some of these content-moderation disputes are going to look pretty small and petty.

The third thing I'll say, finally getting to the discrete case. This is definitely my experience in litigation speaking, but I tend to be wary of hot docs. Hot docs are emails or internal communications within a company that are presented as the bloody shirt you wave to show that something dastardly was occurring, especially in a context like this, where they have refused to just release all the material for any journalist who wants to check it out. We are being drip-fed information as those holding it see fit. I'm wary of saying anything too strong in any direction. Having said that, the people who got out over their skis and claimed there was some kind of intentional conspiracy going on have, I actually think, been pretty much proven wrong, which I always expected them to be.

I think we are seeing confirmation of the bubble problem that a lot of us suspected, where you had a bunch of people . . . It kind of ties back to what I was saying about the government-private alliances being problematic, where they all just kind of think the same way and have the same cultural priors and the same assumptions. Or in some cases, from what we've seen here, even the more level-headed people are basically just suffering immense pressure from elsewhere in the company, from people who have such attitudes, and this resulted in sort of a hydraulic pressure to make certain decisions in certain ways.

I wrote for City Journal back in, I think it was April, on the Musk situation, and I laid out some things that I thought he should do, and some of them he's doing and some of them he's not. Getting this stuff out in the open, and confirming that there really should be a wider diversity, in the intellectual sense, of decision makers who make these tough decisions when there's a crisis or a really big case, that you need people with different perspectives in the room hashing out what the decision should be, I think it's clear he should do that. If he moves the company from San Francisco, I think that would make a lot of sense, frankly. But my overall recommendation in the article was that content moderation is unavoidable; it's going to occur. Look no further than Elon banning Kanye West. The question is, how is it going to occur?

And we are a deeply divided country. People are not going to see the individual decisions in the same terms. There's always going to be disagreements. The best Elon can do is build trust in the process. That goes back to having people from different perspectives, and I think it would've been smart of him to come in, loosen the content-moderation rules. I personally am totally fine with that. Try to build up a belief that people from both extremes get booted in an understanding that it's not meant to be political, and then shut up and just let it take its course. I think he's making a bit of a mistake in going to the other extreme from what people disliked, and now he's starting to make some of these decisions Code Red. I mean, I would call out his tweet of, "My pronouns are prosecute Fauci."

Now that you own Twitter, as a sheer business decision, that's probably not the smartest tweet because a lot of your users are going to take that the wrong way and they're going to distrust you in exactly the way conservatives mistrusted the prior regime. So I'm not sure that's the smartest, but good things are happening. I mean, I'll close on if they actually give a feature—he's talked about doing this—that highlights to people that their tweets are being de-boosted. I think that's a fantastic idea. They should absolutely do that.

Teddy Kupfer: Interesting. So my second question, and maybe a less sexy topic: the actual business of the company. Musk attempted to implement a pretty big change when he first took over, namely to start charging subscription fees for Twitter Blue users. If you wanted to retain a blue check mark, or to acquire one, you would have to pay $8 a month depending on where you lived. This would be adjusted for purchasing-power parity. This generated a lot of controversy. Many journalists and writers rely on Twitter for their jobs. The possession of a blue check mark is this conspicuous good that denotes that you're someone important, someone who should be listened to, and Musk quite self-consciously was trying to democratize the blue check. But it also had a clear logic to it, which is that, look, people and companies benefit from the attention that their Twitter presence can afford them. The company is trying to capture some of that surplus for itself.

The plan was put on ice after there were difficulties rolling it out. People were paying for Twitter Blue and then impersonating major public figures, big brands, tweeting all kinds of nonsense. But the idea seemed to have a certain logic to it. So A, I wonder what you make of that. And then B, I wonder what you make of Musk's more ambitious, and perhaps more speculative, talk of turning the company into an "everything app" like we see in some other countries where Twitter would not just be a place to post tweets or direct message, but also a place where you can exchange payments, maybe obtain ride shares, order food, do all sorts of things. What do you think of both of these ideas for the future of the company?

Corbin Barthold: Well, broadly speaking, I'm all in favor of experimentation by Musk. He's made some false steps that may be connected to thinking that you're experimenting on a blank slate and refusing to learn lessons that have already been learned. I do think in different ways, but on both the right and the left, there are often assumptions that content moderation is simple or easy, and it's just not. It's hard in all kinds of different ways. But it's a pretty simple mistake to think that if you're going to put out some kind of verification and just throw it out there, somebody's going to make a fake Disney account and start posting pictures of Mickey Mouse giving people the finger, that's going to take about three minutes. I'm not surprised that that kind of fell flat, but they're going to learn.

I am sympathetic to his tweet saying, "Let the process work a bit." I think some people in the past were maybe a little hard on the prior regime doing the exact same thing of needing to learn on the fly, because one of the things that happens when you've got millions of people talking on a platform is almost every day you're going to run into wacky situations you've never seen before, so that's great. I mean, some of the decisions or some of the prospects I certainly have my doubts about, he's talked about maybe turning tweets into a 4,000-character limit and I don't have a crystal ball, but that strikes me as kind of a terrible . . . that actually strikes me as potentially a way to kill the platform more than anything else by just making it something fundamentally different than what it is. We will see.

I mean, he's got a tough task, because despite having taken it private, he can't just run it the way a billionaire runs a sports team, where he just hemorrhages money. I don't think he can afford to do that. He's got to find some revenue. It would seem to me that what you might want to do is—you mentioned purchasing-power parity—scale it up so that you pay more money the more users follow you. The more followers you have, presumably the more valuable the product is to you, and you force the people who get a lot of cultural cachet out of tweeting to put a little more skin in the game. I have no idea how it's going to work out.

Subscriptions are probably a good idea given that he seems to be turning off a huge portion of his advertisers, the prior source of his revenue. Whether or not he can monetize that to a degree to keep the platform functional, I have not the slightest idea. And of course that's a prerequisite to him getting to the second step of making it into some kind of everything app, which it's hard to know what to do with that, right? On the one hand, there's no evidence whatsoever that that is anything more than a complete pipe dream and empty talk. On the other hand, there were times when I thought Elon was really full of it talking about his plans for Tesla, and here we are, and they got it off the ground and he made it work. You got to hand it to him, despite some of his more erratic behavior lately. If there's one guy who could pull it off, it's probably him.

Teddy Kupfer: I think we'll just have to wait and see. That's all we’ve got. Thanks, Corbin, for joining. Listeners don't forget to check out Corbin's writing on the City Journal website. We will link to his author page in the description. You can find City Journal on Twitter @CityJournal, on Instagram @CityJournal_MI. Corbin is on Twitter as well. We'll put in a link there. And as always, if you like what you heard on the podcast, please let us know by rating us very highly on whatever service you're listening to this. Corbin, thanks again for joining. Appreciate it.

Corbin Barthold: My pleasure, Teddy.

Photo by filo/iStock
