Judd Rosenblatt joins Jordan McGillis to discuss DeepSeek and the competition around AI development.
Audio Transcript
Jordan McGillis: Welcome to 10 Blocks. I’m Jordan McGillis, Economics Editor of City Journal. On January 20th, 2025, the Chinese artificial intelligence firm DeepSeek released its R1 model. The model is competitive with top American models, but DeepSeek has reportedly achieved this feat at a tiny fraction of the cost that American firms have been pouring into their training.
The next day, President Donald Trump held a press conference at the White House with the heads of OpenAI, Oracle, and Japan’s SoftBank to announce a $500 billion plan to build a system of AI data centers in America called Stargate. To discuss the latest in AI geopolitics, I’ve invited Judd Rosenblatt on today’s show. Judd is the founder and CEO of AE Studio and a leading advocate for aggressive, thoughtful American AI development. Judd, thanks for coming on.
Judd Rosenblatt: Thanks for having me, Jordan.
Jordan McGillis: First question for you, what does DeepSeek’s release tell us about the state of the AI arms race?
Judd Rosenblatt: Well, it’s impressive that they were able to make so much algorithmic improvements with limited compute. And one thing that’s fairly interesting about the DeepSeek work is that it strongly reinforces this idea of a negative alignment tax where improving alignment techniques, investing in trying to make AI more likely to be more capable by virtue of its alignment, not only mitigates risks, but also enhances capabilities because it uses reinforcement learning to induce chain of thought reasoning.
And that winds up increasing... So basically optimizes for transparent reasoning structures and also increases model performance in all these different complex tasks, math ones especially. And so basically instead of just using reinforcement learning for preference alignment, which is what OpenAI’s RLHF does for politeness and stuff, it uses these reward signals in reinforcement learning that improves the internal structure of thought itself.
Jordan McGillis: Can you talk a bit about the unusual origin story that DeepSeek has? I understand it was kind of like a hedge fund as it got started.
Judd Rosenblatt: That is right. Yes, it is the passion project of the guy who runs this hedge fund, and he’s very interested in trying to build artificial super intelligence and had a lot of spare compute that they used at their hedge fund and decided to start building AI.
Jordan McGillis: One of my favorite tech thinkers, Kevin Xu, says that this is really about open source versus closed source in AI. Can you give us some context on that debate and how you look at that juxtaposition?
Judd Rosenblatt: Yeah, it’s a fairly complex nuanced debate. I think it’s very important that we try to accelerate American AI development and make sure that America wins. But at the same time, we want to make sure that AI doesn’t pose significant existential threats to America and to humanity and that we don’t lose control of it. And open source is actually the best thing there is for AI alignment.
The greatest gains in alignment and also associated capabilities have come because you’ve been able to get stuff open source and then share with other people and build upon their advancements. All the work that DeepSeek has built on top of is all this open source work that they had access to as well as distilling stuff from OpenAI’s model actually illegally or against terms of use.
But the fundamental problem with open source is it is also fairly dangerous. There’s a pretty crazy thing about how this all works where Anthropic had a paper called Sleeper Agents Paper and basically you can put sleeper agents into an open source model, and then there’s no way to know that they are there. And they can get activated anytime, which means that China could eventually create some open source model.
It doesn’t seem to be the case with DeepSeek, but it might be. We don’t know. And then everyone in the West could start using it and it could turn out to have a botnet that could take over all infrastructure in the West or something in the future. And there’s just no way to know about that. So it’s fundamentally extremely risky, which means that ideally you’d want to have some oversight into open source models.
And then there’s also just the fact that as things are poised to get more and more powerful algorithmic improvements, DeepSeek shows that you can make all these huge algorithmic improvements. And as that continues, we’re likely to see riskier and riskier stuff happen with open source. The increased capabilities at the frontier of open source mean that there could be substantial risk and biological weapons creation, misuse type of stuff is going to get easier and easier for any random person to do.
Jordan McGillis: Can you explain where the different big American AI players fall on the open source/closed source debate?
Judd Rosenblatt: Everyone is closed source except for Meta, which is very pro open source. Meta’s open source stuff was used in the creation of DeepSeek. The other big open source advocate is Mistral, in which Marc Andreessen, the American investor, is an investor.
Jordan McGillis: How did DeepSeek utilize OpenAI’s code?
Judd Rosenblatt: Well, they did a process of distillation where you can from the outputs of using OpenAI basically figure out ways to make their thing better. Technically, it’s something that they can just go ahead and do. OpenAI, it’s against their terms of use, I think, to do it and they’re not happy that it was done. But interestingly, at the same time, they’re not happy that it was done, Sam Altman also recently said in a Reddit AMA that maybe he thinks it was the wrong strategy for OpenAI to be closed source and thinks that maybe he should make OpenAI more open source in the future, which although I think he hinted that he doesn’t think that OpenAI employees would be very much in favor of that.
Jordan McGillis: One of the big controversies and debates that emerged in the days following the big splash that DeepSeek made is how American chips possibly played into the training that DeepSeek was doing. What’s going on with that?
Judd Rosenblatt: There seems to be a great deal of evidence that exactly that happened. There’s also a lot of evidence, people freaked out that it only costs $6 million to create R1, but in fact, there is a great deal of evidence that for everything that led up to that, it actually cost 1.6 billion or something in total compute. And that was some number like that. I think they mostly were using NVIDIA H800 chips.
I think it’s not since expert controls haven’t been super effective because of a loophole until end of 2023. They can still use compute in different countries like Singapore, which I think they’re doing. So the recent stuff from the Biden administration right before Biden went out, an executive order seeks to close those loopholes and it’s probably going to be fairly effective.
But I think interestingly, one of the things that’s really freaking out people coming away from DeepSeek is that they’re worried, okay, well, can China just outcompete us? Even if we have greater compute, can they outcompete us? And luckily, the answer to that is probably no. If we have very stringent expert controls and we continue to invest in compute ourselves, they probably can’t.
Because despite what algorithmic improvements you make, if you have greater compute, you get greater benefit from those algorithmic improvements in the first place. And also that brings down the cost of things. When you bring down the cost there, it actually increases the demand for things like chips, et cetera.
Jordan McGillis: I want to go back to your remark regarding Singapore. You’re saying that DeepSeek has its infrastructure there. It’s not that it’s using Singapore as a passthrough for getting the chips to China?
Judd Rosenblatt: Yeah, I think they’re using compute that is based in China probably, though, I mean, I’m not an expert about this stuff, but I think they might be using it based there and they’re also smuggling... There’s also a lot of AI chip smuggling that goes through countries like Singapore and Malaysia and like the UAE and stuff.
Jordan McGillis: How about Chinese indigenous chip development? Do you think that that’s something that’s going to take off or are the export controls that the Netherlands has cooperated with the US on the machines produce these things? Are those going to slow down Chinese development?
Judd Rosenblatt: It seems like it will slow down Chinese development, and they are definitely pursuing self-sufficiency in semiconductor technology, but it’s quite difficult. So people seem to be fairly confident that China is likely to remain behind. One potentially unlikely, but very risky thing is that potentially people could figure out huge hardware advances with AI, just figuring out new things that haven’t been invented yet, that’s a potential concern. But the consensus tends to be that China has a long way to go to catch up.
Jordan McGillis: All right, let’s spin over to your work. Tell us about your advocacy and the strategic brief that you sent my way before we hopped on.
Judd Rosenblatt: Sure, yes. Basically I run a bootstrapped AI product consulting company building AI products for clients and grew that to over 160 people, and then started building and selling our own companies and investing the profits into neglected approaches to AI alignment, basically trying to reduce existential risk from AI. Was motivated to start doing this after having kids myself and seeing that very few people are actually working on trying to solve the problem that nobody knows what is going on inside of AI models.
And we’re not tracking to really understand that or tracking to figure out how to make AI actually for sure not going to pose a threat to America and to humanity. I’ve been working in the industry for over a decade, and I noticed that since so few people are working on it and it is also getting more and more powerful. It’s getting more powerful at an accelerating pace. And that’s something that’s hard to get through your head. Humans didn’t evolve to be able to understand exponentials.
There’s something called exponential slope blindness, which is that it’s just hard for us to model what exponential growth looks like. But we can project that due to algorithmic improvements and scaling compute that over the course of the next five years, it’s interesting to ask yourself how much more capable do you think AI models are going to be say five years from now? What would be your guess?
Jordan McGillis: You’re asking me to do some exponential math on the fly. A lot more powerful.
Judd Rosenblatt: Right. They’re currently super human level. They’re better than doctors at diagnosis and stuff, top 200th best programmer in the world, and et cetera. They’re already there. How much more capable do you think they’ll be in five years?
Jordan McGillis: 10,000 times more capable.
Judd Rosenblatt: That is actually the lower bound of what people are projecting. Exactly that. The consensus seems to be somewhere between 10,000 and a million times more powerful five years from now.
Jordan McGillis: I just can’t even conceive of what that means.
Judd Rosenblatt: Yeah, exactly. You can’t. We have no idea. I don’t know what 1.5 times more intelligent than humans. It’s hard to conceive.
Jordan McGillis: And obviously there’s a lot of upside, a lot of upside there for us. There’s a lot of downside too. How are you thinking the benefits and the risks and how you can help socially and governmentally to steer things in a way that is aligned?
Judd Rosenblatt: Well, so I’m naturally quite optimistic and I want to make sure we capitalize on all of the benefits. If it works out, then we solve whatever major problems we have in the world today. We cure all diseases and reverse aging, and things like that are quite exciting. But in order to get there, we have to actually invest in trying to solve the alignment problem itself.
If we solve the alignment problem, then not only are we more likely to get there because AI doesn’t accidentally kill us along the way or get so powerful that some individual disaffected teenager can create a biological weapon and kill a million people instead of just shooting up his high school, but also actually investing in alignment itself yields exactly those sorts of advanced bio advancements that can do things like cure aging and solve diseases and stuff.
There wind up being all sorts of incidental benefits as well. It’s not really a trade-off actually. The biggest investments in alignment to date have actually advanced AI capabilities in the first place. There’s some cool research from Anthropic that confirms that in models above 10 billion parameters, alignment features consistently enhance performance rather than limiting it.
And if you make use of that stuff, you can just do more and more capable things over time. There’s also this chain of thought reasoning stuff is unlocking all these new capability frontiers, and that’s also paving the path forward for a lot of potential alignment advances where basically the more you invest in alignment there, the more you’re likely to have your company win in the market too because you get all these other benefits as well.
Jordan McGillis: Now, we’re a public policy research institute. From a governmental perspective, what can be done, what policies can be instituted to help align AI development with humanity’s best interest?
Judd Rosenblatt: Well, I think that Manhattan Institute, Nick Whitaker’s “Playbook for AI Policy” is excellent actually and makes a lot of really good recommendations of all to be done to expand US leadership in AI, things like protecting AI labs from hacking and espionage. It’s currently the case that things just leak. There’s not much security at the labs whatsoever. People think that things are just getting stolen and going straight to China right now. And it seems like there’s some evidence of that with DeepSeek.
We don’t want to overregulate and then cause us to not be able to... People actually incidentally are mostly worried about overregulation to not lose to China. But the other thing that you don’t want to overregulate about is you don’t want to overregulate and then thereby not be able to solve the alignment problem because you cripple yourself from making the advancements and capabilities and alignment that would actually make AI be more capable by virtue of its alignment in the first place, and then be able to defeat the AI that is less capable and less aligned.
And so what that means is there’s a lot of unfortunate legislation on a state level like in Texas right now focused on disparate impact stuff, which really would probably be a disaster if that stuff winds up passing.
Jordan McGillis: I’m not sure what you mean by that. What’s the legislation exactly?
Judd Rosenblatt: I’m not an expert on disparate impact law, but basically I think the quick summary of it is that they’re creating this new thing called the Texas Responsible AI Governance Act, and the goal is to try to prevent “algorithmic discrimination,” which means that if you’re an employer or you’re a company and you wind up doing things which they say are discriminatory to one group or another, then you get in trouble for that. It’s hard to hold people accountable for that sort of thing.
Basically it’s impossible to build AI that would be able to be accountable in that way in the first place. And it doesn’t make any sense anyway either. So if we have to wind up doing that stuff, then it’s going to cripple us compared to countries that don’t have to do that stuff. You asked, so what policy would actually be good, and there is some obvious stuff that would be good and that would be substantially increased investment in AI alignment. That would be the thing that would be the best thing you could possibly do.
I’m not a big fan of government involvement with anything generally. Ideally, it would be more on encouraging private industry to more substantially increase investment in AI alignment, which is better for them anyway. There’s a lot of development of data centers on federal land poised to happen and already happening. And you could institute some requirements to make sure that companies that are benefiting from that would be dedicating a certain percent of that compute to investments in AI alignment, which is good for them anyway.
You could also just spin up substantial investments in AI alignment in the first place, like DARPA could allocate eventually hundreds of millions or billions directly into the AI R&D that is needed here. It’s interesting because investments in alignment might seem on the surface to be less likely to yield economic benefit because they entail more AI R&D. R&D is fundamentally a risky unknown process and you don’t know whether a particular direction in R&D is going to work or not. But the cool thing is the work out yields orders of magnitude more economic value than the less ambitious stuff.
And so causing companies to and doing public private partnerships and stuff to invest in neglected or purchased AI alignment would be fairly high impact. I mean, fundamentally, there’s this unsolved problem with AI, which is nobody knows how it works and we know it’s about to get way more powerful. And we don’t know what’s going on inside of it, and so it might do what we want or it might do something completely different from what we want. We think the optimization function is tell it to do X.
But actually because we can’t look inside of it and know what’s going on, it might be that it’s optimizing to do X until it gets more powerful than us and then just kills us. It sounds crazy to say, but we have no idea because we can’t look inside and see what’s actually going on. And so given that, it’s sort of like this unsolved scientific problem, but the funny thing is very little has been invested in trying to solve the problem in the first place.
Jordan McGillis: How would a firm demonstrate credibly that that’s what they’re trying to solve to the adjudicating governmental officials who would manage the purse?
Judd Rosenblatt: That is a great question. Probably worth further reflecting on that and making sure that there aren’t unintended consequences of regulation preventing the innovation and alignment necessary there. It’s a great question.
Jordan McGillis: My next big question pertains to the emerging, I would argue, bipolar world we’re entering. How are China’s AI thinkers approaching the alignment question, if at all?
Judd Rosenblatt: The only Turing Award winner in China, who according to various reports Xi outsources his thinking about this stuff to, is extremely worried about existential risk from AI. He’s like the top AI guy in China it seems, and he’s quite worried about this stuff. Supposedly, Henry Kissinger, the final thing he did before he died, went to China and tried to make Xi an AI doomer.
And there are various different reports that that may have worked, but it also does not seem that it’s particularly top of mind for him right now. I mean, that may have changed, but it might be priority number 20 or 100 or something. It may have changed after DeepSeek, but it’s hard to know. It’s hard to trust anything coming out of China. But ideally, the reports are accurate and Xi is, in fact, an “AI doomer” now.
It would make sense that he doesn’t want to lose control of the authoritarian state. And that is something that ideally Donald Trump would be able to leverage and bully Xi on his AI doomerism stuff and cause China to slow down or to get on board with whatever paradigm we want to do. I mean, there is essentially effectively a war going on between the US and China independent of all this stuff in the first place, and we want to make sure that we win that.
But that doesn’t mean that we can’t also simultaneously leverage the reality on the ground with China and Xi and get him to be rightfully scared about loss of control to AI and then slow down and follow American lead on this in the first place.
Jordan McGillis: Is there any way that you see for the American government to integrate AI policy across the various agencies and branches of government?
Judd Rosenblatt: I’m not really an expert on that. Probably not sure. I mean, it seems like we’re going to have a strong executive branch in the next four years and probably a lot of...
Jordan McGillis: There’s a lot of vigor.
Judd Rosenblatt: That is true. Yes. Probably a lot of direction is going to be set there from meeting with various people in Congress in DC recently. It seems like Republicans are actually themselves... On an individual level, lots of congressmen, senators, et cetera, are extremely worried about existential risk from AI and they’d like to pass legislation about that. I don’t want them to overregulate and do things that don’t actually help, but they would like to do something.
They don’t know what the solution is. They’d like to do something and they’re waiting right now to see what the solution is, what’s going to be put forward by the Trump administration. And then most likely it seems like Republicans will get in line and support that and we’ll see. I mean, Trump is on record being very worried about “super-duper AI.” His daughter, Ivanka, actually she tweeted about Leopold Aschenbrenner’s situational awareness.
He’s a whistleblower who got fired by OpenAI. Whistleblower-ish. Got fired by OpenAI and wrote a great long piece about AI. That was something that she liked so much that actually she tweeted about it and I think she even made a website herself, it seems like, to educate people about AI and an existential risk from AI and stuff like that. And so hopefully that winds up moving forward and having Trump exert strong leadership on this.
I think there’s a strong history of good conservative leadership on difficult problems like this. There’s a guy named Herman Kahn, who in the ‘50s I think wrote a book called “Thinking the Unthinkable,” which was thinking about the possibility of nuclear catastrophe and thinking about that when nobody really want to think about it. It’s a scary thing to think about, things like loss of control to AI and even diplomacy with AI and stuff like that.
But the reality is this stuff is accelerating and we need to consider all sorts of different possible things that may come about and then make sure that America is in the best possible place given those possible realities. That’s the guy that Dr. Strangelove was based on, and we need that right now.
There needs to be realistic people thinking deeply about how this technology is going to change everything and then what the implications are and how America can win and American citizens can continue to endure and thrive and flourish in the future. And if we get this right, then as you mentioned, there’s enormous progress and we solved everything going forward.
Jordan McGillis: Beautiful that would be. Last question for you, Judd, where can listeners find your work?
Judd Rosenblatt: Website. Ae.studio is our website. Posted various different think pieces on LessWrong, which is where a lot of the people working on AI alignment hanging out. You could look me up on LessWrong or Twitter or something like that, I guess.
Jordan McGillis: All right. Judd Rosenblatt, thank you so much.
Judd Rosenblatt: Thanks for having me.
Jordan McGillis: Please do check out what Judd and his team are up to. And as always, like, comment, and subscribe to 10 Blocks from City Journal. Thanks for listening.
Photo by Justin Sullivan/Getty Images