Xi-Trump to talk AI Safety, Huh?
+ChinaTalk in SF, impromptu meetup tonight
ChinaTalk is in SF! RSVP for an impromptu meetup tonight.
Today, the second half of our conversation previewing the summit that just kicked off. With Mythos scrambling everyone’s priors on frontier capabilities, AI safety is suddenly back on the bilateral agenda. Julian Gewirtz (former NSC senior director for China) and Matt Sheehan (Carnegie) join to map how Beijing is processing the shift and what’s actually achievable in renewed US-China dialogue.
Check us out on YouTube or your favorite podcast app!
Good Job Alert: Coefficient Giving (formerly Open Philanthropy) is hiring senior generalists and grantmakers with a China background. Coefficient created CSET (which won ChinaTalk’s inaugural ‘think tank of the year’ award) and is set to fund more in the US-China AI space. Check out the job posting here and do consider applying.
The AI Safety Angle
Julian Gewirtz: Both sides have been signaling that AI will feature prominently in upcoming discussions, with potential AI safety-related deliverables. During the Biden administration, we pushed hard to get AI safety on the agenda when President Biden met with President Xi. China’s initial response was essentially a cold shoulder — they weren’t interested in having the conversation. They felt it was happening in an environment of heating AI competition, and they were unhappy with export controls and other steps we were taking.

Whether we wore them down or won them over, the topic eventually came up between the leaders. Jake Sullivan also discussed it with Wang Yi. Beijing shifted its approach after realizing this was an area where the world was looking to the two most powerful countries to show leadership. They also recognized there was little downside from their perspective.

When the Trump administration came in, their approach was to dismiss AI safety entirely. You had JD Vance and other senior officials mocking it, saying the administration would stop all “that nonsense” and focus solely on winning — a US-China AI safety dialogue was a non-starter because the United States didn’t even want it. But over the past month, since Anthropic began briefing on the Mythos capability, the administration has begun taking this more seriously. They’re realizing this isn’t conjecture about future risks but actual capabilities in the here and now that open the United States to profound vulnerabilities and dangers.
This creates an interesting and different starting point for renewed conversations with the Chinese about AI safety. One lesson from Mythos appears to be that for both the United States and China, advances in capability cannot be separated from increases in vulnerability. The more capable American models become, the more capable Chinese models become, the more risk, danger, and potential bad actor misuse emerges.
Some people in both countries have fantasized about reaching a point of such dominance and capability that safety issues would become less salient. But we’re learning that vulnerability and capability are fundamentally interlinked.
Matt Sheehan: That was a great rundown from the US side and of how the Chinese side looks in that engagement. Over the same period, I’ve been following the Chinese domestic conversations on this very closely. There’s been a pretty big evolution, largely in response to the development of the technology, but also in response to different groups within China platforming these issues and seeing them get some level of traction with leadership.
Let’s go back to pre-Mythos, because this shift is so recent. If you had to characterize how the Chinese government thinks about AI safety writ large, whether misuse or control issues, I’d say it has risen much higher on the agenda. They have essentially put it on the table as a topic they need to think through, but they haven’t made up their mind on what they think of it.
This has been cropping up in different policy documents. One place was what they call the AI Safety and Governance Framework 2.0, a roadmap produced by two organizations under the CAC for how they are thinking about AI risks and mitigations, especially as they relate to technical standards.

They put out a version of this in 2024 that was super high level and very light on what we would call AI safety-related topics. They updated it in 2025, and you saw a bunch of changes between the two documents. For one, labor featured much more prominently and seriously.
Julian Gewirtz: Meaning people losing their jobs because of AI?
Matt Sheehan: People losing their jobs because of AI. The 2024 version was very hand-wavy: yes, AI will restructure social relations, and we should think about that. I don’t remember the exact phrasing of the most recent one, but it said something along the lines of “this will lead to a devaluation of labor relative to capital and related social disruptions.”
Between these two documents, we saw labor rise considerably, and we saw safety in a few different forms, like misuse and also some loss-of-control language, feature more prominently. When I asked some people involved what this reflects about the policy process over there, specifically on the safety issues, the answer was: it’s on the agenda, it’s something we’re thinking about, but we don’t know what we think about it at this point in time. That was back in September of last year.
Fast forward to now, and obviously the biggest change has been Mythos. You also have people within the Chinese system who are working to platform these issues. The area I’m most focused on right now is the technical standards work. A couple of months ago, they created an AI safety and security working group on technical standards, led by Zhou Bowen, the head of Shanghai AI Lab, which is one of the more safety-pilled organizations in China. So underneath the surface, they’re starting to get their minds around these issues.
And Mythos is like the bomb that scrambles this equation. We don’t yet know how the party has actually taken Mythos on board. I’ve heard different things from different people who interact with different parts of the Chinese bureaucracy. Some downplay it, feeling like they’ve got it under control — it’s just a new cyber thing and we’ve been doing cyber things forever. Other people say they actually seem pretty shook about this and want to talk about it.
At least when this is getting tabled for this conversation, my read — not based on inside information — is that this is the US side pushing this as a topic for discussion, not necessarily the Chinese side. I have pretty low expectations for anything in the way of tangible deliverables from these discussions. The idea that we’re going to strike some type of grand bargain on AI where we both agree, “If you don’t do it, then I won’t do it” — we’re both going to be nice, we’ll have a hotline, and we’ll just call each other right away as soon as something goes wrong — I have very low expectations for that.
The effort should go into trying to establish working-level, more technical conversations, specifically on testing and evaluation for safety risks. This gets very tricky with the capabilities and threats dynamic: when you learn how to test a model for certain capabilities, that might also indirectly help you build those capabilities. People in the testing and evaluation world have somewhat different takes on this.
My takeaway from many of those conversations is that there is a path forward for sharing some relatively high-level information about how we test for these risks. There are a few reasons to be doing that. One is that currently, the Chinese frontier AI labs’ testing for frontier risks is nowhere near the level that it is in the US labs. It’s a funny inverse where the Chinese labs face tons of regulatory compliance obligations from their government, and therefore, they’re not tacking on all of this voluntary testing for frontier risks. The US labs, at least historically, have faced very low regulatory burden from the government, and therefore, they put a lot of energy into this type of voluntary testing.
If you take Chinese capabilities relatively seriously — even if we’re ahead and maybe going to get further ahead — their capabilities matter. And the type of testing that happens in China voluntarily within the Chinese system (not jointly testing, but the testing they do for their own national security reasons) really matters. We should try to do what we can to make that testing better, to bolster that part of their system.
Julian Gewirtz: Super interesting. When I hear you talk about this, I wonder what the version of this conversation that could happen at the leader level is, because you don’t have two leaders in this case who are going to be talking about that degree of specificity. We have to imagine, at some level, the conversation will essentially be, “AI matters, we both agree,” and maybe some other people figure out what to do about it.
Jordan Schneider: We were talking at lunch about the idea that even if you’re nine months behind, that means a Chinese lab will have a Mythos thing in nine months. Even taking away the US-China national security angle — NSA versus MSS — there are still criminals in China or around the world who might exploit this. Perhaps nine months from now, the rest of the world will have patched everything, and China will have the most vulnerabilities open to them to do ransomware on water treatment facilities or similar attacks.
The US government, or at least this administration, was able to spend a year and a half dismissing it because it wasn’t all that pressing. But everyone’s consensus view now is that, whether it’s six months, a year, or eighteen months, at some point in the not-too-distant future Chinese labs will be able to create extremely cheap, extremely potent cyberweapons from a domestically trained model. When things hit the fan in China from a domestic perspective, you have to think they’re going to start doing more testing than just checking whether a model says anti-party stuff.
Julian Gewirtz: It’s fascinating to me because if you go back to the history of how China governed the internet giants, there’s a real similarity. Initially, it was, as long as you do censorship, you’re okay. No images of Winnie the Pooh, no mention of Tiananmen, and we’ll leave you alone.
But then they began to realize that even with that set of technologies, there were systemic risks. This is often shorthanded as the Jack Ma speech and the crackdown that followed on the Ant Group IPO, but actually, it was a regulatory storm — a comprehensive crackdown on the sector to rein in financial, social, and political risks.
That hasn’t yet happened with the AI sector in China. It has largely been censorship and a few other things, partly because this is such an area of national competition. But that other shoe has to drop. I don’t see a way around it.
Jordan Schneider: What does the political response look like when we see crazy cyber hacks or actual real labor disruption?
Control, Harness, Govern
Matt Sheehan: Yeah, the comparison to the internet era is fascinating — the parallels are striking. So what’s China’s playbook here? It follows a pattern — control, harness, govern. Control means managing the speech implications, censorship, and political aspects of the technology first. Harness is the next phase — once they feel they have control, they focus on using the technology to diffuse and upgrade their economy. Govern represents the more sophisticated approach of addressing knock-on social effects beyond party control.
In the Internet era, control meant building the firewall over the long term. When I moved to China in 2010, there were about two years of relatively wild activity online. Then came the 2013 crackdown on the Big Vs, where they implemented policies like making people legally liable if their Weibo posts were retweeted 500 times. This crackdown phase focused on controlling speech and information implications, spanning roughly 2012 to 2014.
For AI, this control phase ran from 2021 through 2023. They first worried about recommendation algorithms and their effect on people’s feeds, then deepfakes, and finally generative AI for similar reasons. They attacked these information problems first.
Once they felt comfortable with control, they moved to harness the technology. In the Internet era, this was the Internet Plus campaign, starting around 2014 or 2015. They launched the “mass entrepreneurship and innovation” initiative — entrepreneurs and innovators everywhere. Having gotten the internet under control, they encouraged its expansion, leading to a huge explosion in mobile internet services spreading across the economy.
For AI, they’ve resuscitated the “plus” formulation with “AI+.” For those unfamiliar, AI+ means AI+ manufacturing, AI+ healthcare — the same pattern as Internet+ transportation. This represents the harnessing phase: politics controlled, economic diffusion good, or at least on the right path.
The government then asks: How do we deal with the knock-on effects? In the internet era, this meant the Cybersecurity Law, the Personal Information Protection Law, followed by anti-monopoly efforts and the broader tech crackdown.
We’re at the dawn of this phase with AI. They finalized regulation on anthropomorphic or human-like AI in April, addressing concerns about addiction, effects on minors, and AI-related psychosis. It’s very focused on social impacts.
The question now is what comes next. Some will involve hard security and cyber issues, but there’ll also be a broader focus on labor impacts and other societal concerns.
Julian Gewirtz: We haven’t talked much about this, but there’s an important difference in how the Chinese Communist Party is governing the AI sector. One of the main ways they’re exercising control is by not allowing companies to obtain the compute they want from abroad.
We’ll see how this plays out when President Trump visits China, particularly if Jensen Huang accompanies him on the trip. There’s this fundamental tension between Chinese labs wanting to buy NVIDIA chips and Chinese regulators forbidding them from proceeding with those transactions because of geopolitical risks and leverage concerns. This is an interesting version of the governance paradigm, but from a side that we didn’t see the Chinese government worry about in the internet sector.
Some of the same dynamics may be true with investment from abroad. Obviously, if you think about the Manus acquisition debate — which you and I, Matt, have discussed many times before — that’s one where clearly the interests of a company and the government are at odds.
Matt Sheehan: I have a half-baked take I’m trying to bounce off people. You were talking about CBRN cyber criminal actors — non-state actors. This has been central to a lot of US discussions of AI safety. When people want to make these safety risks real, they’ll often refer to concerns about terrorists making bioweapons. I’m not dismissing that as unreal — it could be — but it’s something we go to very quickly in the US.
In China, they’ve been more skeptical of these risks for a while, for a variety of reasons. My half-baked take is that China doesn’t feel itself to be under siege from a world full of terrorists in the way that we do. In the United States, we have a self-conception — which is based in reality — that we are often the victim of terrorism. Everyone wants to get at us from abroad, and therefore, if these models are out there, we’ll be first in line to get CBRN attacked in one way or another from non-state actors.
In China, they say they’re worried about terrorism. Terrorists in their mind are domestic and are from a specific ethnic group in their conception of it. But they’re less worried about foreign non-state actors in the way that we are.
Jordan Schneider: The Falun Gong bioweapon — would you really put it past them? Yeah, I think it’s a bad take.
Julian Gewirtz: I think it’s a bad take, too, Matt. First, the Chinese Communist Party perceives itself as profoundly under siege and has a paranoid mentality that is absolutely central.
Jordan Schneider: Let’s start with Xinjiang. According to some narratives, the policy shift was initially triggered by concerns about foreign ideological infection and terrorist elements coming from abroad. What else do you see as problematic with this framing?
Julian Gewirtz: The Chinese Communist Party under Xi Jinping has the most catastrophic worst-case scenario planning mentality of any regime I can think of. Their relative lack of concern about chemical and biological weapons and AI stems more from assumptions about how AI differs from existing capabilities — and those assumptions may be changing — rather than from any lack of concern about external threats.
Over the past decade, I’ve seen the CCP become increasingly fixated on the idea that nefarious forces are out to get them.
Matt Sheehan: To clarify, when we talk about being under siege, it’s from non-state terrorist groups. The paranoia is intense, and the feeling of being under siege is real, but they’re usually talking about the United States of America. That’s fundamentally different.
Both governments should assume the other will use AI in every possible way to gain state-to-state advantages. But concern about non-state actors differs significantly between the two countries. Someone who focuses on Southeast Asia, the Golden Triangle, and the scam factories there might see this very differently.
With CBRN stuff, there’s a big distinction between state and non-state actors, and their paranoia focuses on the United States.
Julian Gewirtz: Here’s a comparative question — where does Japan fit into this framework? They’ve actually experienced a sarin gas attack. The United States has experienced horrifying terrorist attacks, but not specifically chemical or biological weapons attacks. Some societies have experienced these kinds of attacks firsthand.
I wonder whether Japan’s degree of anxiety about AI risk is heightened because of its experience, or not. If it maps similarly to other countries, then perhaps the alternative hypothesis — that concerns are mostly about AI capabilities rather than threat perception — is more accurate.
Jordan Schneider: I’d also say public discussion about organized crime or terrorism in China is heavily constrained. These conversations happen privately, but discussing them publicly on WeChat or Xiaohongshu is impossible. You can only discuss them in the context of announcements about arrests that have already been made.
Julian Gewirtz: As I think about it more, there’s no doubt that the AI safety community has talked extensively about chemical and biological weapons risks. But when I see what’s really driven actual concern about AI safety in broader society, it’s effects on kids, deepfakes, and similar issues. From a national security establishment perspective, there’s concern about use in warfare, and perhaps most fundamentally, this idea of out-of-control systems — a loss of human control.
I wonder whether the community that has carried the torch for these risks, centered partly on CBRN risks, actually represents how most Americans think about AI risks. There is polling on this we could look up, but I doubt CBRN risks would be in Americans’ top three AI concerns.
Matt Sheehan: I totally agree that the average American, even the average policy world person, isn’t putting these risks top of mind. Within the community that’s been pushing the message that these systems are getting really dangerous really fast — not in a diffuse social impacts way, but in a safety way — that’s where these concerns are centered.
Jordan Schneider: It comes down to the binary of whether something is an existential risk or not. Cyberattacks aren’t existential risks. Labor disruption isn’t an existential risk. You don’t necessarily have those funders and people focused on existential risks clocking those sorts of issues as much. The whole existential risk framing hasn’t bled into the Chinese discussion nearly as much as it has at Berkeley and beyond.
For part 1:
Macartney to Mar-a-Lago
Julian Gewirtz, former Biden administration China official, now at Columbia, joins me to chat about the Xi-Trump visit and all things US-China. Matt Sheehan, senior fellow at the Carnegie Endowment for International Peace, drops by to give his takes on the AI angle.



