China + AI = Military Advantage? Plus: DC Meetup!
"We should expect AI to play an increasingly critical role in just about every area of military technology."
How advanced is China's AI ecosystem, and how much of it has military applicability? From a Department of Defense perspective, former DOD staffer and current CSIS fellow Greg Allen talks us through AI technology in China.
Co-hosting is Eric Lofgren of the podcast AcquisitionTalk.
We discuss:

- AI usage in the war in Ukraine;
- China's strategy for AI through 2030;
- The military applications of AI technology;
- How China's blending of commercial and military tech complicates international cooperation.
Before we get into the newsletter, I’ll be hosting a DC meetup this coming Thursday evening. Sign up here for details and to RSVP.
The Road to 2030
Jordan Schneider: 2027 and 2030? What do these dates mean for the PLA and AI?
Greg Allen: I think the really important date is 2030. Back in 2017, the Chinese government put out what they called their “next-generation AI plan”. This is very colloquially referred to as China's national AI strategy. It set all kinds of goals for various aspects of China's economy, government, and military in terms of taking advantage of the potential that AI has to offer. They set a series of milestones for when they wanted to match world state-of-the-art, when they wanted to lead world state-of-the-art, and when they wanted to dominate the global AI industry. 2030 is the timeframe they set in place for that final marker: dominating the global AI industry.
It's been five years since they put out that strategy and they have essentially achieved their first five-year goal, which was reaching and matching the state of the art. There are a bunch of different ways that you can measure success in this area. All the metrics are imperfect in some way or another, but one that I think is useful and illustrative is the share of the most highly cited papers in AI research. These are the papers that leading AI scholars are building upon in their own work for the next round of work. The share that China occupies in the global AI research ecosystem has just gone up and up over the years, and they are on track, either this year or next year if current trends continue, to be the number-one publisher of some of the most highly cited AI research papers around the world.
The cliché of “China cannot innovate, they can only copy” is very much in the rearview mirror in terms of the overall story of what's happening in China's technology ecosystem. There are a lot of companies who are generating a lot of revenue and are building really impressive stuff, and there are a lot of universities who are putting out a lot of impressive research in AI.
Jordan Schneider: The idea of measuring who's winning and losing in AI, is this a useful concept? If it is, what are your top five indicators?
Greg Allen: I don't know that there is a really solid measure of who's winning and who's losing, but I think it is clearly the case that China is making significant progress. Not all of that progress is zero-sum. There's a paper from Tsinghua University that found that of the papers that are most highly cited that include Chinese researchers, something like half of those include co-authors from outside of China.
Research is an area in which, especially in the AI sector, there's a great deal of international collaboration, and the United States benefits significantly from research collaboration with China. The thing that becomes frustrating, from a national security and a foreign policy perspective, is that some of the organizations that do genuinely state-of-the-art AI research in China are also deeply involved in some things that US foreign policy strongly opposes.
This includes the domestic security and repression system being set up in Xinjiang, where AI plays a significant part in the surveillance state over there, and also in genuine military capabilities. There are Chinese tech companies who the United States will experience as purely innocuous commercial businesses, but actually have reasonably deep ties to the People’s Liberation Army. The fact that AI is not just a dual-use technology, but a general-purpose technology that is weaving itself into all parts of the economy, society, and military, makes it a real challenge when thinking about what the right balance of cooperation versus prioritizing security is.
From Commercial AI to Military Use
Eric Lofgren: It's a generalizable technology, but there are these two concepts of general AI and narrow AI, right? How much does China's massive trove of data, from its security state and mobile consumer economy, actually translate to the military side? How generalizable are those advantages, or is it just about who can apply this technology in narrow areas the fastest? Does having all this research really benefit them, and how does that research get turned into technology in the real world?
Greg Allen: When it comes to commercializing, China is quite successful here. There are a number of Chinese startups or large tech conglomerates that are actually building very successful products, both domestically and abroad. A lot of the secret sauce of TikTok is in the recommendation algorithm that determines what to show folks next, so that they continue staying on and engaging with the platform. That's an area where China has been tremendously successful.
Your point, though, is absolutely right. There are certain types of AI technologies, like neural network algorithms, that are highly generalizable and are openly available to anyone on the internet. Similarly, Nvidia GPUs, which many companies and research organizations use to train their neural networks from a processing-power standpoint, are very widely available, although we try to cut off access to those for Russia and for military end-users in China.
But there are certain aspects of the artificial intelligence technology stack that are not perfectly generalizable. Folks have pointed out that data is the new oil and that China seems to have the largest data sets. There are parts of that story that are true, but the analogy breaks down: you can turn oil into gasoline and a lot of other things, whereas data is far less fungible. If you have a massive amount of Chinese facial recognition data, that doesn't necessarily help you build a missile guidance system. Much of the advantage in data is application-specific.
What I do want to point out here, though, is that the overall strength of the Chinese AI ecosystem does have a really strong impact on the Chinese military's ability to harness that. The success of Chinese facial recognition AI, social media AI, or financial data AI determines how many universities are going to be pumping out how many graduates with these types of skill sets. That determines the size of the workforce and familiarity with these types of underlying concepts. While the data sets might not be fungible, the size of those data sets also translates into the size of an overall ecosystem that the Chinese government has worked very hard to take advantage of.
The Chinese policy of military-civil fusion is one that very deliberately seeks to take advantage of China's success in the commercial technology sector and find ways for the People's Liberation Army to make use of that.
China Isn’t a Monolith
Jordan Schneider: Different Chinese bureaucracies process what AI and machine learning is going to do differently. Can you give us a breakdown of weapons manufacturers vs. the PLA, vs. the Ministry of Foreign Affairs, vs. track two dialogues? How are all these being filtered differently through those organizations or groups?
Greg Allen: There have been negotiations at the United Nations in Geneva on regulating the use of lethal autonomous weapons. Back in 2018, I believe, the Chinese Ministry of Foreign Affairs put out a paper saying that the Chinese government would support a ban on the use of AI-enabled autonomous weapons, but would not support a ban on its development. The headlines that went out around the world were “China supports ban on lethal autonomous weapons”, [but] there's a bunch of very important asterisks there that relate to the complicated Chinese foreign policy org chart.
The Ministry of Foreign Affairs is part of the Chinese government, whereas the People's Liberation Army is the military of the Communist Party. The Ministry of Foreign Affairs does not always speak for the People's Liberation Army.
Actually, the People's Liberation Army has said relatively little about what it would be interested in, or what it would put up with, in terms of diplomacy related to artificial intelligence, whether that's coming up with some kind of international norms around what types of technologies are allowed to be used and in what types of contexts. The weapons manufacturers, some of the largest of which are state-owned enterprises, have been incredibly bullish on the use of AI and autonomy in weapon systems.
There’s no formal international definition for what an autonomous weapon is, but there is an official DOD one, which is the ability to select and engage targets without further human intervention. So once the human says, “hey, I'm putting you in autonomous mode,” the thing can go find its own targets, engage, and perhaps even kill them without further human intervention. There are plenty of Chinese weapons manufacturers who are, in their marketing documents, describing capabilities that are consistent with that definition. We have records of some of them being exported internationally. The Ziyan Blowfish A2 is one weapon system that falls into this category.
This makes it really hard for a diplomat, or somebody such as myself in the DOD, to really understand in which cases one part of the Chinese state is not talking to the other. It also makes it a little bit confusing for folks in the United States, whether that's in think tanks, academia, or the technology industry, to understand what's going on with China. When you see a headline that says, “China supports a ban on AI-enabled autonomous weapons,” you think, “that's a favorable development for world peace.” But the unfortunate reality is that a lot of the actions are not backing up that type of posture.
Eric Lofgren: The track two dialogues, what are they saying?
Greg Allen: For those who are not familiar, a track one dialogue is when our government talks to their government, [and] a track two dialogue is when our academics, or perhaps retired government officials, talk to their academics or retired government officials. This is a series of formal and informal dialogues that were very helpful during various phases of the Cold War, and that tradition has been kept up in the case of China. One thing that makes it a little complicated in the case of China is that a lot of their universities that are authorized to speak publicly on security issues have active military affiliations. So you might get somebody presenting themselves as a participant in a track two dialogue who is a general in the People's Liberation Army, which makes it seem a lot more like a track one-point-five dialogue. I’m still in favor of those types of institutions and dialogues continuing, but it's important not to assume that just because somebody in a track two dialogue said something, we can infer that the Chinese government or military actually agrees with it.
Eric Lofgren: Is the overarching sentiment out of track two more like the Ministry of Foreign Affairs or more like the weapons manufacturers?
Greg Allen: I would say pretty clearly that it's more like the Ministry of Foreign Affairs dialogue. When I was in the Department of Defense, we raised the issue of having a dialogue with China on risk reduction in the case of military AI, and the Chinese always refused that overture. So there has not been that type of diplomatic dialogue outside of the process that takes place in the United Nations, where, again, it's the Chinese Ministry of Foreign Affairs.
Jordan Schneider: Is this dangerous?
Greg Allen: The reason why everyone is so interested in artificial intelligence and machine learning technology fundamentally comes down to the fact that it delivers improved performance at reduced cost. It is that improved performance that has everybody so excited and chasing the opportunities of AI. But alongside that improved performance come new failure modes. Machine learning software breaks, and sometimes it breaks in really strange ways that organizations don't have much muscle memory for preventing, because the technology is relatively new and the number of folks with this expertise is obviously limited.
In general, the United States Department of Defense is in favor of our stuff always working, and in favor of other militaries-who-are-not-our-allies’ stuff not working. There's a very important exception, though, and that is when certain types of technical failures might lead to unintentional escalation or hostile engagements. There are some pretty scary stories from the Cold War about technical failures leading folks to think they were under attack when they were not, and in a nuclear scenario that can be incredibly scary. The Department of Defense has spent a lot of time thinking about how to reduce these types of accidents, including in machine learning, and we've gotten really good at it. One of the things we have wanted to have a conversation [with China] about is what norms we should be considering, or how to mutually take advantage of our shared incentive to reduce these risks. That's what I think about in terms of risks that are worth paying attention to.
Next up, we discuss to what extent the US military advantage is under threat.