Beijing’s Vision of Global AI Governance
Room for collaboration? Kind of!
The following is a guest column by Sihao Huang, a DPhil candidate at Oxford.
On October 18, Xi Jinping unveiled China’s Global Initiative for AI Governance at the opening ceremony of its Belt and Road Forum. The announcement came just two weeks before the UK AI Safety Summit, helping China set the tone on a technology bound to remake geopolitics in the decades to come.
Rapid advancements in AI capabilities have pushed governance issues to the forefront. In addition to challenges from algorithmic bias and economic displacement, highly capable AI models could be used to run propaganda campaigns, design novel biochemical agents, and threaten public safety. These harms could cross state borders and proliferate rapidly. As a result, many governments and organizations are now seeking to create international governance systems to manage the risks posed by advanced AI.
China has cast its new Initiative in the language of AI security, mirroring UK and US proposals for model evaluations, risk management, and trustworthiness. But Beijing is also using the Initiative to prioritize protecting its AI access and national sovereignty, criticizing the West’s “unilateral coercive measures” and “technological monopolies” for drawing ideological lines. This focus on substantive issues may leave room for robust collaboration with the West, but it also sets the stage for new conflicts that will complicate global AI governance.
Learning to Mirror the West
Over the past few years, China has been grappling with two somewhat independent problems in AI governance. The first is crafting a coherent foreign-policy position as a responsible AI developer; Beijing has adopted widely accepted language on AI ethics, such as mitigating bias and discrimination. The second is learning how the rapidly evolving technology of AI interacts with China’s core interests. While Beijing is learning from the West on international statements, it has pioneered arguably the most ambitious AI regulations to date, creating an auditing system and incident database with the goals of limiting security incidents and controlling its domestic information environment.
Both of these threads have been visible throughout China’s AI diplomacy efforts, although the first priority, a positive international image, has dominated. In 2021, China signed on to a UNESCO declaration that called for an end to mass surveillance — a pledge that would have banned its own AI practices. In November 2022, the Foreign Ministry submitted a position paper in Geneva, “On Strengthening Ethical Governance of AI.” The document discussed human rights, data privacy, and basic freedoms, adopting extensive language on algorithmic fairness that sounds indistinguishable from what an American organization could have produced.
But buried in that 2022 paper were two points that will eventually be key to China’s AI strategy: Beijing advocated for regulatory systems suited to “national conditions,” and then denounced the “malicious obstruction of other countries’ technological development.” Chinese Ambassador Zhang Jun 张军 echoed these points in a speech at the UN Security Council (UNSC) this July: after emphasizing that AI must be regulated based on a country’s own social characteristics, he then launched a thinly veiled attack on the United States, stating that “a certain developed country, in order to seek technological hegemony, seeks to build exclusive small clubs.” His statements refer to America’s extensive semiconductor controls that are aimed at slowing down China’s compute capabilities. China has begun to leverage these value-based statements on AI governance to advance its own interests on the global stage.
From Rhetoric to Substance
The recent unveiling of the Global AI Governance Initiative marks the next step in Beijing’s AI strategy. The timing on the eve of the UK AI Safety Summit and the reconvention of the G7 Hiroshima process was striking. It also came on the heels of Xi’s proposal this August for a BRICS AI study group — a “Global South” alternative to Western endeavors, which, in many cases, have been centered around affluent, AI-producing nations.
China wasn’t shy about expressing its grievances. In a new statement, Beijing “opposes using AI … for manipulating public opinion, spreading misinformation, [and] intervening … in internal affairs.” These fears about losing control resemble those expressed by Chinese policymakers in the wake of ChatGPT’s release last year. And though China’s UNSC speech in July mentioned the need to tailor regulations to each country’s conditions, China’s new Initiative spelled out “national sovereignty” as a distinct principle. The notion of sovereignty and non-interference appears poised to become a core tenet of China’s future AI diplomacy.
Access to AI was also a central issue in the AI Governance Initiative announcement. China argued that “all countries, regardless of their size, strength, or social system, should have equal rights to develop and use AI” — making an explicit call for open-source AI while taking aim at America’s approach to tech competition. [Ed: particularly awkward in light of CAC’s recent banning of Hugging Face! See ChinaTalk’s coverage below.]
In the announcement, Beijing stated that it supports discussions “within the United Nations framework” to establish international institutions that govern AI — a venue where, unlike the G7 or OECD, developing nations call the shots.
This move could feed two birds with one scone. On the one hand, Beijing is currying favor among developing countries and positioning itself as a champion for equal access. On the other, it is seeking to fight back against America’s efforts to restrict China’s semiconductor industry and frontier AI development, particularly since it is falling behind on the most advanced systems.
At face value, China is trying to adopt the mantle of spokesperson for the Global South. This may be a bid for Beijing to build influence and advance its own AI agenda, but the narrative is a powerful one. Liberal democracies also have an obligation to make sure that developing countries are given a voice. As capabilities diffuse, nations currently not at the forefront of AI development will become increasingly relevant in discussions on AI safety. Fragmenting the AI governance ecosystem could lead to failures down the line, and it is bad for both equity and representation if developing states are included only in fora where China has undue leverage. The West should be proactive in providing equitable, safe, and structured access to AI — an essential element of securing buy-in to any robust governance regime.
The Golden Age of AI Diplomacy?
With that said, not all is gloom and doom on AI collaboration. In addition to China’s core concerns about access and sovereignty, it also made clear a third priority: security. The new document is heavy on what Western observers would recognize as AI safety language, although in the official English version, “safety,” which has appeared in previous statements, is now replaced with the alternate translation “security”; both meanings map to the Mandarin word ānquán 安全.
This move — away from both safety and ethics language — could indicate that China is shifting toward more pragmatic policy concerns. The new Initiative talks about “[working] together to prevent risks” to make AI more secure and controllable. And consistent with what the US, UK, and EU have proposed, China suggests a “testing and assessment system based on AI risk levels,” arguing for a “tiered and category-based” management system to enable rapid response to emerging threats. [Ed: whatever that means!] It further states that countries should collaborate to “fight against the misuse … of AI technologies by terrorists.” The ultimate goal is to ensure that “AI always remains under human control” and to “build trustworthy AI technologies that can be reviewed, monitored, and traced.”
By replacing lofty statements with more concrete suggestions, this new risk-based framework may bode well for international collaboration. Democracies should not be naive about China’s motivations — but with the security and socioeconomic risks from AI more pressing than ever, there must be robust, issues-based collaboration to manage the global commons. Zeroing in on shared security concerns — such as taming risks from frontier models — will make it easier for democracies to engage in substantive dialogue with China without compromising on ethical issues. Such a strategy resembles engagement with China on climate and nuclear challenges and, provided there is genuine political goodwill on all sides, could chart a more constructive course ahead.