5 Comments
Yuzu Xu

Chinese domestic framing adds a vocabulary distinction that matters for the deliverable question.

The 15th Five-Year Plan draft language from CAC/MIIT uses ke xin (trustworthy/reliable), not an quan (safe). That distinction matters: trustworthiness points toward CCP-auditable deployment and controllability, while the Western safety framing implies frontier capability caps and catastrophic-risk avoidance. The two framings arrive at different deliverables.

Post-Mythos, the WeChat and policy communities have shifted toward engaging with capability-risk language -- but the move is via attack surfaces and adversarial misuse, not existential risk. Chinese AI safety labs are safety-gating their work (Shanghai AI Lab's Everest project is a good example), but the domestic conversation frames this as safety from adversaries and social instability, not AGI risk.

What's actually achievable at the summit is probably shared governance frameworks for AI in critical infrastructure (power grids, healthcare systems, autonomous vehicles) -- where Beijing already has its own regulatory interest in reliability standards. Not capability caps, not bilateral model auditing. Deployment governance and shared testing standards, where both sides have domestic reasons to want the same thing.

The Synthesis

The ke xin vs an quan split also maps onto a deployment-layer fight. Critical infrastructure governance sounds tractable until you remember the connectivity problem: enterprise AI agents are already running in production with no identity layer on the protocols linking them to internal systems. Auditable controllability assumes you can see the agent acting. Most operators currently cannot.
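To make "no identity layer" concrete, here's a minimal sketch -- every name, endpoint, and key is hypothetical, not any specific vendor's protocol. A typical production agent tool call looks roughly like this: one bearer credential shared across the fleet, and a payload that names the action but never the acting agent.

```python
# Hypothetical sketch of an agent-to-internal-system call with no identity
# layer. Endpoint, credential, and field names are invented for illustration.
import requests

SHARED_FLEET_KEY = "svc-agents-prod"  # one credential for every agent instance

def call_internal_tool(action: str, params: dict) -> dict:
    """Execute a tool call on behalf of *some* agent.

    Nothing in the request says which agent is acting, what task spawned
    it, or where this sits in its plan -- so downstream logs can only
    attribute the action to the fleet, not to an auditable actor.
    """
    resp = requests.post(
        "https://internal.example/tools/run",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {SHARED_FLEET_KEY}"},
        json={"action": action, "params": params},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

That attribution gap is what "auditable controllability" quietly assumes away.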

Yuzu Xu

The connectivity problem you’re describing runs both ways. China’s governance frameworks tackle the liability layer — algorithm recommendation regs, the generative AI interim measures, draft AI Law provisions — but these attach accountability to outputs, not to agent state during execution. You can mandate audit trails without being able to observe the agent that generated the trace.

Which creates a design pressure point: if you can only audit outcomes, not process, regulators default to liability chains and mandatory human-in-the-loop at defined chokepoints. Neither ByteDance's Coze nor the other domestic agent frameworks have native identity layers either. The Chinese instinct is to assign accountability through party/organizational chains, not through protocol observability.
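The two record shapes come apart cleanly in code. A hedged sketch with hypothetical field names: the first is what an outputs-based mandate can demand today; the second is what protocol-layer observability would need the agent runtime itself to emit.

```python
# Hypothetical record shapes; all field names invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OutputAuditRecord:
    """Auditable after the fact: anchors liability to what was produced."""
    output_id: str
    produced_at: str         # ISO 8601 timestamp
    content_hash: str        # hash of the final artifact
    responsible_party: str   # the liability-chain anchor (org or person)

@dataclass
class AgentExecutionEvent:
    """Observable during the act: what the agent is doing, and as whom."""
    agent_id: str            # stable per-agent identity -- the missing layer
    task_id: str             # the delegation that spawned this action
    step: int                # position in the agent's plan
    tool_called: str
    arguments_hash: str
    approved_by: Optional[str] = None  # human-in-the-loop chokepoint, if any
```

Mandating the first is a paperwork requirement; emitting the second requires exactly the identity layer the current frameworks lack.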

So, on what's actually signable at the summit: critical-infrastructure AI governance based on 'audit trails required' is easier to agree than 'observable agent protocols required' -- but the two address different problems. The liability framing might be negotiable. Protocol-layer observability is technical debt both sides share.

Lachlan Carroll

flagging that there are three repeated paragraphs at the beginning.

Alec Pritzos

The pre/post-Mythos shift in US negotiating posture is the cleanest example of how a capability discovery resets a diplomatic timeline. Vance and other senior officials were openly mocking AI safety weeks ago; now both sides are backgrounding deliverables. Beijing's path is similar in shape: the 2024 AI Safety and Governance Framework reads as super high level, while the 2.0 version actually engages with technical-standards mitigations. The tell will be whether the leaders ship a joint communiqué or two parallel domestic announcements dressed up as bilateral progress.