This is one of the most important pieces I've read on the US-China AI divergence, and I think it deserves more attention from people outside the usual China-watching circles.
The "conceptual gap" framing is brilliant. Both sides are doing capability-equivalent work but narrating it through completely different theoretical lenses, which means surface-level intelligence analysis misses the real signal. SII's "self-evolving closed loop" IS recursive self-improvement, just wrapped in deployment-friendly language. The MiniMax 80% AI-generated code stat alone should have triggered more alarm in DC than it did.
What I find underexplored, though, is how this divergence reshapes the geopolitical calculus for everyone ELSE. If China's path to AGI runs through embodied AI and manufacturing infrastructure, then countries with strong manufacturing bases but weak compute access (Mexico, Vietnam, parts of Southeast Asia) suddenly become strategically relevant in ways the software-only RSI narrative ignores entirely. China's robot boot camps funded by local governments aren't just domestic policy. They're a template that could be exported through BRI-adjacent partnerships, creating physical AI training infrastructure across the Global South while the US focuses on chip export controls.
The constraint-driven innovation point also resonates beyond China. Every AI ecosystem outside the US faces similar compute ceilings. If the embodied path proves viable, it democratizes the AGI race in ways that pure scaling never would. That's either terrifying or hopeful depending on your threat model.
"democratizes the AGI race" might be a bit too rosy a view, though, right? If China is supplying the hardware, doing a lot of the diagnostics, and maybe inserting the odd back door, then it's pretty clear what the democratic outcome is going to be.
Not that I'm necessarily against the Chinese model. Inclusion is nicer than exclusion! But if the RoW knows what's good for it, it had better get pretty good at auditing this stuff.
AGI seems mostly like a buzzword in China – spoken of with a lack of seriousness, not too different from how the AWS website declares that one can "Get started with AGI by signing up for an AWS account today." There may be some true believers, but for many of the Chinese companies that talk about AGI, few employees have any notion of what AGI really represents. They are focused on the practical short-term capabilities.
The buzzword point is acknowledged in the article. I also agree that most employees may not see their work as having anything to do with AGI. Still, I think both the US and China mix practicality with ideological ambition on AI, and it is too broad a generalisation to cast the US as ideological versus China as practical.
Also, we are all in certain echo chambers, and the people around us may be more AGI-pilled than broader society as a whole. I do wonder, outside certain frontier AI labs in the US, how many employees at US AI companies have any notion of what AGI really means (also, no one has a monopoly on the definition of human-level intelligence imo). There are always certain people at the top with ideological ambition and others who just want to live their everyday lives. I don't think this is something unique to China.
Thanks Zilan, your post on China’s different path to AGI is especially valuable because it highlights a point that is still widely underappreciated: China is not ignoring AGI. Rather, it is approaching AGI through a path that is visibly different from Silicon Valley’s.
And just as importantly, another point you made is both rare and highly insightful: technological pathways are usually not determined by abstract technical optimality alone. They are shaped by resource endowments, industrial structure, and institutional incentives.
That said, there are several points that I think might need to be made more clearly.
First, the fact that many Chinese researchers, companies, and institutes are talking about multimodality, world models, and embodied intelligence does not automatically mean that China has already formed a clear, stable, national-level AGI pathway capable of shaping capital allocation and research priorities in a deeply coordinated way.
Second, the United States is not entirely neglecting world models or embodied intelligence either. Conversely, China is not neglecting coding agents, base models, or algorithmic efficiency.
Third, physical-world feedback may indeed prove to be a critical ingredient for AGI, but that remains an open question rather than a settled fact. It is entirely possible that high-quality simulated environments, tool use, software-agent collaboration, and automated research could be enough to push models to extremely high levels of general capability, in which case the importance of embodied closed loops may be overstated. On the other hand, if truly human-level intelligence cannot emerge without embodiment, environmental interaction, causal feedback, and social experience, then Silicon Valley’s current emphasis on coding agents and recursive self-improvement may turn out to be too narrow. At this stage, it is still too early to know which path is more likely to prevail.
Loved this piece. Nit-picking, but I slightly take issue with the both-sides-ism. China can plausibly be characterised as working with the resources it can control. The US approach seems to have more to do with the fact that Silicon Valley VCs know software, like the economics of software, and decidedly do not like stepping outside that comfort zone.
More like this, more from this author.
🫡🫡🫡
Yeah, that's super interesting.