China Reacts to Anthropic-DoW
"idealists like Anthropic who try to walk a tightrope between commerce and ethics are destined to be crushed under the wheels of power"
Anthropic managed to massively piss off both the DoW and China in the same week.
For context: On February 23rd, Anthropic was summoned to the Pentagon by Secretary Hegseth, who demanded Claude’s safety guardrails be stripped for unrestricted military use. That same day, Anthropic published a blog post accusing three Chinese AI labs (DeepSeek, Moonshot/Kimi, and MiniMax) of industrial-scale distillation. A few days later, Trump called them a “RADICAL LEFT, WOKE COMPANY” and blacklisted them from all federal contracts. Hegseth then said he would designate them a national security supply chain risk, a label previously reserved for foreign adversaries like Huawei. The distillation accusations, meanwhile, landed in China as hypocritical politicking, compounding the bad blood from Anthropic’s September 2025 restrictions on Chinese-controlled companies.
Anthropic now occupies an unprecedented political position: regarded in Washington as too woke to be trusted, and in Beijing as the most hawkish AI company.
The Irony
The most palpable emotion on Chinese social media is irony. Given Anthropic’s track record with China — banning Chinese-controlled companies, labeling China an enemy state in internal documents, and pushing hardest in Washington for compute restrictions on Chinese firms — Chinese netizens were not exactly sympathetic when the blacklist dropped.
Anthropic, which has done more than any other Western AI company to frame China as a threat, may now receive the same “supply chain risk” designation historically reserved for Chinese companies like Huawei. The mockery lands harder given that just weeks earlier, Anthropic was being called “AI Thanos” (“AI灭霸”) after its February product releases wiped out software stocks (IBM down 13%, CrowdStrike down 6.5%).
But there’s a second level of political irony. The US government, which built its entire AI export control regime around the premise that democracies develop AI differently from autocracies, spent this week threatening a company with criminal prosecution for refusing to enable domestic mass surveillance and fully autonomous weapons, the exact use cases Washington spent years warning China would pursue. From America’s AI Action Plan, the Trump Administration’s policy roadmap for AI released in July 2025:
“AI systems will play a profound role in how we educate our children, do our jobs, and consume media. It is essential that these systems be built from the ground up with freedom of speech and expression in mind, and that U.S. government policy does not interfere with that objective. … The distribution and diffusion of American technology will stop our strategic rivals from making our allies dependent on foreign adversary technology.”
For Chinese audiences, this is evidence that the democratic AI governance narrative under Trump is more about competitive advantage than principle.
Distillation Accusations
The distillation accusations landed in China as a bad-faith political attack dressed up as a security concern. A framing that came up repeatedly was ‘the thief crying thief’ (贼喊捉贼). Many outlets, like Guancha’s 关心 Guanxin column, argued that Anthropic trained its models on internet data scraped without authorization, then accused Chinese companies of “distillation” and framed it as a foreign attack requiring government intervention. 36Kr made the further point that this was a lobbying document timed to coincide with the Pentagon negotiations, an attempt to invoke the China threat to win a contract dispute.
Guanxin made a related point that’s been gaining traction across Chinese tech commentary, which is that Anthropic inadvertently made the strongest possible case for open-source AI. Anthropic claims it could identify individual researchers at Chinese labs from API metadata, tracking query patterns down to specific employers.
“Anthropic, intending to attack its competitors, inadvertently became the most powerful advertisement for open-source AI. Their actions demonstrated to everyone that under the architecture of closed-source AI services, your privacy, your autonomy, and your right to know are all unprotected. When a company can monitor, judge, and punish you at any time in the name of ‘security,’ so-called ‘trust’ is no longer a virtue, but a risk.”
This argument overreaches a bit, since open-source developers run API businesses of their own, which give them comparable visibility into customer workflows. But the essential claim survives: open-source models can be self-hosted and run locally, with no API calls to the original developer at all.
心智观察所 Xinzhi Observatory, another Guancha column, offered a more nuanced take, arguing that Anthropic’s attitudes toward both China and the Pentagon are consistent with the company’s longstanding worldview.
“[Amodei’s] core argument is not ‘a particular country is dangerous’ but ‘highly capable AI is inherently dangerous.’ In his view, regardless of whose hands a model falls into, the absence of constraints is sufficient for it to be weaponized for mass surveillance or autonomous weapons systems. The intellectual roots of this position can be traced to the influence of effective altruism and long-termism. The logic runs: once AI capabilities cross a certain threshold, they may produce structural risks — and constraints must therefore be built in before deployment. [...] In invoking national security language in its accusations against Chinese companies, Anthropic has, objectively speaking, participated in America’s tech-competition narrative toward China. But its fundamental starting point is concern about ‘capability proliferation,’ not hostility toward any particular nation. It can criticize Chinese companies for distillation, and it can also refuse to grant the U.S. military ‘blanket authorization’ for military use cases. It draws red lines in both directions.”
The Dissolution of US AI Governance
Putting Anthropic aside for the moment, Chinese commentary is drawing some broader structural conclusions about what this episode reveals about the US’s approach to AI governance.
The most common read, unsurprisingly, is that the Washington-Silicon Valley rift exposes a fundamental instability in the American AI ecosystem. State-affiliated general news outlet 澎湃 framed this primarily as a Silicon Valley vs. Washington D.C. story, noting that 550+ Google and OpenAI employees signed an open letter supporting Anthropic. TMTPost 钛媒体, a leading business and tech outlet, goes a step further in predicting the end of the Washington-Silicon Valley alliance altogether:
“This marks the moment when the covert power struggle between Washington and Silicon Valley — over AI control, the limits of military applications, and tech ethics — finally dropped all pretense and broke into open, no-holds-barred confrontation.”
China, by contrast, has already resolved this question — at least according to many Chinese observers. There was never a pretense that commercial AI companies could set their own limits on military use. The US is discovering messily and publicly what China settled structurally years ago, which is that frontier AI is a powerful technology with deeply dual-use implications, not solely a commercial product with obvious ethical opt-outs. As the aforementioned TMTPost piece puts it:
“[…] idealists like Anthropic who try to walk a tightrope between commerce and ethics are destined to be crushed under the wheels of power […] In the track of artificial general intelligence (AGI), there has never been a so-called ‘neutral zone.’ In the coming months, the battle between Washington and Silicon Valley over model control, underlying values and business interests will surely usher in more intense pains. The final outcome of this game may have a more profound impact on the future of humanity and AI than any iteration of technical parameters.”

Domestic surveillance in China is a de facto assumption, with all companies required to surrender user data to the government if requested. That being said, Chinese analysts have not reached a consensus on Anthropic’s other red line: autonomous drone strikes. Back in February 2025, Peking University Professor Zhu Qichao 朱启超 contributed an op-ed about AI and the ethics of autonomous weapons to the People’s Daily, the official newspaper of the Chinese Communist Party’s Central Committee. The publication of analytical writings like this in top state media outlets is a good indication that for decision-makers in Beijing, this is a topic worthy of further study and debate rather than a settled matter. Zhu wrote:
“When an AI system malfunctions or makes a flawed decision, should it be treated as an independent entity bearing responsibility? Or should it be treated as a tool, with human operators bearing all or part of the liability? The complexity of this accountability question lies not only at the technical level but also at the ethical and legal levels. On one hand, although AI systems are capable of autonomous decision-making, their decisions remain constrained by human-designed programs and algorithms — meaning their liability cannot be entirely separated from human responsibility. On the other hand, AI systems may in some circumstances exceed the parameters humans have set and act on independent judgments; how to define accountability in those cases has become a persistent challenge in arms control. […]
As AI is applied ever more deeply to military contexts, the human role within combat systems is shifting — from the traditional ‘human-in-the-loop’ model toward ‘human-on-the-loop,’ with humans evolving from direct operators inside the system to external supervisors monitoring it from without. This transition, however, raises new questions of its own. Ensuring that AI weapons systems continue to adhere to human ethics and values when operating independently represents one of the most significant challenges currently confronting the field of AI weapons development.”
For many in China who looked to the US as a place where a safety-focused company could resist state capture, where Anthropic’s model of principled refusal was at least theoretically possible, that idea has now taken a serious hit. Weijin Research 未尽研究, an independent analysis firm, argued in a piece published before this dispute, “Anthropic’s safety-first principle functioned not only as a moral standard-bearer but also as a powerful commercial moat — one that proved especially effective in enterprise and government markets.” Quoting Dean W. Ball’s commentary on the Pentagon’s decisions, Weijin Research asserted that the situation is a “warning for the entrepreneurship ecosystem and talent flows […] under this political environment, is any tech company truly safe?”
Taiwanese Perspectives
Does the threat of falling behind China justify tabling ethical questions about military AI? Some Taiwanese defense analysts think the world would be better off if Anthropic chose to work within the system.
Pei-Shiue Hsieh 謝沛學 at Taiwan’s Institute for National Defense and Security Research (INDSR) writes:
“Non-democratic regimes possess an ‘asymmetric advantage’ in the military application of AI. The standoff between Anthropic and the U.S. Department of War over ‘lethal autonomous weapons’ reflects an uncomfortable truth: setting aside technological and economic capabilities, democracies have inherent disadvantages and limitations in the military application of artificial intelligence — particularly in the development of ‘lethal autonomous weapons.’ The ‘don’t be evil’ principle may occupy the moral high ground, but it only has influence over policymakers and corporations in democratic countries; politicians in non-democratic states are entirely unconstrained by it.
This is analogous to how the restrictions the Intermediate-Range Nuclear Forces Treaty (INF Treaty) imposed on the United States allowed China — which refused to join the treaty — to build up an advantage in intermediate-range ballistic missiles and area-denial capabilities in the Indo-Pacific region.
Let us posit a scenario here: Anthropic’s resistance succeeds and triggers a chain reaction, Silicon Valley’s tech mainstream reverts to its stance of withdrawing from defense contracts, and the U.S. military’s military AI development is severely impeded as a result. Meanwhile, China is able to integrate AI into all manner of military R&D without restraint, ultimately achieving an overwhelming advantage in military AI — particularly in ‘lethal autonomous weapons.’ Would such a world be safer?”
Meanwhile, other analysts lamented the emerging race-to-the-bottom dynamic. One writeup called the dispute “the AI industry’s first coming-of-age ceremony” (成年禮). Another author, “Future Lin,” wrote on Substack:
“This is a decisive moment for AI governance, not a neutral policy debate. The core issue is not whether Anthropic should concede, but rather: ‘When governments have the weapon to designate tech companies as national security threats, who dares to say no to the military?’ Taiwan’s AI industry is not a bystander, because once this logic becomes entrenched, the ethical accountability mechanism of the global AI supply chain will be fundamentally shaken.
…
For decades, the U.S. tech industry’s advantage has partly stemmed from its relatively independent operating logic — the government can procure, but it cannot make unlimited demands. This boundary is one of America’s invisible assets for attracting top global AI talent: you can start an AI safety company here without worrying that the government will forcibly repurpose your technology for something you consider harmful.
That premise has now been compromised.
The implications for Taiwan are more direct: many Taiwanese AI startups have business plans that include the U.S. market and U.S. government contracts. In this new environment, the assessment of ‘whether you can land a U.S. government contract’ must incorporate a new dimension — if you have ethical boundaries, and those lines conflict with what the government demands, what consequences are you prepared to bear? The more fundamental question is this: if the ethical standards of the global AI supply chain are dictated by the government agencies with the greatest purchasing power, where is the market space for the very concept of ‘AI safety’?”
Social Media
Living next to an authoritarian superpower that faces no such internal friction, some Taiwanese commentators see Anthropic’s ethical stand as a luxury democracies can’t afford.
On Threads:
This whole thing is obviously just Anthropic being idiots (當小87). The company isn’t like Google, with data centers and energy infrastructure spread all over the world, and it doesn’t have the ability to develop its own hardware either. Under those conditions, its bargaining position was already weak to begin with. Because its supply chain has to follow the U.S. government and the military anyway, it basically has zero leverage to pursue some “tech-lefty” agenda. Google employees can afford to play some progressive political games because the company’s fundamentals are strong enough to support that. Anthropic doesn’t have that luxury at all — it’s basically just making more investors want to pull their money out.
From the PTT Stocks board:
Is it possible for your enemy, China, to do such a thing?
If the PLA were to use DeepSeek, would Liang Wenfeng dare to tell them, “You can only use DeepSeek for XXX, not OOO”?
To be fair, the pressure forcing Anthropic’s hand reflects genuine urgency inside the Pentagon about closing the gap with China — particularly in drone warfare, where the quality of Chinese drones has spooked Washington. If a Taiwan contingency ever materializes, the DoW at least seems serious about not showing up to that fight with inferior AI.

It’s easy to forget what makes democracies and markets work, especially amid the mess of recent election choices. Democracy and markets are not straight lines: there are bad actors, bad actions, and bad choices, but there is learning in that process, even through the less successful outcomes. That’s not to say mistakes aren’t repeated, but learning produces higher highs and answers that the alternatives never explore.