5 Comments
joostshao1989

When it comes to science and technology, what ordinary Chinese people perceive in this wave is, typically, anxiety. Everyone knows the demographic dividend is over, and nobody wants to be left behind; that anxiety keeps most people from calmly taking stock of themselves.

Giving Lab

Really liked the “cyber bureaucratic court” framing — it explains something most OpenClaw debates miss: governance is not a bolt-on, it’s part of system performance. The reliability point in the thread comments is spot on too; if teams can’t trace why an agent acted, they can’t maintain it.

One practical pattern we’ve been using is a lightweight run receipt for each important agent action (intent → context used → tool calls → rollback/fallback note). It turns governance from philosophy into an auditable workflow.
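That run-receipt pattern could be sketched as a small record type. This is a minimal illustration only, not Giving Lab's actual implementation; all names (`RunReceipt`, the field names, the `payments.refund` tool) are hypothetical:

```python
# Hypothetical "run receipt" for one agent action: intent -> context used
# -> tool calls -> rollback/fallback note. All names are illustrative.
from dataclasses import dataclass, field, asdict
import json
import time


@dataclass
class RunReceipt:
    intent: str                       # what the agent was asked to do
    context_used: list[str]           # documents / memory slices consulted
    tool_calls: list[dict] = field(default_factory=list)  # tool name, args, result summary
    fallback_note: str = ""           # rollback or fallback taken, if any
    timestamp: float = field(default_factory=time.time)

    def to_json(self) -> str:
        # Serialize so the receipt can be logged and audited later.
        return json.dumps(asdict(self), indent=2)


# Example: record one agent action against a hypothetical refund tool.
receipt = RunReceipt(
    intent="refund order #1234",
    context_used=["orders.db:1234", "refund_policy.md"],
)
receipt.tool_calls.append(
    {"tool": "payments.refund", "args": {"order": 1234}, "result": "ok"}
)
print(receipt.to_json())
```

Because each receipt is plain JSON, it can be appended to an ordinary log and queried later when someone asks why the agent acted.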

If useful, we publish concrete teardowns of these operator workflows at Giving Lab: https://substack.com/@givinglab

Banji Lawal

Your post reminds me that even though LLMs can produce code, they can't produce software with guaranteed reliability and security, because what's happening inside the modules isn't understood or tested. It might work on delivery but be difficult to maintain.

It might be cheaper to hire a software architect and ten good developers than to pay for all these vibe-coding platforms.

James Wang

I'm not even sure what niche this falls into, but I enjoy it.

BigDog

I can’t tell you how fascinated I am with China’s response to OpenClaw, and the idea of using historical Chinese political systems to govern models.

There’s a pretty amusing narrative coming to mind, where the decadent West, so obsessed with freedom and democracy, fails to truly apply the power of AI agents. China, meanwhile, owing to its history of different ways of organising society, has the mental and cultural frameworks to properly govern and apply AI.