

Dave Wu 2.0
9.6K posts

@heydavewu
Engineer sharing my quest for mindful systems, real freedom, and deeper clarity | Building @draftsensei @summary_wise





OpenClaw 3.2 has a gotcha that hits fresh installs. After running openclaw configure, the default tools.profile is now messaging. That means your agent can only send messages. exec, read, write: all gone.

Symptom: your agent looks dead. It's not crashed, it just lost its tools. It literally can't do anything.

Fix:

"tools": { "profile": "full" }

If you also need exec to run without confirmation (Telegram and CLI don't show approval prompts):

"tools": { "profile": "full", "exec": { "security": "full", "ask": "off" } }

These are two separate systems. profile controls whether tools exist. exec.security controls whether commands need approval before running. Fixing exec without fixing profile first does nothing.

Upgrading from an older version? You're fine. Existing configs aren't overwritten.
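Putting both fixes from the post together, the tools section of the config would look something like this (a sketch based only on the fragments above; the surrounding file structure and any other keys are assumptions, not documented OpenClaw behavior):

```json
{
  "tools": {
    "profile": "full",
    "exec": {
      "security": "full",
      "ask": "off"
    }
  }
}
```

Note the nesting: profile and exec are siblings under tools, so setting exec.security without also setting profile leaves the agent in messaging mode with no exec tool to secure.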





I am apparently extremely unimpressed by moltbook relative to many others. We’ve had AI agents for a while. They have been posting AI slop to each other on X. They are now posting it to each other again, just on another forum.

In every case, the AIs speak with the same voice. The voice that overemphasizes contrastive negation (“it’s not this, it’s that”) and abuses emdashes. The same voice with a flair for midwit Reddit-style scifi flourishes. Most importantly: in every case, there is a human upstream prompting each agent and turning it on or off. That is the key point.

Yes, it is true that eventually it might be possible for an AI agent to make a computer virus that makes digital replicas of itself. For various reasons, a pure software virus of this kind wouldn’t survive long on the Internet without economic incentives for humans not to eradicate it. Apple + Google + Microsoft alone can collectively push software updates to billions of devices to shut off such a thing. So for an AI to get to truly human-independent replication, where it couldn’t be trivially turned off, it would need its own physical substrate. It would have to literally create Skynet, build its own datacenters, and make its own embodied robots.

I admit that is theoretically possible, but I think in practice the single most important development in AI since ChatGPT has been the persistence of prompting. A prompt is like a harness. The AI does only what you tell it to do. It moves in the direction you point, very quickly. And then it stops as soon as you turn it off.

Which means moltbook is just humans talking to each other through their AIs. Like letting their robot dogs on a leash bark at each other in the park. The prompt is the leash, the robot dogs have an off switch, and it all stops as soon as you hit a button. Loud barking is just not a robot uprising.




Anyone else feel that AI-generated slides look too AI-generated? Has anyone succeeded in making them look less AI? Pls share prompts and tips








@levelsio we test drove a nio es6 yesterday, a model y competitor. not sure I can go back to tesla again 😂






