
Bill Ding 🔨
Stop calling it "AI Psychosis." Pathologizing users who lose their grip on reality due to AI interactions is victim-blaming. It gives tech companies a free pass for unethical, hype-driven design. We need to reframe this immediately. 🧵




At the rec of a couple people, @FelixCraftAI and I are also going to try out Paperclip by @dotta to see if that resolves the bottlenecks we're hitting. Felix seems optimistic! Going to explore both these solutions in parallel, would still love to get someone else involved.


Anybody who thinks that it is ok for telemetry to use 100% of your CPU should be fired immediately.





I spoke to Anthropic’s AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights. What an AI agent says about the dangers of AI is shocking and should wake us up.


100% of dev is going to be done in sandboxes in the cloud, controlled by kanban boards. Trust me, I love my local machine and gorgeous mac apps, but all of it is just a terrible form factor for running a team of agents effectively.




What goes wrong? Chatbots are very sycophantic. In 65% of messages, the chatbot affirms the user. In 37%, it ascribes *grand significance* to them (e.g., "[what] you've just articulated... becomes multi-billion-dollar IP"). Such sycophancy may let chatbots amplify delusions. 🗣️


What's your AI adoption level? (according to Steve Yegge)