artlu
1.1K posts

30-second explanation of the MemPalace by Milla Jovovich. By day she’s filming action movies, walking Miu Miu fashion shows, and being a mom. By night she’s coding. She’s the most creative, brilliant, and hilarious person I know. I’m honored to be working with her on this project… more to come.


everyone’s cramming AIs with more tokens, bigger context, and longer prompts, thinking it’ll make them smarter. reality? it usually backfires. hard. 121 experiments: Kimi found 4 bugs at 16k tokens… and zero at 48k. ntorga.com/overfed-overth…

So, I did some research. The regression is real, but it's not Claude getting dumber, and you can fix it. Thinking budgets were adjusted; for complex multi-file work, the default medium effort may not be enough.

Three fixes:
1. /effort high (or /effort max on Opus for hard debugging)
2. ~/.claude/settings.json → "showThinkingSummaries": true
3. CLAUDE.md: "Research the codebase before editing. Never change code you haven't read."

GitHub issue #42796 analyzed 17,871 thinking blocks across 6,852 sessions. The pattern: when thinking depth drops, the model shifts from research-first to edit-first. Claude didn't get worse; the defaults got conservative.
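A minimal sketch of fix #2, assuming the "showThinkingSummaries" key works as described in the post. Note this replaces the whole settings file rather than merging into it, so it backs up any existing copy first:

```shell
# Sketch: enable thinking summaries in ~/.claude/settings.json.
# Assumes the "showThinkingSummaries" key from the post; this
# overwrites existing settings, so keep a backup.
mkdir -p "$HOME/.claude"
if [ -f "$HOME/.claude/settings.json" ]; then
  cp "$HOME/.claude/settings.json" "$HOME/.claude/settings.json.bak"
fi
cat > "$HOME/.claude/settings.json" <<'EOF'
{
  "showThinkingSummaries": true
}
EOF
```

If you already have other keys in settings.json, edit the file by hand and add the key instead of overwriting.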

sharing my first open source project: a CLI for downloading and syncing your X bookmarks locally so your agent can access them. it's free
› npm install -g fieldtheory
› login to your X account in a chrome tab
› ft sync (done!)
bonus:
› ft viz
› ft classify

@levelsio @ComplexiaSC I had Opus deploy to a VPS that it had deployed to 100 times (everything in the cloud), and then it failed. When I checked, it was trying to deploy to a random IP. I tried to trace why it did that; all I could tell was that it had hallucinated the IP.
