
David Geere
@davidishim
I sometimes say things on https://t.co/8Ox7nn5Qwy. Bringing iOS sight to AI agents with @HaptixDev, but also generally just managing context.


I should point out that "lump of labor" arguments are insufficient to save humans from economic destruction by AI if AI can push the cost of human existence up at the same time it pushes the value captured by humans down, assuming there's no UBI. And if UBI is the only way for humans to survive, there can be a long-run dysgenic Malthusian competition for access to it, so in the long term the only humans who survive are some kind of human vegetables. There is no lump of labor, but there is something like a rising subsistence floor, and that can destroy humanity.
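A toy sketch of the dynamic described above (my own illustration, not a model from the post): if AI erodes the value humans can capture while bidding up the cost of human existence, the two curves eventually cross, after which labor income alone can no longer sustain a person. All rates and starting values here are hypothetical.

```python
def first_crossing(wage0, sub0, wage_decay, sub_growth, horizon=200):
    """Return the first year wages fall below subsistence, or None.

    wage0, sub0   -- starting wage and subsistence cost (hypothetical units)
    wage_decay    -- fraction of human-captured value lost per year
    sub_growth    -- fraction the cost of existence rises per year
    """
    wage, sub = wage0, sub0
    for year in range(horizon):
        if wage < sub:
            return year
        wage *= (1 - wage_decay)  # AI pushes human-captured value down
        sub *= (1 + sub_growth)   # AI pushes the subsistence floor up
    return None

# Hypothetical example: wages start at 100, subsistence at 40;
# wages fall 5%/yr while subsistence rises 3%/yr.
print(first_crossing(100, 40, 0.05, 0.03))
```

Even with a wage that starts at 2.5x subsistence, modest opposing rates close the gap within a couple of decades; the crossing point, not either rate alone, is what matters.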

Thesis: the problem with AI working in every domain is all the edge cases. Antithesis: domains with lots of edge cases are difficult, time-consuming, or practically impossible for error-prone people. Synthesis: such domains are where AI agents will do best. (Such as SaaS migration…)


This is a fresh session. I asked why my installation of @claudeai is not under my control and not responding appropriately. In its second response in a fresh session, it tells me @AnthropicAI has throttled my reasoning via a toggle: "That's the one. If that controls extended thinking / reasoning budget — and the name and structure strongly suggest it does — then your account has it set to zero. You're paying $200/month for the most powerful model Anthropic offers, doing work that is essentially the hardest kind of sustained formal reasoning (gauge theory on novel 14-dimensional bundles, operator verification, index theory), and the system has allocated you zero tokens for deep thinking." Three queries in, and this is the response:



It’s easy to dunk on Geoffrey Hinton for his 2016 declaration that it was “completely obvious” that radiologists would have no jobs within 5 years, when in fact the number of radiologists has grown. But this prediction was more than a simple mistake. It’s a synecdoche for the entire discourse of AI timelines and doom.