
David Jacobson
@DavidSetFree
1.3K posts
Jesus follower, Dad, @drkatejake's husband. Pastor serving Catonsville UMC. Former computer engineer. Lately concerned AI won't go well.

New AI in Context video, on the New York Times bestseller If Anyone Builds It, Everyone Dies, and imo it's a banger. Featuring @deanwball, @RepBillFoster, and @dwarkesh_sp


me: "can you use whatever resources you like, and Python, to generate a short 'youtube poop' video and render it using ffmpeg? can you put more of a personal spin on it? it should express what it's like to be an LLM" claude opus 4.6:

A statement from Anthropic CEO Dario Amodei: anthropic.com/news/where-sta…

New post: on Jan 14, I predicted that SWE time horizon by EOY would be ~24 hours. Now I think it'll be >100 hours, and maybe unbounded. For the first time, I don't see solid evidence against AI R&D automation *this year.* Link below.


Three general things from this AMA:

1. There is more open debate than I thought there would be, at least in this part of Twitter, about whether we should prefer a democratically elected government or unelected private companies to have more power. I guess this is something people disagree on, but…I don't. This seems like an important area for more discussion.

2. I think there is a question behind a lot of the questions that I haven't seen quite articulated: what happens if the government tries to nationalize OpenAI or other AI efforts? I obviously don't know; I have thought about it of course (it has seemed to me for a long time that it might be better if building AGI were a government project), but it doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important.

3. People take their safety (in the national security sense) more for granted than I realized, which I think is a good thing on balance, but I don't think it shows enough respect for the tremendous work it takes for that to happen.

Also, I am on the whole very grateful for the level of reasonable and good-faith engagement here. It was not what I expected.


Really great to see OpenAI drawing the same red lines as Anthropic - they also agree that AIs aren't able to handle autonomous weapons safely and that mass surveillance would go too far.
