Pinned Tweet
Benjamin Tereick
13 posts

Benjamin Tereick
@BenTereick
Associate Program Officer, Forecasting at Coefficient Giving (previously known as Open Philanthropy) https://t.co/tv7dybPDm2
Berlin, Germany · Joined July 2025
110 Following · 35 Followers

@girishsastry But it seems plausible ex ante that such tools could be helpful, and with AI forecasters one could study the effect on accuracy much more easily.

@girishsastry Good catch! The source is basically anecdotal, we know some good forecasters who use PPLs in their own work. But the way we wrote this is too strong - I don't think anyone has studied the effect of quantitative tools on accuracy.

Reminder that our RFP for AI for forecasting and sound reasoning will be closing soon. We've updated the submission deadline to February 6, so make sure to submit your proposal in the next week!
Benjamin Tereick@BenTereick
Open Philanthropy's Forecasting team is launching an RFP for AI for forecasting and sound reasoning! We will be accepting proposals at least until January 30, 2026. Link to full RFP text below! (🧵)
Benjamin Tereick retweeted

"What's your p(doom) from AGI?" ...it's a bad question
"p(doom)" talk is a common thing to hear at SF parties and elsewhere, but it's annoying because there's so much confusion between the following:
1. The likelihood of doom if (a) AGI is built and (b) we do not have any change in levels of safety engineering / safety policies / alignment than we currently have today -- essentially the level of doom with business as usual and current techniques assuming no one pays any more attention than they already do.
2. The likelihood of doom if AGI is built (but allowing for the possibility we might dramatically increase safety and/or solve alignment or something).
3. The likelihood of doom over a given time period, including both (a) the chance that AGI is just never built, AGI turns out to be too hard to make, we're wrong about AI capability development and (b) the possibility we might dramatically increase safety and/or solve alignment or something
People equivocating between these three different meanings makes it pretty hard to have a conversation about this.
Additionally, the notion of "doom" itself is also pretty confused. Some take it to mean just human extinction.
Some say that if all of humanity is far better off than today but we never achieve our greatest possible future galactic potential, then we are still in "doom" -- and there is a lot in between.
If it were the year 2100 and there were one trillion people on Earth who each had the quality of life of "top 1% in the US in 2025", would that be "doom"? Some say yes, some say no. This makes it even harder to have a conversation about this if we don't even know what "doom" is.
So if you ask me "What's your p(doom) from AGI?" I'm going to have to say "Please just ask me a normal question".

@postfactnews @rheimann @_KarenHao @AndyMasley I think @rheimann thought you were referring to Andy's thread by "this thread", and you kept talking past each other.

@rheimann @_KarenHao @AndyMasley Genuinely this is an embarrassing interaction for you.
I’ve already provided you with source material, you’re refusing to engage with it and asking for me to (further) spoonfeed it to you
Benjamin Tereick retweeted

See the full description here: openphilanthropy.org/request-for-pr…
