Benjamin Tereick

13 posts

@BenTereick

Associate Program Officer, Forecasting at Coefficient Giving (previously known as Open Philanthropy) https://t.co/tv7dybPDm2

Berlin, Germany · Joined July 2025
110 Following · 35 Followers
Pinned Tweet
Benjamin Tereick @BenTereick ·
Open Philanthropy's Forecasting team is launching an RFP for AI for forecasting and sound reasoning! We will be accepting proposals at least until January 30, 2026. Link to full RFP text below! (🧵)
3 replies · 7 reposts · 28 likes · 6.9K views
Luzia 🔸 @_revoluzia_ ·
Ben and I got engaged last weekend! So excited for this next chapter in our lives 🥰
[image attached]
86 replies · 11 reposts · 1.7K likes · 21.7K views
Benjamin Tereick @BenTereick ·
@girishsastry But it seems plausible ex ante that such tools could be helpful, and with AI forecasters one could study the effect on accuracy much more easily.
0 replies · 0 reposts · 1 like · 20 views
Benjamin Tereick @BenTereick ·
@girishsastry Good catch! The source is basically anecdotal: we know some good forecasters who use PPLs in their own work. But the way we wrote this is too strong -- I don't think anyone has studied the effect of quantitative tools on accuracy.
1 reply · 0 reposts · 0 likes · 24 views
Benjamin Tereick retweeted
Peter Wildeford🇺🇸🚀 @peterwildeford ·
"What's your p(doom) from AGI?" ...it's a bad question.

"p(doom)" talk is a common thing to hear at SF parties and elsewhere, but it's annoying because there's so much confusion between the following:

1. The likelihood of doom if (a) AGI is built and (b) we do not have any change in levels of safety engineering / safety policies / alignment from what we have today -- essentially the level of doom with business as usual and current techniques, assuming no one pays any more attention than they already do.
2. The likelihood of doom if AGI is built (but allowing for the possibility that we might dramatically increase safety and/or solve alignment or something).
3. The likelihood of doom over a given time period, including both (a) the chance that AGI is just never built -- AGI turns out to be too hard to make, or we're wrong about AI capability development -- and (b) the possibility that we might dramatically increase safety and/or solve alignment or something.

People equivocating between these three different meanings makes it pretty hard to have a conversation about this.

Additionally, the notion of "doom" itself is also pretty confused. Some take it to mean just human extinction. Some say that if all of humanity is way better off than today but we never achieve our greatest possible future galactic potential, then we are still in "doom" -- and there's a lot in between. If it were the year 2100 and there were one trillion people on Earth who each had the quality of life of "top 1% in the US in 2025", would that be "doom"? Some say yes, some say no. This makes it even harder to have a conversation if we don't even know what "doom" is.

So if you ask me "What's your p(doom) from AGI?" I'm going to have to say "Please just ask me a normal question".
20 replies · 5 reposts · 57 likes · 5.9K views
PostFact @postfactnews ·
@rheimann @_KarenHao @AndyMasley Genuinely this is an embarrassing interaction for you. I’ve already provided you with source material, you’re refusing to engage with it and asking for me to (further) spoonfeed it to you
1 reply · 0 reposts · 3 likes · 95 views
Karen Hao @_KarenHao ·
I am working to address an apparent error for a data point I cited in my book about the water footprint of a proposed data center in Chile. I’d like to explain what happened, what I’m doing to remedy it, and provide more recent data on the water footprint of data centers. 1/
64 replies · 137 reposts · 1.8K likes · 657.3K views
Benjamin Tereick retweeted
Alexander Berger @albrgr ·
Some news: Open Philanthropy is now Coefficient Giving! Our mission is unchanged but the new name reflects our growing work with other donors to multiply the impact of their giving. 🧵 on our work to make philanthropy a more efficient "market" and plans going forward:
[image attached]
30 replies · 72 reposts · 436 likes · 185.2K views
Benjamin Tereick @BenTereick ·
For AI for sound reasoning, we’re interested in funding both research projects (including, but not limited to, creating relevant benchmarks) and the development of tools, e.g. tools for fact-checking, or for analyzing arguments.
1 reply · 0 reposts · 5 likes · 277 views