Gus Docker

148 posts


@GusDocker

Podcast Host @FLI_org

Copenhagen, Denmark · Joined January 2022
417 Following · 301 Followers
Aaron Bergman 🔍 ⏸️ (in that order)
1. Abolish FDA approval as a thing that needs to happen
2. Regulate animal husbandry to a degree most people would call extreme

Challenge: add in a third that's distant from your first two:

3. Not exactly political but "the standard scientific materialist worldview is at the very least radically insufficient/incomplete"
James Medlock @jdcmedlock:
What are your two least correlated political positions?
Gus Docker retweeted
Future of Life Institute @FLI_org:
"There is no gain from getting to superintelligence. The only actor gaining is the superintelligence itself." -@ControlAI founder and CEO @AndreaMiotti on the latest FLI Podcast episode with host @GusDocker, available now at the link or on your favorite podcast player! ⬇️ 🔗
Gus Docker retweeted
Future of Life Institute @FLI_org:
🚨 "We have seen significant periods of instability, unrest, even revolution with inequality and when the gains from the economy are not well distributed enough, and we might see similar outcomes if we see that the wealth [from AI] gets overly concentrated within a small set of people." 📻 New on the FLI Podcast: @WindfallTrust Director of Research Deric Cheng joins @GusDocker to discuss how AI could reshape the social contract and global economy. 🔗 Listen now at the link in the replies:
Gus Docker retweeted
Future of Life Institute @FLI_org:
🆕 📻 On the latest FLI Podcast episode, Future of Life Foundation researcher Oly Sourbut joins host @GusDocker to discuss how AI could help humans reason better. 🔗 Watch in full at the link below:
Gus Docker retweeted
Future of Life Institute @FLI_org:
"I'd rather get all the flourishing things out of not superintelligent systems but highly capable systems that I can coordinate with well & that can coordinate with each other well, as opposed to training successor agents that we don't know how to train." -@AmmannNora on the FLI Podcast:
Gus Docker retweeted
Future of Life Institute @FLI_org:
🚨 "Why are companies building these things? The REAL reason, the goal, is to not give people the tools that will just make them more productive, but to replace people." -@AnthonyNAguirre on the FLI Podcast with host @GusDocker ⬇️ 🎥
Gus Docker retweeted
Future of Life Institute @FLI_org:
🆕"If the final input at the end of the day that informs regulation is what the public wants and who they vote for, then at some point the money stops working for you." -@TheMidasProj's @TylerJnstn on the FLI Podcast w/ @GusDocker, discussing how to hold Big AI accountable 🔗👇
Gus Docker retweeted
Future of Life Institute @FLI_org:
"Better futures - namely, trying to make the future better, conditional on there being no catastrophe - is in at least the same ballpark of priority as reducing existential risk itself." New on the FLI Podcast, Forethought senior research fellow @willmacaskill joins host @GusDocker to discuss the contents of his new essay series, "Better Futures". 🔗 Listen to the full episode at the link in the replies below:
Gus Docker retweeted
Future of Life Institute @FLI_org:
"If you feel like, 'hey, we're actually not hitting certain alignment things right now and we're using misaligned models to try and align models of the future'... probably good to speak up now." -Karl Koch, founder of the AI Whistleblower Initiative @AIWI_Official, on the latest FLI Podcast episode with host @GusDocker. 🔗 Listen to the full episode at the link in the replies:
Gus Docker retweeted
Rob Wiblin @robertwiblin:
I think the complaints that such statements are not specific or enforceable enough are misguided — if widely supported internationally it could be made specific and enforced well enough.

But Dean makes a substantive argument that this broad path would naturally lead to a government monopoly on superintelligence, and that's actually riskier, all things considered, than a decentralised/chaotic AI rollout. I don't agree, but it's not a stupid argument, and the balance of risk will come down to very challenging guesses about the difficulty of technical alignment, the ease of bioweapon development, how the government would use AGI, how AI would be adapted in the military, and on and on.

In a way Dean is really pointing out that our situation is even scarier and more precarious than you might otherwise think, because unfortunately there's no actor we can trust not to act self-servingly.
Dean W. Ball @deanwball:

Vague statements like this, which fundamentally cannot be operationalized in policy but feel nice to sign, are counterproductive and silly. Just as they were two or so years ago, when we went through another cycle of nebulous AI-statement-signing.

Let’s set aside the total lack of a definition of “superintelligence.” I’ll even grant the statement drafters that we all arrive on a mutually agreeable definition. Then assume we write that definition into a law, which says “no superintelligence until it’s proven safe.” How do we enforce this law?

Now comes the fine print—the stuff left unsaid in the statement, the stuff the statement drafters probably did not much discuss with the many signatories who lent their names and reputations to this endeavor. How do you prove superintelligence will be safe without building it? How do you prove a plane is flightworthy without flying it? You can’t.

So, the logic would go, we will need a sanctioned venue and institution for superintelligence development, where we will experiment with the technology until it is “proven safe” (who decides this, by the way, and what happens after it is “proven safe”?). This institution would need to be funded somehow by all governments with similar prohibitions (which the statement drafters, though probably not all signatories, would likely argue needs to include every country on Earth, including US adversaries).

A global governance body whose purpose is to build the thing the statement drafters have told us is so dangerous, partially because of the power it could confer on those who control it. A consortium of governments which, if successful, would exercise unilateral control over how to wield this technology—and against whom to wield it. The same people who uniquely possess militaries, police, and a monopoly on legitimate violence. The same people who possess, in other words and in the final analysis, the right to kill you or confiscate your property if you do not listen to them, newly empowered with the most powerful technology ever conceived.

Does that sound “safe” to you? This sounds to me like the worst possible way to build “superintelligence.” I reject all efforts to centralize power in this way. And I reject blobby statements with no path to productive realization in policy.

Gus Docker retweeted
Future of Life Institute @FLI_org:
📻 "If you're a business, you wanna make money, you wanna chase profits, you have shareholders, fine. What actually irks me personally is when people try to have it both ways, in the way that the leaders of OpenAI do, where they try and speak as if they're still a nonprofit who are doing things for the benefit of humanity... and they're clearly not." 🗣️ @business tech columnist and "Supremacy" author @Parmy discussing how AI companies have transitioned from research labs to product-led businesses, on the latest FLI Podcast episode 🔗👇
Gus Docker retweeted
Future of Life Institute @FLI_org:
🤖 "If you have an agent that has very broad goals and a very open-ended autonomy, you're gonna lose a lot of meaningful oversight of that system, most likely. So, that's the biggest shift I think between a Tool AI and this more agentic path that we're on right now. I think you could have a Tool AI that's still an agent, but it would have a very bounded autonomy." -📻 @ForesightInst @HopeExistential Program Director Beatrice Erkers on the newest FLI episode with host @GusDocker. 🔗 Tune in now: youtube.com/watch?v=zU8xne…
Gus Docker retweeted
Future of Life Institute @FLI_org:
🆕 "As we continue to build technology that is designed to replace rather than to augment, we move closer and closer towards a world where people just don't matter. And then of course you're reliant on other forces [...] it's a very precarious situation to be in." -@luke_drago_ (co-author of "The Intelligence Curse" essay series; @WorkshopLabsPBC co-founder) with host @GusDocker on the newest FLI Podcast episode. 📻 Listen now at the link in the replies:
Gus Docker retweeted
trevor (taylor’s version)
my favourite econ prof did a podcast!
Future of Life Institute @FLI_org:

📻 Economist @BasilHalperin on the latest FLI podcast episode: 📢 "It's hard to get away from the idea that there will be skyrocketing inequality in a truly transformative AI scenario, but skyrocketing inequality might still be consistent with everyone being better off." 🔗 Listen now in the replies for Basil & @GusDocker's discussion on what markets tell us about AI timelines:

Gus Docker retweeted
Basil Halperin @BasilHalperin:
Did someone say global long-term interest rates are going up? 🤔🤔🤔 Very fun deep dive with Gus on our paper about AI pushing up rates!
Future of Life Institute @FLI_org:

📻 Economist @BasilHalperin on the latest FLI podcast episode: 📢 "It's hard to get away from the idea that there will be skyrocketing inequality in a truly transformative AI scenario, but skyrocketing inequality might still be consistent with everyone being better off." 🔗 Listen now in the replies for Basil & @GusDocker's discussion on what markets tell us about AI timelines:
