Sasha Gusev
21.2K posts

@SashaGusevPosts
Statistical geneticist | Associate Prof at @DanaFarber / @harvardmed / @DFCIPopSci | Blogging at https://t.co/4D7UObBNdd

The unique thing about EA / rationalist philanthropy is that, while it has its traditional "cause areas," it is broadly steerable by better arguments. That is, if someone marshalled dispositive evidence that we're headed for an AI winter or that the technical alignment problem wasn't hard or that xyz funding strategy created more costs than benefits or that shrimp are p-zombies, EA and Rat funders would turn on a dime and fund something else. You can't say that about any other big philanthropic source.

Stumbled upon an interesting debate on AI super-intelligence from 2011. Yudkowsky makes three core claims/predictions, all of which are (to date) wrong: 1) That human intelligence is relatively simple and ASI can be achieved with a few small innovations; ...


@allTheYud @PDoomOrder1 And to de-stress the discussion a bit, I'll throw in your correct 2021 prediction that AI would achieve IMO gold by 2025: x.com/GarrisonLovely…