Adrien Ecoffet

3.3K posts

@AdrienLE

Trying to make AGI go well. Researcher at @openai. Views my own.

San Francisco, CA · Joined April 2009
191 Following · 7.1K Followers
Adrien Ecoffet retweeted
(((Lysander))), 9 CHA Bard (French Tacos Enjoyer)
thinking claude is conscious: a viewpoint i disagree with and think is a little silly tbh
thinking claude fundamentally can't be conscious because it's a computer: the absolute height of comical midwittery
40 replies · 27 reposts · 597 likes · 18K views
Adrien Ecoffet retweeted
Dr. Dominic Ng @DrDominicNg
Chess is 30 years ahead of every other profession in dealing with AI. The best case study we have for what's coming. 4 lessons:
1. Human-AI collaboration had a 15-year shelf life in chess. "Human in the loop" is a phase.
157 replies · 246 reposts · 5.4K likes · 1.9M views
Adrien Ecoffet retweeted
Leo Gao @nabla_theta
@kamathematic
“update your priors” = change your ecclesiastical superior
“orthogonal” = relating to a polygon with ortho many sides
“isomorphic” = currently transforming into the international standards organization
“overfit” = too healthy
“steelman” = the tin man's cousin
3 replies · 9 reposts · 145 likes · 5.6K views
Adrien Ecoffet retweeted
Ashlee Vance @ashleevance
I won't really be impressed with AI until a dog uses ChatGPT to cure a human's cancer
56 replies · 124 reposts · 1.9K likes · 68.2K views
Adrien Ecoffet retweeted
philosophy memes 🔗 @philosophymeme0
[media]
2 replies · 153 reposts · 1.6K likes · 23.4K views
Adrien Ecoffet retweeted
Roman Helmet Guy @romanhelmetguy
Warning: Do not adopt any new code editors this month. Beware the IDEs of March.
118 replies · 590 reposts · 7.9K likes · 379.1K views
John Sherman @ForHumanityPod
Serious question: Who is the best golfer in AI safety?
Plan:
- Get this person a Mar-A-Lago membership
- Pay the $500k to golf w Trump
- On hole 7 mention his only way to a Nobel Peace Prize is an AI safety treaty w China
- On hole 11 say if he did this even his harshest critics would have to love him.
- Have media ready at hole 18, he walks off the course, says we need an AI safety treaty w China and kicks the Overton window open.
This could really work.
33 replies · 19 reposts · 566 likes · 72.1K views
Adrien Ecoffet retweeted
Hattie Zhou @oh_that_hat
Old enough to remember when "AGI" was a taboo word, and the idea that scaling transformers could get us there was downright offensive to many in academia
22 replies · 4 reposts · 183 likes · 13.2K views
Adrien Ecoffet @AdrienLE
1. Agree.
2. Disagree. AI improvement usually has power law scaling, not logarithmic. Even if the power law has substantially diminishing returns you still get an exponential with exponentially growing inputs.
3. Mostly disagree. Would take a long time to discuss in detail but I think most of these are actually solved or made irrelevant by more scaling.
4. Not sure enough what you have in mind to say.
5. Mostly agree (high uncertainty).
6. Strongly agree.
7. Agree.
2 replies · 0 reposts · 1 like · 37 views
Ramez Naam @ramez
1. From a Bayesian perspective, low p(doom) should be the default. The burden of proof is on those with a high p(doom). They have not made a case that convinces me.
2. AI recursive self-improvement models ignore so much hard stuff. In particular, that AI improvement shows logarithmic diminishing returns to inputs. Every linear step of AI improvement takes exponentially more inputs.
3. AI boosters, particularly of the "scaling is all you need" variety, ignore the multiple AI challenges that need theoretical or algorithmic solutions: incredibly low data efficiency for learning; difficulty generalizing beyond the training set; not knowing what they don't know; continuous and real-time learning; lack of introspection; etc...
4. Most (but not all) high p(doom) scenarios imply a level of volition around AI that I just don't see. As I said, not all.
5. I do think (despite OpenClaw and talk of giving Grok control over robot bodies) that we'll have some sensible limits on what real-world systems we give AIs control over. Or constraints around how they can direct those systems.
6. The scale of harms AI can cause (accidentally or intentionally) probably has some sort of power law scaling. Most likely, for that reason and others, early safety mistakes will be small and not existential. And we'll learn from those and make the world safer and more secure.
7. We are not dumb as a species, and many smart people are working on how to increase AI safety. We'll make mistakes. Some bad things will happen. And we'll get smarter and fix the holes. That's what's happened with essentially every technology to date.
Gabriel Weil @gabriel_weil

@ramez What are your main reasons for having a low p(doom)?

21 replies · 12 reposts · 80 likes · 16.5K views
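The disagreement above (Adrien's point 2 vs. Ramez's point 2) is a concrete quantitative claim, and a small numeric sketch makes it easy to check. This is my own illustration, not from the thread: assume capability follows a power law P = I^alpha with alpha < 1 (diminishing returns), versus a logarithmic law P = log(I), while inputs I double every time step. The exponent 0.3 and the doubling rate are arbitrary choices for the demo.

```python
import math

# My own sketch of the scaling argument (parameters are illustrative):
# power-law returns P = I**alpha with alpha < 1 still compound into
# exponential capability growth when inputs grow exponentially;
# logarithmic returns P = log(I) yield only linear growth over time.

ALPHA = 0.3  # diminishing-returns exponent (assumed), alpha < 1

def inputs(t):
    """Inputs (compute/data) double every time step: I(t) = 2**t."""
    return 2.0 ** t

def power_law(i):
    return i ** ALPHA

def log_returns(i):
    return math.log(i)

# Capability gained per 10 steps under each model:
for t in (10, 20, 30):
    # constant RATIO between periods -> exponential growth in time
    ratio = power_law(inputs(t)) / power_law(inputs(t - 10))
    # constant DIFFERENCE between periods -> linear growth in time
    delta = log_returns(inputs(t)) - log_returns(inputs(t - 10))
    print(t, round(ratio, 3), round(delta, 3))
```

Under the power law the ratio between successive periods is constant (2^(10·0.3) = 8), i.e. capability itself grows exponentially despite diminishing returns; under the logarithm only the additive gain per period is constant, i.e. linear growth. This is the crux of Adrien's "power law, not logarithmic" reply: the two curves look similar locally but compound very differently.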
Adrien Ecoffet retweeted
Séb Krier @sebkrier
I've developed the unique skill of being incredibly sharp in meetings that don't matter, and a half-zombified wreck when surrounded by people who actually do. Bodes well.
15 replies · 3 reposts · 242 likes · 9.2K views
Adrien Ecoffet retweeted
UBI Works 🇨🇦 @ubi_works
BREAKING: Andrew Yang says we should stop taxing workers, and tax AI instead. That can fund UBI. "We should try to stop taxing labor."
757 replies · 1.3K reposts · 12.1K likes · 1.2M views
Adrien Ecoffet retweeted
CineLost @thecinelost
[media]
13 replies · 793 reposts · 7.6K likes · 96.1K views
Adrien Ecoffet retweeted
morgan — @morqon
“a governance model that depends on every frontier ai company acting against economic incentives, by relying on voluntary commitments that can be changed unilaterally at any moment, is not the kind of safeguard a free society should rely on”
Noam Brown @polynoamial

x.com/i/article/2031…

2 replies · 1 repost · 12 likes · 1.6K views
Adrien Ecoffet retweeted
john allard @john__allard
can’t believe you fell for the anti-california psyop just pay the taxes lil bro
[media]
116 replies · 265 reposts · 7.3K likes · 131.7K views
Adrien Ecoffet retweeted
Chris Painter @ChrisPainterYup
I find it interesting to watch this video and look for analogies to the emergence of superintelligence in our civilization. Many cells start acting in a coordinated way to become a heart. What would the arrival of ASI in our society look like from the perspective of the universe?
Interesting STEM @InterestingSTEM

They capture the exact moment when a developing heart shifts from silence to its first beat. There is no “switch”: many cells gradually become active and, upon crossing a critical threshold, the entire tissue suddenly synchronizes.

2 replies · 6 reposts · 72 likes · 4K views