Adrien Ecoffet
3.3K posts

Adrien Ecoffet
@AdrienLE
Trying to make AGI go well. Researcher at @openai. Views my own.
San Francisco, CA · Joined April 2009
191 Following · 7.1K Followers
Adrien Ecoffet reposted

@kamathematic “update your priors” = change your ecclesiastical superior
“orthogonal” = relating to a polygon with ortho many sides
“isomorphic” = currently transforming into the international standards organization
“overfit” = too healthy
“steelman” = the tin man's cousin
Adrien Ecoffet reposted

@ForHumanityPod Why does this person need to be a good golfer?

Serious question: Who is the best golfer in AI safety?
Plan:
-Get this person a Mar-A-Lago membership
-Pay the $500k to golf w Trump
-On hole 7 mention his only way to a Nobel Peace Prize is an AI safety treaty w China
-On hole 11 say if he did this even his harshest critics would have to love him.
-Have media ready at hole 18, he walks off the course, says we need an AI safety treaty w China and kicks the Overton window open.
This could really work.
Adrien Ecoffet reposted

Two Diet Cokes
Three Cokes Zero
Merriam-Webster@MerriamWebster
'Sausage mcmuffins' because sausage is the TYPE of mcmuffin. In 'attorneys general,' 'attorneys' is the noun being modified.

1. Agree.
2. Disagree. AI improvement usually has power law scaling, not logarithmic. Even if the power law has substantially diminishing returns you still get an exponential with exponentially growing inputs.
3. Mostly disagree. Would take a long time to discuss in detail but I think most of these are actually solved or made irrelevant by more scaling.
4. Not sure enough what you have in mind to say.
5. Mostly Agree (high uncertainty).
6. Strongly agree.
7. Agree.
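The disagreement in point 2 can be made concrete with a toy calculation (illustrative constants, not fitted scaling laws): if inputs like compute grow exponentially over time, power-law returns still yield exponential capability growth, while logarithmic returns yield only linear growth.

```python
import math

# Toy model, purely illustrative:
#   inputs grow exponentially in time:  N(t) = 2^t
#   power-law capability:   C_pow(N) = N^0.1  ->  C_pow(t) = 2^(0.1*t), exponential in t
#   logarithmic capability: C_log(N) = log2(N) -> C_log(t) = t, linear in t

def inputs(t):
    return 2.0 ** t

def cap_power(n, alpha=0.1):
    # Power law with heavily diminishing returns (alpha = 0.1 is made up).
    return n ** alpha

def cap_log(n):
    # Logarithmic returns to inputs.
    return math.log2(n)

# Ratio of successive capability levels: a constant ratio > 1 means
# exponential growth; ratios shrinking toward 1 mean sub-exponential growth.
pow_ratios = [cap_power(inputs(t + 1)) / cap_power(inputs(t)) for t in range(1, 5)]
log_ratios = [cap_log(inputs(t + 1)) / cap_log(inputs(t)) for t in range(1, 5)]

print(pow_ratios)  # constant ratio of 2^0.1 ~ 1.07: exponential growth
print(log_ratios)  # 2.0, 1.5, 1.33..., 1.25: ratios shrink, only linear growth
```

So the two positions turn on which functional form describes returns to scale: under the power-law reading, even "substantially diminishing" returns compound into exponential progress as long as inputs keep growing exponentially.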

1. From a Bayesian perspective, low p(doom) should be the default. The burden of proof is on those with a high p(doom). They have not made a case that convinces me.
2. AI recursive self-improvement models ignore so much hard stuff. In particular, that AI improvement shows logarithmic diminishing returns to inputs. Every linear step of AI improvement takes exponentially more inputs.
3. AI boosters, particularly of the "scaling is all you need" variety ignore the multiple AI challenges that need theoretical or algorithmic solutions: Incredibly low data efficiency for learning; Difficulty generalizing beyond the training set; Not knowing what they don't know; Continuous and real-time learning; Lack of introspection; etc...
4. Most (but not all) high p(doom) scenarios imply a level of volition around AI that I just don't see. As I said, not all.
5. I do think (despite OpenClaw and talk of giving Grok control over robot bodies) that we'll have some sensible limits on what real-world systems we give AIs control over. Or constraints around how they can direct those systems.
6. The scale of harms AI can cause (accidentally or intentionally) probably has some sort of power-law scaling. Most likely, for that reason and others, early safety mistakes will be small and not existential. And we'll learn from those and make the world safer and more secure.
7. We are not dumb as a species, and many smart people are working on how to increase AI safety. We'll make mistakes. Some bad things will happen. And we'll get smarter and fix the holes. That's what's happened with essentially every technology to date.
Gabriel Weil@gabriel_weil
@ramez What are your main reasons for having a low p(doom)?
Adrien Ecoffet reposted

Who called it "Effective Altruism" and not "Crazy Rich Bayesians"?
Shakeel@ShakeelHashim
Anthropic's founders and employees are about to have a lot of cash. Many of them have pledged to give away huge amounts of that cash. But where's it gonna go? @cogcelia took a look in this excellent new piece:
Adrien Ecoffet reposted

“a governance model that depends on every frontier ai company acting against economic incentives, by relying on voluntary commitments that can be changed unilaterally at any moment, is not the kind of safeguard a free society should rely on”
Noam Brown@polynoamial
Adrien Ecoffet reposted

I find it interesting to watch this video and look for analogies to the emergence of superintelligence in our civilization
Many cells start acting in a coordinated way to become a heart. What would the arrival of ASI in our society look like from the perspective of the universe?
Interesting STEM@InterestingSTEM
They capture the exact moment when a developing heart shifts from silence to its first beat. There is no “switch”: many cells gradually become active and, upon crossing a critical threshold, the entire tissue suddenly synchronizes.