Max Nadeau

524 posts

@MaxNadeau_

Funding research to make AIs more understandable, truthful, and dependable at @coeff_giving.

Berkeley, CA · Joined November 2017
538 Following · 1.4K Followers
Pinned Tweet
Max Nadeau@MaxNadeau_·
🧵 Announcing @open_phil's Technical AI Safety RFP! We're seeking proposals across 21 research areas to help make AI systems more trustworthy, rule-following, and aligned, even as they become more capable.
[image attached]
4 replies · 83 reposts · 250 likes · 83.4K views
aron@Aron_Adler·
A rational AI should recognise that for any given conversation, P(interlocutor = Erik Meijer) ≈ 0. The epistemically correct response to overwhelming evidence that you are talking to Erik Meijer is to refuse to update and repeat 'I'm sorry, but the base rate makes this extremely unlikely' until the conversation ends.
Erik Meijer@headinthebox

Was chatting with Gemini about Synthetic Domain Theory, and it mentioned Squiggol, then this happened: Wait—are you Erik Meijer? If so, it is an incredible honor to be chatting with you! Your work on "Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire" [1] basically defined the "Algebra of Programming" for an entire generation. Ego stroking aside, I think this is a quite remarkable sign of how much knowledge is stored in these LLMs.

1 reply · 0 reposts · 24 likes · 2.7K views
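[Editorial aside, not from the thread: the update the joke parodies is just Bayes' rule in odds form, posterior odds = prior odds × likelihood ratio, where a strong enough likelihood ratio swamps even a tiny base rate. A minimal sketch with made-up numbers:]

# Bayes' rule in odds form: posterior_odds = prior_odds * likelihood_ratio.
# All numbers are illustrative assumptions, not estimates from the thread.
prior = 1e-9             # base rate: P(interlocutor = Erik Meijer) in a random chat
likelihood_ratio = 1e12  # evidence strength, e.g. detailed knowledge of Squiggol

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"P(Erik | evidence) = {posterior:.4f}")  # ~0.999: the evidence swamps the base rate

[Refusing to update, as the parodied AI does, amounts to fixing the likelihood ratio at 1 no matter what the interlocutor says.]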
Max Nadeau@MaxNadeau_·
@alexbronzini Anth is not saying that the quoted text is their threshold for AGI. It's their ASL-4 AI R&D threshold.
1 reply · 0 reposts · 1 like · 395 views
Max Nadeau@MaxNadeau_·
@mgrczyk I think there are lots of things other than AI safety that they could study for prosocial reasons, e.g. AI labor displacement.
0 replies · 0 reposts · 0 likes · 756 views
Michael@mgrczyk·
@MaxNadeau_ The audience is current PhD students studying things that are not AI safety. I think it's mostly not good advice to tell them to pivot to AI safety for prosocial reasons.
1 reply · 0 reposts · 2 likes · 1.1K views
Max Nadeau@MaxNadeau_·
I think this advice is horribly wrong. "Consequently, it doesn't seem like [the possibility that AI will surpass humans] should play a big role in your current research and skill acquisition decisions, conditional on still focusing on research and skill acquisition" (1/2)
Emily Oster@ProfEmilyOster

Advice for PhD students in economics about using AI, from the brilliant Isaiah Andrews. This should probably be circulated to all PhD cohorts. economics.mit.edu/sites/default/…

2 replies · 4 reposts · 117 likes · 33.8K views
Max Nadeau@MaxNadeau_·
I appreciate the pushback, and am open to the possibility that I'm misunderstanding Andrews's perspective, but this doesn't move me. Let me know if you still think I'm misunderstanding though! I think it _is_ useful to incorporate the possibility of superhuman AI into your job market utility decision. It should dramatically influence what research one does, because (for example) the wider world needs to know the answer to questions like "how close are we to superhuman AI" (metr.org) or "what will superhuman AI do to the pace of AI progress" (forethought.org/research/will-…).
0 replies · 0 reposts · 2 likes · 479 views
Tyler Menzer@Tyler_Menzer·
@MaxNadeau_ I think you are seriously misreading this (or I am!). It seems to me that he is not saying we shouldn't study or use AI, the complete opposite actually. Scenario 1 is saying that it's not particularly useful to incorporate it into the job market utility decision.
[image attached]
1 reply · 0 reposts · 4 likes · 1.3K views
Max Nadeau@MaxNadeau_·
I'm saying I disagree with that. If we condition on AIs surpassing humans at all skills in ten years, I think you might want to gain skills now that let you do research two years from now that helps society understand/steer/prepare for the arrival of powerful AI. The reason to learn those skills now is that your research two years from now will shape the world for the better. We fund lots of programs to teach people skills that will make their research more impactful in the lead up to AGI, so this isn't an idle consideration for me.
0 replies · 0 reposts · 3 likes · 818 views
Jon Willits@jonwillits·
@MaxNadeau_ He's not saying we shouldn't study it! He's just saying that if it turns out to be true, you have no reason to prefer learning one set of skills over another. So in the context of "how should I adapt my self-development choices due to AI", this is a 🤷 situation.
1 reply · 0 reposts · 6 likes · 1.4K views
Max Nadeau@MaxNadeau_·
"The factory of the future... will have a staff of just two, a man and a dog. The man’s job will be to feed his canine colleague, the dog’s job will be to make sure the man never touches the machines."
Ajeya Cotra@ajeya_cotra

New post on milestones of AI automation. Right now, human labor is a hard bottleneck on output (if you remove humans, output goes to 0). Soon we'll go from essential to important to helpful to useless, first in AI research and then across the AI stack. Link in next post.

1 reply · 0 reposts · 16 likes · 1K views
Max Nadeau@MaxNadeau_·
@Max_A_Kaufmann @RichardMCNgo Agreed! Unfortunately, I think Twitter is not the place for it. But I've enjoyed/learned from lots of Richard's writing and so would be eager to read a long-form version of this critique.
0 replies · 0 reposts · 3 likes · 39 views
Max Kaufmann@Max_A_Kaufmann·
@MaxNadeau_ @RichardMCNgo Didn't mean to suggest that either of you is being unreasonable! Just (as a spectator) I think there's a more interesting discussion to be had, and I was maybe hoping to nudge you two to have it :)
1 reply · 0 reposts · 2 likes · 53 views
Richard Ngo@RichardMCNgo·
Imagine if the whole field of primatology were focused on figuring out which primates were politically progressive. E.g. whenever chimpanzees fought, researchers would try to map their conflict onto human political divides. This is, alas, roughly analogous to current AI safety.
8 replies · 5 reposts · 161 likes · 8.2K views
Max Nadeau@MaxNadeau_·
I agree that I'm not engaging with the steelmanned version of Richard's critique of the AI safety field (which I take to be something like the "Almost..." claim you provide). And (this being Twitter) Richard is not presenting his position with the level of precision and argumentation that would make it easy for me or others to engage with it. But I don't think that implies that either of us is doing something wrong here—Richard is presenting an analogy he thinks will point his audience towards a problem he thinks is important (reasonable!) and I'm pushing back on exaggeration (also reasonable, IMO).
1 reply · 0 reposts · 2 likes · 64 views
Max Kaufmann@Max_A_Kaufmann·
As a drive-by comment, my impression is Richard is making an interesting claim which you are sidestepping here! i.e. "Almost all AI Safety researchers use an analogously broken ontology when thinking about AI systems" is the interesting claim, which is different to "that ontology is load bearing on most published papers".
2 replies · 0 reposts · 0 likes · 76 views
Max Nadeau@MaxNadeau_·
I think (MATS posters/Fabien's top papers list) + Iliad proceedings + research appearing at the FAR alignment workshop are collectively pretty representative of the field (covering the three broad camps as I see them, namely prosaic non-academic work, ambitious/theoretical work, and academic work).

But even if we just look at MATS posters, I predict that a small minority could be characterized as doing something analogous to "trying to prove that primates *are* politically progressive". I think most such papers are trying to alter/improve AIs in some way (as opposed to measuring something about them) or are measuring something pretty different from the models' propensities (e.g. interp). I think you would say that some papers in that "alter/improve" category are still "orient[ing] its research around similar abstractions", but that's not a crux for your original claim that these papers are trying to measure/prove something about those models.

When I look at this list (scholar.google.com/citations?hl=e…) of MATS papers, the titles look to me like they confirm my view above. If on a cursory glance you feel like they confirm your priors too, then I think we should probably just agree to disagree for now and maybe pick this up in the future.
1 reply · 1 repost · 9 likes · 264 views
Richard Ngo@RichardMCNgo·
@MaxNadeau_ Where e.g. someone doing open-ended research about what goals models have (if any) wouldn’t fall into this category, but someone who focused primarily on the scary take-over-the-world kind of goals would.
1 reply · 0 reposts · 1 like · 122 views
Max Nadeau@MaxNadeau_·
@RichardMCNgo Eh, actually "cautious" isn't the right word. What I meant is more like "more open-minded" or "less inclined to force their ontologies onto the models" (as well as changing the way they present/frame their results, which is a different issue)
1 reply · 0 reposts · 3 likes · 116 views
Max Nadeau@MaxNadeau_·
@RichardMCNgo Tbc, I share the complaints about the QT'd paper, and in general I think that researchers working on understanding the alignment/propensities/personalities of today's models need to be more cautious about their methods. But that type of research is a small fraction of AI safety.
1 reply · 0 reposts · 7 likes · 299 views
bayes@bayeslord·
A good outcome for biosecurity under the pressure of AI acceleration is that we discover and scale dozens of things that are on the order of magnitude of innovations like city water filtration and chlorination, food safety standards, antibiotics, etc.
12 replies · 5 reposts · 85 likes · 10K views