MMitchell

22K posts

@mmitchell_ai

Interdisciplinary researcher focused on shaping AI towards long-term positive goals. ML & Ethics. Similar content in the Skies (this bird has flown).

Joined June 2016
1.4K Following · 81.5K Followers
MMitchell reposted
James Vincent @jjvincent
insanely stupid gimmick here - an interview with amodei that the kicker reveals is AI generated. honestly I've no idea what the argument is for this: it's not novel or entertaining and seems to piss off both readers and writers. baffling vanityfair.com/news/story/dar…
7 replies · 10 reposts · 172 likes · 11.2K views
Amanda Askell @AmandaAskell
Perhaps I should get married again so that the media has a more recent man they can reference any time they mention me or my work.
245 replies · 70 reposts · 2.6K likes · 279.9K views
MMitchell reposted
Miles Brundage @Miles_Brundage
[image attached]
6 replies · 22 reposts · 278 likes · 14K views
MMitchell @mmitchell_ai
@spectate_or My post was in response to evidence of reviewers not following policies.
1 reply · 0 reposts · 1 like · 218 views
MMitchell @mmitchell_ai
Unless I’m mistaken, common versions of ChatGPT and Claude use conversations for training—meaning they can pick up ideas in unpublished work and pass them on to other users; laundering idea plagiarism and creating “scooping” dynamics without any of the people involved knowing it.
Yu-Xiang Wang @yuxiangw_cs

AI watermarking in action at #ICML's avant garde peer-review experiments this year! Quite a few casualties in my SAC batch (an example below --- appropriately redacted hopefully)

6 replies · 7 reposts · 97 likes · 18.1K views
MMitchell @mmitchell_ai
If we pinpoint “idea” as “single training example”, then yes (pointers to more refs in nature.com/articles/s4159…) But beyond that: I’d guess conversational content is upweighted, or takes some other high-value form. It’s post-training to capture actual users’ conversational usage, so would be sort of silly not to treat that like gold.
1 reply · 0 reposts · 0 likes · 266 views
Morris Alper @MorrisAlper
@mmitchell_ai Is there evidence that this could actually occur if an idea was only included as a single sample in the LLM's training data? At least for text-to-image models I know there's some work showing that concepts have to be seen hundreds or thousands of times in web-scale data.
1 reply · 0 reposts · 0 likes · 311 views
MMitchell @mmitchell_ai
@natolambert Hanna did great! She remarked on the hallmarks of privacy, security, and control. Everyone seemed pretty aligned that open = good. The VC fellow remarked on how fully closed systems mean you have to delegate trust, as opposed to fully open, which you can in[tro]spect.
0 replies · 0 reposts · 4 likes · 673 views
Nathan Lambert @natolambert
Any good quotes on the Nvidia GTC open models panel? Maybe they'll invite me to one some day 🥺
8 replies · 0 reposts · 63 likes · 10.1K views
MMitchell @mmitchell_ai
@rabbitandtheAI It is on for many common versions by default and most people do not pay attention.
0 replies · 0 reposts · 0 likes · 64 views
Rabbit @rabbitandtheAI
@mmitchell_ai I’m not sure I understand the concern then. It’s in the user agreement to turn it off.
1 reply · 0 reposts · 0 likes · 189 views
Rabbit @rabbitandtheAI
@mmitchell_ai I am not certain that’s accurate if you toggle off the option for use my data in training
1 reply · 0 reposts · 0 likes · 561 views
MMitchell reposted
Stella Biderman @BlancheMinerva
If I was going to claim that a finetuning methodology for machine unlearning “really worked,” what evidence would you like to see?
13 replies · 1 repost · 29 likes · 8.2K views
MMitchell reposted
kanav @kanavtwt
Someone built a Google translate for Linkedin 😭
[image attached]
642 replies · 10.3K reposts · 90.9K likes · 2.7M views
MMitchell reposted
Dr Heidy Khlaaf (هايدي خلاف)
I joined Hari Sreenivasan on CNN International and PBS to discuss the use of AI in warfare and the impacts we're already seeing of this fallible technology being used in Iran, and how it ultimately obscures accountability. Full can be found at youtube.com/watch?v=w16fT3…
[YouTube video embed]
2 replies · 11 reposts · 33 likes · 1.5K views
MMitchell reposted
Karen Hao @_KarenHao
If you live in the US and have worked or are working in data annotation for platforms like Outlier, Handshake, Mercor, or others, I'd love to hear about your experience. Please drop me a line: karendhao.com/contact.
3 replies · 8 reposts · 50 likes · 7.6K views
MMitchell @mmitchell_ai
@JAldrichPL I keep pondering whether noting the type and level of assistance is useful. To what ends? Perhaps there can be different tracks for different levels of assistance…
0 replies · 0 reposts · 4 likes · 587 views
Jonathan Aldrich @JAldrichPL
So apparently AI use (mostly undisclosed) is becoming ubiquitous in ACM paper submissions. I don't think this happens in my group so I was surprised to hear it. But if that's actually the case, perhaps requiring disclosure is pointless. Thoughts?
10 replies · 0 reposts · 12 likes · 2.3K views
MMitchell @mmitchell_ai
As a non-prof with way too many research objectives that would ordinarily be shared with students, I am fascinated by how agent-as-student would work. Agents can’t take accountability for work like students can. If I tell my agent my research idea, and it executes it, I can’t have it co-author. Very curious to learn more about this.
0 replies · 0 reposts · 0 likes · 36 views
rishi @RishiBommasani
I definitely don't think so; I know lots of top faculty, probably overlapping with those Sayash has spoken with, that I decidedly hope do not retire... In fact, this view includes some of the very best AI faculty in the world, in my view. This view is a reaction to a profound technological shock; it is entirely unclear if the view they currently have will be sustained, which is part of Sayash's point. Why should every CS or AI professor have the same prioritization of students vs productivity vs other objectives? I don't really get why we need that homogeneity: I think what people are saying is perhaps not consistent with why I would be faculty, but it seems well within the space of reasonable views to me. Sure, maybe I want to avoid faculty wholly dispassionate about their students and their growth, but what Sayash and I said is well short of that.
3 replies · 0 reposts · 9 likes · 1.2K views
MMitchell reposted
💗 @ma1ybe
In a meeting at the office one time, this dude fully repeated something I had said almost word for word, just slightly changed. After he finished, my coworker raised her hand and said, “I don’t have anything to add, but I just wanted to point out he basically repeated exactly what Natalie said like it was his own idea.” Happy Women’s History Month to her.
176 replies · 2.2K reposts · 56.8K likes · 863.1K views
MMitchell @mmitchell_ai
🤖 In policy? Thinking about the definition of "AI"? Led by @AspenDigital, a set of us at intersection of AI/ethics/law/policy put together this resource on the lineage of policy "AI" definitions, what they're getting right, what might be improved. aspendigital.org/report/definin…
0 replies · 4 reposts · 19 likes · 1.6K views
Timothy B. Lee @binarybits
People have started abbreviating Anthropic as "Ant" and I don't like it. The th is a single phoneme. You shouldn't split it up like that.
54 replies · 11 reposts · 549 likes · 19.6K views