TimDarcet
@TimDarcet

1.4K posts

codegen @ FAIR, prev. DINO stuff @ INRIA & FAIR

Joined March 2021
799 Following · 4.5K Followers

Pinned Tweet
TimDarcet @TimDarcet ·
1/ This week we released DINOv2: a series of general vision encoders pretrained without supervision. Good out-of-the-box performance on a variety of domains, matching or surpassing other publicly available encoders.
5 replies · 114 reposts · 706 likes · 124K views

TimDarcet retweeted
Kyle Chan @kyleichan ·
6. The Influence of Kaiming He
Xie deeply admires Kaiming He, with whom he worked closely at FAIR. He learned that Kaiming's superpower is his extreme focus and ability to build an impenetrable baseline ("scaffold") before aiming for breakthroughs [02:34:30]. Kaiming taught him that the real research "signals" come from the failures and surprises during the experimental process, not from armchair theorizing [02:08:12].

7. "Impact" vs. "Understanding"
Xie dislikes the word "impact," finding it too ego-driven and aggressive. Instead, he views research through the lens of philosopher Hannah Arendt: the goal of research is "understanding." Publishing a paper is about sharing a profound realization with the world to find resonance and intellectual kinship, rather than just changing the world by force [01:31:27].

8. Intelligence is More Than Human Language
Xie cautions against human arrogance in defining intelligence solely through language. Drawing on evolutionary biology, he argues that building an AI with the survival and physical reasoning skills of a squirrel is actually a much harder problem than building an AI that can write code or pass exams [06:13:05].

9. The Power of "Research Taste"
Xie equates research to filmmaking, where "taste" dictates everything from the problem you choose to solve down to the formatting of the paper. Influenced by Kaiming He (who even gifted him the Buddhist Diamond Sutra to teach him to look past superficial "forms" to find true substance), Xie believes good taste means avoiding crowded, hyped areas in favor of fundamental, eternal problems like representation learning [02:45:03].

10. The Degradation of Open Academic AI Research
Xie expresses concern over how major industrial AI labs have become increasingly closed off, shifting from open academic exploration to secretive commercial competition. This shift strips researchers of their autonomy and turns them into easily replaceable cogs in a massive engineering machine, heavily motivating his decision to start a new, more open research-driven company [05:21:26].
4 replies · 35 reposts · 404 likes · 60.1K views

TimDarcet @TimDarcet ·
@gabriberton @francoisfleuret I did not say "we could use MLPs if we had infinite compute", I said "we could use MLPs". Granted, you'd have to slightly twist the word to be able to feed text to an MLP.
0 replies · 0 reposts · 0 likes · 61 views

Gabriele Berton @gabriberton ·
@TimDarcet @francoisfleuret We always knew that, right? Sufficiently large MLPs can approximate any function, but are too data-hungry and inefficient to be realistically used.
1 reply · 0 reposts · 0 likes · 80 views

François Fleuret @francoisfleuret ·
If you have an explanation of why the transformer is so successful, here is a rapid sanity check: if it also works for a huge MLP ("depth!", "SGD!", "magic of ML!") it's a very insufficient explanation.
25 replies · 10 reposts · 162 likes · 22.9K views

TimDarcet @TimDarcet ·
@francoisfleuret In vision when ViTs came people were like wow does that mean we can use any arch?? can we use an MLP???? omg it works MLP transformers CNN they alll work omgggg And then everyone went back to the arch they used before, out of habit and well-tuned recipes
0 replies · 0 reposts · 4 likes · 486 views

TimDarcet @TimDarcet ·
@francoisfleuret Counterpoint: transformers are just big MLPs, and we could probably also use big MLPs.
3 replies · 0 reposts · 9 likes · 1.2K views
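[Editor's note: a minimal sketch of the "slight twist" needed to feed text to an MLP, as discussed in this thread: embed a fixed-length window of tokens and flatten it into one vector. Vocabulary size, dimensions, and the classification task are illustrative assumptions, not anything from the tweets.]

```python
# Sketch: a pure-MLP text model over a fixed context window.
# All sizes here are arbitrary assumptions for illustration.
import torch
import torch.nn as nn

VOCAB, CTX, DIM, CLASSES = 32_000, 128, 512, 2

class TextMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        # Flattening the (CTX, DIM) token grid into one vector is the
        # "twist": the MLP sees a fixed-size input, not a sequence.
        self.mlp = nn.Sequential(
            nn.Linear(CTX * DIM, 4 * DIM),
            nn.GELU(),
            nn.Linear(4 * DIM, CLASSES),
        )

    def forward(self, tokens):           # tokens: (batch, CTX) int64
        x = self.embed(tokens)           # (batch, CTX, DIM)
        return self.mlp(x.flatten(1))    # (batch, CLASSES)

model = TextMLP()
logits = model(torch.randint(0, VOCAB, (4, CTX)))
print(logits.shape)  # torch.Size([4, 2])
```

Unlike a transformer, such a model cannot take variable-length input without padding or truncation, which is part of why the word has to be "twisted".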
TimDarcet retweeted
Belen Alastruey @b_alastruey ·
🔎 A closer look at Omnilingual No Language Left Behind, the encoder-decoder system presented as part of @AIatMeta's new Omnilingual Machine Translation work! 🌍 Many say encoder-decoder is dead in the age of decoder-only LLMs, but we show it's not! 📄: ai.meta.com/research/publi… 🧵1/n
[image]
4 replies · 17 reposts · 109 likes · 18.1K views

TimDarcet retweeted
João Maria Janeiro @JoaoMJaneiro ·
Happy to announce that today we released OmniSONAR (tinyurl.com/3j9rn2u8) and OmniMT. In OmniSONAR, we have been able to really push the edge on largely multilingual embedding models, where representations across all languages are aligned like never before! 🧵1/n
[image]
2 replies · 17 reposts · 74 likes · 15.7K views

TimDarcet retweeted
Belen Alastruey @b_alastruey ·
Happy to share 🌍Omnilingual Machine Translation🌍 In this work at @AIatMeta, we explore translation systems supporting 1,600+ languages. We show how our models (1B to 8B) can outperform baselines of up to 70B while having much larger language coverage. 📄: ai.meta.com/research/publi…
[image]
10 replies · 42 reposts · 186 likes · 22.1K views

TimDarcet retweeted
Takeshi Imai @TakeshiImaiLab ·
Our live tissue clearing paper is out in @naturemethods! We achieved optical clearing of mammalian brain tissues without compromising normal neuronal function. Big congrats to @Shigenori774 and our wonderful collaborators! 🎉 nature.com/articles/s4159… (1/10)
19 replies · 184 reposts · 703 likes · 147.2K views

celeste @vmfunc ·
the full recording for my talk @UnderscoreTalk is available and translated! persona, the free internet, the age of agentic hackers, KYC, ID verification, and censorship
19 replies · 13 reposts · 164 likes · 16.3K views

TimDarcet @TimDarcet ·
@difficultyang ughhh steam does not want my money why can't I get to the buy pageeee
0 replies · 0 reposts · 0 likes · 135 views

difficultyang @difficultyang ·
Red alert red alert slaythespire reddit has gone full skonger this is not a drill
1 reply · 0 reposts · 2 likes · 511 views

TimDarcet retweeted
Ronan Collobert @trebolloc ·
26 years ago I created OG Torch. Now I am on X.
Samy: whhhatt, you want to call it Torche?
Me: yes, why not?
Samy: but people will know what it means in French?
<after 1 day of name brainstorming>
Samy: ok, remove the "e", let's call it Torch.
I think it was a good name.
[image]
13 replies · 14 reposts · 206 likes · 26.1K views

TimDarcet retweeted
GeoTales
🔸 I solemnly appeal to all the journalists who follow me (and there are many of you). Spread the word, please: stop inviting Luc Julia to talk about AI, really 🙏 There are excellent specialists in France, yet we have to put up with Julia at every turn. It's tiresome: he is completely out of touch, a has-been, and peddles a ridiculous complacency. And you, my followers, I am really counting on you to share this post so it gets maximum visibility. This man spreads disinformation in every media outlet he appears on. Protect yourselves. Please!
Brivael - FR @BrivaelFr

x.com/Fabien_Mikol/s… Luc Julia is to AI what Jancovici is to climate: a useful idiot who steers the debate away from the real issues. Saying "AI is just a tool, no need to panic" lulls people to sleep while a massive share of jobs is about to be swept away in the coming years. While Julia does the rounds of TV studios reassuring everyone, China and the US are advancing at full speed. And we comfortably go back to sleep. The result? Entrepreneurs who don't take the leap, politicians who don't act, and a France that will wake up one morning realizing it missed the train. What we need is not people who downplay what is coming. It's a collective electroshock.

29 replies · 125 reposts · 544 likes · 69.4K views

TimDarcet @TimDarcet ·
@miniapeur Beta, mu, pi and sigma are the other way around for me
0 replies · 0 reposts · 0 likes · 382 views

Mathieu @miniapeur ·
Mathematician training:
16 replies · 79 reposts · 701 likes · 95.2K views

TimDarcet @TimDarcet ·
@jm_alexia When at Meta? Or before? Also, I think you meant Alexandr; with an e it's a fashion designer.
0 replies · 0 reposts · 0 likes · 773 views

Alexia Jolicoeur-Martineau @jm_alexia ·
In case you didn't know, Alexander Wang, the chief AI officer of Meta, did the same as Sam Altman: he sold LLMs for use in the US army. So FYI, both Meta and OpenAI are two sides of the same coin, sadly.
10 replies · 15 reposts · 184 likes · 15.6K views

TimDarcet @TimDarcet ·
Out of all the fascinating concepts in Dune, I really, really did not expect that the Butlerian Jihad would be relevant to real life one day
0 replies · 0 reposts · 3 likes · 494 views

TimDarcet retweeted
Janos Lagos @LagosJanos ·
@TimDarcet try this
>>> t = (1, 2, [30, 40])
>>> t[2] += [50, 60]
1 reply · 0 reposts · 0 likes · 51 views
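[Editor's note: a quick demonstration of why this snippet is a classic gotcha. `t[2] += [50, 60]` first calls the list's in-place `+=`, which succeeds and mutates the list, then tries to rebind the tuple slot, which raises. You get both a TypeError and a modified tuple.]

```python
# Classic Python gotcha: += on a list inside a tuple both succeeds and fails.
t = (1, 2, [30, 40])
try:
    t[2] += [50, 60]    # list.__iadd__ extends the list in place...
except TypeError as e:  # ...then tuple item assignment raises.
    print(e)            # 'tuple' object does not support item assignment
print(t)                # (1, 2, [30, 40, 50, 60]) -- mutated anyway!
```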
TimDarcet @TimDarcet ·
Fun question: what does this give out?
[image]
1 reply · 0 reposts · 9 likes · 1.7K views

TimDarcet @TimDarcet ·
@giffmana Yep! And/or never paid much thought to the inner workings
0 replies · 0 reposts · 1 like · 260 views

Lucas Beyer (bl16) @giffmana ·
@TimDarcet this is actually a standard gotcha about closures, not just in Python. It's also super common in JS. The solution pattern is to do `lambda i=i: i`. This tells me you haven't written much functional code :)
1 reply · 0 reposts · 11 likes · 1.2K views