Daniel Severo
@_dsevero
3.8K posts

Research Engineer at Meta Superintelligence Labs - FAIR / PhD @UofT @VectorInst / Previously @GoogleAI @UFSC

Montréal, Québec · Joined March 2011
1.1K Following · 2.4K Followers
Daniel Severo @_dsevero·
Bullish on the Montreal AI ecosystem!
Siva Reddy @sivareddyg

Montreal deep tech scene is getting hot!! Many recent hires of Cohere, Mistral, Periodic Labs, Poolside are all based in Montreal. And now, AMI will have an office here 🔥 It's a no-brainer, though. @Mila_Quebec has the highest concentration of deep learning expertise with interdisciplinary connections. Thanks to recent US regulation changes on immigration, no more brain drain! Let's build more in Canada!

Daniel Severo reposted
Siva Reddy @sivareddyg·
(tweet quoted in full above)
Yann LeCun @ylecun

Unveiling our new startup Advanced Machine Intelligence (AMI Labs). We just completed our seed round: $1.03B / 890M€, one of the largest seeds ever, probably the largest for a European company. We're hiring! [the background image is the Veil Nebula - a picture I took from my backyard, most appropriate for an unveiling] More details here: techcrunch.com/2026/03/09/yan…

Daniel Severo reposted
Lucas Beyer (bl16) @giffmana·
If you're going to ICLR next month and are interested in research at Meta, it's a good idea to come to our event there: events.atmeta.com/iclrnetworking… (I'm unfortunately not going to ICLR this year - not sure yet which conference I'll go to)
Daniel Severo @_dsevero·
Long-context is so good now that my vscode can’t keep up. It freezes up from having to store too much text in the tab
Daniel Severo @_dsevero·
2 weeks of 2025 work = 1 day of 2026 work. I can’t even imagine what 1 day of 2027 will be in 2025 units. What a time to be alive.
Daniel Severo reposted
Benjamin Kurt Miller @bkmi13·
We're looking to hire at all levels for the FAIR Chemistry team in Meta's Superintelligence Labs! We have three full-time positions posted across research scientist and research engineer roles, from new PhD grads to senior research leaders. Links to all of the positions are in the reply.
Daniel Severo reposted
Mark Carney @MarkJCarney·
We’re buying Canadian, and we’re building Canadian.
Dan Roy @roydanroy·
In mid-January, I’ll join Google DeepMind’s Science unit as a Visiting Research Scientist, on leave from the University of Toronto. I'm excited to be joining Google DeepMind's efforts to accelerate mathematical research with AI.
Dan Roy @roydanroy·
Big announcement time... Today is my last day as Research Director at the Vector Institute. It has been my incredible privilege over the past 2.5 years to serve the Vector community and help build an institution that supports world-class ML research and real-world impact.
Daniel Severo reposted
Zeyuan Allen-Zhu, Sc.D. @ZeyuanAllenZhu·
Facebook AI Research (FAIR) is a small, prestigious lab within Meta. We don't train large models like GenAI or MSL, so it's natural that we have limited GPUs. GenAI or MSL's success or failure, past or future, doesn't reflect the work of FAIR. It is important to make this distinction.
Zeyuan Allen-Zhu, Sc.D. @ZeyuanAllenZhu

No matter how AI evolves overnight—tech, career, how it may impact me—I remain committed to using the "physics of language models" approach to predict next-gen AI. Due to my limited GPU access at Meta, Part 4.1 (+ new 4.2) are still in progress, but the results on Canon layers are shining.

tuōmo @7uomoki·
AI elevates the 'scientist' (aka the blue-collar thinker) back to being a philosopher, because when the 'scientific process' is done by AI only, the results will regain the same mysticism as the unstudied natural world. Thus, pondering the machine outputs and the possible processes that enabled them to emerge will be so interesting that you could, for example, choose to live in a barrel to be able to do such a task.
Lucas Beyer (bl16) @giffmana·
I am very happy to see a lot of pushback on this today on my TL. Yesterday when I read it there was none, and I was worried this may even be a silent-majority opinion?! I wrote a scalding criticism, but let it sit because I never rage-post. But now I'm happy to see that's no longer needed. I only have one thing to add: this same person, 2y ago, basically wrote "well, tough luck, artists" regarding AI art. Well, tough luck, scientists. x.com/togelius/statu…
Julian Togelius @togelius

I was at an event on AI for science yesterday, a panel discussion here at NeurIPS. The panelists discussed how they plan to replace humans at all levels in the scientific process. So I stood up and protested that what they are doing is evil. Look around you, I said. The room is filled with researchers of various kinds, most of them young. They are here because they love research and want to contribute to advancing human knowledge. If you take the human out of the loop, meaning that humans no longer have any role in scientific research, you're depriving them of the activity they love and a key source of meaning in their lives. And we all want to do something meaningful. Why, I asked, do you want to take the opportunity to contribute to science away from us? My question changed the course of the panel, and set the tone for the rest of the discussion. Afterwards, a number of attendees came up to me, either to thank me for putting what they felt into words, or to ask if I really meant what I said. So I thought I would return to the question here.

One of the panelists asked whether I would really prefer the joy of doing science to finding a cure for cancer and enabling immortality. I answered that we will eventually cure cancer and at some point probably be able to choose immortality. Science is already making great progress with humans at the helm. We'll get fusion power and space travel some day as well. Maybe cutting humans out of the loop could speed up this process, but I don't think it would be worth it. I think it is of crucial importance that we humans are in charge of our own progress. Expanding humanity's collective knowledge is, I think, the most meaningful thing we can do. If humans could not usefully contribute to science anymore, this would be a disaster. So, no. I do not think it worth it to find a cure for cancer faster if that means we can never do science again.

Many of those who came up to talk to me last night, those who asked me whether I was being serious or just trolling, thought that the premise was absurd. Of course there would always be room for humans in science. There will always be tasks only humans can do, insight only humans have, and so on. Therefore, we should welcome AI. Research is hard, and we need all the help we can get. I responded that I hoped they were right. That is, I truly hope there will always be parts of the research process which humans will be essential for. But what I was arguing against was not what we might call "weak science automation", where humans stay in the loop in important roles, but "strong science automation", where humans are redundant.

Others thought it was immature to argue about this, because full science automation is not on the horizon. Again, I hope they are right. But I see no harm in discussing it now. And I certainly don't think we need research on science automation to go any further.

Yet others remarked that this was a pointless argument. Science automation is coming whether we want it or not, and we'd better get used to it. The train is coming, and we can get on it or stand in its way. I think that is a remarkably cowardly argument. It is up to us as a society to decide how we use the technology we develop. It's not a train, it's a truck, and we'd better grab the steering wheel.

One of the panelists made a chess analogy, arguing that lots of people play chess even though computers are now much better than humans at chess. So we might engage in science as a kind of hobby, even though the real science is done by computers. We would be playing around far from the frontier, perhaps filling in the blanks that AI systems don't care about. That was, to put it mildly, not a satisfying answer. While I love games, I certainly do not consider game-playing as meaningful as advancing human knowledge. Thanks, but no thanks.

Overall, though, it was striking that most of those I talked to thanked me for raising the point, as I articulated worries that they already had. One of them remarked that if you work on automating science and are not even a little bit worried about the end goal, you are a psychopath. I would add that another possibility is that you don't really believe in what you are doing.

Some might ask why I make this argument about science and not, for example, about visual art, music, or game design. That's because yesterday's event was about AI for science. But I think the same argument applies to all domains of human creative and intellectual expression. Making human intellectual or creative work redundant is something we should avoid when we can, and we should absolutely avoid it if there are no equally meaningful new roles for humans to transition into. You could further argue that working on cutting humans out of meaningful creative work such as scientific research is incredibly egoistic. You get the intellectual satisfaction of inventing new AI methods, but the next generation don't get a chance to contribute. Why do you want to rob your children (academic and biological) of the chance to engage in the most meaningful activity in the world?

So what do I believe in, given that I am an AI researcher who actively works on the kind of AI methods used for automating science? I believe that AI tools that help us be more productive and creative are great, but that AI tools that replace us are bad. I love science, and I am afraid of a future where we are pushed back into the dark ages because we can no longer contribute to science. Human agency, including in creative processes, is vital and must be safeguarded at almost any cost. I don't exactly know how to steer AI development and AI usage so that we get new tools but are not replaced. But I know that it is of paramount importance.

Daniel Severo reposted
Heli Ben-Hamu @helibenhamu·
I'll be at NeurIPS on Dec 3-4. Would be happy to meet up and chat about efficient sampling methods for language models ⚡️ Or catch me at our EB-Sampler poster on Thursday at 4:30pm. Joint work with @itai_gat, @_dsevero, Niklas Nolte, Brian Karrer.
Daniel Severo reposted
Stella Biderman @BlancheMinerva·
Wait, what the hell? #ICLR2026, what are you thinking? Leaking the names of reviewers is bad, but how does reversing any changes (which are overwhelmingly score increases) make things better? They say it's to prevent collusion... is collusion that rampant? That's a wild sentiment.