Fabian Fuchs
@FabianFuchsML
152 posts
Research Scientist at DeepMind. Interested in invariant and equivariant neural nets and applications to the natural sciences. Views are my own.

Oxford, England · Joined June 2018
314 Following · 2.7K Followers
Pinned Tweet
Fabian Fuchs @FabianFuchsML
A year ago I asked: Is there more than Self-Attention and Deep Sets? - and got very insightful answers. 🙏 Now, Ed, Martin and I have written up our own take on the various neural network architectures for sets. Have a look and tell us what you think! :) ➡️fabianfuchsml.github.io/learningonsets/ ☕️
Fabian Fuchs @FabianFuchsML

Both Max-Pooling (e.g. DeepSets) and Self-Attention are permutation invariant/equivariant neural network architectures for set-based problems. I am aware of a couple of variations for both of these. Are there additional, fundamentally different architectures for sets? 🤔
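The two families contrasted above can be sketched in a few lines of NumPy. This is a minimal illustration with made-up weights and dimensions, not any particular published implementation: sum-pooling in the Deep Sets style gives a permutation-invariant output, while single-head self-attention is permutation-equivariant.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))      # a set of 5 elements with 4 features each (arbitrary)
W_phi, W_rho = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))
W_q, W_k, W_v = [rng.normal(size=(4, 4)) for _ in range(3)]

def deep_sets(X):
    # Encode each element independently, sum-pool, then decode:
    # summing makes the output identical under any reordering (invariance).
    phi = np.tanh(X @ W_phi)
    return np.tanh(phi.sum(axis=0) @ W_rho)

def self_attention(X):
    # Single-head dot-product self-attention: reordering the inputs
    # reorders the outputs the same way (equivariance).
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    A = np.exp(Q @ K.T / np.sqrt(K.shape[1]))
    A /= A.sum(axis=1, keepdims=True)
    return A @ V

perm = rng.permutation(5)
assert np.allclose(deep_sets(X), deep_sets(X[perm]))                  # invariant
assert np.allclose(self_attention(X)[perm], self_attention(X[perm]))  # equivariant
```

Both checks pass because a sum ignores order entirely, whereas attention merely co-permutes its rows with the input.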

Fabian Fuchs retweeted
Adam Golinski @adam_golinski
Our Apple ML Research team in Barcelona is looking for a PhD intern! 🎓 Curiosity-driven research 🧠 with the goal to publish 📝 Topics: Confidence/uncertainty quantification and reliability of LLMs 🤖 Apple here: jobs.apple.com/en-gb/details/…
Fabian Fuchs @FabianFuchsML
Graphs, Sets, Universality. We put more work into this and are presenting it via the ICLR blogpost track (thanks to the organisers and reviewers!). Have a read and let us know what you think: iclr-blogposts.github.io/2023/blog/2023… Better in light mode 💡; dark mode 🌙 messes with the LaTeX a bit.
Petar Veličković @PetarV_93

📢 New blog post! Realising an intricate connection between PNA (@GabriCorso @lukecavabarrett @dom_beaini @pl219_Cambridge) & the seminal work on set representations (Wagstaff @FabianFuchsML @martinengelcke @IngmarPosner @maosbot), Fabian and I join forces to attempt to explain!

Fabian Fuchs retweeted
Adam R. Kosiorek @arkosiorek
Text-to-image diffusion models seem to have a good idea of geometry. Can we extract that geometry? Or maybe we can nudge these models to create large 3D consistent environments? Here's a blog summarizing some ideas in this space :) akosiorek.github.io/geometry_in_im…
Gabriele Corso @GabriCorso
Super cool blog post about the universality of graphs and sets! The best resource that I have seen in all these years to understand the importance of considering continuity when studying expressiveness! A concept at the foundations of PNA that I hope will be further explored!
Fabian Fuchs @FabianFuchsML

I have recently had a range of very insightful conversations with @PetarV_93 about graph neural networks, networks on sets, universality and how ideas have spread in the two communities. This is our write up, feedback welcome as always! :) ➡️fabianfuchsml.github.io/universalgraphs ☕️

Fabian Fuchs retweeted
Petar Veličković @PetarV_93
📢 New blog post! Realising an intricate connection between PNA (@GabriCorso @lukecavabarrett @dom_beaini @pl219_Cambridge) & the seminal work on set representations (Wagstaff @FabianFuchsML @martinengelcke @IngmarPosner @maosbot), Fabian and I join forces to attempt to explain!
Fabian Fuchs @FabianFuchsML

I have recently had a range of very insightful conversations with @PetarV_93 about graph neural networks, networks on sets, universality and how ideas have spread in the two communities. This is our write up, feedback welcome as always! :) ➡️fabianfuchsml.github.io/universalgraphs ☕️

Fabian Fuchs @FabianFuchsML
I have recently had a range of very insightful conversations with @PetarV_93 about graph neural networks, networks on sets, universality and how ideas have spread in the two communities. This is our write up, feedback welcome as always! :) ➡️fabianfuchsml.github.io/universalgraphs ☕️
Fabian Fuchs retweeted
Sergey Ovchinnikov @sokrypton
Anyone know of a department looking to hire faculty in the protein/genome+evolution+ML space? Also RNA biology (asking for a friend) 🙂🥼🧪
Fabian Fuchs retweeted
Adam R. Kosiorek @arkosiorek
New blog post! Find out: - what reconstructing masked images and our brains have in common, - why reconstructing masked images is a good idea for learning representations, - what makes a good mask and how to learn one akosiorek.github.io/ml/2022/07/04/…
Padarn @Padarn
@FabianFuchsML Great post. Sorry, a stupid question I couldn't find the answer to in the paper: in your experiments, do you use unrolled gradient descent or an implicit function theorem approach?
Fabian Fuchs @FabianFuchsML
Graph neural networks often have to globally aggregate over all nodes. How we do this can have a significant impact on performance 🎯. After we recently finished a project on this, I wrote a blog post on this topic. Let me know what you think! :) ➡️fabianfuchsml.github.io/equilibriumagg… ☕️
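For context on what "globally aggregate over all nodes" means, here is a hypothetical NumPy sketch of the standard fixed readouts plus a simple softmax-weighted one (all names and dimensions are made up; this is not the equilibrium aggregation from the post itself):

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(6, 4))   # node embeddings after message passing (arbitrary)
w = rng.normal(size=4)        # hypothetical learned scoring vector

def softmax_readout(H, w):
    # Attention-style readout: each node's weight comes from a learned score.
    a = np.exp(H @ w)
    return (a / a.sum()) @ H

# All of these reduce over the node axis, so reordering the nodes
# leaves the graph-level vector unchanged (permutation invariance).
perm = rng.permutation(len(H))
Hp = H[perm]
assert np.allclose(H.sum(axis=0), Hp.sum(axis=0))
assert np.allclose(H.mean(axis=0), Hp.mean(axis=0))
assert np.allclose(H.max(axis=0), Hp.max(axis=0))
assert np.allclose(softmax_readout(H, w), softmax_readout(Hp, w))
```

The choice among sum, mean, max, or a learned weighting is exactly the design decision whose performance impact the blog post discusses.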
Fabian Fuchs @FabianFuchsML
@jonkhler That's very true, nice observation! Let me know how it goes in case you do, I am curious :)
Jonas Köhler @jonkhler
@FabianFuchsML Nice stuff! I see a lot of your topics converging here :) looking forward to trying it on some problems where I experienced sum pooling and even attention to be insufficient
Fabian Fuchs @FabianFuchsML
@newplatonism Depends: are you happy with treating cars as point masses in a vacuum? :P More seriously: people do work on making the Lagrangian/Hamiltonian-based NNs more general (like allowing for friction and external forces), but, to my understanding, it's still mostly constrained to physical particle systems.
Fabian Fuchs @FabianFuchsML
Emmy Noether connected symmetries and conserved quantities in physics - how is this related to exploiting symmetries with neural networks? 🤔 I've tried to answer this question in a blog post (no background knowledge required!): ➡️fabianfuchsml.github.io/noether/ ☕️
Fabian Fuchs @FabianFuchsML
@lzamparo The book was already linked but I now added a few more links for further reading, including the ICLR keynote. Thanks for the suggestion!
Fabian Fuchs @FabianFuchsML
I should have said 'no physics background knowledge required' - the blog post does assume general machine learning background knowledge :)
Fabian Fuchs @FabianFuchsML
@william_woof @andrewwhite01 +1; also, this equivalence does not need to be obvious at all. In some cases, like with max(), it might even seem counterintuitive that the function *can* be written as a sum-decomposition (i.e. in a Deep Sets form).
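One way to see that max() fits the sum-decomposition mould (an illustrative construction, not the one from the paper): max is the limit of a log-sum-exp, which is exactly a Deep Sets form ρ(Σᵢ φ(xᵢ)) with φ(x) = exp(t·x) and ρ(s) = log(s)/t.

```python
import numpy as np

def deepsets_max(xs, t=50.0):
    # Sum-decomposition rho(sum_i phi(x_i)) with
    #   phi(x) = exp(t * x)   and   rho(s) = log(s) / t.
    # As t -> infinity this converges to max(xs); the approximation
    # error is bounded by log(len(xs)) / t. (For large |x| * t this
    # naive form overflows; a stable version would subtract the max
    # first, which defeats the illustration.)
    phi = np.exp(t * np.asarray(xs, dtype=float))
    return np.log(phi.sum()) / t

xs = [0.3, -1.2, 0.9, 0.05]
approx = deepsets_max(xs)
assert abs(approx - max(xs)) < np.log(len(xs)) / 50.0 + 1e-9
```

So the per-element encoder and the post-sum decoder together recover a function that looks nothing like a sum at first glance.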
William Woof @william_woof
@andrewwhite01 @FabianFuchsML Well, if all functions can be represented by a DeepSets architecture, then that means any such function is equivalent to a DeepSets function.
Andrew White 🐦‍⬛ @andrewwhite01
Deep learning friends, help me understand DeepSets. They claim the only way to do permutation invariant network is with their func (see pic). But how would a trivial maximum be represented here? arxiv.org/abs/1703.06114
Fabian Fuchs @FabianFuchsML
@andrewwhite01 'The only way to do a permutation invariant network is with their func' is actually not what they are trying to say here. The theorem in your screenshot is actually a (cumbersome) way of saying 'all permutation invariant funcs CAN be modelled/learned/represented by DeepSets'.