Giovanni Petri

4.3K posts

@lordgrilo

Topology, complex networks, neuroscience; Professor @NUnetsi; PI @NPLab_; PI @ProjectCETI; Science Comm Fellow @museumofscience; Angoleiro, wine-drinker.

Turin · Joined October 2009
1.2K Following · 3.1K Followers
Giovanni Petri@lordgrilo·
👇👇👇👇👇👇👇👇👇👇👇
Surya Ganguli@SuryaGanguli

The @Stanford biosciences affirmation taken by all of our PhD grads is inspiring, with pledges to:
1) Do science with rigor, integrity, and uncompromising respect for truth
2) Show kindness and compassion to colleagues
3) Show honesty and respect to the public
4) Place public trust in science above self
5) Foster inclusiveness to drive progress
6) Through our actions, honor the legacy of scientists who precede us and earn the respect of those who follow us
7) Work to advance knowledge for the benefit of all humanity and the world.
Seriously brought a tear to my eye.

0 replies · 1 repost · 4 likes · 337 views

Giovanni Petri retweeted
António Leitão@leitalhao·
New preprint! "It's all about covers — Persistent Homology of Cover Refinements" We argue that covers are the natural level of abstraction for building filtrations and comparing persistence modules. arxiv.org/abs/2602.22784
1 reply · 2 reposts · 6 likes · 510 views

Giovanni Petri retweeted
Surya Ganguli@SuryaGanguli·
Here is a recording of my keynote talk for the India AI impact summit in Delhi on “Advancing the science and engineering of intelligence”: youtube.com/watch?v=hWq1j-…
Here are the topics and references I covered:
Data Efficiency:
- Deriving neural scaling laws from first principles: arxiv.org/abs/2602.07488
- Beating neural scaling laws (NeurIPS 22 best paper): arxiv.org/abs/2206.14486
- Combining evolution and learning (Nature Comms 21): nature.com/articles/s4146…
Energy Efficiency:
- Fundamental energetic limits on the speed and accuracy of computation (Phys Rev E 23): journals.aps.org/pre/abstract/1…
- Brain as a smart energy grid (Nature 21): nature.com/articles/s4158…
Quantum neuromorphic computing:
- Hopfield memories in cavity QED (Phys Rev X 21): journals.aps.org/prx/abstract/1…
- Quantum entanglement & optimization (Phys Rev X 24): journals.aps.org/prx/abstract/1…
- Geometric landscape annealing and optimization (Phys Rev X 24): journals.aps.org/prx/abstract/1…
Melding brains and machines:
- Digital twin of the retina (Neuron 23): cell.com/neuron/fulltex…
- Writing percepts into the brain (Science 19): science.org/doi/10.1126/sc…
- Brain reading (Nature 22): nature.com/articles/s4158…
- Limits on perception (Nature 20): nature.com/articles/s4158…
- Digital twin of epileptic brain: biorxiv.org/content/10.110…
- Scaling up digital twins through the enigma project: enigmaproject.ai
4 replies · 19 reposts · 114 likes · 5.5K views

Giovanni Petri retweeted
Surya Ganguli@SuryaGanguli·
Our new paper "From Kepler to Newton: Inductive Biases Guide Learned World Models in Transformers" arxiv.org/abs/2602.06923, led by @ZimingLiu11 w/ @naturecomputes @AToliasLab.
Previous work suggests transformers trained on planetary motion do not learn a world model. We fix this. Key ingredients in the fix:
1) promote spatial continuity in the learned tokenization of space
2) ensure noise robustness of future predictions
With these two ingredients the transformer learns a Keplerian world model (Kepler's elliptical equations can be decoded from the transformer hidden states).
3) reduce the context length to 2
Then (and only then) is Newton's gravitational world model learned (Newton's force law can be decoded from transformer hidden states).
See @ZimingLiu11's excellent thread for more details. x.com/ZimingLiu11/st…
16 replies · 66 reposts · 389 likes · 23.7K views

Giovanni Petri retweeted
Andrew Saxe@SaxeLab·
Why don’t neural networks learn all at once, but instead progress from simple to complex solutions? And what does “simple” even mean across different neural network architectures? Sharing our new paper @iclr_conf led by Yedi Zhang with Peter Latham arxiv.org/abs/2512.20607
9 replies · 51 reposts · 409 likes · 21.7K views

Giovanni Petri retweeted
Andrew Saxe@SaxeLab·
Upcoming online talk next Monday 9th February, at the ELLIS Reading Group on Mathematics & Efficiency of Deep Learning! Open to all. Info at sites.google.com/view/efficient…
0 replies · 2 reposts · 11 likes · 804 views

Giovanni Petri retweeted
Andrea Brovelli@BrovelliAndrea·
Welcome to Braina: an AI agent for Brain Interaction Analysis using information-theoretic tools on neural data (fMRI, MEG, EEG, SEEG, LFP, MUA and SUA). Compatible with most CLI agents. Check it out and send feedback! 🫶 github.com/brainets/braina
1 reply · 8 reposts · 42 likes · 2.4K views

Giovanni Petri retweeted
Kenneth D Harris@kennethd_harris·
New preprint on activity sequences: in every brain region, stable over weeks. With Célian Bimbard and Matteo Carandini. Based on data from Célian and the International Brain Lab. biorxiv.org/content/10.648…
3 replies · 22 reposts · 87 likes · 8.1K views

Giovanni Petri retweeted
Millie Marconi@MillieMarconnni·
I'm reading NVIDIA's new paper and it's wild. Everyone keeps talking about scaling transformers with bigger clusters and smarter optimizers… meanwhile NVIDIA and Oxford just showed you can train billion-parameter models using evolution strategies, a method most people wrote off as ancient.

The trick is a new system called EGGROLL, and it flips the entire cost model of ES. Normally, ES dies at scale because you have to generate full-rank perturbation matrices for every population member. For billion-parameter models, that means insane memory movement and ridiculous compute. These guys solved it by generating low-rank perturbations using two skinny matrices A and B and letting ABᵀ act as the update. The population average then behaves like a full-rank update without paying the full-rank price.

The result? They run evolution strategies with population sizes in the hundreds of thousands, a number earlier work couldn't touch because everything melted under memory pressure. Now throughput is basically as fast as batched inference. That's unheard of for any gradient-free method.

The math checks out too. The low-rank approximation converges to the true ES gradient at a 1/r rate, so pushing the rank recreates full ES behavior without the computational explosion.

But the experiments are where it gets crazy.
→ They pretrain recurrent LMs from scratch using only integer datatypes. No gradients. No backprop. Fully stable even at hyperscale.
→ They match GRPO-tier methods on LLM reasoning benchmarks. That means ES can compete with modern RL-for-reasoning approaches on real tasks.
→ ES suddenly becomes viable for massive, discrete, hybrid, and non-differentiable systems, the exact places where backprop is painful or impossible.

This paper quietly rewrites a boundary: we didn't struggle to scale ES because the algorithm was bad; we struggled because we were doing it in the most expensive possible way. NVIDIA and Oxford removed the bottleneck. And now evolution strategies aren't an old idea… they're a frontier-scale training method.
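The low-rank trick described in the thread above can be sketched in a few lines of NumPy. This is a toy illustration on a quadratic objective, not the paper's EGGROLL implementation; the rank, noise scale, population size, and the simple difference-based gradient estimate are all assumptions for the demo.

```python
# Toy sketch of evolution strategies with low-rank perturbations,
# in the spirit of the EGGROLL idea above. Objective, rank, noise
# scale, and estimator are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2           # weight shape and perturbation rank
sigma, lr, pop = 0.1, 0.05, 200    # noise scale, learning rate, population size

W = rng.normal(size=(d_out, d_in))        # parameters being evolved
W_star = rng.normal(size=(d_out, d_in))   # toy objective: match a fixed matrix

def loss(M):
    return float(np.sum((M - W_star) ** 2))

initial_loss = loss(W)
for step in range(300):
    base = loss(W)
    grad_est = np.zeros_like(W)
    for _ in range(pop):
        # Two skinny matrices A, B define the perturbation E = A @ B.T / sqrt(r),
        # so full-rank i.i.d. noise is never materialized per population member;
        # entries of E still have unit variance and uncorrelated second moments.
        A = rng.normal(size=(d_out, r))
        B = rng.normal(size=(d_in, r))
        E = (A @ B.T) / np.sqrt(r)
        grad_est += (loss(W + sigma * E) - base) * E   # finite-difference score
    W -= lr * grad_est / (pop * sigma)   # population-averaged, ~full-rank update

final_loss = loss(W)
```

Averaging (f(W + σE) − f(W))·E over the population recovers the usual ES gradient estimate; because the second moments of E match those of full-rank Gaussian noise, the averaged update behaves like a full-rank one at a fraction of the memory cost, and increasing r tightens the approximation (the 1/r rate quoted above).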
54 replies · 186 reposts · 1.1K likes · 96.6K views

Giovanni Petri retweeted
Jorge Bravo Abad@bravo_abad·
Building compositional tasks with shared neural subspaces

One of the big open questions in cognitive neuroscience is how the brain pulls off the kind of flexible, compositional behaviour that modern AI systems are still struggling to match. We know animals can recombine simple skills—“categorize this,” “move eyes there”—to solve new tasks, and that artificial networks trained on many tasks tend to reuse internal components. But what does that reuse look like in real neural populations?

Sina Tafazoli and coauthors tackle this by training monkeys on three cleverly related tasks that recombine the same subtasks: categorizing by colour or shape, and responding along one of two saccade axes. While the animals switched between tasks without explicit cues, the authors recorded from prefrontal, parietal, temporal cortex and striatum.

Using population decoding, they show that task-relevant information—colour category, shape category, motor response—lives in low-dimensional subspaces of neural activity. Crucially, these subspaces are shared across tasks: the same colour subspace is reused whenever colour matters, and the same motor subspace is reused whenever a given response axis is required.

The really interesting part is how these shared subspaces are engaged. The authors decode an internal “task belief” signal from prefrontal activity during fixation, and show that as the monkeys infer which task is currently active, the brain selectively scales the relevant subspaces (amplifying useful colour or shape information, suppressing irrelevant features) and funnels activity from the appropriate sensory subspace into the appropriate motor subspace. Motor axes are updated quickly; sensory representations adjust more slowly, matching behaviour.
The picture that emerges is strikingly aligned with ideas in multitask and continual learning in AI: flexible behaviour arises not from isolated, task-specific circuits, but from a set of shared neural primitives that can be recombined and gain-modulated on the fly. Paper: nature.com/articles/s4158…
4 replies · 37 reposts · 221 likes · 15K views

Giovanni Petri retweeted
Project CETI@ProjectCETI·
CETI Scientists developed a system which uses the pressure variations of whales’ dives to regenerate a vacuum in the suction cups on our bio-inspired whale digital bioacoustic sensors! By Germain Meyer, @danielmvogt, Xinyi Yang & Robert J. Wood from @hseas. Read: bit.ly/47TM9U5
0 replies · 1 repost · 2 likes · 434 views

Giovanni Petri retweeted
Network Science Institute
NetSci 2026 is coming to the Network Science Institute at Northeastern University! 🎉 We are honored to host a conference that brings together the world’s leading ideas, discoveries, and challenges in network science. @NUnetsi @Northeastern @NetSciConf
NetSci 2026@NetSciConf

📌 Save the Date! The flagship conference of the Network Science Society - NetSci 2026 - is coming to Northeastern University’s Network Science Institute, June 1-5, 2026. Registration opens soon! 🔗 netsci2026.com

0 replies · 2 reposts · 6 likes · 974 views

Giovanni Petri retweeted
Melanie Mitchell@MelMitchell1·
A response to the NY Times' Thomas Friedman's "magical thinking" on AI ⬇️
22 replies · 51 reposts · 208 likes · 132.6K views