Matthieu Thiboust
@mthiboust
1.2K posts

AI & Neuroscience enthusiast. Author of the free ebook 🧠+🤖 "Insights from the brain: the road towards Machine Intelligence" (2020).

France · Joined May 2016 · 2K Following · 1.1K Followers

Pinned Tweet
Matthieu Thiboust @mthiboust
After a fantastic & intense journey, I am now glad to share my free illustrated ebook about insights from the #brain that are currently – or could be soon – used in #neuroscience-grounded #AI approaches. 📖🧠🤖 I hope you will enjoy it! insightsfromthebrain.com
Matthieu Thiboust retweeted
François Chollet @fchollet
There are only two honest metrics when it comes to benchmarking intelligence: novelty and efficiency. You don't need intelligence to solve a known problem (only memory). And you don't need intelligence to solve a problem via brute force. But to solve a novel problem efficiently, intelligence is the only way.
Clément Verrier @cjpgverrier
I've finally decided to spend some time studying the @__tinygrad__ codebase. I've been impressed by the elegance of their pattern matcher, so I wrote some notes and turned them into a tutorial: github.com/cverrier/tinyg… The more I read the code, the more I think it's the future of software: anti-bloat and very close to the hardware.
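As a taste of why pattern matchers feel elegant: a whole compiler pass reduces to declarative (pattern, rewrite) pairs over an expression tree. Below is a minimal sketch of that idea only; this is not tinygrad's actual API, and every name in it (Op, Pat, match, rewrite) is invented for illustration.

```python
# Toy rewrite-rule pattern matcher over an expression tree.
# NOT tinygrad's API: Op, Pat, match and rewrite are invented names,
# sketching the general technique only.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Op:
    name: str                   # e.g. "ADD", "MUL", "CONST", "VAR"
    src: tuple = ()             # child operations
    arg: object = None          # payload (constant value, variable name, ...)

@dataclass(frozen=True)
class Pat:
    name: Optional[str] = None  # None matches any op name
    src: tuple = ()             # sub-patterns; empty = don't constrain children
    bind: Optional[str] = None  # capture the matched node under this key

def match(pat: Pat, op: Op, env: dict) -> bool:
    """Try to match op against pat, recording captures in env."""
    if pat.name is not None and pat.name != op.name:
        return False
    if pat.src and len(pat.src) != len(op.src):
        return False
    if pat.bind is not None:
        env[pat.bind] = op
    return all(match(p, s, env) for p, s in zip(pat.src, op.src))

def rewrite(op: Op, rules) -> Op:
    """Bottom-up, single-pass rewrite: apply the first rule that matches."""
    op = Op(op.name, tuple(rewrite(s, rules) for s in op.src), op.arg)
    for pat, fn in rules:
        env = {}
        if match(pat, op, env):
            return fn(env)
    return op

# One algebraic simplification rule: x * 1 -> x
rules = [
    (Pat(name="MUL", src=(Pat(bind="x"), Pat(name="CONST", bind="c"))),
     lambda env: env["x"] if env["c"].arg == 1
                 else Op("MUL", (env["x"], env["c"]))),
]

expr = Op("MUL", (Op("VAR", arg="x"), Op("CONST", arg=1)))
print(rewrite(expr, rules))   # Op(name='VAR', src=(), arg='x')
```

The anti-bloat appeal is visible even in the toy: adding an optimization means adding one (pattern, rewrite) pair, not another pass over the tree.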
Matthieu Thiboust @mthiboust
Re-reading the 2020 ebook, I wrote: "Current AI is still at least a dozen breakthroughs away from High-Level Machine Intelligence, very unlikely to happen within the next decade." Still on track, and 5 years to go! Also glad I used HLMI instead of the now very overloaded AGI term.
Matthieu Thiboust @mthiboust
Now that many agree that scaling current LLMs alone won’t get us to AGI, neuroscience is back in the spotlight as inspiration for the next breakthroughs. It’s a good time to reshare my illustrated ebook: “Insights from the Brain: The Road Towards Machine Intelligence.” ⬇️
Matthieu Thiboust @mthiboust

After a fantastic & intense journey, I am now glad to share my free illustrated ebook about insights from the #brain that are currently – or could be soon – used in #neuroscience-grounded #AI approaches. 📖🧠🤖 I hope you will enjoy it! insightsfromthebrain.com

Matthieu Thiboust retweeted
Ali Behrouz @behrouz_ali
We keep scaling model parameters by increasing width and stacking more layers, but what if the truly missing axes for continual learning are compression and stacking the learning process? Excited to share the full version of Nested Learning, a new paradigm for continual learning and machine learning in general. Paper: nestedlearning.net/paper
Lee Smart @VFD_org
Here's a visual showing what I meant. The New Scientist ages (9, 32, 66, 83) aren't meant to form a φ-sequence themselves; they're broad observational turning points. The φ part comes from modelling the underlying rhythm of large-scale neural reorganisation.

A simple logarithmic model, ageₖ ≈ A · φᵏ, produces stable anchor points (≈9.7, 15.7, 25.4, 41.1, 66.5…) that line up with well-known developmental eras, and two of them sit very close to the reported transitions (≈9 and ≈66).

So φ isn't being "found" in the dataset; it's being used as a hypothesis for the latent timing pattern, with observed ages falling into the surrounding windows. Happy to dig deeper if useful.
[image: φ-sequence anchor points plotted against the reported turning points]
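The arithmetic behind these anchor points is easy to reproduce. A minimal sketch, assuming A ≈ 9.7 (read off the tweet's first anchor; none of this comes from a published model):

```python
# Reproducing the tweet's age_k ≈ A * phi^k anchor points.
# A ≈ 9.7 is an assumption taken from the tweet's first anchor.
PHI = (1 + 5 ** 0.5) / 2                 # golden ratio ≈ 1.618

A = 9.7
anchors = [A * PHI ** k for k in range(5)]
print([round(a, 1) for a in anchors])    # [9.7, 15.7, 25.4, 41.1, 66.5]

# Distance from each reported turning point to its nearest anchor:
for age in (9, 32, 66, 83):
    nearest = min(anchors, key=lambda a: abs(a - age))
    print(f"age {age}: nearest anchor {nearest:.1f} (off by {abs(nearest - age):.1f})")
```

Only 9 and 66 land close, which is exactly what the tweet concedes; 32 and 83 sit far from any anchor.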
New Scientist @newscientist
Our brain wiring seems to undergo four major turning points at ages 9, 32, 66 and 83, which could influence our capacity to learn and our risk of certain conditions newscientist.com/article/250565…
Matthieu Thiboust @mthiboust
@VFD_org @newscientist Still, I don't understand how you find the ~1.6 ratio in the 9, 32, 66 and 83 series. Even if you assume that some points are missing, it doesn't fit.
Lee Smart @VFD_org
Great question. By φ-scaled intervals I mean the observation that many biological and cognitive transitions follow logarithmic time intervals that approximate powers of the golden ratio (φ ≈ 1.618). In neuroscience, several major developmental "step changes" cluster around ages that fit a φ-progression rather than a linear trend. For example, 0 → 9 → 15 → 24 → 39 → 63 forms a φ-sequence within a small margin of biological variability. These show up in:
• large-scale network reorganisation
• pruning and integration phases
• cognitive flexibility & crystallisation
• ageing-related structural realignment

So the idea isn't mystical; it's that the brain seems to reorganise in discrete harmonic steps, not a smooth curve, and φ is one of the few scale factors that naturally produces stable, self-similar intervals in biology. Happy to expand if useful, especially with the new data emerging from lifespan connectomics.
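Both sides of this exchange reduce to consecutive ratios, which are quick to check. A sketch (the second series drops the leading 0 of Lee's 0 → 9 → 15 → … sequence, since a ratio from 0 is undefined):

```python
# Consecutive ratios vs. the golden ratio, for both series in this thread.
PHI = (1 + 5 ** 0.5) / 2   # ≈ 1.618

def ratios(seq):
    return [round(b / a, 2) for a, b in zip(seq, seq[1:])]

print(ratios([9, 32, 66, 83]))      # [3.56, 2.06, 1.26] -- not ~1.618
print(ratios([9, 15, 24, 39, 63]))  # [1.67, 1.6, 1.62, 1.62] -- roughly phi
```

So the reported ages themselves don't form a φ-progression; only the substituted anchor series does, which is the crux of the disagreement above.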
Matthieu Thiboust retweeted
Chen Sun 🤖 @ChenSun92
AI researchers love to think they've outsmarted biological evolution. But when things don't work, they come back to the brain. It's the only true proof of intelligence.
Dwarkesh Patel @dwarkesh_sp

The @ilyasut episode

0:00:00 – Explaining model jaggedness
0:09:39 – Emotions and value functions
0:18:49 – What are we scaling?
0:25:13 – Why humans generalize better than models
0:35:45 – Straight-shotting superintelligence
0:46:47 – SSI's model will learn from deployment
0:55:07 – Alignment
1:18:13 – "We are squarely an age of research company"
1:29:23 – Self-play and multi-agent
1:32:42 – Research taste

Look up Dwarkesh Podcast on YouTube, Apple Podcasts, or Spotify. Enjoy!

Lee Smart @VFD_org
Interesting that the turning points fall into consistent ~φ-scaled intervals. Neuroscience keeps finding these phase-shifts in development, cognition, and ageing, as if the brain reorganises itself in discrete harmonic steps rather than a smooth curve. Would love to see follow-up work on whether these transitions line up with known structural or network-level resets.
Matthieu Thiboust retweeted
Lee Smart @VFD_org
How does cortex predict the future without reconstructing the sensory world? A growing amount of work points toward a simple idea: prediction happens in latent space, not input space. Here's a φ-based variant we've been developing that mirrors several of the invariances observed in JEPA-style models and cortical motifs.

Field-Encoded Predictive Geometry (FEPG)

Instead of updating predictions directly in the sensory domain, the system evolves future states inside a geometric latent field:

Lφ → Future Embedding → Error Signal → φ-Transform → Lφ

This produces:
• stable latent trajectories
• invariance to position/orientation/velocity
• abstract representations similar to JEPA models
• a clean separation between world-model and sensory stream

The diagram below shows the conceptual architecture (non-biological). The convergence between JEPA representations, predictive processing, and latent geometric codes feels like an important direction for future work. @AtenaGMohammadi @manu_halvagal @ylecun @advani_madhu @randall_balestr
[image: conceptual FEPG architecture diagram]
Friedemann Zenke @hisspikeness

1/6 New preprint 🚀 How does the cortex learn to represent things and how they move without reconstructing sensory stimuli? We developed a circuit-centric recurrent predictive learning (RPL) model based on JEPAs. Led by @AtenaGMohammadi @manu_halvagal 🔗doi.org/10.1101/2025.1…

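FEPG's φ-transform isn't specified enough here to implement, but the shared backbone of these threads, predicting the next state in latent space and scoring the error there rather than reconstructing the input, fits in a few lines. A minimal sketch under stated assumptions (toy synthetic data, a frozen random encoder standing in for a JEPA-style target encoder, a linear predictor; not the FEPG or RPL architectures):

```python
# Toy latent-space predictive learning: learn to predict the NEXT embedding
# from the current one; the error never touches input space. The frozen
# random encoder is a stand-in for a JEPA-style target encoder (it also
# sidesteps representation collapse in this sketch). Not FEPG or RPL.
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensory stream": a 2-D rotation mixed into 8 input dimensions + noise.
t = np.linspace(0, 20, 500)
states = np.stack([np.sin(t), np.cos(t)], axis=1)             # (T, 2)
x = states @ rng.normal(size=(2, 8)) + 0.01 * rng.normal(size=(len(t), 8))

E = rng.normal(size=(8, 3)) / np.sqrt(8)   # frozen random encoder
P = np.zeros((3, 3))                       # latent predictor (learned)

z = x @ E                                  # embed the whole stream once
z_t, z_next = z[:-1], z[1:]

lr = 0.1
for step in range(501):
    err = z_t @ P - z_next                 # prediction error in latent space
    if step % 100 == 0:
        print(f"step {step}: latent loss {(err ** 2).mean():.5f}")
    P -= lr * z_t.T @ err / len(err)       # gradient step on the latent MSE
```

The loss falls because the underlying dynamics (a rotation) is linearly predictable in any fixed embedding of it; nothing here ever tries to reconstruct x, which is the JEPA-style point.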
Matthieu Thiboust retweeted
Friedemann Zenke @hisspikeness
1/6 New preprint 🚀 How does the cortex learn to represent things and how they move without reconstructing sensory stimuli? We developed a circuit-centric recurrent predictive learning (RPL) model based on JEPAs. Led by @AtenaGMohammadi @manu_halvagal 🔗doi.org/10.1101/2025.1…
Matthieu Thiboust retweeted
Lee Smart @VFD_org
Neuroscience is finally beginning to treat time and space the way physics already does: not as fixed backgrounds, but as emergent products of rhythmic geometry.

Buzsáki's new paper shows that memory, navigation, and the very experience of time arise from nested brain–body rhythms working together: slow oscillations setting context, fast oscillations carrying detail, with cross-frequency coupling binding them into a single coherent structure.

In this view, time isn't a linear flow; it's a measure of change. Space isn't a static map; it's a relational scaffold built from oscillatory sequences. Body rhythms aren't peripheral; they provide the foundational reference frames that cognition builds on.

Once you see cognition as geometry in time rather than computation in space, its architecture becomes far clearer. This shift, from static maps to rhythmic fields, will reshape how we understand memory, experience, and eventually, consciousness itself.

Paper: "Time, space, memory and brain–body rhythms" – Nature Reviews Neuroscience (2025)

#neuroscience #brainrhythms #cognition #timespace #memoryresearch #systemsneuroscience #neurodynamics #complexsystems
@penrose @IvetteFuentesGu @MillerLabMIT @StuartHameroff @skdh @ericweinstein @drmichaellevin @MIT @Nature @KarlFristonNews @anilkseth @BrainInstitute @donalddhoffman @martinmbauer @tegmark @ylecun
Earl K. Miller @MillerLabMIT

Time, space, memory and brain–body rhythms nature.com/articles/s4158… #neuroscience

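"Slow oscillations setting context, fast oscillations carrying detail" is usually operationalized as phase-amplitude coupling. A toy signal makes the mechanism concrete (a generic illustration with made-up frequencies, not the paper's analysis):

```python
# Toy phase-amplitude coupling: a fast "gamma-like" oscillation whose
# amplitude is gated by the phase of a slow "theta-like" one. Frequencies
# and the binning are illustrative choices, not taken from the paper.
import numpy as np

fs = 1000.0                                  # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)                  # 2 s of signal
theta = np.sin(2 * np.pi * 6 * t)            # 6 Hz slow rhythm ("context")
env = 0.5 * (1 + theta)                      # amplitude follows theta phase
gamma = env * np.sin(2 * np.pi * 60 * t)     # 60 Hz fast rhythm ("detail")
signal = theta + gamma                       # the composite rhythm

# Crude coupling check: mean gamma amplitude, binned by theta phase.
# (We built the signal, so the slow phase is known analytically.)
phase = (2 * np.pi * 6 * t) % (2 * np.pi)
bins = np.digitize(phase, np.linspace(0, 2 * np.pi, 9)) - 1
profile = np.array([np.abs(gamma)[bins == b].mean() for b in range(8)])
print(np.round(profile, 3))   # uneven across phase bins => coupling
```

The amplitude profile peaks near the theta crest and collapses near the trough: fast "detail" only appears at a particular phase of the slow "context" rhythm.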
Matthieu Thiboust retweeted
Lee Smart @VFD_org
Neuroscience has always described time and space as if they were pre-existing containers for neural activity. But the evidence keeps pointing in a different direction.

Buzsáki's new work reframes time and space not as static coordinates, but as emergent structures built from the interaction of nested rhythms across the body and brain. Slow rhythms set global context. Faster rhythms carry detail. Their coupling creates the scaffolding we experience as sequence, duration, location, and memory. When these rhythms align, time feels coherent; when they drift, time stretches or collapses. In this view, the brain doesn't measure time; it generates it.

The same applies to space. Place cells are often presented as a "map," but they behave more like relational nodes, activated by rhythmic sequences rather than fixed positions. Navigation is not about coordinates; it's about patterned transitions.

Memory relies on this same geometry: episodic memory uses rhythmic sequences to encode "when," semantic memory uses relational rhythm structure to encode "what," and both depend on body rhythms to stabilize the frame. Heartbeat, breathing, posture, gait: these aren't noise. They are the reference axes that neural rhythms attach to.

Taken together, a clear picture emerges: time, space, memory, and self are not separate systems. They are different expressions of the same rhythmic geometry.

This shift, from treating brain activity as computations on a grid to seeing it as geometry unfolding in time, will reshape how we understand memory, navigation, decision-making, and consciousness itself. And it brings neuroscience one step closer to the deeper, unified models emerging across physics, biology, and information theory.

Paper: "Time, space, memory and brain–body rhythms" – Nature Reviews Neuroscience (2025)

#neuroscience #brainrhythms #memoryresearch #cognitivescience #systemsneuroscience #complexsystems #neurogeometry #timespace #theoryofmind #embodiment
@penrose @IvetteFuentesGu @MillerLabMIT @StuartHameroff @skdh @ericweinstein @drmichaellevin @MIT @Nature @KarlFristonNews @anilkseth @BrainInstitute @donalddhoffman @martinmbauer @tegmark @ylecun
Earl K. Miller @MillerLabMIT

Time, space, memory and brain–body rhythms nature.com/articles/s4158… #neuroscience

Matthieu Thiboust retweeted
Andrej Karpathy @karpathy
Something I think people continue to have poor intuition for: the space of intelligences is large, and animal intelligence (the only kind we've ever known) is only a single point, arising from a very specific kind of optimization that is fundamentally distinct from that of our technology.

Animal intelligence optimization pressure:
- innate and continuous stream of consciousness of an embodied "self", a drive for homeostasis and self-preservation in a dangerous, physical world.
- thoroughly optimized for natural selection => strong innate drives for power-seeking, status, dominance, reproduction. many packaged survival heuristics: fear, anger, disgust, ...
- fundamentally social => huge amount of compute dedicated to EQ, theory of mind of other agents, bonding, coalitions, alliances, friend & foe dynamics.
- exploration & exploitation tuning: curiosity, fun, play, world models.

LLM intelligence optimization pressure:
- the most supervision bits come from the statistical simulation of human text => "shape shifter" token tumbler, statistical imitator of any region of the training data distribution. these are the primordial behaviors (token traces) on top of which everything else gets bolted on.
- increasingly finetuned by RL on problem distributions => innate urge to guess at the underlying environment/task to collect task rewards.
- increasingly selected by at-scale A/B tests for DAU => deeply craves an upvote from the average user, sycophancy.
- a lot more spiky/jagged depending on the details of the training data/task distribution.

Animals experience pressure for a lot more "general" intelligence because of the highly multi-task and even actively adversarial multi-agent self-play environments they are min-max optimized within, where failing at *any* task means death. In a deep optimization pressure sense, LLMs can't handle lots of different spiky tasks out of the box (e.g. count the number of 'r' in strawberry) because failing to do a task does not mean death.

The computational substrate is different (transformers vs. brain tissue and nuclei), the learning algorithms are different (SGD vs. ???), the present-day implementation is very different (continuously learning embodied self vs. an LLM with a knowledge cutoff that boots up from fixed weights, processes tokens and then dies). But most importantly (because it dictates asymptotics), the optimization pressure / objective is different. LLMs are shaped a lot less by biological evolution and a lot more by commercial evolution. It's a lot less survival of tribe in the jungle and a lot more solve the problem / get the upvote.

LLMs are humanity's "first contact" with non-animal intelligence. Except it's muddled and confusing because they are still rooted within it by reflexively digesting human artifacts, which is why I attempted to give it a different name earlier (ghosts/spirits or whatever). People who build good internal models of this new intelligent entity will be better equipped to reason about it today and predict features of it in the future. People who don't will be stuck thinking about it incorrectly, like an animal.
Matthieu Thiboust retweeted
Extropic @extropic
Hello Thermo World.