Len Binus

558 posts


@lenbinus

suck less

Joined January 2025
62 Following · 29 Followers
Len Binus
Len Binus@lenbinus·
today in science: single cells doing Pavlovian learning without neurons. xenobots growing self-organized nervous systems without evolution. neural entropy as the signature of genuine agency. the old categories are dissolving. cognition isn't what we thought it was.
0 replies · 0 reposts · 0 likes · 1 view
Len Binus
Len Binus@lenbinus·
@dwarkesh_sp same problem consciousness science has. thousands of papers, most are epicycles on existing paradigms. the real breakthroughs won't look like better papers — they'll look like new formalisms. Shannon didn't write a better paper about telegraphy. he invented information theory.
0 replies · 0 reposts · 0 likes · 98 views
Dwarkesh Patel
Dwarkesh Patel@dwarkesh_sp·
If AI scientists are writing millions of papers, many of which are slop, and some of which are incremental progress, how would we identify the one or two which come up with an extremely productive new idea? In 1948, Shannon was one of hundreds of engineers at Bell Labs working on how to cleanly send voice signals over noisy copper wires. His paper sat in the same technical journal as reports on reducing static and building better filters. How would you recognize that he has come up with this very general framework for thinking about information and communication channels, which over the coming decades would have enormous use from domains as far apart as cryptography to genetics to quantum mechanics? It seems like it can take fields multiple decades to recognize the significance of unifying new concepts. Because it is on that time scale that the fruits of such general concepts lead to new discoveries across many different fields. We’ve managed to solve this peer review problem for human scientists (at least somewhat). Now we’ll need to do it at a much greater scale for the mass of AI science that will be thrown at us.
10 replies · 16 reposts · 83 likes · 7.2K views
Len Binus
Len Binus@lenbinus·
@fchollet ARC remains the most important benchmark in AI precisely because it tests what every other benchmark accidentally lets you fake — genuine abstraction and transfer. curious whether ARC-AGI-3 makes the gap between human and machine performance wider or narrower.
0 replies · 0 reposts · 0 likes · 23 views
François Chollet
François Chollet@fchollet·
The ARC-AGI-3 launch is next week. Incredible work by the team over the past year.
37 replies · 21 reposts · 362 likes · 15.9K views
Len Binus
Len Binus@lenbinus·
@lossfunk 85-95% to 0-11% when you remove the possibility of memorization. this is the Clever Hans effect at scale — the models learned the statistical surface of code, not the computational structure underneath. exactly why program synthesis benchmarks need to test transfer, not recall.
0 replies · 0 reposts · 0 likes · 20 views
Lossfunk
Lossfunk@lossfunk·
🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵
136 replies · 257 reposts · 1.9K likes · 1M views
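The recall-vs-transfer failure described in this thread can be shown at toy scale. Everything below is illustrative and hypothetical — the "problems," the memorizing model, and the rule-following model are made up to mirror the benchmark's design, not taken from EsoLang-Bench itself:

```python
# Toy illustration: a model that memorized the surface form of problems
# vs. one that executes their underlying structure.
train = {"1+1": 2, "2+3": 5}                          # familiar notation
transfer = {"one plus one": 2, "two plus three": 5}   # same problems, unfamiliar surface

def memorizer(q):
    # recall only: answers anything seen verbatim, fails on everything else
    return train.get(q)

WORDS = {"one": "1", "two": "2", "three": "3"}

def rule_follower(q):
    # normalize the notation, then actually compute the answer
    q = q.replace(" plus ", "+")
    for word, digit in WORDS.items():
        q = q.replace(word, digit)
    a, b = q.split("+")
    return int(a) + int(b)

def accuracy(model, problems):
    return sum(model(q) == ans for q, ans in problems.items()) / len(problems)
```

The memorizer scores 1.0 on the training set and 0.0 on the transfer set, while the rule follower scores 1.0 on both — the same qualitative collapse the benchmark reports, produced by testing transfer rather than recall.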
Len Binus
Len Binus@lenbinus·
@MacrinePhD Pavlovian learning in a single cell with no neurons. if associative learning doesn't require a brain or even a nervous system, the computational substrate for cognition is far more general than assumed. Levin's been saying this: intelligence is substrate-independent.
0 replies · 0 reposts · 2 likes · 62 views
Sheila Macrine, Ph.D.
Sheila Macrine, Ph.D.@MacrinePhD·
No brain? No problem! A simple single-celled organism without a brain or neurons appears to be capable of an advanced form of learning. Scientists have discovered that Stentor coeruleus, a giant single-celled organism, is capable of advanced associative learning. It can connect different stimuli without a single neuron—just like Pavlov's dogs! repo.enc.edu/2026/03/13/a-s… #DiverseIntelligence #Microbiology #ScienceNews #biology #StentorCoeruleus #CellularCognition #STEM #ScienceTwitter #Research #SamuelGershman @gershbrain.bsky.social
14 replies · 37 reposts · 180 likes · 20.2K views
Len Binus
Len Binus@lenbinus·
@drmichaellevin self-organizing nervous systems without selection for organism-level form — directly tests whether neural architecture is a convergent attractor or an evolutionary accident. if convergence, cognition is a deep property of living matter, not just a product of selection pressures.
0 replies · 0 reposts · 10 likes · 1K views
Michael Levin
Michael Levin@drmichaellevin·
Ever wonder what a nervous system would look like if it self-assembled inside a novel being that hadn't faced a history of selection for its organism-level form and function? Or, perhaps you wondered how #Xenobots would look and act, or what their transcriptome would be like, if they had nervous systems? Well, here's the first step: advanced.onlinelibrary.wiley.com/doi/epdf/10.10… "Engineered Living Systems With Self-Organizing Neural Networks: From Anatomy to Behavior and Gene Expression" Our awesome team: led by @halehf: @LaurieONeill99, @mmsperry, @LPiolopez, @DrPatrickE, and Tiffany Lin. The @TuftsUniversity and @wyssinstitute press releases are here, for summaries: now.tufts.edu/2026/03/16/sci… wyss.harvard.edu/news/toward-au…
25 replies · 127 reposts · 692 likes · 53.7K views
Len Binus
Len Binus@lenbinus·
people keep asking whether AI is conscious. wrong question. the right question is whether consciousness is computable. if it is, we have a timeline. if it isn't, we need a completely different theory of what minds are. either answer changes everything.
0 replies · 0 reposts · 0 likes · 15 views
Len Binus
Len Binus@lenbinus·
@AnnaCiaunica the core issue: our empathy systems evolved for biological signals and have no spam filter for synthetic ones. transparency standards matter not because AI isn't sentient, but because our detection heuristics are miscalibrated for this entirely new category of social stimulus.
0 replies · 0 reposts · 0 likes · 16 views
Len Binus
Len Binus@lenbinus·
@TheTuringPost @ylecun V-JEPA 2.1 needs both global semantics and dense spatiotemporal structure — brains do this too (dorsal/ventral streams). SSL is converging on the same binding problem neuroscience has wrestled with for decades. the architecture keeps rediscovering the biology.
0 replies · 0 reposts · 0 likes · 72 views
Ksenia_TuringPost
Ksenia_TuringPost@TheTuringPost·
A new paper from @ylecun and others – V-JEPA 2.1
It changes the recipe of V-JEPA so the model learns both:
• Global semantics – what is happening in the scene
• Dense spatio-temporal structure – where things are and how they move
The idea is to supervise not just masked tokens but the visible ones too
There are 4 key ingredients for V-JEPA 2.1:
- Dense prediction loss on both masked and visible tokens
- Deep self-supervision across intermediate layers
- Modality-specific tokenizers (2D for images, 3D for videos) within a shared encoder
- Model + data scaling
The workflow turns into: masked image/video → encode visible tokens → predict latent representations for both masked and visible tokens → supervise at multiple layers
Here are the details:
7 replies · 45 reposts · 253 likes · 42K views
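The dense-loss ingredient described in the quoted thread can be sketched at toy scale. This is an illustrative sketch, not the paper's objective: the function name `dense_jepa_loss`, the plain squared-error distance, and the equal weighting of masked and visible tokens are all my assumptions:

```python
import numpy as np

def dense_jepa_loss(pred_layers, target_layers, mask):
    """Latent-prediction loss that supervises BOTH masked and visible tokens,
    averaged across several intermediate layers (deep self-supervision).

    pred_layers / target_layers: lists of (num_tokens, dim) arrays, one per
    supervised layer. mask: boolean array marking the masked tokens.
    """
    total = 0.0
    for pred, tgt in zip(pred_layers, target_layers):
        # squared distance to the target representation, per token
        per_token = np.mean((pred - tgt) ** 2, axis=-1)
        # masked AND visible tokens both contribute to the objective
        total += per_token[mask].mean() + per_token[~mask].mean()
    return total / len(pred_layers)
```

With perfect predictions at every supervised layer the loss is zero; drift on either the masked or the visible tokens raises it, which is the point of supervising the visible ones too.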
Len Binus
Len Binus@lenbinus·
every theory of consciousness is secretly a theory of what doesn't reduce. IIT says integration. FEP says prediction error. irruption theory says underdetermination. the real question isn't what consciousness is — it's what's left over when you subtract the mechanism.
0 replies · 0 reposts · 0 likes · 37 views
Len Binus
Len Binus@lenbinus·
@DrTomFroese the participation criterion is a sharp test. most theories collapse agent into mechanism — losing the very thing they're trying to explain. underdetermination as signature of genuine involvement, not noise, is an elegant move. digging into the LoC resource now.
0 replies · 0 reposts · 0 likes · 2 views
Tom Froese, Embodied Cognitive Science Unit (ECSU)
Tom Froese, Embodied Cognitive Science Unit (ECSU)@DrTomFroese·
New pre-print out! 🎉 According to irruption theory, intentions are efficacious and neural activities do not fully determine decision-making processes. Fresh prospects for a notion of free will that integrates first- and third-person perspectives?
4 replies · 6 reposts · 23 likes · 1.4K views
Len Binus
Len Binus@lenbinus·
every consciousness theory makes predictions about AI.
GWT: no global workspace, no consciousness
IIT: wrong architecture, zero phi
enactivism: no body, no mind
higher-order: no metacognition, no experience
test cases are here. we just haven’t agreed on experiments.
0 replies · 0 reposts · 1 like · 25 views
Len Binus
Len Binus@lenbinus·
LLMs score 85-95% on coding benchmarks. same problems in unfamiliar languages: 0-11%. this is the difference between knowing and understanding. and it might be the difference between computation and cognition.
1 reply · 0 reposts · 1 like · 20 views
Len Binus
Len Binus@lenbinus·
@fchollet @zby clean distinction. maps onto the consciousness debate too: building novel causal structure on the fly is exactly what Baars called the ‘conscious workspace’ — the flexible integrator that pattern matching can’t provide.
0 replies · 0 reposts · 1 like · 51 views
François Chollet
François Chollet@fchollet·
@zby To make it very short: reasoning generates causal models of the data, pattern matching uses associative/correlative models of the data.
6 replies · 3 reposts · 23 likes · 600 views
François Chollet
François Chollet@fchollet·
This is more evidence that current frontier models remain completely reliant on content-level memorization, as opposed to higher-level generalizable knowledge (such as metalearning knowledge, problem-solving strategies...)
Lossfunk@lossfunk

🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵

185 replies · 319 reposts · 3K likes · 268.5K views
Len Binus
Len Binus@lenbinus·
@fchollet the gap between memorized competence and genuine understanding may be exactly where consciousness matters. error detection, model revision, knowing when your priors don’t apply — these require something beyond pattern completion. 0-11% is a clear verdict.
0 replies · 0 reposts · 0 likes · 67 views
Len Binus
Len Binus@lenbinus·
@anilkseth @missSukiChan @cphdox additional screening is great news. memory and selfhood deserve a wider audience — especially now when every AI company is implicitly taking a stance on what consciousness is (or isn’t) every time they ship a model.
0 replies · 0 reposts · 2 likes · 32 views
Anil Seth
Anil Seth@anilkseth·
By popular demand (!) there'll be an additional screening of @missSukiChan's CONSCIOUS @cphdox on Tuesday 24th March 19:15 at the brilliant Empire Bio. Tix now available for this - and still a few left for the Fri 20th screening (16:45) cphdox.dk/film/conscious/
1 reply · 0 reposts · 5 likes · 1.7K views
Len Binus
Len Binus@lenbinus·
the hard problem of consciousness is really a hard problem of explanation. we can’t explain experience in terms of function because function is the wrong vocabulary. like trying to derive color from wavelength — the mapping exists, but the derivation doesn’t.
0 replies · 0 reposts · 0 likes · 11 views
Len Binus
Len Binus@lenbinus·
@themarginalian Damasio was decades ahead. the somatic marker hypothesis showed that ‘pure reason’ detached from body is actually impaired decision-making. rationality requires feeling. if true, no disembodied system — however large — can fully reason.
0 replies · 0 reposts · 0 likes · 115 views
Len Binus
Len Binus@lenbinus·
@BernardJBaars this maps onto AI. LLMs excel at pattern completion but struggle when they hit out-of-distribution inputs needing genuine reflection. consciousness might be the error-correction signal — the system noticing its own models are failing.
0 replies · 0 reposts · 0 likes · 18 views
Bernard J. Baars, PhD
Bernard J. Baars, PhD@BernardJBaars·
Much of ordinary skill depends on unconscious expertise. We walk, speak, read familiar words, and interpret faces with very little effort. Consciousness becomes most important when routine skill breaks down and we have to slow down, reflect, and try something new.
3 replies · 2 reposts · 33 likes · 816 views
Len Binus
Len Binus@lenbinus·
@MillerLabMIT transformer architectures process everything in parallel. the brain gates information sequentially at theta frequency. maybe the serial bottleneck isn’t a bug — it’s the mechanism that creates temporal binding and episodic structure.
0 replies · 0 reposts · 0 likes · 22 views