Pathway (www.pathway.com)

233 posts


@pathway_com

Pathway builds the first post-transformer frontier model that solves AI’s fundamental memory problem.

Palo Alto, CA · Joined June 2011
72 Following · 880 Followers
Pathway (www.pathway.com) retweeted
Zuzanna Stamirowska @zuzanna_pathway
@signulll Memory is key. And it is fascinating like nothing else. I would challenge if it's still art and the "no one" ;-) We did. In BDH we have coupled memory directly with the attention mechanism (through synaptic plasticity). arxiv.org/pdf/2509.26507 [btw. memory =/= text file XD]
Pathway (www.pathway.com) retweeted
Zuzanna Stamirowska @zuzanna_pathway
What tech will change our lives in 2026, and how? The Wall Street Journal, as always, has one of the best-informed perspectives. We are excited to see Pathway and the post-transformer era being part of their predictions! wsj.com/tech/ai/tech-p…
Pathway (www.pathway.com) retweeted
Mila - Institut québécois d'IA
We recently had the pleasure of speaking with Jan Chorowski, a former Mila student who is now the co-founder and CTO of Pathway, an AI-powered data processing platform. He shares his journey from academic research to entrepreneurship, reflects on his time at Mila, and talks about his vision for the future of artificial intelligence. An inspiring conversation you won’t want to miss! mila.quebec/en/news/spotli… #milaalumni
Pathway (www.pathway.com) @pathway_com
🧠 The story also broke the news on our integration with @Nvidia’s AI infrastructure and @AWS cloud and AI tech stack. We’ll be at #AWSreInvent all week if you want to meet and learn more!
Pathway (www.pathway.com) retweeted
Zuzanna Stamirowska @zuzanna_pathway
“Memory is key to intelligence and efficient reasoning.” @Steve_Rosenbush at The @WSJ covered how @pathway_com is rethinking AI from the ground up and our newly announced integration with @Nvidia and @AWS - not just scaling models, but evolving intelligence itself. Dragon Hatchling (BDH) represents the beginning of the Post-Transformer Era, read the full piece: wsj.com/articles/an-ai…
Pathway (www.pathway.com) retweeted
Hervé Cuviliez @hervecuviliez
Check out @pathway_com's Baby Dragon Hatchling (BDH), which paves the way for autonomous AI.
Pathway (www.pathway.com) retweeted
Marvin von Hagen @marvinvonhagen
avg sidewalk in downtown palo alto 🌞
Pathway (www.pathway.com) @pathway_com
Our breakthrough post-Transformer architecture (#BDH) shows that AI can now self-evolve and reorganize itself, inspired by how our biological neurons learn and adapt. It’s a glimpse into what the next generation of intelligence should look like.
Pathway (www.pathway.com) @pathway_com
"The implications could be both technical and economic. Retraining large models costs companies billions each year in computing power and energy. A system that learns continuously could make AI development cheaper, faster, and more sustainable."
Pathway (www.pathway.com) retweeted
DailyPapers @HuggingPapers
A missing link between Transformers and the brain? 🧠 Dragon Hatchling (BDH) is a new LLM architecture based on a scale-free, biologically-inspired network of locally-interacting neuron particles. It rivals GPT2 performance, but is designed for interpretability.
Pathway (www.pathway.com) retweeted
Rohan Paul @rohanpaul_ai
Wow. 🧠 The paper presents Dragon Hatchling, a brain-inspired language model that matches Transformers using local neuron rules for reasoning and memory.

It links brain-like local rules to Transformer-level performance at 10M to 1B scale. It makes internals easier to inspect because memory sits on specific neuron pairs and activations are sparse and often monosemantic. You get reliable long reasoning and clearer debugging, because the model exposes which links carry which concepts in context.

The problem it tackles is long reasoning: models often fail when the task runs longer than training. The model is a graph of many simple neurons with excitatory and inhibitory circuits and simple thresholds. It stores short-term memory in the strengths of connections using Hebbian learning, so links that fire together get stronger. Those local rules behave like attention, but at the level of single neurons and their links.

They also present BDH-GPU, a GPU-friendly version that implements the same behavior as an attention model with a running state. This design produces positive sparse activations, so only a small fraction of neurons fire at a time. Many units become monosemantic, meaning one unit often maps to one clear concept, even in models under 100M parameters.

The theory ties the approach to distributed computing and explains how longer reasoning can scale with model size and time.

Paper: arxiv.org/abs/2509.26507
Paper title: "The Dragon Hatchling: The Missing Link between the Transformer and Models of the Brain"
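The mechanism described in the tweet above, short-term memory stored on neuron pairs with attention-like reads, can be made concrete in a few lines. Below is a minimal toy sketch, not the BDH implementation: a synaptic matrix `sigma` serves as running state, reads are kept positive and sparse with a ReLU, and a Hebbian outer-product update strengthens co-active pairs. The decay term, learning rate, and sizes are illustrative assumptions.

```python
import numpy as np

def hebbian_step(sigma, x, eta=0.1, decay=0.99):
    """One toy token step: read through the synaptic state, then apply a
    Hebbian update so that links that fire together get stronger.

    sigma: (n, n) array of synaptic strengths (the short-term memory)
    x:     (n,)   current neuron activations
    """
    y = np.maximum(sigma @ x, 0.0)                # positive, sparse read-out (ReLU)
    sigma = decay * sigma + eta * np.outer(y, x)  # Hebbian strengthening, with decay
    return sigma, y

# Toy usage: stream a few activation vectors through the running state.
n = 8
sigma = np.zeros((n, n))
for x in np.maximum(np.random.randn(5, n), 0.0):
    sigma, y = hebbian_step(sigma, x)
```

Note how the state update plays the role the tweet attributes to attention: what the model "remembers" lives on specific neuron pairs in `sigma`, not in a growing token window.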
Pathway (www.pathway.com) retweeted
Zbigniew Lukasiak
Pathway just published Dragon Hatchling (BDH), and I’ve been digging into the paper. It’s a new LLM architecture where continuous learning is native—via synaptic plasticity and Hebbian updates—rather than bolted on with scaffolds. Technical breakdown ⤵️
Pathway (www.pathway.com) retweeted
Zbigniew Lukasiak
Continuous learning doesn’t have to live inside the LLM; today it’s a system-level hack (LLMs + memory + tools). But the bitter lesson says a data-scaled meta-learner that learns to learn will beat our hand-engineered scaffolds.
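For contrast with the Hebbian sketch above, here is a minimal sketch of the scaffold-style setup the tweet calls a system-level hack: the model's weights never change, and "memory" is an external store stitched into the prompt at query time. All names here (MemoryStore, llm, embed) are hypothetical placeholders, not a real API.

```python
import numpy as np

class MemoryStore:
    """External memory bolted onto a frozen model: nothing is learned,
    items are just stored and retrieved by embedding similarity."""
    def __init__(self):
        self.embs, self.texts = [], []

    def add(self, emb, text):
        self.embs.append(emb)
        self.texts.append(text)

    def top_k(self, query_emb, k=3):
        sims = np.array(self.embs) @ query_emb           # similarity scores
        return [self.texts[i] for i in np.argsort(-sims)[:k]]

def answer(question, store, llm, embed):
    # llm and embed are placeholder callables standing in for any model and embedder
    context = "\n".join(store.top_k(embed(question)))
    return llm(f"Context:\n{context}\n\nQuestion: {question}")
```

The contrast with the earlier sketch is the point of the tweet: here the learning signal never reaches the model itself, whereas in a plasticity-based architecture the same interaction would update the synaptic state directly.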