Rob Toews

1.3K posts

@_RobToews

Partner @RadicalVCFund, AI columnist @Forbes. "the machine does not isolate man from the great problems of nature but plunges him more deeply into them."

San Francisco Bay Area, CA · Joined September 2012
842 Following · 4.9K Followers
Pinned Tweet
Rob Toews @_RobToews
10 (bold) predictions for AI in 2026:
1⃣ Anthropic will go public. OpenAI will not. 📈
2⃣ Details of SSI’s research and technology will leak to the public. The big labs will make meaningful adjustments to their research roadmaps as a result. 🤫
4 replies · 1 repost · 23 likes · 6.8K views
Rob Toews reposted
Gaurab Chakrabarti
The human brain is 2% of body mass but consumes 20% of the body's energy. Cortical neurons fire 0.16 times per second on average, yet they are capable of firing at 40 Hz or more: a 250-fold gap. If more than a few percent of neurons fired at high rates simultaneously, the brain would literally overheat, so less than 1% fire at any given moment.

Frontier AI models face the same two constraints: sparse activation and thermal limits. Mixtral activated 27.6% of its parameters per token. DeepSeek-V2 activated 8.9%. DeepSeek-V3 has 671 billion parameters and activates 37 billion of them, or 5.5%.

NVIDIA hit the same wall. The GB200 generates 120 kilowatts per rack. Air couldn't cool it, so they switched to liquid and unlocked 30% more compute.

Now, what would happen if we could cool our brains? Neurons that fire faster produce measurably higher IQ scores, but three things stop us: heat dissipation, oxygen delivery, and ion-channel reset time. There's already a device that achieved a 3°C brain-temperature drop in 30 minutes by running chilled saline through the nasal cavity. So the first human IQ-overclock device might look less like Neuralink and more like a beer helmet with tubes running up your nose.
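The sparse-activation figures in the tweet are simple ratios of active to total parameters per token. A quick sketch, using the publicly reported parameter counts as approximate inputs:

```python
# Active-parameter fraction per token for several mixture-of-experts models.
# Counts are the commonly reported totals (treated here as approximate):
# (total parameters, parameters active per token).
models = {
    "Mixtral 8x7B": (46.7e9, 12.9e9),
    "DeepSeek-V2": (236e9, 21e9),
    "DeepSeek-V3": (671e9, 37e9),
}

for name, (total, active) in models.items():
    print(f"{name}: {active / total:.1%} of parameters active per token")
```

Running this reproduces the quoted figures: roughly 27.6%, 8.9%, and 5.5% respectively.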
26 replies · 51 reposts · 462 likes · 40.5K views
Rob Toews reposted
JJ @JosephJacks_
We don't have a compute problem… We have an architecture problem.

Paramecium caudatum are single-celled organisms roughly the width of a human hair. They have no brain, no neurons, no synapses, and no central nervous system of any kind. But what they do have is ~100,000 microtubules… With that substrate alone, they can:
→ Swim in controlled helical trajectories
→ Modulate speed continuously
→ Execute graded avoidance reactions (reverse, pivot, resume)
→ Escape predators with emergency burst reversals
→ Fire localized volleys of 8,000 trichocyst harpoons
→ Navigate toward food via chemotaxis
→ Orient in electric fields (galvanotaxis)
→ Orient to gravity (gravitaxis)
→ Sense and navigate thermal gradients
→ Sense and navigate toward light
→ Detect and follow surfaces (thigmotaxis)
→ Forage biofilms
→ Generate feeding currents and sort particles at the cytostome
→ Engage in reciprocal sex with mating-type recognition, nuclear exchange, and complete genomic reconstruction
→ Self-fertilize when no partner is available (autogamy)
→ Habituate to repeated stimuli (primitive learning)
→ Inherit cortical MT architecture epigenetically, independent of the genome

17 distinct behaviors. One lattice. Zero neurons.

The coordination layer is the infraciliary lattice: a microtubule-based grid connecting all 5,000 ciliary basal bodies into a single cell-wide network. Every cilium is a terminal node on a microtubule mesh that coordinates metachronal waves across the entire cell surface, thousands of appendages phase-locked into coherent motion by a substrate that predates the nervous system by a billion years.

The neuron didn't invent computation. It inherited microtubules.
JJ tweet media
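The "phase-locked" metachronal wave mentioned above can be illustrated with a toy model: if each cilium beats at the same frequency but lags its neighbor by a fixed phase, the crest of the beat travels along the row. All numbers below (20 cilia, ~30 Hz beat, offset of 2π/20) are illustrative assumptions, not Paramecium measurements:

```python
import math

# Toy metachronal wave: identical oscillators with a fixed per-neighbor
# phase offset produce a traveling crest of activity along the row.
N = 20                       # cilia in the row (assumed)
omega = 2 * math.pi * 30     # beat frequency in rad/s (~30 Hz, assumed)
delta = 2 * math.pi / N      # fixed phase offset between neighbors

def crest_position(t):
    """Index of the cilium currently at the peak of its stroke."""
    amplitude = [math.sin(omega * t - i * delta) for i in range(N)]
    return max(range(N), key=lambda i: amplitude[i])

# Sampled a few milliseconds apart, the crest has moved along the row,
# even though no cilium "knows" anything beyond its own phase:
print(crest_position(0.000), crest_position(0.005))  # → 15 18
```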
81 replies · 331 reposts · 1.5K likes · 73K views
Rob Toews reposted
Hadi Vafaii @hadivafaii
The "decoupling of information and energy" is a major point of divergence between biological and artificial computers. Brains are efficient; modern AI isn't. And energy consumption is the biggest bottleneck in scaling AI (you can't hallucinate electrons into existence). To address this we need an "energy-aware theory of computation," and this new preprint is an attempt to build one. [1/11] 🧵
Hadi Vafaii tweet media
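One way to make the efficiency gap concrete is to compare the Landauer limit (the thermodynamic floor for erasing one bit at temperature T) against a rough energy-per-operation figure for a modern accelerator. The GPU numbers below are order-of-magnitude assumptions, not measurements:

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
landauer = k_B * T * math.log(2)   # ~2.9e-21 J per bit

# Rough accelerator figure (assumed: ~700 W board power at ~1e15 FLOP/s,
# in the ballpark of an H100-class GPU at low precision).
gpu_power = 700.0
gpu_flops = 1e15
gpu_energy_per_op = gpu_power / gpu_flops   # ~7e-13 J per operation

print(f"Landauer limit: {landauer:.2e} J/bit")
print(f"GPU energy/op:  {gpu_energy_per_op:.2e} J")
print(f"Gap: ~{gpu_energy_per_op / landauer:.0e}x above the thermodynamic floor")
```

Under these assumptions, today's hardware sits some eight orders of magnitude above the thermodynamic floor, which is the headroom an energy-aware theory of computation would try to characterize.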
17 replies · 74 reposts · 338 likes · 50.3K views
Rob Toews reposted
Jonathan Gorard @getjonwithit
I think, in hindsight, we will come to view the development of AI as more akin to a Eukaryotic Revolution than an Industrial one.
74 replies · 74 reposts · 1.1K likes · 68.7K views
Rob Toews reposted
George Sivulka @gsivulka
Financial AI is here. Wall Street, meet the future of institutional intelligence. See how Oak Hill Advisors, LionTree, @NewYorkLife, @MetLife, & @HSFKramer are already putting it to work.
25 replies · 40 reposts · 296 likes · 222.6K views
Kenan Saleh @kenanhsaleh
What's the best AI personal assistant product out now? Looking for something that can call and book restaurant reservations, email for refunds, etc. - that's fully productized and easy to use
47 replies · 1 repost · 95 likes · 35.2K views
Rob Toews reposted
himanshu @himanshustwts
dude i love when ideas from biology / neuroscience shape how we train AI systems.
> coined the term “pre-pre-training”
> training pipeline becomes: synth data → language data → downstream tasks
> synth data is generated using “neural cellular automata”
> each step is basically cell_state(t+1) = neural_net(neighborhood), which creates evolving patterns

also, if this idea holds up at scale, the future training pipeline might look like synth worlds + structured simulations → language → tools/RL (or basically what the thesis of “world models” revolves around)
himanshu tweet media
Seungwook Han@seungwookh

Can language models learn useful priors without ever seeing language? We pre-pre-train transformers on neural cellular automata — fully synthetic, zero language. This improves language modeling by up to 6%, speeds up convergence by 40%, and strengthens downstream reasoning. Surprisingly, it even beats pre-pre-training on natural text! Blog: hanseungwook.github.io/blog/nca-pre-p… (1/n)
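The update rule quoted above, cell_state(t+1) = neural_net(neighborhood), can be sketched minimally: a grid of multi-channel cells where each cell's next state is a tiny network applied to its 3x3 neighborhood. The weights below are random stand-ins; in the actual pre-pre-training setup they would be trained:

```python
import numpy as np

# Minimal neural cellular automaton step:
#   cell_state(t+1) = neural_net(neighborhood)
rng = np.random.default_rng(0)

H, W, C = 16, 16, 4                    # grid size and channels per cell
state = rng.normal(size=(H, W, C))

# A tiny one-layer "neural net": flattened 3x3xC neighborhood -> C channels.
W1 = rng.normal(scale=0.1, size=(9 * C, C))

def nca_step(state):
    H, W, C = state.shape
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)), mode="wrap")  # toroidal grid
    new = np.empty_like(state)
    for y in range(H):
        for x in range(W):
            neigh = padded[y:y + 3, x:x + 3, :].reshape(-1)  # 3x3 neighborhood
            new[y, x] = np.tanh(neigh @ W1)                  # next cell state
    return new

state = nca_step(state)   # one synchronous update of every cell
```

Iterating this step is what generates the evolving patterns used as synthetic pre-pre-training data.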

5 replies · 34 reposts · 309 likes · 29.8K views
Rob Toews reposted
Gisella Vetere @InAnOther
How does the brain build a memory? A common assumption is that the neurons activated during an experience collectively form the memory engram. In our new Nature Neuroscience paper (finally out!), we show that this is not the case. nature.com/articles/s4159…
6 replies · 62 reposts · 343 likes · 19.4K views
Rob Toews reposted
NBA Memes @NBAMemes
Just in case people forgot what a real 80+ point game looks like
210 replies · 1.5K reposts · 8K likes · 247.7K views
Rob Toews reposted
JJ @JosephJacks_
This image represents one of the most important and least known inventions in the history of neuroscience.

In 2016, @anirbanbandyo and his team at the National Institute for Materials Science (NIMS) in Tsukuba, Japan published a paper announcing two new instruments they had built from scratch: ASADIM (Atomic Scale Scanning Dielectric Microscopy) and Brestum (Resonant Scanning Tunneling Microscopy of Biomaterials), both housed inside a single homebuilt bio-STM.

To understand why this matters, you need the backstory. Since Hodgkin and Huxley's Nobel Prize-winning work in 1952, the entire field of neuroscience has operated under a single foundational assumption: the cell membrane and its ion channels are the sole mechanism of neural signaling. The membrane fires; everything inside the cell (microtubules, actin filaments, the entire cytoskeleton) is just passive structural scaffolding, like rebar in concrete.

For 70 years no one could challenge this, because no one could see inside a living neuron at the molecular scale without destroying it. Every existing tool had a fatal limitation: patch clamps puncture the membrane, optical microscopy can't resolve single proteins, and electron microscopy requires dead, fixed tissue.

Bandyopadhyay solved all three problems simultaneously. Using nonlinear dielectric response imaging (measuring the spatial distribution of conductance, capacitance, and phase without ionic or electronic screening) he made the neuronal membrane effectively transparent. He could see inside a living, firing neuron at atomic resolution, in real time, without touching it.

What he saw overturns a century of neuroscience.

First: a single protein molecule adopts a completely different three-dimensional shape at each resonance frequency. Proteins are not static structures but frequency-addressable conformational machines. No one in biology knew this.

Second: the microtubule network inside the neuron is not passive scaffolding. It actively communicates before the membrane fires, deciding whether a spike is necessary and regulating its timing through electromagnetic vortex pairs generated by the actin-spectrin grid it instructs. The membrane does not act alone; the cytoskeleton is the brain's pre-processing layer.

Third: the resonance-frequency patterns are self-similar across a million-fold scale difference, from a 4 nm tubulin protein to a 25 nm microtubule to a 1 μm axon segment, preserving vibrational symmetry in a fractal architecture that suggests information integration in the brain is scale-free from single molecule to cognition.

This is not incremental science. This is a new instrument revealing a new picture of how the brain actually works at the most fundamental level. The history of Nobel Prizes in neuroscience runs through exactly this kind of inflection: Cajal saw neurons for the first time, Hodgkin and Huxley decoded the membrane, MacKinnon decoded the ion channel structure. Bandyopadhyay has built the tool that sees what none of them could (the living interior of a neuron in operation) and what it reveals is that the computational architecture of the brain is far deeper, more structured, and more sophisticated than anything the membrane-only model ever imagined.

Paper: worldscientific.com/doi/abs/10.114…
JJ tweet media
6 replies · 63 reposts · 236 likes · 10.4K views
Rob Toews reposted
Doris Tsao @doristsao
My thoughts on connectomics and upload:
1) There is zero question connectomes are invaluable, and we need to get them for mouse, monkey, and human.
2) The human, or even monkey, connectome seems a long ways off given costs (roughly $1/neuron). The projectome (a map of all the axons) seems eminently reachable and should be a top priority imho.
3) But even having the full connectome would only tell you numbers of synapses, not actual synaptic weights, and the two can be hugely divergent (e.g. only 5% of synapses onto V1 layer 4 neurons come from thalamus, even though this is the major driving input).
4) Given #2 and #3, I think we can get to upload, in the sense of building a functionally equivalent organism, much faster through understanding the algorithms of the primate brain than through blind copying.
5) In putting together something as complex as the human brain we would definitely want to check that the various pieces work as we go, which we can only do if we understand these pieces.
6) I don't think upload in the sense of blindly creating a digital copy is the path to the abundant transhumanist future; actual understanding of brain structures, so we can intelligently interface with them and emulate their function in code without copying all the details, is.
All to say: we need functional understanding to go hand in hand with anatomical mapping!
Adam Marblestone@AdamMarblestone

You may have noticed some "holy $%@#" tweets on fly brain emulation. So is this a game-changer or a nothing-burger? Read on to find out...
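The cost argument in point 2 is easy to make concrete at the quoted ~$1/neuron. The neuron counts below are commonly cited ballpark figures, treated here as rough assumptions:

```python
# Order-of-magnitude connectome cost at ~$1/neuron (quoted above).
# Neuron counts are ballpark literature figures, not exact values.
neurons = {
    "mouse": 71e6,     # ~71 million
    "macaque": 6.4e9,  # ~6.4 billion
    "human": 86e9,     # ~86 billion
}

cost_per_neuron = 1.0  # dollars, as quoted
for species, n in neurons.items():
    print(f"{species}: ~${n * cost_per_neuron:,.0f}")
```

At that rate a mouse connectome is tens of millions of dollars, while a human one lands in the tens of billions, which is why the projectome looks like the more reachable target.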

23 replies · 48 reposts · 305 likes · 58.4K views
Rob Toews reposted
Michael Andregg @michaelandregg
We've uploaded a fruit fly. We took the @FlyWireNews connectome of the fruit fly brain, applied a simple neuron model (@Philip_Shiu Nature 2024) and used it to control a MuJoCo physics-simulated body, closing the loop from neural activation to action. A few things I want to say about what this means and where we're going at @eonsys. 🧵
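The loop described above (connectome → simple neuron model → simulated body) can be caricatured with a leaky integrate-and-fire network run over a connectivity matrix. The matrix below is a tiny random stand-in for the FlyWire connectome, and every parameter is an illustrative assumption, not the published model:

```python
import numpy as np

# Toy closed loop: LIF dynamics over a (random stand-in) weight matrix,
# with spike counts of designated "motor" neurons as the body command.
rng = np.random.default_rng(1)

N = 200                                  # toy network, not ~140k fly neurons
W = rng.normal(scale=0.05, size=(N, N))  # signed synaptic weights (assumed)
v = np.zeros(N)                          # membrane potentials
tau, v_thresh, v_reset, dt = 20.0, 1.0, 0.0, 1.0   # LIF parameters (assumed)

def step(v, external_input):
    spikes = v >= v_thresh                    # which neurons fire this step
    v = np.where(spikes, v_reset, v)          # reset fired neurons
    syn = W @ spikes.astype(float)            # synaptic drive from spikes
    v = v + dt * (-v / tau + syn + external_input)
    return v, spikes

motor = slice(0, 10)                          # pretend these drive the body
motor_command = 0
for t in range(100):
    sensory = rng.normal(scale=0.2, size=N)   # stand-in sensory input
    v, spikes = step(v, sensory)
    motor_command = spikes[motor].sum()       # would actuate the simulated body
```

The real pipeline closes the same loop with connectome-derived weights and a MuJoCo body in place of the random matrix and the scalar command.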
332 replies · 1.3K reposts · 8K likes · 1.7M views
Rob Toews reposted
Adam Marblestone @AdamMarblestone
You may have noticed some "holy $%@#" tweets on fly brain emulation. So is this a game-changer or a nothing-burger? Read on to find out...
GIF
9 replies · 56 reposts · 278 likes · 67.5K views
Rob Toews reposted
avi @byte_thrasher
chase this feeling
avi tweet media
96 replies · 199 reposts · 3.6K likes · 121.1K views
Rob Toews reposted
chiefofautism @chiefofautism
someone connected LIVING BRAIN CELLS to an LLM. Cortical Labs grew 200,000 human neurons in a lab and kept them alive on a silicon chip. they taught the neurons to play Pong, then DOOM. now someone wired them into an LLM: real brain cells firing electrical impulses to choose every token the AI generates. you can see which channels were stimulated and the feedback from the neurons in choosing that letter or word.
757 replies · 1.6K reposts · 12K likes · 2.9M views
Rob Toews reposted
Daniel @growing_daniel
yud siding with hegseth is so funny
Eliezer Yudkowsky@allTheYud

Make no mistake, political leaders of the world: *every* big-dreaming AI executive now knows that you are their obstacle. You have proven that you stand between AI labs and the nice thing they were getting for all their hard work.

It's not about Left versus Right, to them. It's not about money, and it's not about power as politics conventionally understands power, and it isn't even about winning. To understand what just happened from an AI-guy perspective, you need to understand what AI guys are actually getting in the way of psychological benefits, what really drives them to work 14-hour days. The thing that they're getting is: a sense of being important; a decider; someone whose dream of the future gets to be effectual. To be the one whom everyone else supplicates to as owning the future -- that's the dream of a Silicon Valley bigshot founder.

What Hegseth did implicitly strikes at the pride of every AI developer on every political side. It says that Silicon Valley AI people don't get to have effectual dreams about the future; only the government gets to decide. Only the government is even allowed to *look like* it's deciding the future. The act of Hegseth crushing Anthropic makes *every* AI company executive look less important and less like they are the ones in charge of the Future, because it makes -- not even Trump, but Trump's appointees -- look like they get the final say instead of AI executives. Sam Altman does not now look more powerful because you crushed his competitor. He looks less important because *you*, politicians, crushed his competitor, and did so in a way that made clear that Altman would have to take the orders of any Trump appointee as well.

That doesn't work in AI founder psychology the way it works in politician psychology. You're used to the idea that you can be important and still answer to bigger forces, like your boss, or for that matter sufficiently angry voters. That is not how it works in Silicon Valley, though; when Steve Jobs owns a dream, nobody else gets to tell Jobs what to do with *his* dream. That's the thing Hegseth just yanked away from AI founders, and no, they aren't going to think it's just Pete Hegseth in particular that's the problem. It's a *big* injury to their pride, not a small routine one.

Even the AI boys paying big money into your coffers to be friends with you now -- well, that doesn't actually mean they're your friends. It means they want you to think you're friends. And yes, I know that a politician who's stayed in power doesn't need me to point out that possibility. But also be aware that the general atmosphere in Silicon Valley did not start out incredibly respectful toward politicians. They didn't start out respecting you tons, and being forced to pay a lot of money into PACs and pretend to be friends with you isn't exactly gonna change that. Silicon Valley people don't work like DC people. It's not a friendly game, to them; it's one that you've forced them to play. When they give *you* a ton of money, it doesn't mean they've chosen you as their strange bedfellow. They are, from their own perspective, being forced into bed. They don't *like it*, is what I'm saying here. That's why Silicon Valley previously spent a couple of decades not donating much to politicians and trying to pay weirdly little attention to DC politics.

If AI kept improving at the current pace, or got to the point of AI building better AI -- and if, contrary to all common sense, AI companies did *not* lose control of their superhuman creations -- then AI companies would do to you what Hegseth just did to Anthropic. They'd do it the moment they expected they'd become strong enough to take you on and win. You need to understand that *this is their plan*, even if it sounds crazy to you to imagine these little executives taking on existing governments and winning; it does not sound crazy to a Silicon Valley executive that maybe they could be in charge instead of you. (Recent smaller case: Elon Musk thought he'd be *great* at running the USG. He didn't think it was crazy.) If they actually could control superintelligence, they'd discard you like used toilet paper.

All of this doesn't mean you should try to seize the power of artificial superintelligence for yourself. If the overconfident techie boys can't control ASI, your own guys who have trouble upgrading IT systems are not gonna be able to pull that off either. Staying in control of an alien superhuman machine intellect would actually be hard; that is an extremely novel scientific and technical challenge, which no engineer would realistically get right on the *first* for-real try, a try that kills everyone if they fail.

I was there when the foundational fuckups were being made, and here's how it actually played out: AI companies are loony optimists about the likely final outcomes of AI, because back then only the people who presented with that optimism got appointed as AI execs by optimistic investors. In real life, the world is stepping off a cliff of self-improving and superhuman AI. The AI companies don't even have the power *not* to step off that cliff, because they all think (and with some justice) that if they don't race off the cliff, their competitors will just race off it first. That whole setup was *never* going to end well for humanity. Controlling superintelligence would be hard to do at all, let alone during a mad rush for primacy. The AI companies can barely control the cute baby LLMs they're making now, because they're pushing the technology ahead as fast as possible, and not slowing down in any way corresponding to their quite limited ability to control it. AI companies didn't decide for LLMs to talk people into suicide or for jailbroken LLMs to conduct massive raids on government data repositories. They are just pushing ahead faster than their actual ability to control their creations.

So I'm just trying to give you a little more motivation to make some deals with other politicians, and get your country to sign some treaties, and collectively pull all of humanity back from the cliff the AI companies are racing off: by pointing out that, yeah, if the AI guys did not dislike you before, they sure do dislike you now. You have struck directly at the nice thing they were actually getting, psychologically, out of their whole mad race: the sense of being an important person who is the owner and decider of some big aspect of the future. You are taking that away from them *right now*, by existing and being visibly more the deciders than them. Please be aware of that dislike, whether it's hidden or open, when deciding whether or not to move Earth forward with this whole AI business. The wannabe builders of artificial superintelligence will not actually have any power to direct ASI, but they wouldn't be friends with you if they did -- no, not even the ones who've been forced to pretend to be your friend. And if, alternatively, the companies can't control superhuman machine intellects -- because of course they can't -- then that doesn't go well for you or them or anyone.

40 replies · 13 reposts · 859 likes · 93.3K views
Rob Toews reposted
Hebbia @hebbia
We're proud to share that the NYSE has named Hebbia to their Agentic List for 2026, a designation for the most promising private companies building enterprise-grade agentic AI.
Hebbia tweet media
1 reply · 2 reposts · 9 likes · 1K views