Randy Castleman

4.5K posts

@rcastleman

Private investment in emerging technologies

U.S. · Joined April 2008
1.2K Following · 3.4K Followers
Pinned Tweet
Randy Castleman@rcastleman·
As a former president of a single family office, a venture GP, and an allocator to PE, I've long thought family offices have unique advantages and challenges in direct investing, set out briefly here in the new Journal of the Global Family Office Community bit.ly/2Z8qFiy
Randy Castleman retweeted
Michael Levin@drmichaellevin·
Tim, thanks very much for the conversation! 🙏
Tim Ferriss@tferriss

NEW podcast episode is up! Dr. Michael Levin — Reprogramming Bioelectricity, Updating “Software” for Anti-Aging, Treating Cancer Without Drugs, Cognition of Cells, and Much More Dr. Michael Levin (@drmichaellevin) is the Vannevar Bush Distinguished Professor of Biology at Tufts University and director of the Allen Discovery Center. His background is in computer science and biology, and his group works at the intersection of developmental biophysics, computer science, and cognitive science. He is primarily interested in how intelligence self-organizes in a diverse range of natural, engineered, and hybrid embodiments. Levin has been developing a framework for recognizing and communicating with unconventional cognitive systems. Applied to the collective intelligence of cell groups undergoing morphogenesis, these ideas have allowed the Levin Lab to develop new applications in birth defects, organ regeneration, and cancer suppression. His lab also produces synthetic life-forms (e.g., Xenobots and Anthrobots) that serve as exploration platforms for understanding the source of patterns of form and behavior in a wide range of natural, artificial, and hybrid embodied minds. Please enjoy!

Randy Castleman@rcastleman·
Carlos E. Perez@IntuitMachine

What if I told you that between a slime mold and ChatGPT, there's an entire universe of possible minds that have never existed? Not sci-fi. Not speculation. A new framework just mapped the "cognition space"—and the voids are staggering. Let me show you what we're missing. 🧵

Here's the thing about studying intelligence: We've been asking the wrong question. Not "what IS cognition?" but "what kinds of cognition are POSSIBLE?" Cells can learn. Slime molds can solve mazes. AI can write essays. But there's no map showing how these fit together—until now.

The researchers did something brilliant. Instead of defining cognition (which always fails), they borrowed a trick from evolutionary biology: morphospaces. Think of it like this: map ALL possible body plans for animals, then see which ones actually exist. The gaps tell you as much as what's there.

The Visual Reveal: They built THREE cognition spaces:
- Basal cognition (no neurons needed)
- Neural cognition (brains, AI, swarms)
- Human-AI hybrids (the new frontier)
Each space is defined by dimensions like complexity, agency, and interaction depth. And here's what shocked them...

The occupation is wildly uneven. Tight clusters of existing minds separated by VAST empty regions. Natural systems huddle in one corner. Artificial systems in another. The voids aren't random. They're revealing something profound about the limits of evolution vs. engineering.

Let's start with the simplest minds. A slime mold—literally a single-celled blob—can:
- Learn from experience
- Solve shortest-path problems
- Make trade-offs between speed and accuracy
No brain. No neurons. How? Morphological computation: its BODY is the computer.

But here's where it gets wild. When you put that slime mold on a human-designed graph (like a maze), you create a HYBRID cognitive system. The mold's embodied dynamics + your imposed boundary conditions = emergent problem-solving neither could do alone. This is hybrid cognition in its rawest form.
The math behind this is elegant. The slime mold minimizes a "Lagrangian"—balancing transport cost against network structure. It's not "thinking" about optimization. The solution emerges from physics + constraints. The graph doesn't compute. The mold doesn't plan. Together? They solve.

Now move up to brains and AI. The neural cognition space reveals something uncomfortable: There's an "agency gap." Biological agents (even simple ones) maintain themselves. They act for their OWN survival. Most AI? Externally motivated. Pausable. Resettable.

Here's a formal way to think about it: Agency = how much your CHOICES matter for your CONTINUED EXISTENCE. For a bacterium: high. Wrong move = death. For a chess AI: zero. It doesn't care if you unplug it. This gap is why AI feels fundamentally different from life. But that gap? It might not be permanent.

Because the third space—human-AI hybrids—is where things get genuinely unpredictable. And the researchers identify something they call "the humanbot." (Yes, it's as concerning as it sounds.) This space maps interactions between humans and AI along three axes:
- AI cognitive complexity
- Human feedback control
- Depth of human-AI exchange
Different regions = different types of coupling. Some healthy. Some... not.

Three Classes of Hybrids:
- Instrumental hybrids: Tools we control (like autocorrect)
- Cooperative hybrids: Partners we coordinate with (like Watson helping doctors)
- Integrated hybrids: Systems where human and AI cognition blur together
That last category is where the "humanbot" lives. An integrated hybrid with WEAK human feedback control. Think: someone who can't function without their AI assistant. Who outsources not just memory but judgment. Who trusts the model's framing over their own. The cognition is distributed. But the human isn't steering anymore.

The Coevolution Twist: And here's the kicker: These systems aren't static. They're COEVOLVING. Your interactions train the AI. The AI shapes your thinking.
You adapt to each other. For the first time in history, memes (ideas) can evolve in BOTH biological and silicon substrates simultaneously. The paper includes equations for this. Meme propagation through human-LLM networks. Different retention rates (humans forget fast, LLMs remember everything). Different mutation pressures. The result? An evolutionary dynamic we've never seen before. And it's already happening.

The Voids as Opportunities: Remember those empty regions in the maps? They're not impossible. They're UNREALIZED. Evolution is conservative. Engineering is limited by our imaginations. But hybrid systems—living matter + designed constraints—might let us explore those voids.

Case in point: xenobots. Living robots made from frog cells, designed by AI, assembled by humans. They exist in a region of cognition space that evolution never visited and pure engineering couldn't reach. Proof that the voids are accessible.

There's a beautiful unifying principle here: Reservoir Computing. Any rich dynamical system can do computation if you read it out correctly. Slime molds, amoebae, even engineered cell cultures—they're all running RC. Biology has been doing this for billions of years.

This changes how we should measure intelligence. Not just: "How many parameters?" But: "What region of cognition space does this occupy? What's its agency? How does it handle embodiment? What happens in hybrid mode?" A morphospace perspective reveals trade-offs we miss with single metrics.

For anyone building AI: Scaling isn't the only path forward. Hybrids—bio-silicon, human-AI, morphology-computation—might reach capabilities that pure digital systems can't. The voids in the map are research opportunities worth billions.

Now, let me challenge this: The paper assumes cognition is fundamentally about information processing. But what if subjective experience (qualia) is essential? Then all these maps might just be tracking unconscious reflex, not "real" minds.
The voids could be unbridgeable. But here's why I think that's wrong: The framework is EMPIRICAL, not philosophical. It asks: "Can we apply cognitive science tools to this substrate?" Turns out: yes, even for slime molds and organoids. Whether that's "really" cognition is less important than whether it's USEFUL.

So what can YOU do with this? If you're building AI:
- Consider embodiment as a feature, not a limitation
- Design for agency (even artificial versions)
- Watch for dysregulated human-AI coupling in your products
If you're curious: Ask "what void am I in?" when you use AI tools.

This paper changed how I see my own interactions with AI. Every time I use ChatGPT to think through a problem, I'm forming a temporary hybrid cognitive system. The question isn't whether that's happening. It's whether I'm maintaining feedback control—or drifting into "humanbot" territory.

Here's what this really means: The space of possible minds is VAST. Evolution explored one corner. Engineering is exploring another. But the richest territories might require BOTH—hybrid systems that combine biological agency, embodied computation, and designed constraints.

My take: In 10 years, the most interesting cognitive systems won't be "pure" anything. Not purely biological. Not purely digital. They'll be hybrids that occupy previously empty regions of cognition space. And we'll need this morphospace framework to understand them.

Which raises a final question: If we CAN build minds in those voids... Should we? The maps show us what's possible. Ethics has to tell us what's wise. And that conversation is just beginning.

The map of possible minds has vast blank spaces. Not because those minds are impossible. But because evolution never needed them, and we haven't imagined them yet. The age of hybrid cognition isn't coming. It's already here. We're just starting to see the map.
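The thread's claim that memes propagate differently across human and LLM substrates (fast forgetting vs. near-total retention) can be made concrete with a toy model. This is my own construction, not the paper's equations, and every rate below is an arbitrary assumption:

```python
# Toy two-substrate meme dynamics (illustrative sketch, not the paper's model).
# A fraction of humans (h) and LLMs (l) "hold" a meme; both substrates expose
# each other, humans forget quickly, LLMs barely forget at all.
beta, gamma_h, gamma_l = 0.3, 0.2, 0.001   # spread rate, forgetting rates (assumed)
h, l = 0.01, 0.0                            # initial fractions holding the meme
dt = 0.1
for _ in range(2000):                       # simple Euler integration
    mix = (h + l) / 2                       # exposure pressure from both substrates
    dh = beta * mix * (1 - h) - gamma_h * h # humans: adopt on exposure, forget fast
    dl = beta * mix * (1 - l) - gamma_l * l # LLMs: adopt on exposure, retain nearly all
    h += dt * dh
    l += dt * dl
print(f"steady state: humans ≈ {h:.2f}, LLMs ≈ {l:.2f}")
```

Even with identical spread rates, the retention asymmetry alone pushes the silicon substrate toward near-saturation while the human fraction equilibrates much lower, which is the qualitative point the thread makes.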
If this thread made you rethink what "thinking" means: The paper is "Cognition Spaces: Natural, Artificial, and Hybrid" (arXiv:2601.12837) It's dense but worth it. And if you're building anything in this space—bio, AI, hybrid—I'd love to hear what void you're exploring. 🧵/end
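The reservoir-computing principle the thread invokes ("any rich dynamical system can do computation if you read it out correctly") can be sketched as a minimal echo state network. This is a generic illustration of mine, not code from the paper; all sizes and rates are arbitrary choices:

```python
import numpy as np

# Minimal echo state network: a fixed random dynamical system does the
# "computing"; only a linear readout is trained on top of its states.
rng = np.random.default_rng(0)
N, T, delay = 200, 2000, 5                # reservoir size, timesteps, recall lag

u = rng.uniform(-1, 1, T)                 # random input stream
W_in = rng.uniform(-0.5, 0.5, N)          # fixed (untrained) input weights
W = rng.normal(0, 1, (N, N))              # fixed (untrained) recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9: stable dynamics

x = np.zeros(N)                           # drive the reservoir and record states
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Ridge-regression readout: recover what the input was `delay` steps ago.
X, y = states[delay:], u[:-delay]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
nmse = np.mean((X @ w_out - y) ** 2) / np.var(y)
print(f"delayed-recall NMSE: {nmse:.4f}")
```

Nothing inside the reservoir was trained, yet a linear readout extracts a memory of past inputs from its dynamics; substituting a physical medium (a slime mold, a cell culture) for the random matrix is the thread's point.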

Randy Castleman retweeted
Eric Gilliam@eric_is_weird·
BBN built the ARPAnet and autonomous vehicles, but the R&D model went out of style. Could it still work today? I spent 2025 focused on this experiment. First results are in: it's working! That's why @janellehmtam and I are raising a fund to double down 🧵 freaktakes.com/p/the-bbn-fund
Randy Castleman retweeted
Thomas F. Varley@ThosVarley·
New mathematical preprint on measuring higher-order interactions in complex systems with information theory. (Link below) 1/N
Randy Castleman@rcastleman·
Niko McCarty@NikoMcCarty

When Ed Boyden and Karl Deisseroth were developing optogenetics, they apparently sat down and wrote out all the ways they could think to control a living cell: Small molecules ... magnets? ... sound ... light ... They settled on light because it has a high spatial resolution and there are many light-sensitive proteins in nature. It is easy and cheap to buy lasers that one can shine at cells with pinpoint precision to turn them on or off. Many organisms, including single-celled ones, have also evolved clever systems to navigate based on light, or to sense prey by tracking shadows; these systems were ultimately adapted into optogenetics.

I like this story because it shows that there are many ways to interact with lifeforms. When I talk to people outside bioengineering, though, their mind naturally gravitates toward small molecules as *the* way to control cells, develop drugs, and so on. But why should that be the case?

Bioengineers are now developing incredible tools based on all kinds of physical forces, many of which I suspect will eventually become useful therapeutics. I'm particularly excited about gas vesicles, for example, which are a type of protein shell (first discovered in algae floating in a German lake) that traps gas and thus can be seen inside the body using ultrasound. There are also mechanosensitive ion channels, which open or close in response to slight mechanical disturbances (such as those triggered by soundwaves). Other groups are developing magnetogenetic tools: a way to control the functions of proteins, even inside the body, using magnets.

All this to say that life is physical, cells are made of atoms, and organisms can be modulated in many different ways. The most exciting tools often come from these non-obvious forces.

Nathan Benaich@nathanbenaich·
Alex Konrad has the latest on @ProfluentBio $106M raise in Upstarts Media and the progress we’ve made toward programming biology with frontier AI!
Alex Konrad@alexrkonrad

Startup founder Ali Madani (@thisismadani) is training his own AI models to fight disease at @ProfluentBio, using the world's largest protein dataset. Now he's raised $106M from @AltimeterCap's @jaminball and @JeffBezos. Our @UpstartsMediaCo interview: upstartsmedia.com/p/profluent-bi…

Andrej Karpathy@karpathy·
Something I think people continue to have poor intuition for: The space of intelligences is large and animal intelligence (the only kind we've ever known) is only a single point, arising from a very specific kind of optimization that is fundamentally distinct from that of our technology.

Animal intelligence optimization pressure:
- innate and continuous stream of consciousness of an embodied "self", a drive for homeostasis and self-preservation in a dangerous, physical world.
- thoroughly optimized for natural selection => strong innate drives for power-seeking, status, dominance, reproduction. many packaged survival heuristics: fear, anger, disgust, ...
- fundamentally social => huge amount of compute dedicated to EQ, theory of mind of other agents, bonding, coalitions, alliances, friend & foe dynamics.
- exploration & exploitation tuning: curiosity, fun, play, world models.

LLM intelligence optimization pressure:
- the most supervision bits come from the statistical simulation of human text => "shape shifter" token tumbler, statistical imitator of any region of the training data distribution. these are the primordial behaviors (token traces) on top of which everything else gets bolted on.
- increasingly finetuned by RL on problem distributions => innate urge to guess at the underlying environment/task to collect task rewards.
- increasingly selected by at-scale A/B tests for DAU => deeply craves an upvote from the average user, sycophancy.
- a lot more spiky/jagged depending on the details of the training data/task distribution.

Animals experience pressure for a lot more "general" intelligence because of the highly multi-task and even actively adversarial multi-agent self-play environments they are min-max optimized within, where failing at *any* task means death. In a deep optimization pressure sense, LLMs can't handle lots of different spiky tasks out of the box (e.g. count the number of 'r' in strawberry) because failing to do a task does not mean death.
The computational substrate is different (transformers vs. brain tissue and nuclei), the learning algorithms are different (SGD vs. ???), the present-day implementation is very different (continuously learning embodied self vs. an LLM with a knowledge cutoff that boots up from fixed weights, processes tokens and then dies). But most importantly (because it dictates asymptotics), the optimization pressure / objective is different. LLMs are shaped a lot less by biological evolution and a lot more by commercial evolution. It's a lot less survival of tribe in the jungle and a lot more solve the problem / get the upvote. LLMs are humanity's "first contact" with non-animal intelligence. Except it's muddled and confusing because they are still rooted within it by reflexively digesting human artifacts, which is why I attempted to give it a different name earlier (ghosts/spirits or whatever). People who build good internal models of this new intelligent entity will be better equipped to reason about it today and predict features of it in the future. People who don't will be stuck thinking about it incorrectly like an animal.
Randy Castleman retweeted
Patrick Collison@patrickc·
Over the past week, @arcinstitute published three new discoveries that I'm very proud of.

• The world's first functional AI-generated genomes. Using Evo 2 (the largest biology ML model ever trained, which Arc released in partnership with @nvidia in February), Arc scientists took advantage of the fact that Evo 2 is a generative model to produce completely new sequences for complete phage genomes. That is, they used AI to produce wholly new, never-before-seen-by-nature genomes. They experimentally synthesized these genomes and showed that these AI-generated phages actually work, killing E. coli bacteria with high efficacy.

• Germinal, an AI system for creating new antibodies. Antibody design is one of the great problems of medical biology given their obvious importance and usefulness for creating therapeutics. (Antibodies are tiny particles that help the immune system identify pathogens and other harmful intruders. See also the recent Works in Progress article on this topic: [1].) Today, designing effective antibodies is very expensive and slow. Germinal is a cheap and fast way to produce drug candidates, with success rates of up to 22%. This means that one can go from having to screen thousands of candidates in the lab to screening perhaps a few dozen. It's early, but I suspect that better methods for designing antibodies will be a very big deal for disease treatment in the coming years.

• Today, we published a paper showing that "bridge editing", which Arc scientists first introduced last year, can make precise edits in human cells that are up to 1 million base pairs long, and without relying on intrinsically unpredictable cellular repair machinery (which CRISPR requires, often leading to editing mistakes). They showed that it's possible to use this editing to cut out the DNA repeats that cause Friedreich's ataxia (a neurological disease), an approach which should also be relevant to Huntington's and other similar disorders.
One particularly cool thing about it is that it's possible to specify every nucleotide within the extended editing window, meaning that recursive bridge edits could potentially be a powerful way to reprogram even biological traits that are caused by many genetic mutations. (Genetic therapies today target single mutations.)

Arc is pretty new. Its doors opened in mid-2022, and it's now 300 people. I'm excited about these discoveries because they show that a number of our hopes in starting Arc are starting to pay off:

• AI/ML and computation are at the center of all three. That is obviously true for the first two, but the mobile genetic element behind bridge editing was also discovered as a result of a complex computational search. One of our premises in starting Arc was the belief that the intersection of software/AI and experimental wet lab biology should enable great things. (And besides requiring great computational work, all three of these also required strong wet lab work, tightly coordinated under a single physical roof.)

• We've been toying with the idea that a handful of technologies are enabling a new kind of "Turing loop" in biology: sequencing advances (including single-cell sequencing) give us new ways to read; transformers and AI give us new ways to think; and functional genomics (such as bridge editing) gives us new ways to write. This trio of discoveries spans each part of this loop, and we're hopeful that there'll be compounding returns in improving each part.

• Arc is a non-profit, which we hoped would make collaborating with others easier, since we can avoid worries about financial return. This is indeed proving important, and all three of these projects involved close partnership with others. Germinal was done in partnership with @SynBioGaoLab at Stanford; Evo 2 was trained in partnership with Nvidia. Bridge editing was jointly published with a structure from the @HNisimasu Lab at the University of Tokyo.
Arc tries to make its discoveries useful (see the Evo 2 Designer [2]) for others, and the code behind the computational projects is open source, hopefully making it easy for others to spot new opportunities for collaboration and partnership in the future. Most of all, Arc itself is an ongoing collaboration with @UCSF, @UCBerkeley, and @Stanford.

• With Arc, we wanted to enable better bottom-up and top-down work. With the fully flexible, no-strings-attached funding that we provide to investigators, we want to enable completely unexpected discoveries and avenues of investigation. With our institute initiatives (around creating a virtual cell and curing Alzheimer's), we want to bring to bear a scale and level of coordination that's usually difficult in basic science. Germinal is a "surprise" discovery that didn't involve top-down coordination, whereas Evo 2 is the result of ambitious high-level planning and funding.

• Humanity has never cured a complex disease (a category that includes most neurodegenerative diseases, most cancers, and most autoimmune diseases), and my hope is that Arc can help change this. It's also clear that AI will revolutionize biology, and I hope that Arc can effectively aggregate the ingredients needed to fully capitalize on its promise.

I'm biased, but I think some of the coolest biology in the world is currently being done at Arc. (They're always hiring if you're interested.) While I'm a cofounder of Arc, I spend almost all my time on Stripe, where we spend our time building economic infrastructure for the internet. All credit for Arc's progress should go to the remarkable scientists and staff who've made Arc their home or who've chosen to collaborate with us. (You can read more about these particular discoveries in these threads: [3], [4], [5].) I'm also very grateful to the amazing Stripe employees who've built the company that makes Arc's ongoing work possible, and to the millions of customers who've chosen to partner with Stripe.
John and I feel fortunate to be able to support Arc’s work to the extent that we do. Maybe this is reading too much into it, but I sometimes feel that there’s a commonality between @arcinstitute and @stripe. Both biology and economic infrastructure involve reasoning about complex systems with many levels of emergent effects, and in both cases building the right tools can have almost unboundedly large benefits. Even though progress in both tends to take a long time, it also feels like the next five years in both will be some of the most interesting in living memory. (If economic infrastructure is your jam, we have a whole slew of fantastic announcements coming up at Stripe Tour in New York next week. Tune in!)
Randy Castleman retweeted
Peter Fedichev@fedichev·
As you know I'm obsessed with power laws in biology, which are a consequence of fundamental principles, like energy conservation from the first law of thermodynamics. Geoffrey West showed how highly optimized biological networks—think blood vessels or respiratory systems—lead to allometric scaling. Specifically, the energy production per unit of body mass (mass-specific metabolic rate) scales as body mass (M) to the power of -0.25. This is part of what's known as Kleiber's law (or as we've dubbed it in our research, the Kleiber-West law), where whole-body basal metabolic rate scales as M^{0.75}. It's why elephants burn energy more efficiently per gram than mice, but mice live fast and die young.

What's interesting is that this same scaling pops up in something as everyday as sleep. Across mammals, daily sleep duration follows a similar power law: it decreases with body size as roughly M^{-0.25}. Smaller animals like shrews might snooze 15+ hours a day, while giants like whales get by on just a few. This is a clue that sleep is deeply tied to metabolism.

Nervous systems are energy hogs, guzzling up to 20% of our body's oxygen despite making up only 2% of our mass. In smaller creatures, those fractal-like distribution networks deliver more oxygen per cell, letting their brains run "hotter" with faster firing rates and higher energy demands. But this revved-up metabolism exhausts resources quicker, creating energy deficits that sleep likely evolved to fix. Essentially, tinier mammals burn through their neural fuel faster and need more downtime to replenish. In this view, sleep isn't just rest—it's an ancient fix for the energy trade-offs imposed by Kleiber-West scaling, ensuring that high-metabolism critters don't fry their circuits. Sure, sleep does fancy stuff today.
In humans and other mammals, it consolidates memories by pruning unnecessary synapses during REM phases and clears brain toxins via the glymphatic system, which ramps up during non-REM sleep to flush out waste like beta-amyloid.

The relation between sleep and metabolism may also have support from evolutionary history. The emergence of aerobic metabolism can be tied to the Great Oxygenation Event, roughly 2 billion years ago. The next oxygenation event (the Neoproterozoic Oxygenation Event, ~750 million years ago) set the stage for the Cambrian explosion, leading to the emergence of neural systems across species. And we have never had enough oxygen since.

The link to a great Nature paper by @RafSarnataro et al., and some practical implications of that study, are in the next comment. As usual, please like and repost - this is cool science (thank you @Alexey_Kadet for bringing this up) 1/2
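The scaling relations above can be sketched numerically. A minimal sketch: the exponents (0.75 for whole-body metabolic rate, -0.25 for mass-specific rate and sleep duration) come from the thread itself, while the function name and the 10,000x mouse-to-elephant mass ratio are illustrative assumptions, not fitted data:

```python
def scaling_factors(mass_ratio: float) -> dict:
    """Relative change in metabolic quantities when body mass is
    multiplied by `mass_ratio`, using the Kleiber-West exponents:
    whole-body BMR ~ M^0.75; mass-specific rate and daily sleep
    duration ~ M^-0.25."""
    return {
        "whole_body_rate": mass_ratio ** 0.75,
        "mass_specific_rate": mass_ratio ** -0.25,
        "sleep_duration": mass_ratio ** -0.25,
    }

# A mammal 10,000x heavier (roughly mouse -> elephant):
f = scaling_factors(10_000)
print(f)  # whole-body rate up 1000x; per-gram rate and sleep scale down to 0.1x
```

Note the built-in consistency: dividing the whole-body factor by the mass ratio (1000 / 10,000) recovers the mass-specific factor (0.1), which is just the M^{0.75} and M^{-0.25} exponents differing by exactly 1.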
Randy Castleman retweeted
Michael Levin
Michael Levin@drmichaellevin·
A new feature for our journal Bioelectricity: liebertpub.com/doi/10.1089/bi… "Wonders of Bioelectricity is a new feature of the journal. It will comprise general interest, peer-reviewed news of activities, and what may seem like surprising developments relating to bioelectricity at large. These will be chosen and assembled by science journalist Sally Adee (@Sally_Adee), author of We Are Electric. The list with links will be published once a year in the December issue of Bioelectricity, highlighting such news that emerged during the preceding 12 months. While Sally will make her own personal choices, she is open to suggestions from outside. Anyone who thinks they have come across such developments that fascinated them and they would like to share it with others, please send them to us, and we shall send them on after an initial brief assessment. The first issue of the Wonders is planned for December 2025."
Randy Castleman retweeted
The Humanoid Hub
The Humanoid Hub@TheHumanoidHub·
Uncut hour-long footage of Figure 02 autonomously transferring and flattening packages for a scanner down the line. The robot is using Figure’s Helix model, a generalist VLA that now incorporates upgrades in temporal memory and force feedback.
Randy Castleman retweeted
Michael Levin
Michael Levin@drmichaellevin·
Ever wonder what the architecture of a neural network would look like, in a novel organism that had not been through selection for specific structure and function of an embodied nervous system? Here's our #preprint with morphological, behavioral, electrophysiological, and transcriptomic analysis of a new kind of Xenobot with a nervous system: biorxiv.org/content/10.110… - the hard work of @halehf @LaurieONeill99 @mmsperry and @LPiolopez Abstract: "A great deal is known about the formation and architecture of biological neural networks in animal models, which have arrived at their current structure-function relationship through evolution by natural selection. Little is known about the development of such structure-function relationships in a scenario where neurons are allowed to grow within evolutionarily-novel, motile bodies. Previous work showed that when a piece of ectodermal tissue is excised from Xenopus embryos and allowed to develop ex vivo, it will develop into a three-dimensional (3D) mucociliary organoid, and exhibits behaviors different from those observed in tadpoles of the same age. These 'biological robots' or 'biobots' are autonomous, self-powered, and able to move through aqueous environments. Here we report a novel type of biobot that is composed of ciliated epidermis and additionally incorporates neural tissue (neurobots). We show that neural precursor cells implanted within the Xenopus skin constructs develop into mature neurons and extend processes towards the outer surface of the bot as well as among each other. These self-organized neurobots show distinct external morphology, generate more complex patterns of spontaneous movements, and are differentially affected by neuroactive drugs compared to their non-neuronal counterparts. Calcium imaging experiments show that neurons within neurobots are indeed active. 
Transcriptomics analysis of the neurobots reveals increased variability of transcript profiles, expression of a plethora of genes relating to nervous system development and function, a shift toward more ancient genes, and up-regulation of neuronal genes implicated in visual perception."