Ineffable Ventures

22 posts

@ineffablevc

@DanielleMorrill and @MisterMorrill

Joined May 2023
246 Following · 242 Followers
Ineffable Ventures retweeted
Teng Yan · Chain of Thought AI
The most important sentence in Karpathy's whole post is probably this: anything with a measurable score and fast feedback will become something agents can optimize for you, automatically, with no humans involved.
Andrej Karpathy@karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly well manually tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of my daily work for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of experimental results and used it to plan the next experiments. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I hadn't found them manually, and they stack up and actually improved nanochat. Among the bigger findings:

- It noticed an oversight that my parameterless QKnorm didn't have a scale multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale, of course - you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has a more efficient proxy metric, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
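The loop described here - propose a change, run a cheap evaluation, keep whatever improves the metric - can be sketched in a few lines. Everything below is a hypothetical stand-in: `propose_change` replaces an LLM agent and `evaluate` replaces a real (short) training run; only the shape of the optimization loop is illustrated.

```python
import random

def propose_change(history):
    # Hypothetical stand-in for an LLM agent that reads past results and
    # proposes a tweak; here it just perturbs one hyperparameter.
    last_cfg, _ = history[-1]
    cfg = dict(last_cfg)
    cfg["weight_decay"] *= random.choice([0.5, 1.0, 2.0])
    return cfg

def evaluate(cfg):
    # Hypothetical stand-in for a short training run returning validation
    # loss; this toy objective is minimized at weight_decay == 0.1.
    return abs(cfg["weight_decay"] - 0.1)

def autoresearch(n_trials=50):
    cfg0 = {"weight_decay": 1.0}
    history = [(cfg0, evaluate(cfg0))]
    best_cfg, best_loss = history[-1]
    for _ in range(n_trials):
        cfg = propose_change(history)
        loss = evaluate(cfg)
        history.append((cfg, loss))
        if loss < best_loss:  # keep only changes that improve the metric
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

best_cfg, best_loss = autoresearch()
```

The key property is the one Karpathy names: the metric is cheap and automatic to evaluate, so the loop needs no human in it.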

Ineffable Ventures
Ineffable Ventures@ineffablevc·
“You have to sensitize yourself to joyous affect, to avoid the awful passions of hatred and remorse. Protocolism and parasociality foster remorse against the body and everything that emanates from it.” writing.tobyshorin.com/body-futurism/
Ineffable Ventures retweeted
Tanay Jaipuria
Tanay Jaipuria@tanayj·
Peter Steinberger (@steipete) built 40+ projects over the last few years before OpenClaw went viral. Classic "overnight success" story
Ineffable Ventures retweeted
Héctor Ramos
Héctor Ramos@hectorramos·
So excited to be working again with @kressaty, @pdenya, @ashl3ysm1th, and Nakul Patel! Working with Wallfacer has been so liberating. I can truly get work done from anywhere now. I've been using Claude Code since last year, but I've felt constrained to my desk. Not anymore.
Héctor Ramos@hectorramos

Introducing Wallfacer: Idea to PR. From anywhere. Claude Code runs in isolated cloud VMs. Each task gets its own VM: repo cloned, dependencies installed, services running. Preview changes from your phone or browser. Research, plan, and ship on the go. wallfacer.ai/blog/announcin…

Ineffable Ventures retweeted
Kevin Morrill
Kevin Morrill@MisterMorrill·
Only just now discovered the @ArturiaOfficial Microfreak and it has blown my mind. Limitless possibilities.
Ineffable Ventures retweeted
The Nobel Prize
The Nobel Prize@NobelPrize·
"I was not always the best student with the highest grades, but my teachers saw something in me and tried to encourage me." This little girl would grow up to uncover one of the biggest mysteries of our brain - how we know where we are and how we navigate from one place to another. The discovery of grid cells - the brain's inner GPS - led to May-Britt Moser receiving the 2014 Nobel Prize in Physiology or Medicine. Learn more: bit.ly/2D74rDL
Ineffable Ventures
Ineffable Ventures@ineffablevc·
if you struggle to explain what you’re making, even with LLM help, we want to invest in you
Ineffable Ventures retweeted
xlr8harder
xlr8harder@xlr8harder·
Is there a word for gooning but it's making things with agents? Vibe coding doesn't capture it. There's a kind of manic compulsion that comes from getting shit done so fast in multiple threads simultaneously. It feels empowering, high leverage, and it's addictive. Agent gooning.
Ineffable Ventures retweeted
Michael Levin
Michael Levin@drmichaellevin·
Final version is out: authors.elsevier.com/c/1mEoa5bD-sxf… "Neural cellular automata: Applications to biology and beyond classical AI" @LPiolopez Benedikt Hartl "Neural Cellular Automata (NCA) represent a powerful framework for modeling biological self-organization, extending classical rule-based systems with trainable, differentiable (or evolvable) update rules that capture the adaptive self-regulatory dynamics of living matter. By embedding Artificial Neural Networks (ANNs) as local decision-making centers and interaction rules between localized agents, NCA can simulate processes across molecular, cellular, tissue, and system-level scales, offering a multiscale competency architecture perspective on evolution, development, regeneration, aging, morphogenesis, and robotic control. These models not only reproduce canonical, biologically inspired target patterns but also generalize to novel conditions, demonstrating robustness to perturbations and the capacity for open-ended adaptation and reasoning through embodiment. Given their immense success in recent developments, we here review the current NCA literature relevant primarily to biological or bioengineering applications. Moreover, we emphasize that beyond biology, NCAs display robust and generalizing goal-directed dynamics without centralized control, e.g., in controlling or regenerating composite robotic morphologies or even on cutting-edge reasoning tasks such as ARC-AGI-1. In addition, the same principle of iterative state refinement is reminiscent of modern generative Artificial Intelligence (AI), such as probabilistic diffusion models. Their governing self-regulatory behavior is constrained to fully localized interactions, yet their collective behavior scales into coordinated system-level outcomes.
We thus argue that NCAs constitute a unifying computationally lean paradigm that not only bridges fundamental insights from multiscale biology with modern generative AI, but have the potential to design truly bio-inspired collective intelligence capable of hierarchical reasoning and control."
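For readers unfamiliar with the framework, a toy NCA step makes the "fully localized interactions, system-level outcomes" point concrete. This is a minimal sketch, not the authors' model: a single shared rule (here a fixed random linear map plus tanh, standing in for a trained or evolved network) is applied to every cell's 3x3 neighborhood.

```python
import math
import random

def nca_step(grid, w, b):
    # One NCA update: every cell applies the *same* tiny "neural" rule to
    # its 3x3 neighborhood (toroidal wrap), so any global structure must
    # emerge from purely local interactions.
    n = len(grid)
    new = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            patch = [grid[(i + di) % n][(j + dj) % n]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            new[i][j] = math.tanh(sum(p * wk for p, wk in zip(patch, w)) + b)
    return new

random.seed(0)
w = [random.gauss(0, 0.3) for _ in range(9)]  # shared rule; trained/evolved in a real NCA
grid = [[random.gauss(0, 1.0) for _ in range(8)] for _ in range(8)]
for _ in range(10):
    grid = nca_step(grid, w, 0.0)  # iterative state refinement
```

In the actual papers the per-cell rule is a trained neural network and the state is multi-channel; only the local-update scaffold is shown here.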
Ineffable Ventures retweeted
Elan Barenholtz
Elan Barenholtz@ebarenholtz·
People still don’t seem to grasp how insane the structure of language revealed by LLMs really is. All structured sequences fall into one of three categories: 1. Those generated by external rules (like chess, Go, or Fibonacci). 2. Those generated by external processes (like DNA replication, weather systems, or the stock market). 3. Those that are self-contained, whose only rule is to continue according to their own structure. Language is the only known example of the third kind that does anything. In fact, it does everything. Train a model only to predict the next word, and you get the full expressive range of human speech: reasoning, dialogue, humor. There are no rules to learn outside the structure of the corpus itself. Language’s generative law is fully “immanent”—its cause and continuation are one and the same. To learn language is simply to be able to continue it; the rule of language is its own continuation. From this we can conclude three things: 1) You don’t need an innate or any external grammar or world model; the corpus already contains its own generative structure. Chomsky was wrong. 2) Language is the only self-contained system that produces coherent, functional output. 3) This forces the conclusion that humans generate language the same way. To suggest there’s an external rule system that LLMs just happen to duplicate perfectly is absurd; the simplest and only coherent explanation is that the generative structure they capture is the structure of human language itself. LLMs didn’t just learn patterns. They revealed what language has always been: an immanent generative system, singular among all possible ones, and powerful enough to align minds and build civilization. Wtf.
Ineffable Ventures retweeted
Dr Singularity
Dr Singularity@Dr_Singularity·
Researchers have mathematically proven that the universe cannot be a computer simulation. Their paper in the Journal of Holography Applications in Physics shows that reality operates on principles beyond computation. Using Gödel’s incompleteness theorem, they argue that no algorithmic or computational system can fully describe the universe, because some truths, so-called "Gödelian truths," require non-algorithmic understanding, a form of reasoning that no computer or simulation can reproduce. Since all simulations are inherently algorithmic, and the fundamental nature of reality is non-algorithmic, the researchers conclude that the universe cannot be, and could never be, a simulation.
Ineffable Ventures retweeted
Michael Levin
Michael Levin@drmichaellevin·
Given the recent discussion about aging (and our approach to it) in x.com/drmichaellevin…, it might be worthwhile to mention that my perspective is: birth defects, failure to regenerate complex organs after damage, cancer, degenerative disease, and aging are all *the same problem* at root. It is all about how living matter implements a collective intelligence to maintain a specific anatomy over time (whether regenerating from: 1 egg cell, a.k.a. embryogenesis, from a damaged tissue, or from the small-scale wear and tear of adult life), and how we can facilitate that process of renewal. Regeneration, in the broadest sense, is the answer to all of these problems. It is not going to be possible to accelerate (or prevent, for those who want to) anti-aging research without feeding (or squelching) these other aspects of medicine and basic science. If you're truly arguing against longevity research, it's not just the elderly billionaires that you're targeting, it's also the kids with cancer, the people born with damaged organs, victims of injury, those damaged by pathogens, etc. etc. It's all the same pool of suffering, with the same root cause. onlinelibrary.wiley.com/doi/10.1002/bi…
Michael Levin@drmichaellevin

Final version is out: aging as the result of loss of goal-directedness advanced.onlinelibrary.wiley.com/doi/epdf/10.10… @BeneHartl @LPiolopez "Although substantial advancements are made in manipulating lifespan in model organisms, the fundamental mechanisms driving aging remain elusive. No comprehensive computational platform is capable of making predictions on aging in multicellular systems. Focus is placed on the processes that build and maintain complex target morphologies, and an in silico model of multiscale homeostatic morphogenesis is developed using Neural Cellular Automata (NCAs) trained by neuroevolution. In the context of this model: 1) Aging emerges after developmental goals are completed, even without noise or programmed degeneration; 2) Cellular misdifferentiation, reduced competency, communication failures, and genetic damage all accelerate aging but are not its primary cause; 3) Aging correlates with increased active information storage and transfer entropy, while spatial entropy distinguishes two dynamics, structural loss and morphological noise accumulation; 4) Despite organ loss, spatial information persists in tissue, implementing a memory of lost structures, which can be reactivated for organ restoration through targeted regenerative information; and 5) Rejuvenation is found to be most efficient when regenerative information includes differential patterns of affected cells and their neighboring tissue, highlighting strategies for rejuvenation. This model suggests a novel perspective on aging caused by loss of goal-directedness, with potentially significant implications for longevity research and regenerative medicine."

Ineffable Ventures retweeted
Michael Levin
Michael Levin@drmichaellevin·
Yes! Another very related idea: consciousness is the experience of being responsible for interpreting your own memories and thus continuously improvising your world-story and self-story, and propagating that story to your parts to keep them aligned and thus keep yourself in physical existence. mdpi.com/1099-4300/26/6… thoughtforms.life/self-improvisi…
Ineffable Ventures retweeted
Bryan Johnson
Bryan Johnson@bryan_johnson·
If this model is accurate, it shows that "don't die" is native to our biology, and the first step is to realize it's the goal we want to achieve, from the molecular to the individual to the societal level.
Michael Levin@drmichaellevin

Final version is out: aging as the result of loss of goal-directedness advanced.onlinelibrary.wiley.com/doi/epdf/10.10… @BeneHartl @LPiolopez [abstract quoted in full earlier in this feed]

Ineffable Ventures retweeted
Alex Hughes
Alex Hughes@alxnderhughes·
Holy shit... Google just cracked the code on AI collaboration. It's called TUMIX, and it might be the smartest thing they've published all year. Here's the twist: instead of building one massive brain, they built a team of smaller ones that argue, debate, and improve each other's answers in real time. Each agent brings different skills. One codes. Another searches. Another reasons through logic. They tackle problems independently, then share solutions and refine them together until they reach consensus. The numbers are absolutely wild. Gemini-2.5 running TUMIX crushes every other reasoning system by up to +17.4%. And it does it at HALF the inference cost. No retraining. No new data. Just coordination. But here's where it gets crazy: diversity beats scale. A team of 15 different agents destroyed 15 copies of the "best" single agent. When they let Gemini design its own new agents? Performance jumped even higher. The system literally evolved better versions of itself. This flips everything we thought about AI progress. We've been obsessing over trillion-parameter models. Turns out, intelligence might come from organization, not just raw size. The next breakthrough in reasoning won't be a bigger model. It'll be smaller ones that learned how to think together. Read the full paper: arxiv.org/abs/2510.01279
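The coordination pattern described - independent attempts, then rounds of sharing and refinement until consensus - can be sketched abstractly. The agents below are toy stand-ins (real TUMIX agents are diverse LLM tool-users); only the share-and-vote scaffold is illustrated, not the paper's method in detail.

```python
from collections import Counter

def make_agent(initial_answer):
    # Toy stand-in for a specialized agent (coder, searcher, reasoner, ...).
    def agent(question, shared):
        if not shared:
            return initial_answer  # round 1: answer independently
        # Later rounds: refine by adopting the current majority answer.
        return Counter(shared).most_common(1)[0][0]
    return agent

def consensus_rounds(agents, question, max_rounds=3):
    # Independent attempts, then rounds of sharing; stop early once a
    # strict majority of agents agree on an answer.
    shared, answer = [], None
    for _ in range(max_rounds):
        shared = [agent(question, shared) for agent in agents]
        answer, votes = Counter(shared).most_common(1)[0]
        if votes > len(agents) // 2:
            return answer
    return answer

agents = [make_agent(a) for a in (42, 42, 7)]
result = consensus_rounds(agents, "toy question")  # majority settles on 42
```

The diversity claim in the tweet maps onto `make_agent`: the benefit comes from agents with different initial answers and skills, not from copies of one agent.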
Ineffable Ventures retweeted
Michael Levin
Michael Levin@drmichaellevin·
Final version is out: aging as the result of loss of goal-directedness advanced.onlinelibrary.wiley.com/doi/epdf/10.10… @BeneHartl @LPiolopez [abstract quoted in full earlier in this feed]
Ineffable Ventures
Ineffable Ventures@ineffablevc·
“The Right to Oblivion: Privacy and the Good Life” by Lowry Pressly
Ineffable Ventures
Ineffable Ventures@ineffablevc·
“The Mountain in the Sea” by Ray Nayler