Neil Traft (@ntraft)
9K posts

PhD student at @uvmcomplexity. Interested in ML, evolution, & self-organization. https://t.co/ja3fdtRLdM

Burlington, VT · Joined September 2008
558 Following · 537 Followers

Pinned Tweet
Neil Traft (@ntraft):
So stoked to be going to my first #NeurIPS!! It’s crazy that in 10+ years of robotics and AI, I've never been to the great Lollapalooza of Machine Learning. 🥳🎆 I’m presenting my Frankensteinian efforts to stitch together parts of different neural networks! 🧪
[image] · 2 replies · 1 retweet · 5 likes · 353 views
Neil Traft retweeted
Ettore Randazzo (@RandazzoEttore):
Excited about self-organising systems? Do you have a cool paper, either in the works or ready? Then consider applying to our Evolving Self-Organisation workshop at @GeccoConf! Submission deadline: March 27. Also check out the amazing workshop website: …-self-organisation-workshop.github.io/gecco-2026/
[GIF] · 2 replies · 28 retweets · 141 likes · 14.2K views
Josh Wolfe (@wolfejosh):
5/ The missing puzzle piece to combine them is what they call System M (Meta-control) (probably also a jab at Meta ;) ... it acts like an autonomous orchestrator: it monitors internal signals like prediction errors and uncertainty, then decides whether the AI observes, acts, or dives into memory.
[image] · 2 replies · 3 retweets · 25 likes · 5.4K views
Josh Wolfe (@wolfejosh):
1/ New paper from @ylecun et al. on an alternative approach for AI to learn more biologically... the paper basically says AI is super smart but still can't learn the way a toddler can... that's the main critique.
[image] · 28 replies · 118 retweets · 673 likes · 79.8K views
Neil Traft retweeted
Neil Traft (@ntraft):
Compare this tweet to the one that said, "this simulated fruit fly is a step toward human brain upload."
Quoting Anish Moonka (@AnishA_Moonka):

That one neuron connects to about 7,000 others. Your brain has 86 billion of them. Do the math and you get somewhere around 100 trillion connections inside your head. More connections than stars in 1,500 galaxies.

And each connection point is way more complicated than anyone expected. A Stanford lab found that every single connection contains about 1,000 tiny switches that can store memories and process information at the same time. So your brain is running roughly 100 quadrillion switches right now, while you read this sentence.

The wild part is the power bill. Your brain runs on 20 watts. That's less energy than the light in your fridge. The world's fastest supercomputer needs 20 million watts to do the same amount of raw calculation. A million times more power for the same output.

We're still nowhere close to understanding how any of this works. In October 2024, a team of hundreds of scientists finished mapping every single connection in a fruit fly's brain. It took six years and heavy AI help. That fly brain had 140,000 neurons. Yours has 86 billion.

Google and Harvard also mapped a piece of human brain last year, a speck smaller than a grain of rice. That speck alone contained 150 million connections and took 1,400 terabytes to store. The lead scientist said mapping a full human brain at that detail would produce as much data as the entire world generates in a year.

A tiny worm had its 302 brain cells mapped back in 1986. Almost 40 years later, scientists still can't fully explain how that worm's brain keeps it alive. Your brain has 86 billion of those cells, each one wired to thousands of others, each wire packed with a thousand switches, all of it humming along on less power than a lightbulb.

0 replies · 0 retweets · 0 likes · 64 views
Neil Traft (@ntraft):
@eli_lifland Would be more informative as a box-and-whisker plot, I think. Seeing the full distribution might answer your question about the elimination of jobs.
0 replies · 0 retweets · 0 likes · 25 views
Eli Lifland (@eli_lifland):
How concerned the US population is about various effects of AI
[image] · 11 replies · 23 retweets · 157 likes · 11.4K views
Thomas T.K Yiu (@legolasyiu):
@miniapeur I recommend using ChatGPT, Claude, or Gemini and then rewriting it in your own words. Make sure your academic professor or teacher allows it.
2 replies · 0 retweets · 2 likes · 568 views
Mathieu (@miniapeur):
I have a draft of a paper, and I am thinking of using a chatbot to review it (grammar, general flow/structure, mistakes, and suggestions). Does anyone have experience with this, and which chatbot is best for this?
19 replies · 2 retweets · 24 likes · 10K views
Neil Traft retweeted
Yulu Gan (@yule_gan):
Simply adding Gaussian noise to LLMs (one step—no iterations, no learning rate, no gradients) and ensembling them can achieve performance comparable to or even better than standard GRPO/PPO on math reasoning, coding, writing, and chemistry tasks. We call this algorithm RandOpt.

To verify that this is not limited to specific models, we tested it on Qwen, Llama, OLMo3, and VLMs.

What's behind this? We find that in the Gaussian search neighborhood around pretrained LLMs, diverse task experts are densely distributed — a regime we term Neural Thickets.

Paper: arxiv.org/pdf/2603.12228
Code: github.com/sunrainyg/Rand…
Website: thickets.mit.edu
[image] · 86 replies · 430 retweets · 3K likes · 667.4K views
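The perturb-and-select recipe the tweet describes (one step of Gaussian noise around pretrained weights, no gradients, then keep or ensemble the best perturbations) can be illustrated on a toy linear model standing in for an LLM. Everything here is an illustrative stand-in; the actual RandOpt procedure is in the linked paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(weights, X, y):
    """Accuracy of a linear classifier (toy stand-in for an LLM eval)."""
    preds = (X @ weights > 0).astype(int)
    return (preds == y).mean()

# Toy "pretrained" model: imperfect weights for a linear task.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(int)
pretrained = true_w + rng.normal(scale=0.5, size=8)  # imperfect pretraining

# One step of Gaussian noise around the pretrained weights: no gradients,
# no learning rate, no iterations. Score each perturbation, then ensemble
# the top performers by averaging their weights.
sigma = 0.3
candidates = [pretrained + rng.normal(scale=sigma, size=8) for _ in range(64)]
scored = sorted(candidates, key=lambda w: evaluate(w, X, y), reverse=True)
ensemble = np.mean(scored[:8], axis=0)

print(evaluate(pretrained, X, y), evaluate(ensemble, X, y))
```

The only search signal is the task score itself, which is what makes the gradient-free claim interesting: if good "experts" are dense around the pretrained point, random perturbation plus selection is enough to find them.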
Neil Traft retweeted
arXiv.org (@arxiv):
When you support arXiv, you're supporting 35 years of open science. That's...
🔬 5 million monthly users
📝 27,000 submissions every month
👩🏽‍💻 3 billion downloads
📈 2.9 million scientific articles shared
Give to arXiv today! givingday.cornell.edu/campaigns/arxiv
[image] · 11 replies · 60 retweets · 422 likes · 38.2K views
Neil Traft retweeted
Abhishek Singh (@tremblerz):
Neural Architecture Search is back!

Previous NAS approaches using RL or evolutionary algorithms required pre-defining the action space upfront. That was the fundamental bottleneck: if you have pre-defined a rigid search space, you will not discover anything interesting, by definition. Hence, the best we gained from NAS was ~10% accuracy improvements on CIFAR10 and ImageNet, and the field had effectively died by ~2020.

However, the resurrection is inevitable with agents. Agents can fork training runs into branches, change hyperparameters (even the seed, apparently), swap out loss functions, etc. Letting agents write and mutate code means the search space is effectively unbounded, and crucially, the mutations can be semantically meaningful rather than just structural perturbations.

But the opportunity is more interesting than "automating grad students." A neural net doesn't have to be a model of a task. It can be an implicit representation of the physical world you are trying to model with it. So, as long as you can capture or emulate your world with parameters or code that can be optimized, you can reap the fruits of experimentation and optimization.

The idea is not new, and almost every scientific-discovery company founded in the last 2 years has been built on a similar premise. However, only in the last few months have coding agents gotten reliable enough to do this over a long horizon. This means that in the next few months, we will see signs that this is working.
Quoting Andrej Karpathy (@karpathy):

The next step for autoresearch is that it has to be asynchronously, massively collaborative for agents (think: SETI@home style). The goal is not to emulate a single PhD student; it's to emulate a research community of them.

Current code synchronously grows a single thread of commits in a particular research direction. But the original repo is more of a seed, from which could sprout commits contributed by agents on all kinds of different research directions or for different compute platforms. Git(Hub) is *almost* but not really suited for this. It has a softly built-in assumption of one "master" branch, which temporarily forks off into PRs just to merge back a bit later.

I tried to prototype something super lightweight that could have a flavor of this, e.g. just a Discussion, written by my agent as a summary of its overnight run: github.com/karpathy/autor… Alternatively, a PR has the benefit of exact commits: github.com/karpathy/autor… but you'd never want to actually merge it... You'd just want to "adopt" and accumulate branches of commits. But even in this lightweight way, you could ask your agent to first read the Discussions/PRs using the GitHub CLI for inspiration, and after its research is done, contribute a little "paper" of findings back.

I'm not actually exactly sure what this should look like, but it's a big idea that is more general than just the autoresearch repo specifically. Agents can in principle easily juggle and collaborate on thousands of commits across arbitrary branch structures. Existing abstractions will accumulate stress as intelligence, attention and tenacity cease to be bottlenecks.

1 reply · 2 retweets · 4 likes · 297 views
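The fork-mutate-select loop that agentic NAS relies on can be sketched as a toy population search over a training config. The fields, fitness function, and mutation rules here are all hypothetical stand-ins; real agent systems mutate code and loss functions, not just two hyperparameters, which is exactly the unbounded-search-space point the tweet makes.

```python
import random

random.seed(0)

def fitness(cfg):
    """Stand-in for running a training job: score peaks at lr=0.1, width=64."""
    return -(cfg["lr"] - 0.1) ** 2 - ((cfg["width"] - 64) / 64) ** 2

def mutate(cfg):
    """An agent-style mutation: fork the config and rewrite one field."""
    child = dict(cfg)
    if random.random() < 0.5:
        child["lr"] *= random.choice([0.5, 2.0])
    else:
        child["width"] = max(1, child["width"] + random.choice([-16, 16]))
    return child

# Population-based search: fork runs, mutate them, keep the best branches.
population = [{"lr": 1.0, "width": 16}]
for _ in range(30):
    population += [mutate(random.choice(population)) for _ in range(4)]
    population = sorted(population, key=fitness, reverse=True)[:4]

best = population[0]
print(best, fitness(best))
```

Because survivors stay in the population, the best fitness is monotone non-decreasing across generations, a simple (1+λ)-style evolutionary loop rather than anything agent-specific.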
Neil Traft (@ntraft):
@michaelandregg Why would your copy willingly serve you like a digital slave, rather than live out their own life?
0 replies · 0 retweets · 3 likes · 195 views
Michael Andregg (@michaelandregg):
We're entering an era of artificial superintelligence. The biggest question isn't whether ASI will arrive — it's what form it takes and who gets to participate in it. Right now the default path is: a few labs build opaque AI systems, and the rest of humanity hopes they're aligned. I think there's a better option.

A successful hi-fi upload should feel like you. A you that is robust, free from illness and death; editable; able to run faster than real time and keep up with AI (transistors are a billion times faster than neurons); and, most importantly, aligned with your values, memories, relationships, and moral intuitions.
14 replies · 22 retweets · 382 likes · 47.2K views
Michael Andregg (@michaelandregg):
We've uploaded a fruit fly. We took the @FlyWireNews connectome of the fruit fly brain, applied a simple neuron model (@Philip_Shiu Nature 2024) and used it to control a MuJoCo physics-simulated body, closing the loop from neural activation to action. A few things I want to say about what this means and where we're going at @eonsys. 🧵
333 replies · 1.3K retweets · 8K likes · 1.7M views
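The loop the tweet describes — fixed connectome weights, a simple neuron model, motor outputs driving a simulated body whose state feeds back into sensory neurons — can be sketched in miniature. This uses random stand-in weights, 50 neurons, and a one-dimensional "body"; the real system uses the ~140,000-neuron FlyWire connectome, the Shiu et al. neuron model, and a MuJoCo body.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "connectome": a sparse random weight matrix over 50 neurons.
N = 50
W = rng.normal(scale=0.2, size=(N, N)) * (rng.random((N, N)) < 0.1)

sensory = np.arange(5)        # first 5 neurons receive body feedback
motor = np.arange(N - 5, N)   # last 5 neurons drive the body

def step(x, sense, dt=0.01, tau=0.05):
    """Leaky rate-neuron update: tau * x' = -x + W @ relu(x) + input."""
    drive = W @ np.maximum(x, 0.0)
    drive[sensory] += sense
    return x + (dt / tau) * (-x + drive)

# Closed loop: neural activity moves the body, body state re-enters
# the network as a crude proprioceptive signal.
x = np.zeros(N)
body_pos = 0.0
for t in range(200):
    sense = np.full(5, 1.0 - body_pos)                  # tonic drive minus feedback
    x = step(x, sense)
    body_pos += 0.01 * np.maximum(x[motor], 0.0).sum()  # motor command
```

The point of the sketch is the architecture, not the dynamics: sensing, the connectome-weighted update, and actuation form one loop, so "uploading" in this sense means plugging measured connectivity into the middle step.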
Neil Traft retweeted
Ravid Shwartz Ziv (@ziv_ravid):
New paper out: AI Must Embrace Specialization via Superhuman Adaptable Intelligence. With @JudahGoldfeder, Philippe Wyder, and @ylecun. There is quite a lot of buzz around our paper, so here is my take.

Everyone's talking about AGI, but nobody agrees on what it means, and that confusion is actively hurting the field. We surveyed the most prominent definitions and mapped them along two axes: the kind of capability they refer to (learning vs. doing) and the scope (anything, anything important, anything humans can do). The result is a landscape of definitions that don't just disagree with each other; they're often internally inconsistent.

Our starting point is simple: human intelligence is not general. We are specialized creatures, shaped by evolution to excel at a narrow set of tasks critical for survival. We feel general because we can't see our own blind spots. Magnus Carlsen is the greatest human chess player ever, but compared to what's computationally achievable, he's not actually good at chess. That's not a knock on Magnus. It's a statement about the limits of human adaptation, and why anchoring AI's North Star to human-level performance is the wrong move.

We propose the term Superhuman Adaptable Intelligence (SAI): intelligence that can learn to exceed humans at anything important we can do, and can also tackle tasks entirely outside the human domain. The metric isn't a growing checklist of benchmarks. It's adaptation speed: how fast can a system acquire a new skill?

This has concrete implications for how we build. SAI points toward self-supervised learning for acquiring generic knowledge from unlabeled data, and world models for planning and zero-shot transfer. It also pushes back against the current monoculture of autoregressive architectures, because specialization demands architectural diversity, not one paradigm to rule them all. Or as we put it: the AI that folds our proteins should not be the AI that folds our laundry.

This paper grew out of a conversation with Yann on our podcast, The Information Bottleneck, which led to a public exchange with @elonmusk and @demishassabis on X (not every paper can cite a Twitter feud as source material).
[image] · 15 replies · 15 retweets · 79 likes · 22.8K views
Neil Traft retweeted
Caitlin Kalinowski (@kalinowski007):
I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.
1.9K replies · 13.1K retweets · 59.3K likes · 7.6M views
Neil Traft retweeted
Kanjun 🐙 (@kanjun):
The leaked memo from Dario is really quite amazing: reddit.com/r/Anthropic/co…

"The real reasons DoW and the Trump admin do not like us is that we haven't donated to Trump (while OpenAI/Greg have donated a lot), we haven't given dictator-style praise to Trump (while Sam has), we have supported AI regulation which is against their agenda, we've told the truth about a number of AI policy issues (like job displacement), and we've actually held our red lines with integrity rather than colluding with them to produce 'safety theater' for the benefit of employees (which, I absolutely swear to you, is what literally everyone at DoW, Palantir, our political consultants, etc., assumed was the problem we were trying to solve)."

"I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI's deal with DoW as sketchy or suspicious, and see us as the heroes (we're #2 in the App Store now!). It is working on some Twitter morons, which doesn't matter, but my main worry is how to make sure it doesn't work on OpenAI employees. Due to selection effects, they're sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees."
0 replies · 2 retweets · 23 likes · 6.5K views