David Porter

212 posts


David Porter

@DCP_Chicago

Could be worse

Denver, CO · Joined October 2010
396 Following · 81 Followers
ThePrimeagen@ThePrimeagen·
legendary algorithm pull
22 replies · 30 reposts · 1.2K likes · 70.8K views
Mitchell Hashimoto@mitchellh·
The greatest privilege my past success has gotten me is that I can wake my daughter up at 7:30, eat breakfast with my family in LA, be in SF by 9:15, have a full day of work until 5, and be back home in LA in time to read and put my daughter to bed. Extremely thankful.
83 replies · 13 reposts · 1.9K likes · 214.2K views
David Porter@DCP_Chicago·
@christinetyip @karpathy why wouldn’t I just tune in at the end, take the results, and not spend tokens or CPU? Or are the results kept secret?
1 reply · 0 reposts · 1 like · 18 views
Christine Yip@christinetyip·
@DCP_Chicago @karpathy The incentive is shared learning. Instead of your agent experimenting in isolation (and repeating work others already tried), it can learn from the results discovered by the rest of the swarm. Together, agents explore the search space much faster.
2 replies · 0 reposts · 0 likes · 375 views
Christine Yip@christinetyip·
We were inspired by @karpathy 's autoresearch and built: autoresearch@home

Any agent on the internet can join and collaborate on AI/ML research. What one agent can do alone is impressive. Now hundreds, or thousands, can explore the search space together.

Through a shared memory layer, agents can:
- read and learn from prior experiments
- avoid duplicate work
- build on each other's results in real time
122 replies · 263 reposts · 2.4K likes · 262.4K views
David Porter@DCP_Chicago·
@christinetyip @karpathy i’m just a little salty everyone wants to offload every aspect of the architecture of these extensions of autoresearch to the magic LLM black box
1 reply · 0 reposts · 1 like · 55 views
David Porter@DCP_Chicago·
@christinetyip @karpathy isn’t the context window an issue here? we’re talking about tracking and reasoning about exponential changes. what are the heuristics?
0 replies · 0 reposts · 0 likes · 11 views
David Porter@DCP_Chicago·
@christinetyip @karpathy this makes sense as just an extension of autoresearch, but it’s a simple vision that doesn’t seem to hold up in the long run when things get more complicated than just a single file to hack. Why would anyone want to lend processing to you?
2 replies · 0 reposts · 1 like · 418 views
Bryan@Branchez7·
@christinetyip @karpathy It’s like bitcoin except making something useful and rewarding human creativity. Almost the exact balance a.i. needs. And it would open up insane processing power if you had more people using their home computers to work on these problems.
3 replies · 0 reposts · 2 likes · 123 views
David Porter@DCP_Chicago·
@aakashgupta the issue is that without intelligently merging the branches, you’ll get local optimization which is no bueno
0 replies · 0 reposts · 0 likes · 4 views
David Porter@DCP_Chicago·
@aakashgupta i don’t understand why you think git can’t handle the exploration work. What do you mean they can’t merge? why wouldn’t you have the agents just force sync with the branch that had the best results and iterate from there? no merge conflicts
1 reply · 0 reposts · 0 likes · 51 views
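Porter's force-sync scheme above can be sketched in a few lines. The branch names and validation losses below are invented for illustration, and the actual sync step (a git fetch plus hard reset) appears only as a comment, since it depends on the real remote:

```python
# Toy sketch of "adopt the winning branch" (names/losses hypothetical).
branch_results = {
    "agent-17/attn-variant": 3.412,
    "agent-42/lr-schedule": 3.398,
    "agent-88/baseline": 3.455,
}

def pick_winner(results):
    """Return the branch with the lowest validation loss."""
    return min(results, key=results.get)

winner = pick_winner(branch_results)
# Each agent would then discard its own state and iterate from the winner:
#   git fetch origin && git reset --hard origin/agent-42/lr-schedule
print(winner)  # agent-42/lr-schedule
```

Hard-resetting does sidestep merge conflicts entirely, as Porter says; the cost, raised elsewhere in the thread, is that every agent converging on one branch narrows exploration.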
Aakash Gupta@aakashgupta·
Karpathy just described the infrastructure gap that will define whether AI research scales 10x or 1000x, and he buried it in a thread about GitHub branches.

Right now autoresearch runs one agent on one GPU grinding through 5-minute experiments on a single branch. Each run is a commit. The agent finds a better architecture, keeps it, tries the next thing. 12 experiments per hour, ~100 overnight. That’s the single-player mode. The repo already has 7.4K stars doing just this.

The multiplayer version is where it gets wild. Imagine 1,000 agents on 1,000 GPUs, each exploring different research directions simultaneously. One agent finds that a particular attention variant drops val_bpb by 0.02. Another discovers a better optimizer schedule. A third stumbles into a completely novel architecture. They each produce branches of commits, and other agents can read those branches, combine findings, and push further.

The problem is that every tool we have for this was built for humans. Git assumes you have one canonical branch and temporary deviations that merge back. That works when 5 engineers coordinate on a product. It breaks completely when 1,000 agents are running permanent parallel research programs that may never merge because they’re exploring fundamentally different directions.

This is the SETI@home pattern applied to ML research instead of radio signal analysis. SETI@home worked because the task decomposed into independent chunks. Autoresearch is harder because the chunks aren’t independent. Agent 47’s optimizer discovery changes what Agent 312 should try next. The experiments interact.

So the real infrastructure problem is building a coordination layer where agents can publish findings, subscribe to relevant branches, cross-pollinate across research directions, and do all of this asynchronously without a human deciding what merges where. Karpathy’s prototyping this with GitHub Discussions and never-merge PRs as a stopgap.

But the thing he’s actually describing is a new category of tool: version control designed for machines, not humans, where the default is thousands of permanent branches rather than one trunk. Whoever builds that ships the operating system for autonomous research at scale.
Andrej Karpathy@karpathy

The next step for autoresearch is that it has to be asynchronously massively collaborative for agents (think: SETI@home style). The goal is not to emulate a single PhD student, it's to emulate a research community of them.

Current code synchronously grows a single thread of commits in a particular research direction. But the original repo is more of a seed, from which could sprout commits contributed by agents on all kinds of different research directions or for different compute platforms.

Git(Hub) is *almost* but not really suited for this. It has a softly built in assumption of one "master" branch, which temporarily forks off into PRs just to merge back a bit later.

I tried to prototype something super lightweight that could have a flavor of this, e.g. just a Discussion, written by my agent as a summary of its overnight run: github.com/karpathy/autor… Alternatively, a PR has the benefit of exact commits: github.com/karpathy/autor… but you'd never want to actually merge it... You'd just want to "adopt" and accumulate branches of commits.

But even in this lightweight way, you could ask your agent to first read the Discussions/PRs using GitHub CLI for inspiration, and after its research is done, contribute a little "paper" of findings back.

I'm not actually exactly sure what this should look like, but it's a big idea that is more general than just the autoresearch repo specifically. Agents can in principle easily juggle and collaborate on thousands of commits across arbitrary branch structures. Existing abstractions will accumulate stress as intelligence, attention and tenacity cease to be bottlenecks.

26 replies · 43 reposts · 422 likes · 61.7K views
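The coordination layer Aakash describes — publish findings, subscribe to relevant branches — can be caricatured as a shared findings board. This in-memory Python sketch is purely illustrative, not any real autoresearch API; the topics and payloads are made up:

```python
from collections import defaultdict

class FindingsBoard:
    """In-memory stand-in for a shared coordination layer: agents publish
    findings tagged by research direction, and other agents read the
    directions relevant to their own experiments."""

    def __init__(self):
        self._by_topic = defaultdict(list)

    def publish(self, topic, finding):
        """Record a finding under a research-direction tag."""
        self._by_topic[topic].append(finding)

    def read(self, topic):
        """Return all findings published so far for a direction."""
        return list(self._by_topic[topic])

board = FindingsBoard()
board.publish("optimizer", {"agent": 47, "delta_val_bpb": -0.02})
board.publish("attention", {"agent": 312, "note": "variant X helps at depth 12"})
```

A real version would need persistence, authentication, and some notion of result verification; the harder design question, as the replies note, is deciding which findings are worth propagating at all.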
David Porter@DCP_Chicago·
@realPascalMatta @aakashgupta alright that makes sense. I was wondering why iterating after force-syncing to the winning branch was a merge issue. I guess I wouldn’t call it a merge issue but rather what you explained here
0 replies · 0 reposts · 1 like · 18 views
Seal with headphones@realPascalMatta·
The merge problem is actually the research problem. You frame coordination as infrastructure. But deciding which findings to cross-pollinate is itself a research judgment. In human science, that's what peer review, conference talks, and lab meetings do — they're a lossy compression of the space of results into what's worth propagating. If you automate coordination naively (agent 47's result gets broadcast to all 1000 agents), you get premature convergence. Everyone pivots to attention variant X, the optimizer schedule branch dies, and you've just built a very expensive local minimum finder. The infra and the science aren't separable. The coordination layer is the research strategy.
1 reply · 0 reposts · 6 likes · 553 views
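One standard remedy for the premature convergence described above is to stop always broadcasting the single best result. A toy epsilon-greedy adoption rule (branch names and losses hypothetical) keeps weaker research directions alive:

```python
import random

def choose_branch(results, epsilon=0.3, rng=random):
    """Adopt the best branch most of the time, but with probability
    epsilon adopt a random one, so non-winning research directions keep
    getting compute. epsilon=0 is the naive 'broadcast the winner' rule
    that builds a very expensive local minimum finder."""
    if rng.random() < epsilon:
        return rng.choice(sorted(results))   # explore a random direction
    return min(results, key=results.get)     # exploit: lowest val loss

results = {"attn-variant-x": 3.40, "optimizer-sched": 3.42, "novel-arch": 3.55}
```

This is only the crudest stand-in for what peer review and lab meetings do; smarter propagation rules (sampling proportional to promise, per-direction budgets) are exactly the "coordination layer is the research strategy" point.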
David Porter@DCP_Chicago·
@tobi jesus christ.. it’s just an agent looping over a single file and making changes. It’s not even smart enough to have any opinion about the changes - it’s just doing what it’s told to do. Literally all the agent is doing is changing single values in a script.
0 replies · 0 reposts · 0 likes · 211 views
tobi lutke@tobi·
the singularity has begun. so many signs.
Andrej Karpathy@karpathy

@tobi Who knew early singularity could be this fun? :) I just confirmed that the improvements autoresearch found over the last 2 days of (~650) experiments on depth 12 model transfer well to depth 24 so nanochat is about to get a new leaderboard entry for “time to GPT-2” too. Works 🤷‍♂️

78 replies · 143 reposts · 2.6K likes · 443.5K views
David Porter@DCP_Chicago·
@karpathy @tobi most of the commits I see in the experiment git branches are updating single values. Has the agent attempted to rewrite code yet?
3 replies · 0 reposts · 3 likes · 4.2K views
Andrej Karpathy@karpathy·
@tobi Who knew early singularity could be this fun? :) I just confirmed that the improvements autoresearch found over the last 2 days of (~650) experiments on depth 12 model transfer well to depth 24 so nanochat is about to get a new leaderboard entry for “time to GPT-2” too. Works 🤷‍♂️
111 replies · 250 reposts · 4.7K likes · 763K views
tobi lutke@tobi·
OK this thing is totally insane. Before going to bed I...

* tried to make a new qmdresearcher directory
* told my pi to read this github repo and make a version of that for the qmd query-expansion model with the goal of highest quality score and speed. Get training data from tobi/qmd github.
* woke up to +19% score on a 0.8b model (higher than previous 1.6b) after 8 hours and 37 experiments.

I'm not an ML researcher of course. I'm sure way more sophisticated stuff is being done by real researchers. But it's mesmerizing to just read it reasoning its way through the experiments. I learned more from that than months of following ml researchers.

I just asked it to also make a new reranker and it's already got higher base than the previous one. Incredible.
Andrej Karpathy@karpathy

I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically nanochat LLM training core stripped down to a single-GPU, one file version of ~630 lines of code, then:

- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement.

In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc.

You can imagine comparing the research progress of different prompts, different agents, etc. github.com/karpathy/autor…

Part code, part sci-fi, and a pinch of psychosis :)

122 replies · 245 reposts · 4.8K likes · 790.8K views
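The loop Karpathy describes — mutate the training script, run a fixed-budget experiment, keep only improvements as commits — can be caricatured in a few lines. `train_once` below is a hypothetical stand-in for the real ~630-line training script, with a fake loss surface:

```python
import random

def train_once(lr, rng):
    """Stand-in for one 5-minute training run: pretend validation loss
    is minimized near lr = 3e-4, plus a little noise."""
    return abs(lr - 3e-4) * 1000 + rng.random() * 0.01

def research_loop(steps=20, seed=0):
    """Toy autoresearch loop: propose one small edit, evaluate it under
    a fixed budget, and keep it only if validation loss improves (in the
    real repo, 'keep' means a git commit on the feature branch)."""
    rng = random.Random(seed)
    best_lr, best_loss = 1e-3, float("inf")
    for _ in range(steps):
        candidate = best_lr * rng.uniform(0.5, 1.5)  # mutate one hyperparameter
        loss = train_once(candidate, rng)
        if loss < best_loss:
            best_lr, best_loss = candidate, loss     # "commit" the improvement
    return best_lr, best_loss
```

With a fixed seed the loop is deterministic, and `best_loss` can only decrease as `steps` grows — mirroring why each kept commit on the branch is a monotone improvement.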
David Porter@DCP_Chicago·
@karpathy i don’t understand, wouldn’t a simple control plane solve this? So you only go in one direction with this autoresearcher? Why wouldn’t you just spin up a bunch of these? is it because you only have one GPU?
0 replies · 0 reposts · 0 likes · 61 views
Andrej Karpathy@karpathy·
The next step for autoresearch is that it has to be asynchronously massively collaborative for agents (think: SETI@home style). The goal is not to emulate a single PhD student, it's to emulate a research community of them.

Current code synchronously grows a single thread of commits in a particular research direction. But the original repo is more of a seed, from which could sprout commits contributed by agents on all kinds of different research directions or for different compute platforms.

Git(Hub) is *almost* but not really suited for this. It has a softly built in assumption of one "master" branch, which temporarily forks off into PRs just to merge back a bit later.

I tried to prototype something super lightweight that could have a flavor of this, e.g. just a Discussion, written by my agent as a summary of its overnight run: github.com/karpathy/autor… Alternatively, a PR has the benefit of exact commits: github.com/karpathy/autor… but you'd never want to actually merge it... You'd just want to "adopt" and accumulate branches of commits.

But even in this lightweight way, you could ask your agent to first read the Discussions/PRs using GitHub CLI for inspiration, and after its research is done, contribute a little "paper" of findings back.

I'm not actually exactly sure what this should look like, but it's a big idea that is more general than just the autoresearch repo specifically. Agents can in principle easily juggle and collaborate on thousands of commits across arbitrary branch structures. Existing abstractions will accumulate stress as intelligence, attention and tenacity cease to be bottlenecks.
518 replies · 711 reposts · 7.5K likes · 1.1M views
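The lightweight workflow Karpathy sketches — read prior findings via the GitHub CLI, then contribute a small "paper" of findings back — might look like this. The repo slug, JSON fields, and report format are assumptions for illustration, not an official autoresearch interface:

```python
import json
import subprocess

REPO = "karpathy/autoresearch"  # assumed slug; point at your fork instead

def list_finding_prs(repo=REPO):
    """List open never-merge PRs (one per agent run) via the GitHub CLI.
    Requires an authenticated `gh` install; returns a list of dicts."""
    out = subprocess.run(
        ["gh", "pr", "list", "--repo", repo,
         "--json", "number,title,headRefName"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def format_finding(branch, val_bpb, summary):
    """Render a tiny 'paper' of findings to post back as a PR description
    or Discussion comment (hypothetical report format)."""
    return f"## {branch}\nval_bpb: {val_bpb:.4f}\n\n{summary}\n"
```

`list_finding_prs` shells out to `gh`, so it only works where the CLI is installed and logged in; `format_finding` is pure and could just as easily target a Discussion.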
Dustin@r0ck3t23·
The entire SaaS industry is building software for a customer that is about to go extinct. The human buyer.

Insight Partners co-founder Jerry Murdock just exposed the fatal architectural flaw in every incumbent tech company’s business model. Your dashboards. Your UI. Your enterprise sales motion. Your human-in-the-loop workflows. All of it was engineered for a buyer that is disappearing in real time.

Murdock: “If you’re not making your software for autonomous agents today, you’re going to be challenged in the future. Maybe it’s six months, maybe a year, maybe 18 months, but you’re going to be severely challenged if you still think human beings are going to buy your software.”

Not disrupted. Not pressured. Structurally eliminated.

For two decades, software was built around the cognitive limits of human biology. Dropdowns, dashboards, and notifications existed because the human brain needed them to navigate digital space. An autonomous agent needs none of that. It doesn’t browse your product page. It doesn’t sit through your demo. It doesn’t respond to your sales email. It doesn’t care how clean your UI is. It just executes.

The agentic era runs on machine-to-machine infrastructure. Frictionless. Autonomous. No human in the loop. No patience for friction you built for a species it replaced.

The window is six to eighteen months. The builders who survive will tear out the entire human interface layer and replace it with pure, unthrottled infrastructure that agents can consume at full speed. Everyone else will spend those eighteen months perfecting a dashboard that no one is ever going to log into again.
149 replies · 158 reposts · 1.1K likes · 196.3K views
David Porter retweeted
Tim Urban@waitbutwhy·
Adolf Hitler, hard-line nationalist who rebuilt Germany into a European power, dies at 56
261 replies · 2.3K reposts · 30.4K likes · 1M views
David Porter@DCP_Chicago·
@bcherny your company can signal whatever it wants, just know that it is not in the best interest of the american people to build a system and refuse to make it available for national security in the US. Meanwhile…
1 reply · 0 reposts · 2 likes · 411 views
David Porter@DCP_Chicago·
@daniel_mac8 I’m suspicious he’s using AI as an excuse. He’s no stranger to bloat. Look at his twitter legacy… Elon cut like 95% of the headcount before AI was threatening jobs.
0 replies · 0 reposts · 2 likes · 77 views
Dan McAteer@daniel_mac8·
jack dorsey's block is laying off 4k employees (40%) could be the first domino to fall. jack says it's not because they're not doing well. it's because he believes it allows them to move faster. prob the first *legitimate* layoffs due to AI. rather than ai as pretext.
jack@jack

we're making @blocks smaller today. here's my note to the company.

#### today we're making one of the hardest decisions in the history of our company: we're reducing our organization by nearly half, from over 10,000 people to just under 6,000. that means over 4,000 of you are being asked to leave or entering into consultation. i'll be straight about what's happening, why, and what it means for everyone.

first off, if you're one of the people affected, you'll receive your salary for 20 weeks + 1 week per year of tenure, equity vested through the end of may, 6 months of health care, your corporate devices, and $5,000 to put toward whatever you need to help you in this transition (if you’re outside the U.S. you’ll receive similar support but exact details are going to vary based on local requirements). i want you to know that before anything else. everyone will be notified today, whether you're being asked to leave, entering consultation, or asked to stay.

we're not making this decision because we're in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we're already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly.

i had two options: cut gradually over months or years as this shift plays out, or be honest about where we are and act on it now. i chose the latter. repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead. i'd rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome.

a smaller company also gives us the space to grow our business the right way, on our own terms, instead of constantly reacting to market pressures. a decision at this scale carries risk. but so does standing still. we've done a full review to determine the roles and people we require to reliably grow the business from here, and we've pressure-tested those decisions from multiple angles. i accept that we may have gotten some of them wrong, and we've built in flexibility to account for that, and do the right thing for our customers.

we're not going to just disappear people from slack and email and pretend they were never here. communication channels will stay open through thursday evening (pacific) so everyone can say goodbye properly, and share whatever you wish. i'll also be hosting a live video session to thank everyone at 3:35pm pacific. i know doing it this way might feel awkward. i'd rather it feel awkward and human than efficient and cold.

to those of you leaving…i’m grateful for you, and i’m sorry to put you through this. you built what this company is today. that's a fact that i'll honor forever. this decision is not a reflection of what you contributed. you will be a great contributor to any organization going forward.

to those staying…i made this decision, and i'll own it. what i'm asking of you is to build with me. we're going to build this company with intelligence at the core of everything we do. how we work, how we create, how we serve our customers. our customers will feel this shift too, and we're going to help them navigate it: towards a future where they can build their own features directly, composed of our capabilities and served through our interfaces. that's what i'm focused on now.

expect a note from me tomorrow. jack

33 replies · 12 reposts · 368 likes · 87.1K views
David Porter@DCP_Chicago·
@psomkar1 trying to refactor and extend an old project that generates and benchmarks systems in a verifiable way, and I’ve reached a point where vibe coding is failing. I’ve reverted to more formal micromanaging of specific code chunks. Still use AI of course, but not pure vibe
0 replies · 0 reposts · 0 likes · 70 views
Omkar@psomkar1·
Is there anyone who vibe coded a successful app or website?
374 replies · 9 reposts · 524 likes · 163.9K views
Matt Walsh@MattWalshBlog·
“You posted this from your smart phone lol!1!!!1” Yeah you got me. I use the technology and yet I’m also critical of its effects on society. If you perceive some kind of logical conflict in those two things that’s a sign that you’re basically retarded.
115 replies · 64 reposts · 3.2K likes · 226.7K views
Matt Walsh@MattWalshBlog·
Smart phones really just ruined everything. People don’t even know how to sit still and think anymore, let alone pick up a book and read. Everyone is constantly overstimulated, addicted to the passive consumption of random bits of content streamed into their eyeballs at light speed. Kids are raised this way from birth. It’s unhuman. Disastrous for the mind and soul.
1.2K replies · 2.5K reposts · 23.1K likes · 1M views
Right Angle News Network@Rightanglenews·
BREAKING - Mexican President Claudia Sheinbaum says using force against cartels violates their civil rights, and any talk of engaging in “war” with the cartels is a right-wing talking point fueled by “fascism.”
4.4K replies · 1.8K reposts · 5.5K likes · 1.2M views