

Autoscience Institute

@AutoScienceAI
Automating AI research






Dario: “The biggest thing to watch is this issue of AI systems building AI systems.” Today, Autoscience is announcing $14M in funding from General Catalyst, Perplexity Fund, and Toyota Ventures to create autonomous AI research labs. Human AI researchers can't keep up with the pace of new AI research. They don't have the time to run the experiments they want. So we're building an autonomous AI lab that can. The era of human-scale R&D is over. Machine-scale development has begun. 🚀





Our AI discovery system just claimed a Silver Medal spot on a live $50k Kaggle leaderboard. 🥈 The system autonomously engineers solutions without human input.

I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:

- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (lower validation loss by the end of the run) for the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.

github.com/karpathy/autor…

Part code, part sci-fi, and a pinch of psychosis :)
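The loop described above — agent proposes a change, a short training run scores it, improvements are kept as commits — can be sketched as a simple hill-climbing driver. This is a hypothetical toy sketch, not code from the repo: `evaluate` stands in for a 5-minute training run (returning validation loss), `propose` stands in for the agent's edit to the training script, and the `history` list stands in for git commits on the feature branch.

```python
import random

def evaluate(config):
    # Toy stand-in for a 5-minute training run: returns a "validation loss".
    # (In the real setup this would launch the single-file trainer.)
    return (config["lr"] - 0.01) ** 2 + (config["depth"] - 6) ** 2 * 1e-4

def propose(config, rng):
    # Stand-in for the agent editing the training script:
    # mutate one hyperparameter at random.
    new = dict(config)
    if rng.random() < 0.5:
        new["lr"] *= rng.choice([0.5, 2.0])
    else:
        new["depth"] = max(1, new["depth"] + rng.choice([-1, 1]))
    return new

def autonomous_loop(steps=50, seed=0):
    rng = random.Random(seed)
    best = {"lr": 0.1, "depth": 2}
    best_loss = evaluate(best)
    history = []  # stands in for git commits accumulated on the branch
    for _ in range(steps):
        candidate = propose(best, rng)
        loss = evaluate(candidate)
        if loss < best_loss:  # keep only improvements ("commit")
            best, best_loss = candidate, loss
            history.append((dict(best), loss))
    return best, best_loss, history
```

Each element of `history` is one "dot" in the plot: a completed run that improved on the best validation loss so far. Comparing agents or prompts then amounts to comparing these loss trajectories.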

AI systems can write working code … but can they earn a medal on a live Kaggle competition with $50k in prizes?


