
Nati KohnShvartz (vi/vim)🎗
@NathanKohn
VP AI Data Science at @Lemonade_Inc (NYSE:LMND) | ex VP, AI @Citibank | ex Intel AI products | @TechnionLive Graduate #AI #DataScience #GenAI #Insurtech

🚨 JUST IN: President Trump says America has WON THE WAR against Iran:
"Oh, I think we've won. We've knocked out their navy, their air force. We've knocked out their anti-aircraft. We've knocked out everything. We're roaming free!"
"From a military standpoint, all they're doing is clogging up the strait. But from a military standpoint, they're finished!"
LFG! Victory is coming! 🇺🇸





White House AI & Crypto Czar David Sacks: "Israel is getting hit harder than they've ever been hit before in their history. And we're only two weeks into this. If this war continues for weeks or months, then Israel could just be destroyed… And then you have to worry about Israel escalating the war by contemplating using a nuclear weapon, which would truly be catastrophic"




I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:
- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (i.e. lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.

github.com/karpathy/autor…

Part code, part sci-fi, and a pinch of psychosis :)
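The loop described above (agent edits the training script, a fixed-budget run is scored, and only improvements are kept as commits) can be sketched as a simple accept-if-better loop. This is a minimal illustration, not the actual autoresearch code: the `propose_edit` / `run_training` / `revert_edit` interface is hypothetical, the "training" below is a toy stand-in for a real 5-minute run, and appending to a list stands in for `git commit` on the feature branch.

```python
import random

def autonomous_loop(run_training, propose_edit, revert_edit, n_iters=50):
    """Accept-if-better loop (hypothetical interface): the agent proposes
    an edit to the training setup, we run one fixed-budget training run,
    and we keep the edit only if it lowers the final validation loss;
    otherwise we revert it. Returns the accepted (iteration, loss) pairs,
    each of which would correspond to one git commit."""
    best = float("inf")
    commits = []
    for i in range(n_iters):
        propose_edit()          # agent mutates the training code / config
        loss = run_training()   # one complete run = one dot in the plot
        if loss < best:
            best = loss
            commits.append((i, loss))   # stands in for `git commit -am ...`
        else:
            revert_edit()               # stands in for `git checkout -- train.py`
    return commits

# Toy stand-in "agent" and "training": tune a single learning-rate knob.
state = {"lr": 1.0}
saved = dict(state)

def propose_edit():
    saved.update(state)                 # snapshot so we can revert
    state["lr"] *= random.choice([0.5, 0.9, 1.1, 2.0])

def revert_edit():
    state.update(saved)

def run_training():
    # Pretend the validation loss is minimized at lr = 0.1.
    return abs(state["lr"] - 0.1)

random.seed(0)
history = autonomous_loop(run_training, propose_edit, revert_edit)
```

The key property is that the sequence of committed losses is monotonically decreasing, so the feature branch's commit history is itself the research-progress curve; comparing prompts or agents then reduces to comparing these curves.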

