

Kyle Sargent

@KyleSargentAI
Computer vision researcher. CS PhD @Stanford advised by @jiajunwu_cs @drfeifei, Past: AI Resident @Google, A.B. @Harvard

Someone connected LIVING BRAIN CELLS to an LLM. Cortical Labs grew 200,000 human neurons in a lab and kept them alive on a silicon chip. They taught the neurons to play Pong, then DOOM. Now someone has wired them into an LLM: real brain cells firing electrical impulses to choose every token the AI generates. You can see which channels were stimulated, and the feedback from the neurons in choosing each letter or word.


We’re releasing an early version of our new text-to-image model, and it’s already a top-three model on @arena.



Step inside Project Genie: our experimental research prototype that lets you create, edit, and explore virtual worlds. 🌎


Periodic reminder that OpenAI's annual training spend equals hundreds to thousands of DeepSeek V3 training runs and is growing exponentially, and an average experiment is much smaller than V3. How much of that do you think even *can* be human-designed? We're deep into automating AI research.

Vision-language models are getting better every day. Can we use them to improve image compression? Yes! For my internship, working with @GoogleDeepMind and @GoogleResearch, we designed VLIC, a diffusion autoencoder post-trained with VLM preferences. Our preprint is out today! A 🧵:
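To illustrate the general idea of post-training with preferences (the thread does not give VLIC's actual objective, so this is a minimal sketch under assumptions — the `vlm_score` judge is a hypothetical stand-in for a real VLM): a Bradley–Terry-style loss that pushes the model to produce the reconstruction the judge prefers.

```python
import math

def preference_loss(score_preferred: float, score_rejected: float) -> float:
    """Bradley-Terry-style preference loss: -log(sigmoid(s_w - s_l)).
    The loss shrinks as the preferred reconstruction's score exceeds
    the rejected one's."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def vlm_score(reconstruction_quality: float) -> float:
    """Hypothetical stand-in for a VLM judge that rates how faithfully
    a reconstruction matches the source image (higher = better)."""
    return reconstruction_quality

# Example: the judge prefers reconstruction A (0.9) over B (0.4).
loss = preference_loss(vlm_score(0.9), vlm_score(0.4))
print(round(loss, 4))  # a modest positive loss; 0.0 only in the limit
```

In an actual post-training loop, the two scores would come from a VLM comparing decoded reconstructions against the original image, and the loss gradient would update the autoencoder rather than being computed on fixed numbers as here.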


