

alex zook
@zookae
machine learning engineer at NVIDIA (prev Unity, Blizzard; AI PhD @ Georgia Tech) all tweets my own


Today, we’re announcing the first major discovery made by our AI Scientist with the lab in the loop: a promising new treatment for dry AMD, a major cause of blindness. Our agents generated the hypotheses, designed the experiments, analyzed the data, iterated, and even made figures for the paper. The resulting manuscript is a first of its kind in the natural sciences: everything needed to write the paper was done by AI agents, apart from actually conducting the physical experiments in the lab and writing the final manuscript.

We are also introducing Robin, the multi-agent system that made this discovery and the first to fully automate the in-silico components of scientific discovery. As far as we are aware, this is the first time that hypothesis generation, experimentation, and data analysis have been joined in a closed loop, and it marks the beginning of a massive acceleration in the pace of scientific discovery driven by these agents. We will be open-sourcing the code and data next week.

Robin uses Crow, Falcon, and Finch, the agents on our platform, to generate novel hypotheses, plan experiments, and analyze data. We asked Robin to find a new treatment for dry age-related macular degeneration. Robin considered the disease mechanisms associated with dry AMD, proposed a specific experimental assay that could be used to evaluate hypotheses in the wet lab, and proposed specific molecules we could test in that assay. We tested the molecules and gave it the resulting data, which it analyzed before proposing more experiments. In the end, it identified Ripasudil, a Rho kinase (ROCK) inhibitor approved in Japan for several other diseases, which looks very promising as a potential treatment for dry AMD. From an RNA sequencing experiment it proposed, Robin also identified specific molecular mechanisms that might underlie the effects of Ripasudil in RPE cells.
To be clear, no one has proposed using ROCK inhibitors to treat dry AMD in the literature before, as far as we can find, and I think it would have been very difficult for us to come up with this hypothesis without the agents. We have also run the proposed treatment by several experts in AMD, who confirm that it is interesting and novel. Moreover, this project was fast: with Robin in hand, the entire project took about 10 weeks, far shorter than it would have taken if we had been doing all of the in-silico components ourselves.

Important caveats: we are real biologists at FutureHouse, so I want to be clear that although the discovery here is exciting, we are not claiming to have cured dry AMD. Fully validating this hypothesis as a treatment for dry AMD will require human trials, which will take much longer. Also, this discovery is cool, but it is not yet a "move 37"-style discovery. At the current rate of progress, I'm sure we will get to that level soon.

Congratulations to the team. Congratulations in particular to Robin, which generated the hypotheses, proposed the experiments, analyzed the data, and generated the figures. And major congratulations to the human team, which built Robin: @MichaelaThinks, @agreeb66, @benjamin0chang, @ludomitch, Mo Razzak, Kiki Szostkiewicz, and Angela Yiu.
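The lab-in-the-loop workflow described above (agents generate hypotheses and experiments, humans run the wet lab, agents analyze results and iterate) can be sketched in miniature. Everything below is an illustrative stub: the function names, the `Hypothesis` class, the fake effect sizes, and the threshold are all my own inventions, not FutureHouse's actual API or the real Crow/Falcon/Finch agents.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    molecule: str
    rationale: str

def generate_hypotheses(disease: str) -> list[Hypothesis]:
    # Stand-in for the hypothesis-generation agents (stubbed).
    return [Hypothesis("Ripasudil", f"ROCK inhibition may be relevant to {disease}")]

def propose_experiment(hypotheses: list[Hypothesis]) -> dict:
    # Stand-in for experiment planning: pick an assay and molecules to test.
    return {"assay": "RPE cell assay", "molecules": [h.molecule for h in hypotheses]}

def run_wet_lab(experiment: dict) -> dict:
    # The one step humans perform: physical experiments. Fake effect sizes here.
    return {m: 0.8 for m in experiment["molecules"]}

def analyze(data: dict, threshold: float = 0.5) -> list[str]:
    # Stand-in for data analysis: keep molecules above an effect-size threshold.
    return [m for m, effect in data.items() if effect > threshold]

def discovery_loop(disease: str, rounds: int = 2) -> list[str]:
    hits: list[str] = []
    for _ in range(rounds):
        hypotheses = generate_hypotheses(disease)
        experiment = propose_experiment(hypotheses)
        data = run_wet_lab(experiment)   # lab in the loop
        hits = analyze(data)             # results seed the next iteration
    return hits

print(discovery_loop("dry AMD"))  # prints ['Ripasudil']
```

The point of the sketch is the control flow, not the biology: each iteration closes the loop from hypothesis to experiment to analysis, with the wet lab as the only human-executed step.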


The 2nd workshop on Computer Vision for Videogames will be held at CVPR 2025: this is a great venue for gaming-related research (think AI, genAI, graphics, RL, agents, HCI — with applications to videogames). There is still time to submit: sites.google.com/view/cv2-2025/ #CVPR2025





Training RL/robot policies requires extensive experience in the target environment, which is often difficult to obtain. How can we “distill” embodied policies from foundation models? Introducing FactorSim! #NeurIPS2024 We show that by generating prompt-aligned simulations and training a policy on them without collecting any experience in the target environment, we can achieve zero-shot performance close to policies trained on millions of target environment experiences in many classic RL environments. You can generate RL simulations on our project website: cs.stanford.edu/~sunfanyun/fac… More in 🧵 1/7
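The recipe in this thread (generate prompt-aligned simulations, train only on them, evaluate zero-shot on the held-out target environment) can be sketched with toy stand-ins. The real FactorSim generates runnable simulation code from text prompts; in this sketch the "environments" are just single-parameter variants so the loop stays self-contained, and every name below is hypothetical rather than the paper's actual API.

```python
import random

def generate_simulations(prompt: str, n: int, seed: int = 0) -> list[dict]:
    # Stand-in for LLM-generated simulations: small randomized variants of the
    # task the prompt describes, each with a slightly different dynamics bias.
    rng = random.Random(seed)
    return [{"bias": rng.uniform(-0.1, 0.1)} for _ in range(n)]

def train_policy(envs: list[dict]) -> float:
    # Toy stand-in for RL training across the generated simulations: fit a
    # single parameter to the average of the per-env biases.
    return sum(e["bias"] for e in envs) / len(envs)

def evaluate_zero_shot(policy: float, target_env: dict) -> float:
    # Zero-shot transfer: the policy never saw the target env during training.
    # Score is 1.0 when the learned parameter matches the target's bias.
    return 1.0 - abs(policy - target_env["bias"])

sims = generate_simulations("cart-pole-like balancing task", n=16)
policy = train_policy(sims)
score = evaluate_zero_shot(policy, {"bias": 0.0})
print(round(score, 2))
```

Because the generated variants are centered on the target dynamics, training on many of them averages out the per-simulation noise, which is the intuition behind training on a distribution of prompt-aligned simulations instead of the single target environment.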





BTW, as someone who learned his Mandarin in Beijing, one of the more annoying things is listening to people--Chinese included--who try to fake the Beijing "r" rolls into their Mandarin. It just sounds so fake. Don't do it. It's like me, who learned his English in the US, trying to fake an Irish accent. It just does not work. I was once in an amateur theater troupe in San Jose, California, called The Mostly Irish Theater Company. Most of the members were first- or second-generation Irish, too, except me and a couple of other Americans. We put on a play in which my role was supposed to have an Irish accent. I just could not do it. So we had to change the script so that I was an American doctor.



So proud of my team for presenting the first interactive #texture #painting with #AI at #SIGGRAPHAsia2023 Real-Time Live. Well done Anita Hu and team!! We want the artist to stay in control 🎨🖌️🤗 developer.nvidia.com/blog/nvidia-re…


#AIIDE23 kicks off tomorrow with the Experimental AI in Games workshop! Now in its 10th year, @exag20xx has become a mainstay of our workshop series, and emphasizes showing, teaching, and inventing, alongside traditional paper presentations. Check it out! exag.org/schedule









