
Pim de Witte
@PimDeWitte
Building @gen_intuition: models for envs that require deep spatiotemporal reasoning. I like games, OSS, AI, and once built the world’s largest RuneScape server.


There is a tremendous amount of progress happening in World Models. Multiple labs have raised more than $1B. WMs were the star of GTC. They are a real path to embodied AI. So @PimDeWitte & I wrote a comprehensive 19k word overview of World Models. notboring.co/p/world-models


The way we hack is changing, and we're building what comes next. We've raised a total of $40M to create the AI-native platform for offensive security.


This guy truly does not believe in the existence of human beings. Totally unrecognizable anthropology at work here.


Advanced Machine Intelligence (AMI) is building a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe. We’ve raised a $1.03B (~€890M) round from global investors who believe in our vision of universally intelligent systems centered on world models. This round is co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, along with other investors and angels across the world. We are a growing team of researchers and builders, operating in Paris, New York, Montreal and Singapore from day one. Read more: amilabs.xyz AMI - Real world. Real intelligence.


I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code. Then:

- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (i.e. lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.

github.com/karpathy/autor…

Part code, part sci-fi, and a pinch of psychosis :)
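The loop described above (fixed-budget training runs, agent edits, commits kept only when validation loss improves) can be sketched roughly as follows. This is a minimal illustrative sketch, not the actual autoresearch code: `run_training`, `propose_edit`, and the toy objective are all hypothetical stand-ins for the real 5-minute training run, the AI agent's code edits, and the git-commit step.

```python
import random

def run_training(config: dict) -> float:
    # Stand-in for a fixed-budget (e.g. 5-minute) LLM training run that
    # returns final validation loss. Here: a toy quadratic objective.
    return (config["lr"] - 0.003) ** 2 + config["noise"]

def propose_edit(config: dict) -> dict:
    # Stand-in for the AI agent editing the training script (.py);
    # here it just perturbs a hyperparameter.
    new = dict(config)
    new["lr"] = max(1e-5, new["lr"] * random.uniform(0.5, 2.0))
    new["noise"] = random.uniform(0.0, 1e-4)
    return new

def autonomous_loop(steps: int = 50) -> tuple[dict, float]:
    best_cfg = {"lr": 0.01, "noise": 0.0}
    best_loss = run_training(best_cfg)
    for _ in range(steps):
        candidate = propose_edit(best_cfg)
        loss = run_training(candidate)
        if loss < best_loss:
            # In the real setup this is where the agent would
            # `git commit` the improved training script.
            best_cfg, best_loss = candidate, loss
    return best_cfg, best_loss

cfg, loss = autonomous_loop()
```

Because only improving candidates are accepted, validation loss is monotonically non-increasing across "commits" — which is what makes each dot-per-run plot comparable across different prompts or agents.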