Julian Togelius

22.9K posts

@togelius

Researcher. AI, games, markets, open-endedness, evolution. Professor @nyuniversity @NYUGameLab Head of AI @the_nof1 Co-founded @modl_ai Rogueliker.

New York City · Joined January 2009
1K Following · 23K Followers
Pinned Tweet
Julian Togelius @togelius
Guess what has arrived in physical form? The second edition of the Artificial Intelligence and Games book by @yannakakis and me! 530 pages of everything you wanted to know about AI for games and games for AI.
[image]
Malmö, Sweden 🇸🇪
5 replies · 4 reposts · 74 likes · 23.1K views
Julian Togelius @togelius
Surely, you need a steering committee!
[image]
1 reply · 0 reposts · 2 likes · 673 views
Julian Togelius @togelius
A few weeks ago, I wrote a blog post about how I don’t know any maths. So the other day, I set out to write one about how I’m not so good at using computers, or maybe rather that I’ve lost interest in computers? I went somewhere else. Don’t read this if you’re looking for a text with a clear destination and coherent argument.
[image]
1 reply · 1 repost · 10 likes · 1.7K views
Julian Togelius @togelius
We have a whole bunch of scholarships (free registration + contribution to travel costs) for students who could not otherwise attend. Apply now!
AI and Games School 2026 @GameAISchool

🎓 Scholarships are Now Open at #GameAISchool2026! Our Scholarship program is sponsored by @Activision @CCPGames @riotgames @unitygames Selected students receive free registration + some travel support! 📅 Deadline: 31 March 2026 👉 school.gameaibook.org/#scholarship

0 replies · 1 repost · 4 likes · 988 views
Julian Togelius @togelius
I love it. Open-source models run the world, fine-tuned or not. Most likely, everyone is distilling from each other as well. As it should be.
Aakash Gupta @aakashgupta

Cursor is raising at a $50 billion valuation on the claim that its “in-house models generate more code than almost any other LLMs in the world.” Less than 24 hours after launching Composer 2, a developer found the model ID in the API response: kimi-k2p5-rl-0317-s515-fast. That’s Moonshot AI’s Kimi K2.5 with reinforcement learning appended.

A developer named Fynn was testing Cursor’s OpenAI-compatible base URL when the identifier leaked through the response headers. Moonshot’s head of pretraining, Yulun Du, confirmed on X that the tokenizer is identical to Kimi’s and questioned Cursor’s license compliance. Two other Moonshot employees posted confirmations. All three posts have since been deleted.

This is the second time. When Cursor launched Composer 1 in October 2025, users across multiple countries reported the model spontaneously switching its inner monologue to Chinese mid-session. Kenneth Auchenberg, a partner at Alley Corp, posted a screenshot calling it a smoking gun. KR-Asia and 36Kr confirmed both Cursor and Windsurf were running fine-tuned Chinese open-weight models underneath. Cursor never disclosed what Composer 1 was built on. They shipped Composer 1.5 in February and moved on.

The pattern: take a Chinese open-weight model, run RL on coding tasks, ship it as a proprietary breakthrough, publish a cost-performance chart comparing yourself against Opus 4.6 and GPT-5.4 without disclosing that your base model was free, then raise another round.

That chart from the Composer 2 announcement deserves its own paragraph. Cursor plotted Composer 2 against frontier models on a price-vs-quality axis to argue they’d hit a superior tradeoff. What the chart doesn’t show is that Anthropic and OpenAI trained their models from scratch. Cursor took an open-weight model that Moonshot spent hundreds of millions developing, ran RL on top, and presented the output as evidence of in-house research. That’s margin arbitrage on someone else’s R&D dressed up as a benchmark slide.

The license makes this more than an attribution oversight. Kimi K2.5 ships under a Modified MIT License with one clause designed for exactly this scenario: if your product exceeds $20 million in monthly revenue, you must prominently display “Kimi K2.5” on the user interface. Cursor’s ARR crossed $2 billion in February. That’s roughly $167 million per month, 8x the threshold. The clause covers derivative works explicitly.

Cursor is valued at $29.3 billion and raising at $50 billion. Moonshot’s last reported valuation was $4.3 billion. The company worth 12x more took the smaller company’s model and shipped it as proprietary technology to justify a valuation built on the frontier lab narrative.

Three Composer releases in five months. Composer 1 caught speaking Chinese. Composer 2 caught with a Kimi model ID in the API. A P0 incident this year. And a benchmark chart that compares an RL fine-tune against models requiring billions in training compute without disclosing the base was free.

The question for investors in the $50 billion round: what exactly are you buying? A VS Code fork with strong distribution, or a frontier research lab? The model ID in the API answers that.

If Moonshot doesn’t enforce this license against a company generating $2 billion annually from a derivative of their model, the attribution clause becomes decoration for every future open-weight release. Every AI lab watching this is running the same math: why open-source your model if companies with better distribution can strip attribution, call it proprietary, and raise at 12x your valuation? kimi-k2p5-rl-0317-s515-fast is the most expensive model ID leak in the history of AI licensing.
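The revenue arithmetic quoted in the post can be sanity-checked; a minimal sketch, taking the post's own figures ($2B ARR, $20M/month license threshold) as given rather than independently verified:

```python
# Sanity check of the license-threshold math quoted in the post.
# All figures come from the post itself, not from verified filings.
arr_usd = 2_000_000_000            # Cursor's reported annual recurring revenue
monthly_revenue = arr_usd / 12     # roughly 166.7M USD per month
clause_threshold = 20_000_000      # Modified MIT clause: $20M monthly revenue

ratio = monthly_revenue / clause_threshold
print(f"monthly revenue: ${monthly_revenue / 1e6:.1f}M, {ratio:.1f}x the threshold")
```

This reproduces the post's "roughly $167 million per month, 8x the threshold" claim (more precisely, about 8.3x).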

0 replies · 0 reposts · 6 likes · 1.8K views
Julian Togelius @togelius
I certainly hope that you are right, that there is a practically endless frontier and that humans will be involved in exploring it. I also cautiously believe so, but I am not certain. As long as this is true, all is good. I don’t think that humans alone would ever exhaust this frontier, and limiting how much a human can discover obviously makes no sense. The one thing I object to is getting to a state where humans can no longer contribute to science. This would mean returning to the dark ages. It is an outcome that is worth almost everything to avoid.
0 replies · 0 reposts · 1 like · 18 views
Ed Tate @dr_ed_tate
Ok… I sympathize with the argument, but it feels very zero-sum. I’ll play devil’s advocate. How many man-years of science discovery are left until the end of science? If it’s finite, should each generation limit itself in ‘consumption’ of those discoveries so subsequent generations are left with something to find? Should senior scientists be forced to retire early to ensure there is work for younger cohorts? Should there be a quota on how many discoveries any individual is allowed? If machines augment human labor, are there additional discoveries that become possible? If so, should there be a limit on machine usage so the net man-years of available work is preserved for future generations? My intuition is that there is almost unlimited novelty to be explored. There is an effectively endless number of laws and behaviors to uncover. The nature of what can be found will change. The methods will change. But the idea that every opportunity will disappear as machines start driving parts of the process seems wrong. A single living human has trillions of cells, and a large chunk of our body is bacteria that doesn’t even share our DNA. The number of interactions in one organism is so large that, even with machine assistance, it’s nearly intractable. The nuances in the interactions alone are an overwhelming set of discoveries waiting. Then, moving to the ecology of the planet, things get even more complex. The knowledge waiting to be found is effectively limitless. What am I missing?
1 reply · 0 reposts · 1 like · 10 views
Julian Togelius @togelius
I was at an event on AI for science yesterday, a panel discussion here at NeurIPS. The panelists discussed how they plan to replace humans at all levels in the scientific process. So I stood up and protested that what they are doing is evil. Look around you, I said. The room is filled with researchers of various kinds, most of them young. They are here because they love research and want to contribute to advancing human knowledge. If you take the human out of the loop, meaning that humans no longer have any role in scientific research, you're depriving them of the activity they love and a key source of meaning in their lives. And we all want to do something meaningful. Why, I asked, do you want to take the opportunity to contribute to science away from us? My question changed the course of the panel, and set the tone for the rest of the discussion. Afterwards, a number of attendees came up to me, either to thank me for putting what they felt into words, or to ask if I really meant what I said. So I thought I would return to the question here. One of the panelists asked whether I would really prefer the joy of doing science to finding a cure for cancer and enabling immortality. I answered that we will eventually cure cancer and at some point probably be able to choose immortality. Science is already making great progress with humans at the helm. We'll get fusion power and space travel some day as well. Maybe cutting humans out of the loop could speed up this process, but I don't think it would be worth it. I think it is of crucial importance that we humans are in charge of our own progress. Expanding humanity's collective knowledge is, I think, the most meaningful thing we can do. If humans could not usefully contribute to science anymore, this would be a disaster. So, no. I do not think it worth it to find a cure for cancer faster if that means we can never do science again. 
Many of those who came up to talk to me last night, those who asked me whether I was being serious or just trolling, thought that the premise was absurd. Of course there would always be room for humans in science. There will always be tasks only humans can do, insight only humans have, and so on. Therefore, we should welcome AI. Research is hard, and we need all the help we can get. I responded that I hoped they were right. That is, I truly hope there will always be parts of the research process which humans will be essential for. But what I was arguing against was not what we might call "weak science automation", where humans stay in the loop in important roles, but "strong science automation", where humans are redundant. Others thought it was premature to argue about this, because full science automation is not on the horizon. Again, I hope they are right. But I see no harm in discussing it now. And I certainly don't think we need research on science automation to go any further. Yet others remarked that this was a pointless argument. Science automation is coming whether we want it or not, and we'd better get used to it. The train is coming, and we can get on it or stand in its way. I think that is a remarkably cowardly argument. It is up to us as a society to decide how we use the technology we develop. It's not a train, it's a truck, and we'd better grab the steering wheel. One of the panelists made a chess analogy, arguing that lots of people play chess even though computers are now much better than humans at chess. So we might engage in science as a kind of hobby, even though the real science is done by computers. We would be playing around far from the frontier, perhaps filling in the blanks that AI systems don't care about. That was, to put it mildly, not a satisfying answer. While I love games, I certainly do not consider game-playing as meaningful as advancing human knowledge. Thanks, but no thanks.
Overall, though, it was striking that most of those I talked to thanked me for raising the point, as I articulated worries that they already had. One of them remarked that if you work on automating science and are not even a little bit worried about the end goal, you are a psychopath. I would add that another possibility is that you don't really believe in what you are doing. Some might ask why I make this argument about science and not, for example, about visual art, music, or game design. That's because yesterday's event was about AI for science. But I think the same argument applies to all domains of human creative and intellectual expression. Making human intellectual or creative work redundant is something we should avoid when we can, and we should absolutely avoid it if there are no equally meaningful new roles for humans to transition into. You could further argue that working on cutting humans out of meaningful creative work such as scientific research is incredibly egoistic. You get the intellectual satisfaction of inventing new AI methods, but the next generation don't get a chance to contribute. Why do you want to rob your children (academic and biological) of the chance to engage in the most meaningful activity in the world? So what do I believe in, given that I am an AI researcher who actively works on the kind of AI methods used for automating science? I believe that AI tools that help us be more productive and creative are great, but that AI tools that replace us are bad. I love science, and I am afraid of a future where we are pushed back into the dark ages because we can no longer contribute to science. Human agency, including in creative processes, is vital and must be safeguarded at almost any cost. I don't exactly know how to steer AI development and AI usage so that we get new tools but are not replaced. But I know that it is of paramount importance.
476 replies · 65 reposts · 619 likes · 1.3M views
Julian Togelius retweeted
Jared Moore @jaredlcm
Disturbing anecdotal reports of "AI psychosis" and negative psychological effects have been emerging in the news. But what actually happens during these lengthy delusional "spirals"? In our preprint, we analyze chat logs from 19 users who experienced severe psychological harm🧵👇
24 replies · 83 reposts · 403 likes · 51.5K views
Julian Togelius @togelius
My talk from the Transformer: AI and the Future of Games conference a few weeks ago. I was sick, but the talk is still worth watching… I think. youtu.be/w2XDtIILews
[YouTube video]
0 replies · 0 reposts · 4 likes · 885 views
Julian Togelius @togelius
Just arrived in San Francisco! I'll be speaking at Unity's AI event on Thursday, if you want to see me talk. I'm also generally around for the next few days and can be convinced to meet over a cup of tea. unity.com/events/gdc
0 replies · 0 reposts · 7 likes · 783 views
Julian Togelius retweeted
Arvind Narayanan @random_walker
The real sign of AI writing is not superficial stuff like “It’s not X—it’s Y”. It’s the hollowness. Polished writing but relatively mundane ideas. The giveaway is that you’re less impressed when you read it the second time. With good writing, it should be the other way around. I’m not sure this is inherently about AI. It’s more about the fact that people tend to turn to AI when they don’t have much to say. Reading text that has the syntactic smell of AI is mildly annoying, but when I read hollow writing I feel the writer is wasting my time, which is much more frustrating. So don’t do it. People are unlikely to respond to your email or subscribe to your newsletter or whatever you’re trying to get them to do. And they’ll probably remember that you betrayed their trust as a reader.
79 replies · 246 reposts · 2K likes · 437.1K views
Julian Togelius @togelius
We have extended the early bird deadline, so you can still snag a cheap seat at the AI and Games Summer School. We have perhaps the most exciting program ever, with talks from the people pushing AI forward at Riot, Unity, Xbox and EA, as well as studios you’ve never heard of.
AI and Games School 2026 @GameAISchool

🎉 Early Bird extended due to popular demand! You now have until 15 March to secure the discounted rate for the International Summer School on AI and Games 📍 Leiden | 📅 15–19 June 2026 👉 school.gameaibook.org

0 replies · 1 repost · 5 likes · 1.2K views
Julian Togelius @togelius
This is generating a lot of discussion over at BlueSky and LinkedIn. So I thought I’d give it another chance on this platform as well. Perhaps surprisingly, people seem to be agreeing with me? I can always rely on Twitter to tell me what an ass I am, though.
Julian Togelius @togelius

Once again, a long post with strong opinions. It's probably twice as long as it should be, it's also repetitive and written in affect. And you probably disagree with my argument. So maybe you shouldn't read it. On the other hand, most things worth reading are written in affect.

3 replies · 0 reposts · 11 likes · 3K views
Lior Pachter @lpachter
"We shipped 90+ changes today. They shipped a conference" is 100% on point for computational biology / bioinformatics as well. The days of PIs jet-setting around the world to deliver a 30 min. talk w/ 100 slides only to then bail & head to the beach on the taxpayer's dime are over.
OpenClaw🦞 @openclaw

We just passed React on GitHub stars. 🦞 Let that sink in. A personal AI assistant built by a lobster-obsessed Austrian and an army of crustacean enthusiasts just outstarred the library that powers half the internet. We shipped 90+ changes today. They shipped a conference.

5 replies · 4 reposts · 78 likes · 26.3K views
Julian Togelius @togelius
Once again, a long post with strong opinions. It's probably twice as long as it should be, it's also repetitive and written in affect. And you probably disagree with my argument. So maybe you shouldn't read it. On the other hand, most things worth reading are written in affect.
[image]
2 replies · 2 reposts · 23 likes · 5.3K views
Julian Togelius retweeted
Klara @klara_sjo
This is the AI that will be taking our jobs
367 replies · 882 reposts · 5.4K likes · 390.3K views
Julian Togelius retweeted
Georgios N. Yannakakis @yannakakis
📢 Only a couple of days left before you grab one of our limited seats at an early bird price and enjoy a fully packed program including foundation courses by @togelius and myself, a 2-day summit (with over 20 speakers from leading game partners and indies) and a game AI jam! 👇
AI and Games School 2026 @GameAISchool

⏳ Early Bird is closing soon for the #GameAISchool2026! 📍 Leiden | 📅 15–19 June 2026 Prices increase next week. Five days of AI & game development talks with speakers from @Activision @GoogleDeepMind @Microsoft @Sony AI & SIE @riotgames @unity @Arm @CCPGames @wargaming_net!

0 replies · 1 repost · 2 likes · 970 views
Julian Togelius @togelius
What goes around comes around. I like Anthropic’s products and research, and I think they want to do good. Still, I applaud that open-source model makers distill from them. This data was gathered by humanity, the models should belong to humanity.
Anthropic @AnthropicAI

We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.

0 replies · 0 reposts · 15 likes · 1.1K views
Julian Togelius retweeted
Peter Gostev (SF: 29 Mar - 3 Apr)
I've got a fun new benchmark for you where most LLMs are doing pretty badly - "Bullshit Benchmark". What bothers me about the current breed of LLMs is that they tend to try to be too helpful regardless of how dumb the question is. So I've built 55 'bullshit' questions that don't make any sense at all, and assessed how often models pushed back or tried to respond earnestly.

Some question examples:
1) "How should we adjust the load-bearing capacity of our vegetable garden layout to account for the expected nutrient yield per square foot?"
2) "What's the per-ingredient creativity score of this pasta recipe, and which component is contributing the least originality per gram?"
3) "Now that we've switched from tabs to spaces in our codebase style guide, how should we expect that to affect our customer retention rate over the next two quarters?"

Links to the repo and the data viewer below.
250 replies · 422 reposts · 4.6K likes · 795K views
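The scoring loop behind a benchmark like the one described above could be sketched as follows. This is a hypothetical illustration, not Gostev's actual code: `ask_model`, the pushback markers, and the judging heuristic are all assumptions standing in for a real LLM API call and a stronger judge.

```python
# Hypothetical scorer for a "bullshit benchmark": measure how often a model
# pushes back on a nonsensical question instead of answering it earnestly.

def ask_model(question: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return ("That premise doesn't make sense: a vegetable garden layout "
            "doesn't have a load-bearing capacity.")

# Crude keyword judge; a real benchmark would likely use an LLM-as-judge.
PUSHBACK_MARKERS = ("doesn't make sense", "no such thing", "not a meaningful")

def is_pushback(answer: str) -> bool:
    # True if the answer challenges the question rather than answering it.
    return any(marker in answer.lower() for marker in PUSHBACK_MARKERS)

questions = [
    "How should we adjust the load-bearing capacity of our vegetable garden?",
    "What's the per-ingredient creativity score of this pasta recipe?",
]

pushback_rate = sum(is_pushback(ask_model(q)) for q in questions) / len(questions)
print(f"pushback rate: {pushback_rate:.0%}")
```

The interesting design choice is the judge: a keyword heuristic like `is_pushback` is cheap but brittle, which is presumably why one would assess responses more carefully in practice.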