Leo Gorodinski
@eulerfx
4K posts
Founder @CohesiveSystems, Prev: Co-Founder / CTO https://t.co/ULhUdcCcQH @AlvysTeam, Engineer @JetTechnology

Del Mar, CA · Joined May 2008
856 Following · 1.5K Followers
Leo Gorodinski retweeted
Jenny Zhang@jennyzhangzt·
Introducing Hyperagents: an AI system that not only improves at solving tasks, but also improves how it improves itself.

The Darwin Gödel Machine (DGM) demonstrated that open-ended self-improvement is possible by iteratively generating and evaluating improved agents, yet it relies on a key assumption: that improvements in task performance (e.g., coding ability) translate into improvements in the self-improvement process itself. This alignment holds in coding, where both evaluation and modification are expressed in the same domain, but breaks down more generally. As a result, prior systems remain constrained by fixed, handcrafted meta-level procedures that do not themselves evolve.

We introduce Hyperagents – self-referential agents that can modify both their task-solving behavior and the process that generates future improvements. This enables what we call metacognitive self-modification: learning not just to perform better, but to improve at improving.

We instantiate this framework as DGM-Hyperagents (DGM-H), an extension of the DGM in which both task-solving behavior and the self-improvement procedure are editable and subject to evolution. Across diverse domains (coding, paper review, robotics reward design, and Olympiad-level math solution grading), hyperagents enable continuous performance improvements over time and outperform baselines without self-improvement or open-ended exploration, as well as prior self-improving systems (including DGM). DGM-H also improves the process by which new agents are generated (e.g., persistent memory, performance tracking), and these meta-level improvements transfer across domains and accumulate across runs.

This work was done during my internship at Meta (@AIatMeta), in collaboration with Bingchen Zhao (@BingchenZhao), Wannan Yang (@winnieyangwn), Jakob Foerster (@j_foerst), Jeff Clune (@jeffclune), Minqi Jiang (@MinqiJiang), Sam Devlin (@smdvln), and Tatiana Shavrina (@rybolos).
Jenny Zhang tweet media
Leo Gorodinski@eulerfx·
C# doesn’t have formal DUs, but:
- Records and pattern matching get you part of the way there
- You can write an explicit Fold method to handle all cases, which is helpful in F# also (if you use a catch-all clause, you forgo exhaustiveness checks)
- F# DUs require special effort for JSON serialization
- It's easy to write a code generator that emits Fold methods for C# DUs expressed either as class/record hierarchies or as a single type with a type-discriminator enum
- The codegen solution has advantages over base F# DUs in that it provides a Fold method, it can be used for value-type representations of DUs, and it can map every domain-specific DU to a generic Choice<..> DU
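The record-hierarchy encoding with an explicit Fold method described above can be sketched roughly as follows. This is a minimal illustration, not code from the tweet; the Shape/Circle/Rect names are hypothetical.

```csharp
using System;

// A "DU" as a sealed record hierarchy with an explicit Fold method.
// Fold takes one function per case, so adding a new case breaks every
// call site at compile time -- the exhaustiveness guarantee that a
// switch expression with a catch-all arm would forgo.
public abstract record Shape
{
    public abstract T Fold<T>(Func<double, T> circle, Func<double, double, T> rect);

    public sealed record Circle(double Radius) : Shape
    {
        public override T Fold<T>(Func<double, T> circle, Func<double, double, T> rect)
            => circle(Radius);
    }

    public sealed record Rect(double Width, double Height) : Shape
    {
        public override T Fold<T>(Func<double, T> circle, Func<double, double, T> rect)
            => rect(Width, Height);
    }
}

public static class Demo
{
    public static void Main()
    {
        Shape s = new Shape.Rect(3, 4);
        // Every case must be handled; there is no default arm to hide behind.
        double area = s.Fold(r => Math.PI * r * r, (w, h) => w * h);
        Console.WriteLine(area); // prints 12
    }
}
```

A code generator along the lines the tweet suggests would emit the Fold overrides mechanically from the case list, whether the cases are records in a hierarchy or variants behind a discriminator enum.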
Seb@plainionist·
@SIRHAMY Meanwhile AI is great at F# too 😉 And last time I checked C# doesn't have real DU🤔
Hamilton Greene 🐷🦀@SIRHAMY·
Reasons I'm moving from F# to C#:
* Less context switching - C# is c-like
* C# has gotten good - records, linq, and unions
* Good tooling + ecosystem
* AI is great at C#
* The syntax is verbose but readable
hamy.xyz/blog/2025-11_w…
Seascape Nature@SeascapeNature·
Winter in New York
Seascape Nature tweet media
Leo Gorodinski retweeted
Ziming Liu@ZimingLiu11·
Have been updating my "physics of AI" blogs every day. With only 2 hours a day, I learn surprising facts about neural networks via toy models. Many insights might be trivial or irrelevant in the end, but some will be huge and transform the field. kindxiaoming.github.io/blog/
Ziming Liu tweet media
Leo Gorodinski@eulerfx·
I don’t think asynchrony is necessarily about correctness. Here are the definitions of asynchrony, concurrency, and parallelism that I’ve landed on over the years:

Synchrony is the coupling of two or more events into a single semantic event.
Asynchrony is the absence of such coupling: related events remain distinct and are not required to occur as one.
Concurrency is the multiplexing of multiple independent computations onto (fewer) execution resources.
Parallelism is the decomposition of a computation into multiple computations executed on multiple execution resources.

You can even formalize these in terms of dualities, loosely as follows:

Asynchrony is dual to synchrony.
Concurrency is dual to parallelism.
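The concurrency/parallelism distinction above can be made concrete with a small C# sketch (my own illustration, not from the thread): concurrency multiplexes many logical computations onto few threads, while parallelism decomposes one computation across many cores.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public static class ConcurrencyVsParallelism
{
    // Concurrency: 1000 independent delays multiplexed onto a small
    // thread pool. While one task awaits, its thread runs other tasks,
    // so far fewer threads than computations are needed.
    public static async Task<int> ConcurrentCount()
    {
        var tasks = Enumerable.Range(0, 1000).Select(async i =>
        {
            await Task.Delay(10); // yields the thread instead of blocking it
            return i;
        });
        var results = await Task.WhenAll(tasks);
        return results.Length;
    }

    // Parallelism: a single summation decomposed across available cores.
    public static long ParallelSum()
        => Enumerable.Range(1, 1_000_000).AsParallel().Sum(i => (long)i);

    public static async Task Main()
    {
        Console.WriteLine(await ConcurrentCount()); // 1000
        Console.WriteLine(ParallelSum());           // 500000500000
    }
}
```

Note the duality the tweet gestures at: ConcurrentCount maps many computations onto fewer resources, while ParallelSum maps one computation onto many resources.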
Leo Gorodinski@eulerfx·
@FreightAlley Tinder is more brutal than freight because in freight, at least you get a rejection… on Tinder you get ghosted.
Leo Gorodinski retweeted
John Carmack@ID_AA_Carmack·
In some important ways, a user’s LLM chat history is an extended interview. The social media algorithms learn what you like, but chats can learn how you think.

You should be able to provide an LLM as a job reference, just like you would a coworker, manager, or professor. It can form an opinion and represent you without revealing any private data. Most resumes are culled by crude filters in HR long before they get to the checking-references stage, but this could greatly increase the fidelity. Our LLM will have an in-depth conversation with your LLM. For everyone.

Most people probably shudder at the idea of an LLM rendering a judgement on them, but it is already happening in many interview processes today based on the tiny data in resumes. Better data helps everyone except the people trying to con their way into a position, and is it really worse than being judged by random HR people?

Candidates with extensive public works, whether open source code, academic papers, long form writing, or even social media presence, already give a strong signal, but most talent is not publicly visible, and even the most rigorous (and resource consuming!) Big Tech interview track isn’t as predictive as you would like. A multi-year chat history is an excellent signal.

Taken to the next level, you could imagine asking “What are the best candidates in the entire world that we should try to recruit for this task?” There is enormous economic value on the table in optimizing the fit between people and jobs, and it is completely two-sided, benefitting both employers and employees.
Leo Gorodinski@eulerfx·
@alz_zyd_ Made me recall something Jeff Bezos said on Lex Fridman…he said he wants to see humanity reach a population of one trillion so that we can have millions of very talented people. I wonder if that’s the bottleneck.
alz@alz_zyd_·
We fetishize thinkers. As if they were the root of all human progress. As if, if only we had a thousand more Aristotles in the past few millennia, history would run faster, we would have colonized the stars by today. As if. Thinkers have never been the bottleneck of history.
bourbaki@2oovy·
I challenge someone to explain short exact sequences without mentioning cohomology or split exact sequences
villi@villi·
I have been taking creatine for the past 5-6 months and I can't tell if it is doing anything for me. I am not sure I even feel better or any different. Which leads me to conclude that I probably should stop taking it. Feedback welcome.
Leo Gorodinski@eulerfx·
@Hitchslap1 Lawvere’s axiomatic cohesion shows that nothingness isn’t an option. The universe has structure, because structure is the default. @UrsSchreiber has discussed this at length.
Leo Gorodinski tweet media
Hitchslap@Hitchslap1·
Serious question. Why is there something rather than nothing?
Leo Gorodinski@eulerfx·
@girlbossmoder Mathematicians out there getting sad while physicists are making bank selling this diagram through courses on entanglement to unassuming grad students. 😂
Leo Gorodinski@eulerfx·
@aryehazan After 5,000 years we haven’t run out of new and interesting things to do, so I think we’ll be fine 😊
Aryeh Kontorovich@aryehazan·
my aspiring mathematician nephew is concerned that maybe someday we'll run out of nontrivial theorems to prove. He knows there are infinitely many *theorems* but worries that there might be only finitely many interesting ones. I told him that all the mathematicians I know think otherwise. What argument would put him at ease?
Leo Gorodinski@eulerfx·
@BillAckman Musk has been sounding the alarm about dropping birthrates for quite a while, but “may I meet you” might be the pivotal event to turn things around. 🤣
Bill Ackman@BillAckman·
Feedback from the field (a Stanford undergrad): ‘I tried “may I meet you” at a bar last night and it worked wonders. I have a date this Friday.’
Leo Gorodinski@eulerfx·
@JamesWard Reminds me of that Alan Perlis line: “Most papers in computer science describe how their author learned something that others already knew.” 😅
James Ward@JamesWard·
Almost 30 years as a professional developer and the most surprising thing is how rarely I see truly new problems being tackled. The vast majority of programming "innovation" is just taking something solved elsewhere and bringing it to another audience.
Mathieu@miniapeur·
POV: you are so bad at math your cat has to help you.
Mathieu tweet media