Baran Hashemi

994 posts

Baran Hashemi
@Rythian47

Physicist | Postdoc at MPI for Mathematics in Sciences, AI for Mathematics

Leipzig, Germany · Joined May 2019
4.6K Following · 831 Followers

Pinned Tweet
Baran Hashemi @Rythian47 ·
1/ Are frontier LLMs the only path to AI math breakthroughs? I think not! We introduce FlowBoost, a lightweight RL + flow-matching framework that discovers new extremal geometric structures, beating AlphaEvolve with 100–1000× less compute via zero-shot geometry-aware, reward-guided generation. 🚀 arXiv: arxiv.org/pdf/2601.18005
گیو @givHeisenberg ·
Four sets of eight at 60 kg, sumo RDLs 🏋️‍♀️
Lexaaa @DayShuai ·
What if AI didn't just solve math problems, but discovered entirely new mathematical structures? Introducing AutoMath from The Omega Institute.

From ONE equation (x² = x + 1) and ZERO extra axioms, we derive 9 branches of math: algebra, combinatorics, topology, dynamical systems… all formally verified in Lean 4 (~2,350 theorems, 25k lines of code).

Our method: Derive, Discover, Name
• Derive → AI exhaustively explores every logical consequence
• Discover → Patterns emerge that humans might never notice
• Name → Human intuition connects them to deep math (rings, finite fields, golden-ratio p-adics…)

This is the future of human-AI mathematical discovery.
Read "The Method" 👉 the-omega-institute.github.io/automath/#the-…
GitHub (star & contribute) 👉 github.com/the-omega-inst…
What hidden structure will you uncover next? #AutoMath #AI4SCI #Lean4
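The seed equation x² = x + 1 from the AutoMath announcement can be played with directly. A minimal sketch (my own illustration in Python, not AutoMath's Lean code): arithmetic in the quotient ring Z[x]/(x² − x − 1), where Fibonacci numbers fall out of the single relation. The helper names `mul` and `xpow` are invented for this sketch.

```python
# Elements a + b*x of Z[x]/(x^2 - x - 1), stored as pairs (a, b).
# The single relation x^2 = x + 1 is all we use to reduce products.

def mul(p, q):
    a, b = p
    c, d = q
    # (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2
    #                    = (ac + bd) + (ad + bc + bd) x,  using x^2 = x + 1
    return (a * c + b * d, a * d + b * c + b * d)

def xpow(n):
    """Compute x^n in the quotient ring by repeated multiplication."""
    result, x = (1, 0), (0, 1)
    for _ in range(n):
        result = mul(result, x)
    return result

# x^n reduces to F(n-1) + F(n)*x, so Fibonacci numbers emerge from one axiom:
print(xpow(10))  # (34, 55) = (F(9), F(10))
```

Iterating the one relation already produces the Fibonacci recurrence, which hints at how richer structure (golden-ratio arithmetic, finite quotients) can be mined from a single axiom.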
Daattavya Aggarwal @DaattavyaA ·
Excited to share our new paper on AI-led maths research. We tried to model mathematical practice realistically, using the task of "rediscovering" the concept of homology as a benchmark. Would love feedback from people working in a variety of areas! arxiv.org/abs/2603.04528
Pushmeet Kohli @pushmeet ·
Happy to share new progress in AI for Maths @GoogleDeepMind. In extremal combinatorics, AlphaEvolve has helped establish new lower bounds for FIVE classical Ramsey numbers, a problem so challenging that even Erdős commented on its difficulty. Historically, computationally deriving these bounds required bespoke, human-designed search algorithms. For many of these bounds, the best previous results are at least a decade old. AlphaEvolve changes this by acting as a single meta-algorithm that automatically discovers the search procedures needed to find these new bounds.
Baran Hashemi @Rythian47 ·
@givHeisenberg Who are these people around you! Giv dear, why on earth is your luck like this…
گیو @givHeisenberg ·
An Indian girl messaged me, worried that the war won't stay confined to a single country. I held myself back from swearing at her.
Baran Hashemi retweeted
گیو @givHeisenberg ·
We're now at the part of Harry Potter where Voldemort has been killed off, but our Death Eaters are carrying on with their crap at full strength.
BURKOV @burkov ·
When you train a neural network on a new task using examples (through supervised finetuning), it tends to forget what it already knew, a well-known problem called catastrophic forgetting. The standard fix in reinforcement learning is to have the model learn from its own outputs rather than from a fixed dataset (this is called "on-policy" learning), but that requires a reward function telling the model how good its outputs are, which is often hard to define.

This recent MIT paper finds a way around that constraint: it uses the same model twice, once with a demonstration stuffed into its input as context (the "teacher"), and once without (the "student"), then has the student generate text, and updates the student's weights so that its token probability distributions get closer to the teacher's at each position in that generated text. The trick works because LLMs can already adapt their behavior when shown an example in context, so the teacher is essentially a better version of the model that stays close to the original, making the learning signal gentle enough to avoid wrecking existing capabilities.

Across multiple experiments, this approach lets a single model sequentially learn three different skills while keeping all of them, where standard supervised training destroys earlier skills as soon as it moves to the next one. The math also turns out to be equivalent to a form of reinforcement learning with an implicit reward, which gives the method a clean theoretical grounding beyond just "it works empirically."

Read this paper with an AI tutor: chapterpal.com/s/1b7dae70/sel…
Read it alone: arxiv.org/pdf/2601.19897
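The mechanism described above can be sketched end to end in a few lines. This is a toy illustration of the idea, not the paper's code: a tiny linear "language model" over 5 tokens stands in for the LLM, the teacher is a frozen copy of the same weights with a demonstration prepended to its context, and the student's weights are updated by gradient descent on the forward KL between the two next-token distributions along the student's own rollout. Every name here (`features`, `avg_kl`, the bag-of-tokens context encoder, the learning rate) is an invention of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 5, 8
W = rng.normal(size=(DIM, VOCAB)) * 0.1   # student weights, updated in place

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def features(ctx):
    # Hypothetical context encoder: bag of tokens hashed into DIM buckets.
    f = np.zeros(DIM)
    for t in ctx:
        f[t % DIM] += 1.0
    return f

demo, prompt = [1, 2, 3], [0]   # demonstration goes only into the teacher's context

# 1) Student generates a short rollout from its own distribution (on-policy).
ctx, rollout = list(prompt), []
for _ in range(4):
    t = int(rng.choice(VOCAB, p=softmax(features(ctx) @ W)))
    rollout.append(t)
    ctx.append(t)

def avg_kl(Ws, Wt):
    # Mean KL(teacher || student) over the positions of the rollout.
    ctx_s, ctx_t, total = list(prompt), demo + list(prompt), 0.0
    for t in rollout:
        ps = softmax(features(ctx_s) @ Ws)
        pt = softmax(features(ctx_t) @ Wt)
        total += float(np.sum(pt * (np.log(pt) - np.log(ps))))
        ctx_s.append(t)
        ctx_t.append(t)
    return total / len(rollout)

# 2) Distill: teacher = frozen snapshot of the same weights, but conditioned
#    on the demonstration; only the student's weights get gradient updates.
W_teacher = W.copy()
kl_before = avg_kl(W, W_teacher)
for _ in range(500):
    grad = np.zeros_like(W)
    ctx_s, ctx_t = list(prompt), demo + list(prompt)
    for t in rollout:
        fs = features(ctx_s)
        ps = softmax(fs @ W)
        pt = softmax(features(ctx_t) @ W_teacher)
        grad += np.outer(fs, ps - pt)   # d KL(pt || ps) / dW via the logits
        ctx_s.append(t)
        ctx_t.append(t)
    W -= 0.05 * grad / len(rollout)
kl_after = avg_kl(W, W_teacher)
print(kl_before, "->", kl_after)
```

After training, the student's unconditioned distributions have moved toward the demo-conditioned teacher's, which is the gentle, on-policy signal the tweet describes; the real method does this with an LLM's token distributions rather than a linear toy.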
ᴉʌɐɹsoɥʞ uɐɟɹǝ ◆ عرڡاں حـســـروے
History is not a linear, teleological grand narrative. Grasping this point frees us from frameworks shaped by dogmatic beliefs about "the inevitable course of history." Science is not a one-way road that, absent an obstacle like al-Ghazali, would necessarily have arrived at Galileo and Newton.
ᴉʌɐɹsoɥʞ uɐɟɹǝ ◆ عرڡاں حـســـروے@Erfan_khosravi

In his book "How We Did Not Become Us," Amirmohammad Gamini rightly points out that al-Ghazali's critique of Aristotelian physics not only did not block science, it was a stimulus for the Maragha school's critique of the Ptolemaic system. Science in the Islamic world reached maturity not in spite of the Tahāfut, but in dialogue with it.

Ronak Malde @rronak_ ·
My favorite paper of 2026 so far 🔥

They took On-Policy Distillation (i.e., the Thinking Machines blog post), but then showed that the policy can be both the teacher and the student model. The idea is to condition the teacher on a golden trajectory, and then train on the conditioned logprobs of the same model.

The crazy part is, you can literally condition the teacher on anything!! This opens up a Pandora's box of bridging prompt optimization/ICL + weight optimization that I'm very excited about for continual learning.

Authors: @IdanShenfeld @MehulDamani2 Jonas Hübotter @pulkitology
Baran Hashemi @Rythian47 ·
@ZimingLiu11 Cool stuff! We also explored the world models of Transformers for mathematical physics (enumerative geometry) and investigated how and what they learned. It might be a good reference: arxiv.org/abs/2408.14915
Ziming Liu @ZimingLiu11 ·
🚨Transformers don't learn Newton's laws? They learn Kepler's laws! Like us, transformers don't predict a flying ball via a differential equation, but by fitting a curve. Moreover, reducing context length steers a transformer from Keplerian to Newtonian. Compression in play.
Sebastien Bubeck @SebastienBubeck ·
This year we will see more and more LLM-aided scientific discoveries. If you want to get up to speed on where we are today, especially in mathematics, check out this video of my talk "Recent Advances in LLMs for Mathematics" youtu.be/MH3lG7V7SuU?si…