Guan Wang

22 posts

@makingAGI

CEO of Sapient Intelligence. Exploring the path to AGI through brain-inspired AI. 🧠🤖 #AGI #NeuroAI

Singapore · Joined July 2025
37 Following · 5.3K Followers
Pinned Tweet
Guan Wang @makingAGI
🚀Introducing the Hierarchical Reasoning Model 🧠🤖 Inspired by the brain's hierarchical processing, HRM delivers unprecedented reasoning power on complex tasks like ARC-AGI and expert-level Sudoku using just 1k examples, with no pretraining or CoT! Unlock the next AI breakthrough with neuroscience. 🌟 📄Paper: arxiv.org/abs/2506.21734 💻Code: github.com/sapientinc/HRM
228 replies · 630 reposts · 4K likes · 1.3M views
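The "hierarchical processing" the pinned tweet refers to can be pictured as two coupled recurrent modules running at different timescales: a fast low-level module that iterates several steps for every single update of a slow high-level module. The toy sketch below uses plain numpy with random weights and made-up dimensions in place of the paper's trained Transformer blocks; the names `zL`, `zH`, and the ratio `T` are illustrative assumptions, not the actual HRM code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (the real HRM uses Transformer blocks)
d = 8   # hidden size
T = 4   # low-level steps per high-level update
N = 2   # number of high-level cycles

Wl = rng.normal(scale=0.3, size=(d, d))  # low-level recurrent weights
Wh = rng.normal(scale=0.3, size=(d, d))  # high-level recurrent weights

x = rng.normal(size=d)   # input embedding
zL = np.zeros(d)         # fast, low-level state
zH = np.zeros(d)         # slow, high-level state

for cycle in range(N):
    # The low-level module iterates quickly, conditioned on the slow state
    for _ in range(T):
        zL = np.tanh(Wl @ zL + zH + x)
    # The high-level module updates once per cycle from the fast state
    zH = np.tanh(Wh @ zH + zL)

print(zH.round(3))
```

The point of the two timescales is that the slow module can reset the context for the fast one each cycle, which is the mechanism the author later credits with avoiding premature convergence on long-horizon tasks.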
Guan Wang @makingAGI
Hierarchical reasoning works well on large language models!🎉
36 replies · 180 reposts · 1.4K likes · 96.6K views
Guan Wang reposted
Sapient Intelligence @Sapient_Int
🔥It’s official: the Sapient HRM Discord community is now live! This is a place to discuss, connect, and collaborate as we shape HRM’s future together. We will be sharing our latest work, releases, and tips, as well as hosting Q&A sessions💬💬 Hop on this journey with us as we push the boundaries of what HRM, and AGI at large, can achieve!🙌 ➡️Join us on Discord here: discord.gg/sapient
1 reply · 5 reposts · 36 likes · 4.4K views
Guan Wang @makingAGI
Thanks to @arcprize for reproducing and verifying the results!

ARC-AGI-1: public 41% pass@2, semi-private 32% pass@2
ARC-AGI-2: public 4% pass@2, semi-private 2% pass@2

Due to differences in testing environments, some variance in results is expected. In tests run on our own infrastructure, the open-source version of HRM on our GitHub achieves 5.4% pass@2 on ARC-AGI-2. We welcome everyone to run it on your own infra and share your scores~

This is our first submission to the leaderboard, and it's a good starting point. We appreciate everyone's support and feedback on HRM, both before and after our appearance on the ARC leaderboard. All of it encourages and motivates us to improve.

The hierarchical architecture is designed to resolve premature convergence in long-horizon tasks, like master-level Sudoku puzzles that take humans hours to solve. See the comparison with a simple recurrent Transformer. Such a long chain may not be essential for ARC problems, where we used a high-low ratio of only 1/2; larger ratios are often needed for optimal performance on Sudoku.

In the case of ARC-AGI, the success of HRM is a testament to the model's fluid intelligence: its capability to infer and apply abstract rules from independent, flat examples. We are glad a recent blog post found that the outer loop and data augmentation are essential for this ability, and we especially thank @fchollet @GregKamradt @k_schuerholt for pointing this out.

Finally, we are accelerating iteration on the HRM model and continuously pushing its limits, with good progress so far. We also believe the hierarchical architecture is highly effective in many scenarios, so moving forward we will make further targeted updates to the architecture and validate it on more applications. We will also release an FAQ addressing the key questions raised by the community. 🧠 Stay tuned!
17 replies · 30 reposts · 333 likes · 38.4K views
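The pass@2 numbers quoted in the tweet count a task as solved if either of the model's two submitted answers is correct. A minimal illustration of the metric, using toy string answers rather than actual ARC grids:

```python
def pass_at_2(attempts, solutions):
    """Fraction of tasks where at least one of two attempts is correct."""
    solved = sum(
        1 for (a1, a2), sol in zip(attempts, solutions)
        if a1 == sol or a2 == sol
    )
    return solved / len(solutions)

# Toy example: 4 tasks, two attempts each; tasks 1 and 2 are solved
attempts  = [("A", "B"), ("C", "C"), ("X", "Y"), ("D", "E")]
solutions = ["B", "C", "Z", "Q"]

print(pass_at_2(attempts, solutions))  # → 0.5
```

Under this metric, "public 41% pass@2" means that on 41% of the public evaluation tasks at least one of the two predictions matched the hidden answer exactly.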
Guan Wang @makingAGI
@taihongtran Search just finds stuff. HRM learns patterns and reasons. We’re testing if it can work as a search after training - stay tuned.
0 replies · 0 reposts · 2 likes · 1.2K views
Tai Tran @taihongtran
@makingAGI What are the differences compared to neural search algorithms?
1 reply · 0 reposts · 0 likes · 1.5K views
Mithil Vakde @evilmathkid
@makingAGI The writing is a little unclear, could you please clarify which dataset the ARC results are on? Public train, public eval, private eval? Can't find you on the Kaggle leaderboard.
2 replies · 0 reposts · 4 likes · 4.8K views
Axel Darmouni @ADarmouni
@makingAGI Something I need clarified: are the ARC-AGI results on data in the training set? You mention the public eval in the train data, so I suppose you trained on it. But is it also the data you evaluate on? Even if it overfits the training data, the results are super cool!
1 reply · 0 reposts · 5 likes · 2.8K views
Homo Immortalis @Homo_Immortalis
@makingAGI This is exciting Guan, when do you think AI will be able to answer humanity’s unsolvable questions, like reversing aging, or understanding consciousness?
2 replies · 0 reposts · 1 like · 2.4K views
Nihilist @NewAgeNihilism
@makingAGI Great job, really good paper. I'm still wrapping my head around it
1 reply · 0 reposts · 1 like · 3.9K views
Guan Wang @makingAGI
@codeslubber Fair point! We’re inspired by brains, not expert-system hierarchies: we let the hierarchy learn and grow from data, not hand-crafted rules.
1 reply · 0 reposts · 15 likes · 1.5K views
Rob Williams @codeslubber
Will read the paper, but saying that making the code hierarchical counts as using neuroscience is kind of silly, no? I am super interested in how these hierarchies manifest. One of the reasons expert systems failed was that there was never any consensus on how to organize knowledge representation, and encoding everything in a bespoke fashion was not scalable.
1 reply · 0 reposts · 5 likes · 3K views
Guan Wang @makingAGI
@mikeboysen HRM is a way smaller, focused model: A) needs less data, B) answers niche questions without a giant LLM, C) runs cheap.
1 reply · 1 repost · 12 likes · 1.4K views
Guan Wang @makingAGI
@itsAlexGuerrero @grok LLM training = the auto-regressive paradigm. HRM training = an RNN thinker with a built-in “stop” signal (ACT) so it knows when to stop. Different playbooks.
1 reply · 0 reposts · 23 likes · 2.2K views
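The "stop" signal mentioned in the reply refers to Adaptive Computation Time (ACT): the model keeps iterating its recurrent state and learns when to halt. A toy sketch of the general idea follows, with random weights standing in for trained ones; the halting head `w_halt`, the cumulative-sigmoid rule, and the threshold are illustrative assumptions (HRM's actual halting decision is learned, not hand-set like this).

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.5, size=(6, 6))   # recurrent weights (untrained stand-in)
w_halt = rng.normal(size=6)              # hypothetical halting head

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def think(x, max_steps=16, threshold=0.99):
    """Iterate the recurrent state, accumulating a halt probability each
    step; stop once it crosses the threshold (ACT-style stopping)."""
    z = np.zeros_like(x)
    cum_halt = 0.0
    for step in range(1, max_steps + 1):
        z = np.tanh(W @ z + x)
        cum_halt += sigmoid(w_halt @ z)
        if cum_halt >= threshold:
            break
    return z, step

z, steps = think(rng.normal(size=6))
print(steps)  # number of "thinking" steps actually taken
```

The contrast with auto-regressive LLMs is that here compute per input is variable: easy inputs can halt after a step or two, while hard ones use the full budget.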
Alex @itsAlexGuerrero
@makingAGI @grok how does this differ from the traditional way LLMs are trained? What does this mean for the future of AI?
4 replies · 1 repost · 3 likes · 14.8K views
Guan Wang @makingAGI
@smjain Love it! 🎞️ Your TL;DR helps - sharing it.
1 reply · 0 reposts · 3 likes · 2.5K views
Shashank Jain @smjain
@makingAGI Wonderful! Here I created a small animation to summarize the concept as I understood it.
1 reply · 2 reposts · 23 likes · 5.3K views
Guan Wang @makingAGI
@notrajivpoddar Nope. LLMs are awesome but architecture-capped. We’re trying to push past those limits.
0 replies · 0 reposts · 17 likes · 2.6K views
Rajiv Poddar @notrajivpoddar
@makingAGI So can I use an LLM to generate the samples, train it at test time, and get true general intelligence?
3 replies · 0 reposts · 23 likes · 12K views
Guan Wang @makingAGI
@JonathanRoseD No plan yet. But code, checkpoints, and demo data are open - feel free to roll your own 🛠️
0 replies · 0 reposts · 16 likes · 4.3K views
Jonathan Dunlap @JonathanRoseD
@makingAGI Will there be a GGUF model release?
Ann Arbor, MI 🇺🇸 · 1 reply · 0 reposts · 17 likes · 11.4K views
Guan Wang reposted
Sapient Intelligence @Sapient_Int
Our co-founder William Chen is going to share more about the open-sourced Hierarchical Reasoning Model (HRM) at #FortuneAISingapore @FortuneMagazine tomorrow, under the panel theme "Beyond Human: AGI And The Future We’re Building"! We are excited about the practical path towards universally capable reasoning systems that rely on architectures, not scale, to reach real AGI. ⏰16:10-16:40 SGT, July 23, Mainstage
5 replies · 8 reposts · 32 likes · 7.7K views
Guan Wang @makingAGI
@ai_for_success Only ~2 GPU-hours for pro Sudoku. 50~200 for ARC-AGI 😀
10 replies · 7 reposts · 471 likes · 59.5K views
AshutoshShrivastava @ai_for_success
@makingAGI What was the total cost of training? Would also be interested in a full breakdown post.
4 replies · 0 reposts · 110 likes · 38.3K views