Zhe Wang

21 posts

@zhexwang

Frontier AI @GoogleDeepMind. Generative modelling, Robotics and AI4Science.

London, England · Joined December 2010
114 Following · 79 Followers
Pinned Tweet
Zhe Wang @zhexwang
(1/6) Can Autoregressive Models (ARMs) go beyond fixed or random generation orders and learn the optimal order to generate new samples? We believe so! We are excited to present our paper, “Learning-Order Autoregressive Models with Application to Molecular Graph Generation,” at ICML 2025 in Vancouver on July 18th. Feel free to drop by! In our new paper, we introduce Learning-Order Autoregressive Models (LO-ARMs), which are trained to find a consistent, context-dependent generation order on their own. Our model matches and surpasses state-of-the-art results on two major molecular graph generation tasks. Read our paper here: arxiv.org/abs/2503.05979 Joint work with @thjashin, Nicolas Heess, @ArthurGretton, Michalis K. Titsias More in 🧵👇
[image]
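The sampling idea in the pinned thread can be sketched as a toy loop, assuming a hypothetical model that at each step scores both which masked position to fill next and which token to place there. The scores below are random stand-ins, not the paper's actual network or vocabulary:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["A", "B", "C"]  # hypothetical token vocabulary
SEQ_LEN = 5
MASK = None

def toy_model(partial):
    """Stand-in for a trained network: returns the masked slots,
    logits over which slot to fill next, and value logits per slot."""
    masked = [i for i, t in enumerate(partial) if t is MASK]
    pos_logits = rng.normal(size=len(masked))
    val_logits = rng.normal(size=(len(masked), len(VOCAB)))
    return masked, pos_logits, val_logits

def sample_lo_ar():
    seq = [MASK] * SEQ_LEN
    order = []
    while MASK in seq:
        masked, pos_logits, val_logits = toy_model(seq)
        k = int(np.argmax(pos_logits))               # greedy: pick next position
        pos = masked[k]
        seq[pos] = VOCAB[int(np.argmax(val_logits[k]))]  # greedy: pick token value
        order.append(pos)
    return seq, order

seq, order = sample_lo_ar()
print(seq, order)
```

The point of the sketch is only the control flow: unlike a fixed left-to-right AR model, the generation order is itself an output of the model, conditioned on the partial sample so far.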
Zhe Wang retweeted
Eddy Xu @eddybuild
today, we’re open sourcing the largest egocentric dataset in history.
- 10,000 hours
- 2,153 factory workers
- 1,080,000,000 frames
the era of data scaling in robotics is here. (thread)
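The dataset numbers quoted above are internally consistent if one assumes a 30 fps frame rate (an assumption, not stated in the tweet): 10,000 hours of video at 30 frames per second gives exactly the claimed frame count.

```python
# Sanity-check the quoted figures: hours -> seconds -> frames at 30 fps.
hours = 10_000
fps = 30  # assumed frame rate
frames = hours * 3600 * fps
print(frames)  # 1080000000
```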
Zhe Wang retweeted
Zoubin Ghahramani @ZoubinGhahrama1
Google DeepMind has been leading the revolution in Foundation Models for Robotics. Come join us. We are hiring!
Carolina Parada @parada_car88104

It takes a village to build an awesome 🦾 🧠 . We are #hiring! Come transform robotics at Google DeepMind. I've received a lot of requests for applied and engineering roles. Indeed, we are hiring for those too! Robotics engineering: lnkd.in/gDt6CeAq Applied ML SWEs: lnkd.in/gCmY69XQ Research Scientist: lnkd.in/g47zS2Xz

Zhe Wang @zhexwang
@YuchenZhu_ZYC Thanks! Shall we catch up in person if you're also at ICML? Feel free to let me know if you have any questions.
Zhe Wang @zhexwang
(6/6) Finally, can LO-ARMs generate "better" molecules? Yes, they can. On both the QM9 and ZINC250k datasets, LO-ARMs match or exceed state-of-the-art (SOTA) results. Performance was evaluated across key metrics for distribution similarity and drug-likeness.
[image]
Zhe Wang @zhexwang
(5/6) Can LO-ARMs learn consistent and meaningful generation orders on their own? Yes, and the generation order they discover is surprisingly intuitive. On two major molecular generation tasks (QM9 and ZINC250k), our models learned a consistent, two-phase process:
1. Build the skeleton: first, generate the chemical bonds.
2. Infill the atoms: then, add the atoms to the skeleton.
We find that 99% of the new samples follow this generation order. Here’s an example of this in action for generating a ZINC250k molecule. We start with a blank slate where all tokens are masked, and in each step the model adds one new piece, either an atom or a bond, until the molecule is complete.
[2 images]
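The 99% statistic above can be computed from sampled generation orders with a simple check: an order follows the two-phase "skeleton first" pattern exactly when every bond step precedes every atom step. The orders below are made-up illustrations, not real model samples:

```python
def is_bonds_first(order):
    """True if all 'bond' steps precede all 'atom' steps."""
    seen_atom = False
    for kind in order:
        if kind == "atom":
            seen_atom = True
        elif seen_atom:       # a bond appearing after an atom breaks the pattern
            return False
    return True

orders = [
    ["bond", "bond", "atom", "atom"],   # skeleton first
    ["bond", "atom", "bond", "atom"],   # interleaved
    ["bond", "bond", "bond", "atom"],   # skeleton first
]
frac = sum(is_bonds_first(o) for o in orders) / len(orders)
print(f"{frac:.2f}")  # fraction following the two-phase order
```

Run over a large batch of model samples, this fraction is the kind of summary the thread reports.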
Zhe Wang @zhexwang
@ZoubinGhahrama1 @geoffreyhinton Just took the Gatsby courses last year (they are now split into two modules: Approximate Inference and Unsupervised Learning), and the courses are incredibly helpful. I even read your MATLAB code!😁
Zoubin Ghahramani @ZoubinGhahrama1
This is the syllabus of the course @geoffreyhinton and I taught in 1998 at the Gatsby Unit (just after it was founded). Notice anything?
[3 images]
Zhe Wang retweeted
Gatsby Computational Neuroscience Unit
📢 We have an opportunity for students to join our PhD programme in Theoretical Neuroscience and Machine Learning this September. Application deadline is 27 May 2025. Information & how to apply 👉 ucl.ac.uk/gatsby/study-a…
Zhe Wang retweeted
Jiaxin Shi @thjashin
We are hiring a student researcher at Google DeepMind to work on fundamental problems in discrete generative modeling! Examples of our recent work:
- masked diffusion: arxiv.org/abs/2406.04329
- learning-order AR: arxiv.org/abs/2503.05979
If you find this interesting, please send an email to: {jiaxins,mtitsias} AT google DOT com
Karl Tuyls @karl_tuyls
I’m happy to share that I’m starting a new position as Director of AI Research at Meta! Looking forward to advancing the next generation of Autonomous AI Agents with the Llama team. Bringing 25 years of experience in AAMAS to develop impactful open-source action and agent models. Stay tuned!
Zhe Wang retweeted
Thang Luong @lmthang
Excited to share details of AlphaGeometry2 (AG2), part of the system that achieved silver-medal standard at IMO 2024 last July! AG2 has now surpassed the average gold medalist in solving Olympiad geometry problems, achieving a solving rate of 84% on all IMO geometry problems from the last 25 years, compared to 54% previously! The figure shows a notoriously hard problem solved elegantly by AG2 (for the first time, to our knowledge!). Will follow up with more details later in a thread.
Paper: arxiv.org/abs/2502.03544
Nature article: nature.com/articles/d4158…
Authors: Yuri Chervonyi, Trieu H. Trinh, Miroslav Olsak, Xiaomeng Yang, Hoang Nguyen, Marcelo Menegali, Junehyuk Jung, Vikas Verma, Quoc V. Le, Thang Luong
Special thanks to Dawsen Hwang, Edward Lockhart, and Steven Creech for contributions to the development of AlphaGeometry2. We would also like to thank Yifeng Lu, Henryk Michalewski, Ed Chi, David Silver, Pushmeet Kohli, and Demis Hassabis for their thoughtful discussions, help and support.
[image]
Thang Luong @lmthang

Super thrilled to share that our AI has now reached silver-medalist level in Math at #imo2024 (1 point away from 🥇)! Since January, we not only have a much stronger version of #AlphaGeometry, but also an entirely new system called #AlphaProof, capable of solving many more Olympiad problems. This is a large-scale project that I was fortunate to co-lead at @GoogleDeepMind! See our blog & NYT article below! Blog: dpmd.ai/imo-silver NYT: nytimes.com/2024/07/25/sci…

Zhe Wang retweeted
Jiaxin Shi @thjashin
Still a week to submit your work to ICLR 2025 workshop on world models!
Mengyue Yang @Mengyue_Yang_

🚀 Call for Papers: ICLR 2025 Workshop on World Models! 🌍🤖
📅 Submission Deadline: 10th Feb 2025 23:59 AOE
🌐 Website: sites.google.com/view/worldmode…
We invite submissions on understanding, modeling, and scaling #WorldModels—from knowledge extraction to model-based RL, multimodal world models, and their applications in AI, robotics, and scientific discovery. 📩 Join us in shaping the future of AI-driven world modeling! #ICLR2025 #WorldModels #AI #ML #RL
