Sai Bi
@Sai__Bi
113 posts
Research Scientist @ Adobe Research
San Jose, CA · Joined October 2011
500 Following · 431 Followers
Sai Bi reposted
Tri Dao@tri_dao·
This is what we've been cooking for the last 9 months: making MoE training ~2x faster with ~2x less memory! Highlights:
- MoE typically takes the most time and memory in modern models. It turns out one can mathematically rewrite the MoE backward pass to reduce the activation memory you need to store in the forward by ~2x, yielding the same gradients with no extra matmul recomputation. I really like this result, as it combines both algorithmic and systems insights.
- Analyzing bottlenecks in the MoE layer leads to a natural optimization strategy: reduce memory reads/writes as much as possible! Gathering the input for the forward pass and the output gradient for the backward pass can sometimes take as much time as the grouped GEMMs. We fuse the gather with the grouped GEMM and overlap memory access with compute to make the whole layer ~2x faster.
- Computing top-k for expert routing can take surprisingly long, ~15-20% of the whole MoE layer! The standard top-k implementation uses a radix top-k algorithm, which is great for large k but suboptimal for small k. We rewrote top-k using a bitonic top-k algorithm, and it's sometimes 20-30x faster than PyTorch's top-k!
All the main kernels are written in CuTe-DSL so they should be easy to extend (and install :D). Hopper kernels are out, Blackwell kernels are just about ready. MoE models used to be 2x less hardware-efficient to train; hopefully SonicMoE will change that.
Wentao Guo@WentaoGuo7

🚀SonicMoE🚀: a blazingly-fast MoE implementation optimized for NVIDIA Hopper GPUs. SonicMoE reduces activation memory by 45% and is 1.86x faster on H100 than previous SOTA😃 Paper: arxiv.org/abs/2512.14080 Work with @MayankMish98, @XinleC295, @istoica05, @tri_dao

30 replies · 164 reposts · 1.5K likes · 157.7K views
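As a rough illustration of the routing-and-gather pattern the thread describes (not SonicMoE's actual CuTe-DSL kernels), here is a plain PyTorch sketch; the per-expert loop stands in for the grouped GEMM, and the explicit gather is exactly the memory traffic SonicMoE fuses away. All shapes and names are illustrative.

import torch
import torch.nn.functional as F

def moe_route_and_gather(x, router_w, expert_w1, k=2):
    # x: (tokens, d_model); router_w: (d_model, num_experts)
    # expert_w1: (num_experts, d_model, d_ff)
    probs = F.softmax(x @ router_w, dim=-1)
    topk_p, topk_e = probs.topk(k, dim=-1)   # small-k top-k: the step the thread
                                             # says can eat ~15-20% of the layer
    # Sort (token, expert) pairs by expert so each expert's tokens are
    # contiguous. Materializing `gathered` is the extra read/write pass
    # that SonicMoE instead fuses into the grouped GEMM.
    flat_e = topk_e.reshape(-1)
    order = flat_e.argsort()
    tok_idx = torch.arange(x.shape[0], device=x.device).repeat_interleave(k)[order]
    gathered = x[tok_idx]
    out = torch.empty(gathered.shape[0], expert_w1.shape[-1], device=x.device)
    counts = torch.bincount(flat_e[order], minlength=expert_w1.shape[0])
    start = 0
    for e, n in enumerate(counts.tolist()):  # stand-in for one grouped GEMM
        if n:
            out[start:start + n] = gathered[start:start + n] @ expert_w1[e]
        start += n
    return out, topk_p, tok_idx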
Sai Bi reposted
Hanwen Jiang@hanwenjiang1·
(1/N) Will this be the BERT/GPT moment for 3D vision? Finally, unsupervised pre-training for 3D works. Led by @qitao_zhao, we present E-RayZer — a fully self-supervised 3D reconstruction model that:
🔥 Matches or surpasses supervised methods like VGGT
👀 Learns transferable 3D representations, outperforming CroCo, VideoMAE, and DINO
📈 Scales with more unlabeled data
A new recipe for scalable 3D foundation models.
4 replies · 86 reposts · 393 likes · 57.1K views
Sai Bi reposted
Ying Sheng@ying11231·
We've been running @radixark for a few months, started by many core developers in SGLang @lmsysorg and its extended ecosystem (slime @slime_framework, AReaL @jxwuyi).

I left @xai in August — a place where I built deep emotions and countless beautiful memories. It was the best place I've ever worked, the place I watched grow from a few dozen people to hundreds, and it truly felt like home. What pushed me to make such a hard decision is the momentum of building SGLang open source and the mission of creating an ambitious future, within the open spirit that I learnt from my first job at @databricks after my PhD.

We started SGLang in the summer of 2023 and made it public in January 2024. Over the past two years, hundreds of people have made great efforts to get it where it is today. We experienced several waves of growth after its first release. I still remember the many dark nights in the summer of 2024 that I spent debugging with @lm_zheng, @lsyincs, and @zhyncs42, while @ispobaoke single-handedly took on DeepSeek inference optimizations and @GenAI_is_real and the community strike team tag-teamed on-call shifts non-stop. There are so many more who have joined that I'm out of space to call out, but they're recorded on the GitHub contributor list forever.

The demands grow exponentially, and we have been pushed to make this a dedicated effort supported by RadixArk. It's the step-by-step journey of a thousand miles that has carried us here today, and the same relentless Long March that will lead us into the tens of thousands of miles yet to come. The story never stops growing.

Over the past year, we've seen something very clear: the world is full of people eager to build AI, but the infrastructure that makes it possible is not shared. The most advanced inference and training stacks live inside a few companies. Everyone else is forced to rebuild the same schedulers, compilers, serving engines, and training pipelines again and again — often under enormous pressure, with lots of duplicated effort and wasted insight.

RadixArk was born to change that. Today, we're building an infrastructure-first, deep-tech company with a simple and ambitious mission: "Make frontier-level AI infrastructure open and accessible to everyone." If the two values below resonate with you, come talk to us:
(1) Engineering as an art. Infrastructure is a first-class citizen at RadixArk. We care about elegant design and code that lasts. Beneath every line of code lies the soul of the engineer who wrote it.
(2) A belief in openness. We share what we build. We bet on long-term compounding through community, contribution, and giving more than we take. A product is defined by its users, yet it truly comes alive the moment functionality transcends mere utility and begins to embody aesthetics.

Thanks to all the miles (the name of our first released RL framework; see below). radixark.ai
112 replies · 128 reposts · 1.1K likes · 540.6K views
Sai Bi reposted
Anthea Li@AntheaYLi·
We look at how Evolution Strategies can be effective at improving reasoning under small population sizes and low-rank perturbations:
• How population size, noise scale, step size, and LoRA rank interact
• A trust-region + spectral-norm lens on the stability of rank
• Forward-only evaluation allows for smooth quantization, alleviating the training-inference mismatch in RL
Blog (WIP): antheali.notion.site/eses
6 replies · 33 reposts · 198 likes · 44.5K views
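A minimal sketch of the forward-only ES loop the thread studies, using antithetic sampling and LoRA-style low-rank perturbations on a single weight matrix; `pop`, `rank`, `sigma`, and `lr` are illustrative hyperparameters, not values from the blog.

import torch

def es_step_lowrank(W, fitness_fn, pop=8, rank=4, sigma=0.01, lr=0.1):
    # One ES update: only forward evaluations of fitness_fn, no backprop,
    # which is why forward-only methods sidestep the training-inference
    # mismatch (e.g., they can run directly on a quantized model).
    d_out, d_in = W.shape
    rewards, noises = [], []
    for _ in range(pop):
        # Low-rank perturbation: delta = A @ B has rank <= `rank` (cf. LoRA).
        A = torch.randn(d_out, rank) / rank ** 0.5
        B = torch.randn(rank, d_in)
        delta = A @ B
        # Antithetic pair: evaluate +delta and -delta to cut variance.
        rewards.append(float(fitness_fn(W + sigma * delta)) -
                       float(fitness_fn(W - sigma * delta)))
        noises.append(delta)
    r = torch.tensor(rewards)
    r = (r - r.mean()) / (r.std() + 1e-8)  # normalize across the population
    grad_est = sum(ri * ni for ri, ni in zip(r, noises)) / (2 * pop * sigma)
    return W + lr * grad_est  # ascend the estimated reward gradient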
Sai Bi reposted
Bowei Chen@bowei_chen_19·
We found that visual foundation encoders can be aligned to serve as tokenizers for latent diffusion models in image generation! Our new paper introduces a tokenizer training paradigm that produces a semantically rich latent space, improving diffusion model performance🚀🚀.
7 replies · 71 reposts · 522 likes · 80.7K views
Sai Bi reposted
Percy Liang@percyliang·
Wrapped up Stanford CS336 (Language Models from Scratch), taught with an amazing team @tatsu_hashimoto @marcelroed @neilbband @rckpudi. Researchers are becoming detached from the technical details of how LMs work. In CS336, we try to fix that by having students build everything:
46 replies · 570 reposts · 4.9K likes · 677.5K views
Sai Bi@Sai__Bi·
I am going to give a talk on scalable 3D reconstruction today at 10:55am at the 3D-LLM/VLA workshop at CVPR, in Room 106A. You're welcome to attend! 3d-llm-vla.github.io
1 reply · 0 reposts · 28 likes · 1.5K views
Sai Bi reposted
Zhao Dong@flycooler_zd·
🚀 Excited to announce our CVPR 2025 Workshop: 3D Digital Twin: Progress, Challenges, and Future Directions
🗓 June 12, 2025 · 9:00 AM–5:00 PM
📢 Incredible lineup: @rapideRobot, Andrea Vedaldi @Oxford_VGG, @richardzhangsfu, @QianqianWang5, Dr. Xiaoshuai Zhang @Hillbot_AI, @xiaolonw, @shuangz, @MilosHasan, James Fort @meta_aria.
🔗 Details 👉 projectaria.com/events/CVPR202…
Join us to explore photorealistic, functional & physically-accurate #3DdigitalTwins for Spatial & Contextual AI, Robotics, AR/VR, and Digital Content Creation! #CVPR2025 #3DdigitalTwins #3DVision #ContextualAI #SpatialAI #Robotics #Humanoidrobots #Metaverse #Gen3d
2 replies · 23 reposts · 58 likes · 14.1K views
Sai Bi reposted
Tianyuan Zhang@tianyuanzhang99·
Bored of linear recurrent memories (e.g., linear attention) and want a scalable, nonlinear alternative? Our new paper "Test-Time Training Done Right" proposes LaCT (Large Chunk Test-Time Training) — a highly efficient, massively scalable nonlinear memory with:
💡 Pure PyTorch (no custom kernels)
🚀 10× the GPU FLOPs utilization of previous nonlinear test-time training (TTT) methods
🧠 Huge memory size (up to 40% of model params)
Project page with code: tianyuanzhang.com/projects/ttt-d… (videos generated with our AR video diffusion) 1/9
7 replies · 80 reposts · 428 likes · 101.2K views
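A toy sketch of the large-chunk idea, assuming a simple reconstruction-style write objective (the paper's actual update rule, architecture, and chunk sizes differ): updating a nonlinear fast-weight memory once per large chunk replaces many tiny per-token updates with a few big, GPU-friendly matmuls, which is where the FLOPs-utilization win comes from.

import torch

def lact_forward(tokens, W_fast, chunk=4096, lr=1e-2):
    # tokens: (T, d); W_fast: (d, d) nonlinear fast-weight memory.
    outs = []
    for s in range(0, tokens.shape[0], chunk):
        blk = tokens[s:s + chunk]
        outs.append(torch.tanh(blk @ W_fast))  # read from the memory
        # Write: one gradient step on a self-supervised loss over the whole
        # chunk -- a single matmul-heavy update instead of per-token steps.
        W = W_fast.detach().requires_grad_(True)
        loss = (torch.tanh(blk @ W) - blk).pow(2).mean()
        (g,) = torch.autograd.grad(loss, W)
        W_fast = (W - lr * g).detach()
    return torch.cat(outs), W_fast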
Sai Bi reposted
Haian Jin@Haian_Jin·
Excited to attend #ICLR2025 in person this year! I'll be presenting two papers:
1. LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias
🔹 Oral Presentation: Session 3C (Garnet 216-218) — Apr 25 (Fri), 11:06–11:18 a.m.
🔹 Poster: Hall 3 + Hall 2B, Poster #593 — Apr 25 (Fri), 3:00–5:30 p.m.
🔹 Website: haian-jin.github.io/projects/LVSM/
2. RelitLRM: Generative Relightable Radiance for Large Reconstruction Models (led by @tianyuanzhang99)
🔹 Poster: Hall 3 + Hall 2B, Poster #531 — Apr 26 (Sat), 3:00–5:30 p.m.
🔹 Website: relit-lrm.github.io
Feel free to drop by — looking forward to chatting with you!
1 reply · 3 reposts · 27 likes · 2.5K views
Sai Bi@Sai__Bi·
I will be attending ICLR in Singapore this week. Feel free to reach out and chat!
0 replies · 0 reposts · 23 likes · 2.7K views
Sai Bi reposted
Sai Bi@Sai__Bi·
The speaker was fully aware of the implications of her words and the damage they would cause. Yet, instead of preventing harm, she chose to inflict it first and then attempt to repair it with some 'nice' words. That’s not acceptable!
Jiao Sun@sunjiao123sun_

Mitigating racial bias from LLMs is a lot easier than removing it from humans! Can’t believe this happened at the best AI conference @NeurIPSConf We have ethical reviews for authors, but missed it for invited speakers? 😡

0 replies · 0 reposts · 20 likes · 1.8K views
Sai Bi reposted
Gene Chou@gene_ch0u·
We've released our paper "Generating 3D-Consistent Videos from Unposed Internet Photos"! Video models like Luma generate pretty videos, but sometimes struggle with 3D consistency. We can do better by scaling them with 3D-aware objectives. 1/N page: genechou.com/kfcw
6 replies · 46 reposts · 227 likes · 41.9K views
Sai Bi reposted
Haian Jin@Haian_Jin·
Novel view synthesis has long been a core challenge in 3D vision. But how much 3D inductive bias is truly needed? Surprisingly, very little! Introducing "LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias" — a fully transformer-based approach that enables scalable, generalizable, and fully data-driven novel view synthesis from sparse posed inputs. 🧵(1/6) Project Page: haian-jin.github.io/projects/LVSM/
22 replies · 93 reposts · 575 likes · 114.6K views
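A rough sketch of what "minimal 3D inductive bias" can look like in code: no epipolar geometry, cost volumes, or rendering modules, just a transformer over tokens, with camera pose entering only through per-patch ray embeddings. The Plücker-style 6-D ray conditioning is my assumption of the usual recipe, and all dimensions are illustrative.

import torch
import torch.nn as nn

class TinyLVSMStyle(nn.Module):
    def __init__(self, d=256, patch=8, layers=4):
        super().__init__()
        self.embed_src = nn.Linear(3 * patch * patch + 6, d)  # RGB patch + ray
        self.embed_tgt = nn.Linear(6, d)                      # target rays only
        block = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(block, layers)
        self.to_rgb = nn.Linear(d, 3 * patch * patch)

    def forward(self, src_patches, src_rays, tgt_rays):
        # src_patches: (B, N, 3*p*p); src_rays: (B, N, 6); tgt_rays: (B, M, 6)
        src = self.embed_src(torch.cat([src_patches, src_rays], dim=-1))
        tgt = self.embed_tgt(tgt_rays)
        h = self.backbone(torch.cat([src, tgt], dim=1))  # plain self-attention
        return self.to_rgb(h[:, -tgt.shape[1]:])         # decode target patches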