Aashu Singh
@iam_aashusingh
341 posts

ML Engg @Facebook Alum @GeorgiaTech

Joined April 2010
577 Following · 96 Followers
Aashu Singh retweeted
Jonathan Whitaker @johnowhitaker
New video, starting to look at Diffusion Language Models. This one introduces some ideas, then shows how I turn ModernBERT into a LLaDA-style generative model. Lots of avenues to explore from here! Join me in playing with this? Project ideas in thread :) youtube.com/watch?v=Ds_cTc…
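The LLaDA-style generation Jonathan describes swaps autoregressive decoding for iterative unmasking: start from a fully masked sequence and commit the model's most confident fills each step. A minimal sketch of that loop, assuming a stub `dummy_predict` in place of a real masked LM like ModernBERT (the names and the confidence-based schedule are illustrative assumptions, not the video's exact recipe):

```python
import random

MASK = "<mask>"

def dummy_predict(tokens):
    # Stand-in for a masked LM: returns a (token, confidence) pair for every
    # position. A real model would return logits over a vocabulary.
    return [("the", random.random()) if t == MASK else (t, 1.0) for t in tokens]

def llada_style_decode(length, steps):
    """LLaDA-style generation sketch: begin fully masked, then at each step
    let the model propose a fill for every masked slot and keep only the
    most confident predictions, leaving the rest masked for the next step."""
    tokens = [MASK] * length
    per_step = max(1, length // steps)
    while MASK in tokens:
        preds = dummy_predict(tokens)
        # Rank still-masked positions by the model's confidence.
        masked = [(conf, i, tok) for i, (tok, conf) in enumerate(preds)
                  if tokens[i] == MASK]
        masked.sort(reverse=True)
        for _, i, tok in masked[:per_step]:
            tokens[i] = tok  # commit the most confident fills
    return tokens

out = llada_style_decode(length=8, steps=4)
```

The confidence-based remasking schedule is one of several choices here; random remasking is another common baseline.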
Aashu Singh retweeted
Logan Kilpatrick @OfficialLoganK
A deep conversation with @SavinovNikolay, the Gemini long context pre-training co-lead… We go from the basics to what is needed to scale to infinite context to long context best practices for devs:
Aashu Singh @iam_aashusingh
Thrilled to share our new paper: MetaQueries! We've created a novel approach that bridges MM-LLMs and diffusion models using learnable queries. The method enables knowledge-augmented image generation while preserving SOTA understanding capabilities.
Xichen Pan@xichen_pan

We find training unified multimodal understanding and generation models is so easy, you do not need to tune MLLMs at all. MLLM's knowledge/reasoning/in-context learning can be transferred from multimodal understanding (text output) to generation (pixel output) even when it is FROZEN!

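The learnable-queries bridge can be pictured as a small set of trainable vectors that cross-attend to the frozen MLLM's hidden states, producing a fixed-size set of conditioning tokens for the diffusion decoder. A toy single-head sketch (the shapes, function names, and single-head simplification are my assumptions, not the paper's exact architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def learnable_query_bridge(mllm_hidden, queries):
    """Sketch of the MetaQueries idea: a small set of learnable query
    vectors cross-attends to the FROZEN MLLM's hidden states, yielding
    fixed-size conditioning for a diffusion decoder. Only the queries
    (and the decoder) would receive gradients during training."""
    attn = softmax(queries @ mllm_hidden.T / np.sqrt(queries.shape[-1]))
    return attn @ mllm_hidden   # (num_queries, d) conditioning tokens

T, d, n_q = 12, 8, 4
rng = np.random.default_rng(0)
hidden = rng.standard_normal((T, d))      # frozen MLLM states (stub)
queries = rng.standard_normal((n_q, d))   # learnable parameters
cond = learnable_query_bridge(hidden, queries)
```

Because the MLLM stays frozen, its understanding capabilities are untouched while the queries learn what to extract for generation.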
Aashu Singh retweeted
Russ Salakhutdinov @rsalakhu
Llama4 models are out! Open sourced! Check them out: “Native multimodality, mixture-of-experts models, super long context windows, step changes in performance, and unparalleled efficiency. All in easy-to-deploy sizes custom fit for how you want to use it” llama.com
Aashu Singh retweeted
Steven Feng @stevenyfeng
We are bringing back Stanford’s CS 25 Transformers Course (cs25.stanford.edu) today! It’s open to everybody! This is one of @Stanford's hottest seminar courses. We open the course through Zoom to the public. Lectures start today (Tuesdays), 3-4:20pm PDT, at stanford.zoom.us/j/91661468474?…. Talks will be recorded and released ~2 weeks afterward.

Each week, we invite folks at the forefront of Transformers research to discuss the latest breakthroughs, from LLM architectures like GPT and Gemini to creative use cases in generating art (e.g. DALL-E and Sora), biology and neuroscience applications, robotics, and so forth! Past speakers have included folks from @OpenAI, @GoogleDeepMind, @nvidia, @Meta, @AnthropicAI, etc. such as @karpathy, @geoffreyhinton, @DrJimFan, @ashVaswani, @_jasonwei, @hwchung27, @xiao_ted, @janleike, @YejinChoinka, @douwekiela, and many more! [Attached photos with some of them😎]

Our class has an incredibly popular reception within and outside Stanford, with over a million total views of our recordings [web.stanford.edu/class/cs25/rec…] on YouTube. Our class with @karpathy was the second most popular YouTube video [youtube.com/watch?v=XfpMkf…] uploaded by Stanford in 2023, with over 750k views!

Also, livestreaming and auditing are available to all. Feel free to audit in person or by joining the Zoom livestream. We also have a Discord server [discord.gg/2vE7gbsjzA] (over 5000 members) used for Transformers discussion. We open it to the public as more of a "Transformers community". Feel free to join and chat with hundreds of others about Transformers!

Thanks to my co-instructors @DivGarg9 @_KaranPS_ @boson2photon Jenny Duan and the course's faculty advisor @chrmanning! More details: cs25.stanford.edu @StanfordAILab @stanfordnlp @StanfordHAI @agihouse_org

#AI #ArtificialIntelligence #ML #DeepLearning #NLP #NLProc #Transformers #Stanford #Education #Innovation #TechEd #Community #naturallanguageprocessing
Aashu Singh retweeted
Sean Welleck @wellecks
Lecture 15: Quantization (Guest lecture by @Tim_Dettmers) youtu.be/YXZZaje76r4
- Quantization basics
- Quantized foundation models: LLM.int8()
- Finetuning foundation models: QLoRA
- Quantization and users
Sean Welleck@wellecks

Excited to teach Advanced NLP at CMU this semester! Slides are on the course page as the course proceeds: cmu-l3.github.io/anlp-spring202… Lectures will be uploaded to YouTube: youtube.com/playlist?list=…

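The LLM.int8() item in the lecture outline builds on plain absmax int8 quantization: scale a tensor so its largest magnitude maps to 127, round to int8, and keep the scale for dequantization. A minimal sketch of that base scheme (LLM.int8() itself adds mixed-precision handling of outlier features, omitted here):

```python
import numpy as np

def absmax_quantize(x):
    """Absmax int8 quantization: scale by 127 / max|x|, round to int8,
    and return the scale needed to approximately recover the input."""
    scale = 127.0 / np.max(np.abs(x))
    q = np.round(x * scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Map int8 codes back to float32; error is bounded by ~0.5 / scale.
    return q.astype(np.float32) / scale

x = np.array([0.1, -0.5, 1.2, -2.0], dtype=np.float32)
q, s = absmax_quantize(x)
x_hat = dequantize(q, s)
```

The weakness this exposes is exactly what motivates outlier handling: one large value stretches the scale and coarsens the grid for everything else.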
Aashu Singh retweeted
Xin Eric Wang @xwang_lk
Since launching Agent S2, many folks working on GUI/computer-use agents asked for our tech report. Here we go! 🎉

New SOTA on 3 major computer use benchmarks.
• OSWorld (15 steps): 27.0% 🚀 (+18.9%)
• OSWorld (50 steps): 34.5% 🚀 (+32.7%)
• WindowsAgentArena: 29.8% 🚀 (+52.8%)
• AndroidWorld: 54.3% 🚀 (+16.5%)

We strive for simple solutions that work best. Agent S focused on Memory; S2 crushes Grounding & Planning. Bigger things ahead—stay tuned!
Simular@SimularAI

Two weeks ago, we open-sourced Agent S2 — and the response has been amazing. 🙌

Today, we’re excited to share the technical paper that dives into our agent design and key innovations. Agent S2 blends generalist reasoning with specialist grounding for precise, long-horizon computer use tasks:
⚙️ Mixture-of-Grounding
🧠 Proactive Hierarchical Planning
📈 SOTA on OSWorld, AndroidWorld (✨new), and WindowsAgentArena (✨new)

👉 Tech blog: simular.ai/articles/agent…
👉 Paper: arxiv.org/abs/2410.08164

Aashu Singh retweeted
Jason Weston @jaseweston
🚨Multi-Token Attention🚨
📝: arxiv.org/abs/2504.00927

Attention is critical for LLMs, but its weights are computed by single query & key vectors, limiting capability. MTA combines query, key & head operations over multiple tokens, improving performance in terms of PPL, std benchmarks, and long-range tasks.

NOTE: this isn't an April Fool, this is a real paper🏛️👩‍⚖️💯
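The core idea — attention weights shaped by more than one query-key pair — can be sketched as a small convolution over the raw score map before the softmax. This toy version uses a single fixed averaging kernel and no causal mask; the actual MTA method learns per-head kernels and also mixes information across heads:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_token_attention(Q, K, V, kernel):
    """Sketch of the MTA idea: convolve the (query, key) score map with a
    small 2D filter so neighboring tokens jointly shape each attention
    weight, instead of each weight depending on a single pair."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)            # (Tq, Tk) single-pair scores
    kh, kw = kernel.shape
    padded = np.pad(scores, ((kh // 2,) * 2, (kw // 2,) * 2))
    mixed = np.zeros_like(scores)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            mixed[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return softmax(mixed) @ V

T, d = 5, 4
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))
kernel = np.full((3, 3), 1 / 9.0)            # toy averaging filter
out = multi_token_attention(Q, K, V, kernel)
```

With the identity kernel (1 at the center, 0 elsewhere) this reduces to standard scaled dot-product attention, which is a useful sanity check.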
Aashu Singh @iam_aashusingh
Interesting paper: Video-R1 improves temporal reasoning in MM LLMs using T-GRPO, a variant of GRPO, and high-quality curated data for SFT. Here's a summary: medium.com/@aashus18_1308… Original paper: arxiv.org/abs/2503.21776
Aashu Singh retweeted
Vivek Galatage @vivekgalatage
🎨 Understanding GPU Architecture from Cornell This GPU architecture roadmap is a good starting point for diving deeper, along with the CUDA C++ programming guide PDF - both freely available from Cornell and NVIDIA.
Aashu Singh retweeted
Kevin Patrick Murphy @sirbayes
I read the R1-Zero paper and the method is very simple, just a tweak to PPO to fine-tune the DeepSeek-V3 base model using a verifiable sparse binary reward. The fact that they got it to work even though others failed is likely due to better data and/or their very efficient implementation.
thebes@voooooogel

why did R1's RL suddenly start working, when previous attempts to do similar things failed? theory: we've basically spent the last few years running a massive acausally distributed chain of thought data annotation program on the pretraining dataset.

deepseek's approach with R1 is a pretty obvious method. they are far from the first lab to try "slap a verifier on it and roll out CoTs." but it didn't used to work that well. all of a sudden, though, it did start working. and reproductions of R1, even using slightly different methods, are just working too--it's not some super-finicky method that deepseek lucked out finding. all of a sudden, the basic, obvious techniques are... just working, much better than they used to.

in the last couple of years, chains of thought have been posted all over the internet (LLM outputs leaking into pretraining like this is usually called "pretraining contamination"). and not just CoTs--outputs posted on the internet are usually accompanied by linguistic markers of whether they're correct or not ("holy shit it's right", "LOL wrong"). this isn't just true for easily verifiable problems like math, but also fuzzy ones like writing.

those CoTs in the V3 training set gave GRPO enough of a starting point to start converging, and furthermore, to generalize from verifiable domains to the non-verifiable ones using the bridge established by the pretraining data contamination.

and now, R1's visible chains of thought are going to lead to *another* massive enrichment of human-labeled reasoning on the internet, but on a far larger scale... the next round of base models post-R1 will be *even better* bases for reasoning models.

Aashu Singh retweeted
Nathan Lambert @natolambert
For those trying to understand DeepSeek's Group Relative Policy Optimization (GRPO): GRPO is just PPO without a value function, using Monte Carlo estimates of the advantage. So, study why PPO exists (lots of docs/writing on that) and understand that value functions are tricky with LLMs. Left: PPO, right: GRPO.
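Nathan's one-liner can be made concrete: where PPO would query a learned value function for a baseline, GRPO samples a group of completions for the same prompt and normalizes each completion's reward within that group. A minimal sketch of the advantage computation:

```python
import numpy as np

def grpo_advantages(rewards):
    """GRPO's Monte Carlo advantage estimate: for a group of completions
    sampled from the same prompt, subtract the group-mean reward and
    divide by the group std. This whitened reward replaces PPO's learned
    value-function baseline (eps guards against a zero-variance group)."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)

# e.g. 4 sampled completions for one prompt, scored 0/1 by a verifier
adv = grpo_advantages([1.0, 0.0, 0.0, 1.0])
```

Each completion's advantage is then plugged into the usual clipped policy-gradient objective; dropping the value network is what makes the method cheap at LLM scale.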