Langlin Huang
@shrangoh
15 posts

LLM reasoning, LLM acceleration. 2nd-year Ph.D. student at Washington University in St. Louis, luckily advised by Professor Jiaxin Huang.

St. Louis · Joined June 2023
80 Following · 35 Followers
Pinned Tweet
Langlin Huang @shrangoh ·
Can adding pure nonsense to a prompt make an LLM reason better? YES! 🤯 Introducing LoPE (Lorem Perturbation for Exploration). We found that prepending meaningless pseudo-Latin placeholder text (Lorem Ipsum) acts like magic—helping GRPO escape the "zero-advantage" trap on hard math questions! Paper page📄:huggingface.co/papers/2605.05… Broadens LLM exploration: increases resample success rate by ~3x📈 Strengthens LLM reasoning: improves average math reasoning by up to 13%📊 Here’s how it works👇
[image attached]
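The mechanism described in the tweet can be sketched as follows. This is a minimal illustration, not the paper's implementation: the filler-word pool, prefix length, and insertion position are all assumptions.

```python
import random
from typing import Optional

# Pool of Lorem Ipsum-style pseudo-Latin filler (assumption: the paper's
# exact placeholder text and prefix length may differ).
LOREM_WORDS = (
    "lorem ipsum dolor sit amet consectetur adipiscing elit "
    "sed do eiusmod tempor incididunt ut labore et dolore"
).split()

def perturb_prompt(question: str, n_words: int = 12,
                   seed: Optional[int] = None) -> str:
    """Prepend a random pseudo-Latin sequence to the prompt.

    Giving each GRPO rollout a different meaningless prefix diversifies
    the sampled reasoning paths, which is what helps escape the
    zero-advantage trap on questions where all unperturbed rollouts fail.
    """
    rng = random.Random(seed)
    prefix = " ".join(rng.choice(LOREM_WORDS) for _ in range(n_words))
    return f"{prefix}\n\n{question}"

# During GRPO sampling, each rollout in a group gets a fresh perturbation:
prompts = [perturb_prompt("What is 17 * 24?", seed=i) for i in range(4)]
```

Each perturbed prompt ends with the original question unchanged, so the reward for a rollout is still computed against the same ground-truth answer.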
Langlin Huang @shrangoh ·
Summary and Takeaways📝: ✨Prompt-space perturbation is a shockingly simple yet effective way to broaden LLM reasoning exploration. ✨A Lorem Ipsum-like random sequence is an ideal perturbation: enough to diversify exploration while preserving response quality.
Langlin Huang @shrangoh ·
(6/n) 🧪PosS generates draft tokens that stay accurate even at large positions, achieving the best speed-up ratio at larger draft lengths. 🌟This means the benefit of drafting deeper outweighs the additional computational cost. The table shows results with Llama-2-13B-chat as the target model.
[table attached]
Langlin Huang @shrangoh ·
New Research Released! 🚀PosS: Position Specialist Generates Better Draft for Speculative Decoding Is your LLM fast enough? PosS consistently improves over current speculative decoding methods by using position-specialized draft layers to generate high-quality drafts! 🔖Paper: arxiv.org/pdf/2506.03566 🧰Code: github.com/shrango/PosS
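For context, the draft-then-verify loop that speculative decoding methods like PosS build on can be sketched as a toy greedy version. The function names and greedy verification rule below are illustrative, not taken from the PosS codebase:

```python
from typing import Callable, List

def speculative_decode(
    draft_next: Callable[[List[int]], int],   # cheap draft model: next token id
    target_next: Callable[[List[int]], int],  # expensive target model: next token id
    prompt: List[int],
    draft_len: int = 4,
    max_new: int = 16,
) -> List[int]:
    """Greedy speculative decoding: draft `draft_len` tokens cheaply, then
    verify them with the target model, keeping the longest matching prefix
    plus one corrected token. Drafting deeper amortizes more target calls,
    but only if draft accuracy stays high at later positions -- the failure
    mode that PosS's position-specialized draft layers target.
    """
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        # 1) Draft a short continuation autoregressively with the cheap model.
        draft = []
        for _ in range(draft_len):
            draft.append(draft_next(out + draft))
        # 2) Verify each drafted position against the target model.
        n_accept, corrected = 0, None
        for i in range(draft_len):
            t = target_next(out + draft[:i])
            if t == draft[i]:
                n_accept += 1
            else:
                corrected = t  # target's token replaces the first rejection
                break
        out.extend(draft[:n_accept])
        if corrected is not None:
            out.append(corrected)
    return out[: len(prompt) + max_new]
```

Even when every drafted token is rejected, the loop still emits the target's corrected token, so output quality matches plain target decoding; acceptance rate only affects speed.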
Langlin Huang @shrangoh ·
Our PosS: 🎓Position Specialists (PosS) use multiple specialized draft layers, each handling a specific set of draft positions. Think of it as different experts handling different levels of feature deviation! We can flexibly balance efficiency and accuracy by configuring k positions per specialist (PosS-k).
[image attached]
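The "k positions per specialist" routing can be sketched schematically. This is a guess at the bucketing implied by the PosS-k naming (contiguous blocks of k positions per specialist, overflow to the last one), not the actual model code:

```python
def specialist_index(position: int, k: int, num_specialists: int) -> int:
    """Map a 0-based draft position to the specialist draft layer that
    handles it: specialist 0 covers positions 0..k-1, specialist 1 covers
    k..2k-1, and so on. Positions past the last bucket fall to the final
    specialist (an assumption about overflow handling).
    """
    return min(position // k, num_specialists - 1)

# With PosS-2 and 3 specialists, draft positions map as:
# pos 0,1 -> specialist 0; pos 2,3 -> specialist 1; pos 4,5 -> specialist 2
assignment = [specialist_index(p, k=2, num_specialists=3) for p in range(6)]
```

Smaller k means each specialist sees a narrower band of feature deviation (higher accuracy, more layers); larger k trades accuracy for fewer parameters, which is the efficiency/accuracy knob the tweet describes.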
Langlin Huang @shrangoh ·
I'm giving the first oral talk at #NAACL2025! 🗓 April 30, 11:00 AM 📍 Ruidoso 🎙 MoCE: Adaptive Mixture of Contextualization Experts for Byte-based Neural Machine Translation We propose a simple, transferable module for long-sequence modeling. Come say hi—excited to chat with you!