

Yupei Du

@YupeiDu
Postdoc at Saarland University. LLM reasoning.



Most work on Transformer length generalization assumes a fixed vocabulary. But in real tasks, longer inputs may have new symbols (e.g. more objects in planning). Our new paper introduces C-RASP* to study this and explains the inconsistent performance of Transformers in planning.

I always dreamed of AGI as a wise advisor for humanity. Although LLMs are great for coding & knowledge work, I wouldn’t trust them to give me advice on my career, business strategy, or policy preferences. How can we build AI systems optimized for wisdom? At Mantic we believe the unlock is prediction: predicting world events as accurately as possible, and hill-climbing this single metric. Today we share some recent progress on the Thinking Machines website, having found Tinker a great platform for our RL experiments. TL;DR: We RL-tune gpt-oss-120b to become a better forecaster than any other model. Having good scaffolding is a prerequisite. A fun result: our tuned model + Grok are decorrelated from the other best models, and so are the most indispensable when picking a team.
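The "decorrelated models are the most indispensable" point can be illustrated with a toy sketch. Everything here is synthetic and hypothetical (the forecaster names, noise model, and numbers are made up, not Mantic's actual setup): two forecasters of equal individual skill whose errors share a common noise source gain less from averaging than a pair with independent errors, as measured by the Brier score.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000  # hypothetical binary forecasting questions
truth = rng.integers(0, 2, n)

def make_forecaster(shared_noise, w):
    """Forecast probabilities whose errors mix a shared and a private
    noise source; w controls correlation with the shared source."""
    own = rng.normal(0.0, 1.0, n)
    noise = w * shared_noise + np.sqrt(1.0 - w**2) * own
    return np.clip(0.5 + 0.35 * (2 * truth - 1) + 0.2 * noise, 0.01, 0.99)

shared = rng.normal(0.0, 1.0, n)
model_a = make_forecaster(shared, 0.9)   # correlated pair
model_b = make_forecaster(shared, 0.9)
model_c = make_forecaster(shared, 0.0)   # decorrelated from a and b

def brier(p):
    return float(np.mean((p - truth) ** 2))  # lower is better

# Averaging two decorrelated forecasters cancels more error than
# averaging two correlated ones, even at equal individual skill.
print("a alone:             ", round(brier(model_a), 4))
print("a + b (correlated):  ", round(brier((model_a + model_b) / 2), 4))
print("a + c (decorrelated):", round(brier((model_a + model_c) / 2), 4))
```

Same individual accuracy, but the decorrelated team ensemble scores better, which is why a model like the tuned gpt-oss-120b or Grok earns its team slot even when it is not the single best forecaster.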








mHC puts a lot of effort into training stability. In some respects, stable backprop through depth is similar to stable backprop through time (BPTT) in modern RNNs. Many RNNs can be written as S_{t+1} = Gate @ S_t + f(S_t), similar to mHC's x_{t+1} = H @ x_t + f(x_t). Backprop through both involves cumulative matmuls, whose eigenvalues can explode or vanish. In RNNs, common stable parametrizations of the gate include:
1. Decay gate: a diagonal or scalar gate with values between 0 and 1, used by RetNet and Mamba2.
2. Identity: the same as the original residual connection.
3. Householder matrix: used by DeltaNet (if beta = 2); a type of orthogonal matrix with all singular values equal to 1, so the cumulative matmul stays orthogonal.
mHC uses doubly stochastic matrices, and the cumulative matmuls also yield a doubly stochastic matrix. Interestingly, these design spaces for residual connections and RNNs might be shared and influence each other. A trickier point: stability might not always mean effectiveness.
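The norm-control argument above can be checked numerically. Below is a minimal NumPy sketch (dimensions, depth, and the Sinkhorn construction of doubly stochastic gates are my own illustrative choices, not mHC's actual parametrization): the cumulative product of decay gates contracts, while Householder and doubly stochastic gates keep the spectral norm at exactly 1, so backprop through the product neither explodes nor vanishes along that path.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 8, 200  # hypothetical state dimension and depth / sequence length

def spectral_norm(M):
    return np.linalg.norm(M, 2)  # largest singular value

def cumulative(gates):
    """Product G_T @ ... @ G_1, as in backprop through depth/time."""
    P = np.eye(d)
    for G in gates:
        P = G @ P
    return P

# 1. Decay gate: diagonal with entries in (0, 1) -- the product contracts.
decay = [np.diag(rng.uniform(0.9, 0.999, d)) for _ in range(T)]

# 2. Householder gate: I - 2 v v^T with unit v is orthogonal, so the
#    cumulative product stays orthogonal (spectral norm exactly 1).
def householder():
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.eye(d) - 2.0 * np.outer(v, v)

house = [householder() for _ in range(T)]

# 3. Doubly stochastic gate via Sinkhorn normalization -- products of
#    doubly stochastic matrices are doubly stochastic, and a doubly
#    stochastic matrix has spectral norm 1 (Birkhoff: it is a convex
#    combination of permutations), so the product never explodes.
def sinkhorn(iters=200):
    M = rng.uniform(0.5, 1.5, (d, d))
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)  # normalize rows
        M /= M.sum(axis=0, keepdims=True)  # normalize columns
    return M

stoch = [sinkhorn() for _ in range(T)]

for name, gates in [("decay", decay), ("householder", house),
                    ("doubly stochastic", stoch)]:
    print(f"{name}: ||cumulative product||_2 = "
          f"{spectral_norm(cumulative(gates)):.4f}")
```

The decay product shrinks toward zero (vanishing gradients along that path), while the orthogonal and doubly stochastic products sit at norm 1, which is the stability property the post attributes to mHC's choice of gate.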




Can language models learn implicit reasoning without chain-of-thought? Our new paper shows: Yes, LMs can learn k-hop reasoning; however, it comes at the cost of an exponential increase in training data and linear growth in model depth as k increases. arxiv.org/pdf/2505.17923
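One intuition for the exponential data cost can be sketched with a toy k-hop construction (my own illustration, not the paper's experimental setup; entity and relation counts are arbitrary): with m base relations, the number of distinct k-step compositions a model must cover grows as m**k.

```python
import itertools
import random

rng = random.Random(0)

# Hypothetical k-hop world: 50 entities and 4 base relations, each a
# random function from entities to entities (like "mother_of", ...).
entities = range(50)
relations = {f"r{i}": {e: rng.randrange(50) for e in entities}
             for i in range(4)}

def k_hop_answer(x, chain):
    # A k-hop query composes k relations: chain = ("r0", "r3")
    # asks for r3(r0(x)), without any intermediate chain-of-thought.
    for r in chain:
        x = relations[r][x]
    return x

# The space of distinct k-hop question types grows as m**k (m = 4 here),
# one intuition for why implicit k-hop reasoning demands exponentially
# more training data as k increases.
for k in range(1, 6):
    n_chains = len(list(itertools.product(relations, repeat=k)))
    print(f"k={k}: {n_chains} distinct relation chains")
```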


