Qianchu (Flora) Liu retweeted

🧠Excited to present X-Reasoner — a 7B vision-language model post-trained for reasoning purely on general-domain text, without any images or domain-specific data.
X-Reasoner achieves the state of the art 🏆 on challenging multimodal tasks (e.g., 43.0 on MMMU-Pro) and medical benchmarks (e.g., 45.7 on the NEJM Image Challenge).🧵
Most open-source work on reasoning models focuses on text inputs and general domains. But real-world reasoning often spans multiple modalities (like vision) and specialized domains (like healthcare). We ask:
👉Can reasoning be made generalizable with only text-based post-training?
Key idea → a two-stage recipe:
🔹 SFT on text-only general-domain long CoTs
🔹 RL with verifiable rewards on text-only math Qs
No images, no domain-specific data—just general text.
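
For the curious, here is a minimal sketch of what the Stage-2 "verifiable reward" could look like on text-only math questions, with GRPO-style group-relative advantages. The boxed-answer convention and all function names are illustrative assumptions, not the released X-Reasoner code.

```python
# Minimal sketch, assuming a GRPO-style RL loop with an exact-match math verifier.
# All names here are illustrative, not the paper's implementation.
import re
from statistics import mean, pstdev

def math_reward(completion: str, gold_answer: str) -> float:
    """Verifiable reward: 1.0 if the last \\boxed{...} answer exactly matches the gold answer."""
    matches = re.findall(r"\\boxed\{([^}]*)\}", completion)
    return 1.0 if matches and matches[-1].strip() == gold_answer.strip() else 0.0

def group_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style group-relative advantages: normalize rewards within one sampled group."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0   # avoid division by zero when all rewards are equal
    return [(r - mu) / sigma for r in rewards]

# Toy example: 4 sampled completions for one math question whose gold answer is "42"
completions = [r"... so the answer is \boxed{42}", r"... \boxed{41}",
               r"no boxed answer", r"\boxed{42}"]
rewards = [math_reward(c, "42") for c in completions]
print(rewards)                    # [1.0, 0.0, 0.0, 1.0]
print(group_advantages(rewards))  # [1.0, -1.0, -1.0, 1.0]
```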
This recipe powers X-Reasoner, a 7B-scale vision-language model. Despite being trained only on general-domain text, it:
✅ Transfers to multimodal tasks (e.g., MathVista, MMMU-Pro)
✅ Outperforms 7B SOTA models trained with multimodal supervision
✅ Excels in unseen domains like medicine
🤔Why it works
🔑 Math as an anchor: RL on math yields reasoning chains that generalize better than domain-specific RL alone.
🔑 Forced-exit token prevents “infinite thinking,” boosting reliability.
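
A minimal sketch of what a forced-exit mechanism can look like at decoding time: if the thinking budget runs out before the model closes its reasoning, a closing tag and an answer cue are appended so generation always terminates. The </think> tag, the budget, and the step() interface are assumptions, not the exact X-Reasoner settings.

```python
# Sketch of forced exit from "infinite thinking". step(text) is any callable that
# returns the next token as text; tag names and budget are illustrative assumptions.
def generate_with_forced_exit(step, prompt: str, max_think_tokens: int = 4096) -> str:
    out, n = prompt, 0
    while "</think>" not in out and n < max_think_tokens:
        out += step(out)
        n += 1
    if "</think>" not in out:
        # Budget exhausted: force the model out of the thinking phase and cue the answer.
        out += "</think>\nFinal answer:"
        # ...decoding of the final answer would continue from here with the same step().
    return out

# Toy usage with a stub "model" that never stops thinking on its own
toy_step = lambda text: " hmm"
print(generate_with_forced_exit(toy_step, "<think>", max_think_tokens=5))
# -> "<think> hmm hmm hmm hmm hmm</think>\nFinal answer:"
```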
Ablation ✅: remove every test example that is solvable from the text alone… the gains persist. The model is truly reading the image, not gaming the benchmark.
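
Conceptually, that ablation is just a filter over the evaluation set: answer each question from its text alone and drop everything the text-only pass already gets right. A small sketch; answer_text_only and is_correct are illustrative stand-ins, not the paper's evaluation code.

```python
# Sketch of the ablation filter: keep only examples that a text-only pass cannot solve.
def filter_text_solvable(examples, answer_text_only, is_correct):
    return [ex for ex in examples
            if not is_correct(answer_text_only(ex["question"]), ex["answer"])]

# Toy usage: a "model" that answers "2" to everything
examples = [{"question": "1+1?", "answer": "2"},
            {"question": "What animal is shown?", "answer": "a cat"}]
kept = filter_text_solvable(examples,
                            answer_text_only=lambda q: "2",
                            is_correct=lambda pred, gold: pred == gold)
print([ex["question"] for ex in kept])   # only the image-dependent question remains
```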
🩺We then add a dash of medical text → X-Reasoner-Med. No images needed, just additional text-only MedQA SFT + RL, and we set a new 7B SOTA on MedQA, OmniMedVQA, MMMU-Health, MedXpertQA-MM, and the NEJM Image Challenge.
🔬 TL;DR:
General-domain text-based reasoning is more powerful than we thought.
With X-Reasoner, we show that high-quality reasoning models can be trained without costly multimodal or domain-specific supervision, and still outperform models that rely on it.
📌 Paper: arxiv.org/abs/2505.03981
🔗 Models: github.com/microsoft/x-re… (release soon)
📊 Benchmarks: MMMU, MathVista, MedQA, NEJM, and more
🤖 Model size: 7B
🧑🔬 Authors: @QianchuL, @sheng_zh, @hiaoxui, Timothy Ossowski, Yu Gu, Ying Jin, @sidkiblawi, Sam Preston, Mu Wei, Paul Vozila, @TristanNaumann, and @hoifungpoon, from @MSFTResearch
