Jakub Macina
@dmacjam
AI/ML Scientist, mountain biker
Zurich · Joined June 2013
571 Following · 230 Followers
118 posts
Jakub Macina @dmacjam
Altogether, this allows us to train smaller LLMs for tutoring that match or surpass the performance of larger specialized tutoring models while navigating a trade-off between leaking and student solve rate.
Jakub Macina @dmacjam
🔍Thinking assistants instead of homework solvers. Most LLMs are helpful at the turn-level but lack planning for long-term student learning. How can we make LLMs more collaborative and better at tutoring? #EMNLP2025
Jakub Macina @dmacjam
AI alignment for tutoring🎓 We use full online RL with conversation-level rewards—not just single-turn signals like DPO. Did the student actually learn by the end? Using GRPO, the model learns real teaching strategies like when to hint or when to correct. Explore models below⤵️
Rohan Paul @rohanpaul_ai

This paper introduces an online reinforcement learning framework using simulated student-tutor interactions. It trains LLMs to prioritize guiding students pedagogically instead of simply revealing solutions, aligning models with better teaching methods. This helps students learn how to solve problems independently.

Methods 🔧:
→ The online reinforcement learning method trains the tutor model directly on conversations simulated with a separate student LLM.
→ A custom reward function scores full conversations on two objectives: increasing the student's success rate after the dialog and ensuring the tutor follows good pedagogical principles.
→ This reward system penalizes the tutor for leaking solutions, promoting guided problem-solving.
→ The framework uses LLM judges to evaluate pedagogical quality.
→ Controllable reward weighting balances these objectives, enabling navigation of the trade-off between student solving gains and pedagogical support.
→ Thinking tags are included to enhance the tutor model's interpretability and instructional planning.

📌 Online reinforcement learning on model rollouts trains directly on interactive teaching, avoiding the limitations of static data.
📌 The reward function's lambda explicitly controls the crucial pedagogy vs. student-success trade-off.
📌 Preserved scores on reasoning benchmarks demonstrate RL's superior transferability compared to supervised fine-tuning baselines.

Paper: arxiv.org/abs/2505.15607
Paper title: "From Problem-Solving to Teaching Problem-Solving: Aligning LLMs with Pedagogy using Reinforcement Learning"

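The weighted, conversation-level reward described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the function name, the linear lambda-weighted combination, and the fixed leak penalty are all assumptions based on the thread's description of the two objectives and the leak/solve-rate trade-off.

```python
def conversation_reward(student_solved_after: bool,
                        pedagogy_score: float,
                        leaked_solution: bool,
                        lam: float = 0.5) -> float:
    """Score a full tutoring conversation (illustrative sketch).

    student_solved_after: did the student solve the problem post-dialog?
    pedagogy_score: judged pedagogical quality in [0, 1], e.g. from an LLM judge.
    leaked_solution: did the tutor reveal the answer outright?
    lam: weighting that trades student solving gains vs. pedagogical support.
    """
    success = 1.0 if student_solved_after else 0.0
    # Lambda-weighted combination of the two conversation-level objectives.
    reward = lam * success + (1.0 - lam) * pedagogy_score
    if leaked_solution:
        reward -= 1.0  # penalize solution leaking to promote guided problem-solving
    return reward
```

Sweeping `lam` from 0 to 1 would trace the pedagogy-versus-solve-rate trade-off the thread refers to.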
Jakub Macina @dmacjam
🚀 How well can LLMs teach? Evaluating LLMs for education is key to making real progress, yet we lack a reliable and simple benchmark. Introducing MathTutorBench, an open-source benchmark designed to assess holistic tutoring capabilities in AI.
Jakub Macina @dmacjam
🤔 More knowledge ≠ better teaching? Subject expertise does not always correlate with effective teaching; instead, pedagogy and subject knowledge may present a trade-off.
Jakub Macina @dmacjam
🎯 How do we measure teaching quality? We train a reward model that scores open-ended teacher responses and accurately distinguishes expert-level from novice teaching.
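A reward model that separates expert from novice teaching is typically trained with a pairwise preference objective. The sketch below uses the standard Bradley-Terry loss as an assumed stand-in; the function names are illustrative and the actual training setup in the paper may differ.

```python
import math

def pairwise_loss(score_expert: float, score_novice: float) -> float:
    """Negative log-likelihood that the expert response is ranked above the
    novice one, given scalar scores from the reward model (Bradley-Terry form).
    """
    margin = score_expert - score_novice
    # Sigmoid of the score margin = probability the expert response wins.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Minimizing this loss pushes the reward model to assign higher scores to expert-level teacher responses than to novice ones; the larger the score margin, the smaller the loss.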