

MIT Intro to Deep Learning

@MITDeepLearning
MIT's introductory course on deep learning!



Today, we release our largest LFM2 model: LFM2-24B-A2B 🐘
> 24B total parameters
> 2.3B active per token
> Built on our hybrid, hardware-aware LFM2 architecture

It combines LFM2's fast, memory-efficient design with a Mixture of Experts setup, so only 2.3B parameters activate each run. The result: best-in-class efficiency, fast edge inference, and predictable log-linear scaling all in a 32GB, 2B-active MoE footprint. 🧵
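The sparse-activation idea in the post can be sketched with toy top-k Mixture-of-Experts routing: a router scores all experts per token, but only the top-k experts' weights are actually used. All sizes and names below are illustrative assumptions, not the actual LFM2-24B-A2B architecture or configuration.

```python
# Minimal sketch of MoE top-k routing: only a small subset of expert
# parameters is activated per token. Toy sizes, not LFM2's real config.
import numpy as np

rng = np.random.default_rng(0)

num_experts = 8   # total experts (toy value)
top_k = 2         # experts activated per token (toy value)
d_model = 16

# Each "expert" is just a weight matrix here, for illustration.
experts = [rng.standard_normal((d_model, d_model)) * 0.02
           for _ in range(num_experts)]
router = rng.standard_normal((d_model, num_experts)) * 0.02

def moe_forward(x):
    """Route a single token vector x through its top-k experts."""
    logits = x @ router                   # router score per expert
    top = np.argsort(logits)[-top_k:]     # indices of the top-k experts
    # Softmax over the selected experts' logits only.
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()
    # Weighted sum of the chosen experts' outputs; the remaining
    # experts' parameters are never touched for this token.
    y = sum(wi * (x @ experts[i]) for wi, i in zip(w, top))
    return y, top

x = rng.standard_normal(d_model)
y, used = moe_forward(x)
print(f"activated {len(used)}/{num_experts} experts")  # prints "activated 2/8 experts"
```

Total parameter count grows with `num_experts`, but per-token compute scales only with `top_k`, which is how a 24B-parameter model can run with ~2.3B active parameters per token.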

📢 Registration for the NINTH (!) year of @MITDeepLearning has officially opened — with over 15M registered students taking the course over the past 9 years. And this is just the beginning! Sign up TODAY to join the 2026 edition 👉 introtodeeplearning.com



































