Synapse
@synapse_meta
Ph.D. in AI. Absorbing crypto ₿its via DL👾 Founder @NonFungibleNews · Building @Tubular_Tech · Advisor on Agentic Systems, LLMs, GenAI. DMs open!

We benchmarked 15 small language models across 9 tasks to find out which one you should actually fine-tune.

The most surprising result: Liquid AI's LFM2-350M ranked #1 for tunability. 350M parameters, absorbing training signal more effectively than models 20x its size.

The entire LFM2 family swept the top 3 spots. No other architecture came close.

LFM2-350M: avg rank 2.11 (±0.89)
LFM2-1.2B: avg rank 3.44
LFM2.5-1.2B-Instruct: avg rank 4.89

That tight CI means it's consistent across every task type, not just a few lucky benchmarks.
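The "avg rank" metric above can be sketched as: rank every model within each task (1 = best), then average each model's ranks across tasks; the spread of those ranks gives the ± figure. A minimal sketch with hypothetical model names and made-up scores (not the benchmark's actual data):

```python
import statistics

# Hypothetical per-task scores (higher is better) for three toy models
# on four tasks -- illustrative numbers only, not the benchmark's data.
scores = {
    "task_a": {"model_x": 0.81, "model_y": 0.76, "model_z": 0.70},
    "task_b": {"model_x": 0.64, "model_y": 0.69, "model_z": 0.61},
    "task_c": {"model_x": 0.90, "model_y": 0.85, "model_z": 0.88},
    "task_d": {"model_x": 0.55, "model_y": 0.52, "model_z": 0.50},
}

def average_ranks(scores):
    """Rank models within each task (1 = best), then return
    (mean rank, rank std dev) per model across all tasks."""
    per_model = {}
    for task_scores in scores.values():
        ordered = sorted(task_scores, key=task_scores.get, reverse=True)
        for rank, model in enumerate(ordered, start=1):
            per_model.setdefault(model, []).append(rank)
    return {
        m: (statistics.mean(r), statistics.stdev(r))
        for m, r in per_model.items()
    }

for model, (mean_rank, sd) in sorted(
    average_ranks(scores).items(), key=lambda kv: kv[1][0]
):
    print(f"{model}: avg rank {mean_rank:.2f} (±{sd:.2f})")
```

A small standard deviation here means the model places near the same rank on every task, which is the consistency claim the post is making about LFM2-350M.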

Don’t let your loved ones use ChatGPT

Yann LeCun says there is no such thing as general intelligence.

Human intelligence is super-specialized for the physical world, and our feeling of generality is an illusion. We only seem general because we can't imagine the problems we're blind to.

"The concept is complete BS."