Latent Node






The creators of SWE-Bench just dropped a really simple new benchmark that every LLM scores 0% on. ProgramBench asks: can models recreate real executable programs (ffmpeg, SQLite, ripgrep) from scratch, with no internet access? We are far from saturated on model quality.

Introducing SubQ, a major breakthrough in LLM intelligence. It is the first model built on a fully sub-quadratic sparse-attention architecture (SSA), and the first frontier model with a 12 million token context window, which is:
- 52x faster than FlashAttention at 1M tokens
- less than 5% the cost of Opus

Transformer-based LLMs waste compute by processing every possible relationship between words (standard attention). Only a small fraction actually matter. @subquadratic finds and focuses only on the ones that do. That's nearly 1,000x less compute and a new way for LLMs to scale.
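The details of SSA aren't public, but the generic idea the post describes — score all pairs, then spend attention weight only on the few that matter — can be sketched as top-k sparse attention. This toy version still materializes the full score matrix (a real sub-quadratic kernel would not); the function name, shapes, and `topk` parameter are illustrative assumptions, not SubQ's actual API.

```python
import numpy as np

def topk_sparse_attention(q, k, v, topk=4):
    """Toy top-k sparse attention: each query attends only to its
    topk highest-scoring keys instead of all n of them.
    Shapes: q, k, v are (n, d). Illustrative only -- a real
    sub-quadratic implementation never builds the full (n, n) matrix."""
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)  # full (n, n) score matrix (toy scale)
    # keep only the topk scores per row; mask everything else to -inf
    idx = np.argpartition(scores, -topk, axis=1)[:, -topk:]
    masked = np.full_like(scores, -np.inf)
    np.put_along_axis(masked, idx, np.take_along_axis(scores, idx, axis=1), axis=1)
    # softmax over the surviving entries (masked-out entries contribute 0)
    weights = np.exp(masked - masked.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v
```

With `topk=4` over a sequence of length 1,000, each query touches 4 keys instead of 1,000 — roughly the "only a small fraction actually matter" claim, though the compute savings in this dense-matrix sketch are not realized until the score computation itself is made sparse.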











We believe that intelligence should not arrive preconfigured. @togethercompute is now available directly inside the Adaption platform, connecting Adaptive Data with large-scale training in a single workflow. One platform, end to end. Stop inheriting intelligence. Shape it.


This is a pop-science version of a continual-learning evaluation. If you're going to go the route of "pretrain on limited data and see if it can bootstrap from natural interaction," a more practical test would probably be: train only on data up to ~2014, then ask, "can it teach itself Rust?"


Mark my words: what Chinese tech & AI cos did to disrupt the digital tech world over the last decade, Indian tech & AI cos will do over the next decade to disrupt and deploy at scale. @AshwiniVaishnaw @nasscom @nasscomstartups @able_indiabio @BIRAC_2012 @rajesh_gokhale Indian AI & tech talent makes this aspiration possible. @sundarpichai @satyanadella @elonmusk @nikhilkamathcio



Demis Hassabis proposed a benchmark for scientific AGI: the "Einstein test." Train a system with a knowledge cutoff at 1901, then test whether it can independently rediscover what Einstein did in 1905, including special relativity. Once it can, we're on the verge of genuinely novel invention.










