야차완
10.8K posts

야차완
@yakshawan
✦ Web novel author / planner / solo developer #INTP ✦ Things I've built: › 〈TAKT〉, a brown-noise browser extension for focus › 〈마스터마인드〉, a track-mastering tool for running playlists › 〈하루나 온 스크린〉, an AI anime-girl screenmate › and more

Figure taught two robots to make a bed together, fully autonomously. Honestly, they're better at it than most humans


Introducing SubQ, a major breakthrough in LLM intelligence. It is the first model built on a fully sub-quadratic sparse-attention architecture (SSA), and the first frontier model with a 12-million-token context window, which is:
- 52x faster than FlashAttention at 1M tokens
- Less than 5% the cost of Opus

Transformer-based LLMs waste compute by processing every possible relationship between words (standard attention). Only a small fraction actually matter. @subquadratic finds and focuses only on the ones that do. That's nearly 1,000x less compute and a new way for LLMs to scale.
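The post does not describe how SSA selects which query-key pairs to keep, so here is a minimal illustrative sketch of the general idea behind sparse attention: each query attends only to its top-k highest-scoring keys instead of all of them. The function names and the top-k selection rule are my own illustration, not SubQ's actual algorithm.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dense_attention(Q, K, V):
    # Standard attention: every query attends to every key -> O(n^2) work.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

def topk_sparse_attention(Q, K, V, k=4):
    # Illustrative sparse attention: each query keeps only its k
    # highest-scoring keys and masks out the rest, so only a small
    # fraction of query-key pairs contribute to the output.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    thresh = np.sort(scores, axis=-1)[:, -k][:, None]  # k-th largest per row
    masked = np.where(scores >= thresh, scores, -np.inf)
    return softmax(masked) @ V

rng = np.random.default_rng(0)
n, d = 16, 8
Q, K, V = rng.normal(size=(3, n, d))
out = topk_sparse_attention(Q, K, V, k=4)
print(out.shape)  # (16, 8)
```

Note that this toy version still computes all n² scores before masking; a genuinely sub-quadratic system would have to locate the important pairs without scoring every one (e.g. via routing or hashing), which is presumably where SSA's claimed speedup comes from.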
