SubQ
@subqai

1 post

SubQ is the first fully sub-quadratic LLM—12M token context, faster, cheaper, and built to reason across codebases, datasets, and long workflows in one pass.

Joined April 2026
4 Following · 15 Followers
SubQ reposted
Alexander Whedon @alex_whedon
Introducing SubQ, a major breakthrough in LLM intelligence. It is the first model built on a fully sub-quadratic sparse-attention architecture (SSA), and the first frontier model with a 12 million token context window, which is:

- 52x faster than FlashAttention at 1M tokens
- Less than 5% the cost of Opus

Transformer-based LLMs waste compute by processing every possible relationship between words (standard attention). Only a small fraction actually matter. @subquadratic finds and focuses only on the relationships that do. That's nearly 1,000x less compute and a new way for LLMs to scale.
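The idea of scoring only the few key/query relationships that matter can be sketched generically as top-k sparse attention. This is an illustrative assumption, not SSA's actual algorithm (the announcement gives no implementation details): the function names and NumPy framing here are hypothetical, and note that this toy version still materializes the full n×n score matrix, which a genuinely sub-quadratic method must avoid.

```python
import numpy as np

def dense_attention(Q, K, V):
    """Standard attention: every query scores every key -- O(n^2) pairs."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

def topk_sparse_attention(Q, K, V, k):
    """Keep only each query's k highest-scoring keys; mask the rest to -inf.
    The softmax/value work per query then involves k keys rather than n.
    (Toy illustration only: it still builds the full score matrix, so it is
    not itself sub-quadratic.)"""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    idx = np.argpartition(scores, -k, axis=-1)[:, -k:]  # top-k key indices per query
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, idx, 0.0, axis=-1)          # 0 where kept, -inf elsewhere
    masked = scores + mask
    w = np.exp(masked - masked.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(0)
n, d = 16, 4
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = topk_sparse_attention(Q, K, V, k=4)
print(out.shape)  # (16, 4)
```

With k equal to the sequence length the mask keeps everything, so the sparse version reduces to standard attention; shrinking k trades fidelity for compute.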
1.3K · 2.5K · 19.8K · 10.4M