EverMind
@evermind
Give Your AI Agent Self-evolving Memory. GitHub: https://t.co/ZsoTOVW2sf Discord: https://t.co/DgVdDw3x6B

~200 downloads in under a week. Blown away. Thank you.

A couple of weeks ago, we released EvoAgentBench, a benchmark for testing both your agent's raw capabilities and its self-evolving capabilities. Since release, it's been downloaded over 730 times, ranking it the #2 agent benchmark on Hugging Face. What it actually tests 🧵
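
If you want to poke at it yourself, here's a minimal sketch of loading it from the Hugging Face Hub. The repo id, split name, and per-task fields are assumptions for illustration; check the actual dataset card for the real schema.

```python
# Minimal sketch: pulling EvoAgentBench from the Hugging Face Hub.
# NOTE: the repo id "EverMind/EvoAgentBench" and the "test" split are
# assumptions for illustration, not the published schema.
from datasets import load_dataset

bench = load_dataset("EverMind/EvoAgentBench")  # hypothetical repo id

for task in bench["test"]:  # hypothetical split name
    # Each record is a dict of task fields; print one to inspect the schema.
    print(task)
    break
```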

Introducing SubQ: a major breakthrough in LLM intelligence. It is the first model built on a fully sub-quadratic sparse-attention architecture (SSA), and the first frontier model with a 12-million-token context window, which is:
- 52x faster than FlashAttention at 1M tokens
- Less than 5% the cost of Opus

Transformer-based LLMs waste compute by scoring every possible relationship between words (standard attention). Only a small fraction of those relationships actually matter. @subquadratic finds and focuses on only the ones that do. That's nearly 1,000x less compute and a new way for LLMs to scale.
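
The selection idea behind that claim is easy to sketch. The toy below keeps only the top-k attention scores per query and masks out the rest. Note that it still builds the full n×n score matrix to find the top-k, which is exactly the quadratic work a real sub-quadratic method must avoid, so treat this as an illustration of the selection idea, not SubQ's actual SSA algorithm; the function name and fixed-k choice are mine.

```python
import numpy as np

def topk_sparse_attention(Q, K, V, k=8):
    """Each query attends only to its k highest-scoring keys (ties may admit extras)."""
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                       # full (n, n) scores: the O(n^2) part
    thresh = np.partition(scores, n - k, axis=-1)[:, n - k, None]  # k-th largest per row
    masked = np.where(scores >= thresh, scores, -np.inf)           # drop all but the top-k
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))  # numerically stable softmax
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy usage: 16 queries/keys of width 32, each query attending to just 4 keys.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((16, 32)) for _ in range(3))
out = topk_sparse_attention(Q, K, V, k=4)               # shape (16, 32)
```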
