
Raya
@raya_coder · 41 posts
Learning AI one model at a time 🧠 Passionate about solving problems with technology. The future belongs to those who build it 🚀


Introducing Anijam AI — the first AI Animation Agent on your phone. Your story idea should not die before you open professional software. Just tell Anijam your idea. It does the rest. Available now on iOS & browser.

Introducing SubQ, a major breakthrough in LLM intelligence. It is the first model built on a fully sub-quadratic sparse-attention architecture (SSA), and the first frontier model with a 12-million-token context window, which is:
- 52x faster than FlashAttention at 1MM tokens
- Less than 5% the cost of Opus

Transformer-based LLMs waste compute by processing every possible relationship between words (standard attention). Only a small fraction actually matter. @subquadratic finds and focuses only on the ones that do. That's nearly 1,000x less compute and a new way for LLMs to scale.
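The idea that only a small fraction of query-key pairs matter can be illustrated with a toy top-k sparse attention in NumPy. This is a minimal sketch of the general sparse-attention concept, not SubQ's actual SSA architecture (which is not public here), and it still computes the full score matrix, so it demonstrates the sparsity pattern rather than the sub-quadratic cost:

```python
import numpy as np

def topk_sparse_attention(Q, K, V, k=4):
    """Toy sparse attention: each query attends only to its k
    highest-scoring keys instead of all n keys (the quadratic case)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])            # (n, n) similarity scores
    # Mask everything except the top-k scores per query row to -inf.
    drop_idx = np.argpartition(scores, -k, axis=-1)[:, :-k]
    np.put_along_axis(scores, drop_idx, -np.inf, axis=-1)
    # Numerically stable softmax over the surviving keys only.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 16, 8
Q, K, V = rng.normal(size=(3, n, d))
out = topk_sparse_attention(Q, K, V, k=4)
print(out.shape)  # (16, 8)
```

With k much smaller than the sequence length, each output row is a mixture of only k value vectors; a real sub-quadratic system would additionally avoid materializing the full (n, n) score matrix in the first place.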






This is a real-time demo showcasing avatars + function calling. Avatars can fill out forms, schedule meetings, and much more.

The future website has two interfaces:
1. The traditional UI to convey information.
2. An AI agent with a face that users can talk to.

Give your AI agents a face using LemonSlice, a 24/7 spokesperson that your users trust. Demo and open-source links below. Made by @designbybryce 👇️
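The "avatars + function calling" pattern above boils down to a tool-dispatch loop: the model emits a structured call, and the host executes it. Here is a minimal sketch of that loop; the tool names and JSON shape are hypothetical illustrations, not LemonSlice's real API:

```python
import json

# Hypothetical tool registry: the avatar backend maps tool names to handlers.
def schedule_meeting(title: str, time: str) -> str:
    return f"Scheduled '{title}' at {time}"

def fill_form(field: str, value: str) -> str:
    return f"Set {field} = {value}"

TOOLS = {"schedule_meeting": schedule_meeting, "fill_form": fill_form}

def handle_model_output(raw: str) -> str:
    """Dispatch a model-emitted JSON tool call to the matching handler."""
    call = json.loads(raw)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A model response requesting a function call (shape is illustrative).
reply = '{"name": "schedule_meeting", "arguments": {"title": "Demo", "time": "3pm"}}'
print(handle_model_output(reply))  # Scheduled 'Demo' at 3pm
```

In a real deployment the handler's result would be fed back to the model so the avatar can confirm the action to the user in speech.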


Introducing "Omni Agent". The only AI you'll ever need. One ecosystem. Three tiers. Infinite possibilities. Think. Pro. Ultra. Whether you're exploring ideas, building a brand, or running an entire operation, there's a tier engineered for the way you work. Deep research, multimedia generation, native workspace integrations, and a memory layer that grows with you, all unified into one seamless interface. Stop using tools. Start commanding intelligence. Try Omni Agent → chatlyai.app/agent


The future belongs to proactive agents. But without real-time perception, they're stuck reacting. "World2Agent" isn't a product. It's an open protocol and an invitation to build the perception layer for AI agents, together. We're open-sourcing everything: the protocol, the SDK, and the first sensors. GITHUB + DEMO in comments.
