Pinned Tweet
🇺🇸
22.2K posts

🇺🇸
@Y3510X
Sr AI Product Designer • VibeDesign Practitioner • AI-Native 0→1 • Rapid Prototyping (Figma Make + Grok + Stitch) • MIT Alum • 3x Exits • #OpenToWork
California, USA · Joined May 2008
190 Following · 7.7K Followers
🇺🇸 retweeted

🚨 12 U.S. SCIENTISTS MYSTERIOUSLY DEAD OR MISSING.
@RepEricBurlison raises serious concerns:
A growing list tied to critical tech and national security—
and the pattern doesn’t add up.
“This is really suspicious… our nation is weaker because of this.”
🇺🇸 retweeted

MIT has done the unthinkable.
They built an AI that doesn't need RAG, and it has perfect memory of everything it's ever read.
It's called Recursive Language Models (RLMs).
Right now, if you want an AI to analyze a massive dataset or document, you have two bad options.
You either stuff it all into a giant context window, where the AI gets confused and suffers from "context rot."
Or you use RAG to chop it up into summaries, permanently deleting the nuance.
This paper replaces both.
Instead of forcing the AI to read a giant prompt in one pass, RLMs treat long documents as an external environment.
The AI is placed in a sandbox. The data is stored as a Python variable.
When you ask it a question, the AI doesn't just blindly try to remember the answer.
It writes code to actively search, slice, and filter the document itself.
Then, it recursively spawns smaller "sub-AIs" to read specific snippets in parallel.
It never summarizes. It never deletes data.
It preserves every single piece of original context.
The results rewrite the limits of AI memory.
It successfully handles inputs up to two orders of magnitude beyond normal context windows, scaling easily to 10 million+ tokens.
On the hardest long-context reasoning benchmarks, a standard model scored a dismal 0.04. The RLM architecture hit 58.00.
All while costing less than running a standard massive prompt.
We’ve spent the last two years burning millions in compute trying to build bigger and bigger context windows.
But the future of AI isn’t about forcing a model to swallow a giant wall of text.
It’s about teaching it how to read.
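The mechanism the thread describes can be illustrated with a toy sketch: the document lives as an ordinary Python variable, the root model interacts with it through code (search, slice) instead of reading it whole, and small "sub-models" each see only one snippet. This is a minimal illustration of the idea, not the paper's actual API — `grep`, `peek`, `sub_model`, and `rlm_answer` are all hypothetical names, and the sub-model here is a trivial stand-in for a real sub-LM call.

```python
def grep(doc, pattern):
    """Return (line_number, line) pairs containing pattern -- cheap search, no summarizing."""
    return [(i, line) for i, line in enumerate(doc.splitlines()) if pattern in line]

def peek(doc, start, end):
    """Slice a window of lines from the document at full fidelity, no compression."""
    return doc.splitlines()[start:end]

def sub_model(snippet, question):
    """Stand-in for a recursively spawned sub-LM that reads one small snippet.
    A real sub-LM would answer in natural language; this toy just extracts digits."""
    digits = "".join(ch for ch in snippet if ch.isdigit())
    return digits or None

def rlm_answer(doc, question, keyword):
    # Step 1: the root model writes code to *search* rather than ingest everything.
    hits = grep(doc, keyword)
    # Step 2: spawn a sub-model per hit; each sees only its own small snippet,
    # so no single context window ever holds the full document.
    answers = [sub_model(line, question) for _, line in hits]
    return [a for a in answers if a]

# A 100,000-line "haystack" with one needle -- far larger than a comfortable prompt.
haystack = "\n".join(
    "the secret code is 90210" if i == 41_337 else "filler line " + "x" * 20
    for i in range(100_000)
)

print(rlm_answer(haystack, "What is the secret code?", "secret code"))
# → ['90210']
```

The point of the sketch is the data flow, not the intelligence: because search and slicing happen in code, the original text is never summarized or truncated, and only tiny verbatim snippets ever reach a model's context.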

🇺🇸 retweeted

Finally, my favorite Batman villain comes to life. Clayface | Official Teaser youtu.be/ZIfpL3mgkFk?si… @warnerbros

🇺🇸 retweeted

We're partnering with SpaceX to improve Composer.
cursor.com/blog/spacex-mo…
🇺🇸 retweeted

SpaceXAI and @cursor_ai are now working closely together to create the world’s best coding and knowledge work AI.
The combination of Cursor’s leading product and distribution to expert software engineers with SpaceX’s million H100 equivalent Colossus training supercomputer will allow us to build the world’s most useful models.
Cursor has also given SpaceX the right to acquire Cursor later this year for $60 billion or pay $10 billion for our work together.
🇺🇸 retweeted

Forget about any other AI, the signs are clear: Grok 4.3 Beta is already #1
- Creates clean slides, graphs, images & dashboards
- Pulls real-time web data + images
- Runs smooth multi-step workflows
- Analyzes videos like a beast
From idea → full presentation or dashboard in one chat.
Grok is finally becoming that everyday superpower.
And the Grok Computer drops soon
This is the one.
Source: @grok, @elonmusk
🇺🇸 retweeted
