8K posts


@supnullpointer

Back in the game🦸‍♂️ | turn on notis

dm for promo · Joined December 2019
156 Following · 2.4K Followers
Pinned Tweet
@supnullpointer·
Having a childhood before social media took over is a blessing in disguise fr
@supnullpointer·
CLAUDE OPUS 4.7 USING 500K TOKENS TO RENAME A PARAMETER
@supnullpointer·
Told Claude Opus 4.7 “hi” just to greet it. My tokens:
@supnullpointer·
MIT just made every AI company’s billion-dollar bet look embarrassing. They solved AI memory, not by building a bigger brain, but by teaching it how to read.

The Breakthrough: On December 31, 2025, three MIT CSAIL researchers published a paper revealing that AI models don’t need massive context windows. Instead of loading entire documents into memory, they store them as external Python variables. The AI knows these exist, searches them using code and regex, pulls only the relevant sections, and spawns sub-AIs to analyze pieces in parallel. No summarization, no information loss.

The Problem: Traditional AI models suffer from a hard context window. Overloading it leads to “context rot”: facts blur, mid-document info vanishes, and models forget what they read. Retrieval Augmented Generation (RAG) tried to fix this by chunking documents, but it shredded context and guessed relevancy poorly.

The Results: RLMs (Reading-augmented Language Models) solved complex long-context benchmarks where GPT-5 failed 90% of the time. Handled 10 million tokens, 100× a model’s native window. Delivered better answers at comparable or cheaper cost.

The Implication: For five years, the AI arms race chased bigger windows: GPT-3 (4K), GPT-4 (32K), Claude 3 (200K), Gemini 2 (2M). MIT proved the assumption wrong: more context ≠ better performance. The right approach is teaching AI where to look.

The Impact: Open source code on GitHub; drop-in replacement for LLM APIs. Enables tasks spanning weeks or months via self-managed context. Ends the context window wars: MIT won by walking away.

Sources: Zhang, Kraska, Khattab · MIT CSAIL · arXiv:2512.24601
Paper: arxiv.org/abs/2512.24601
GitHub: github.com/alexzhang13/rlm
#mit #gpt #codex #rag #claude #anthropic
[image]
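The mechanism the tweet describes (the document kept outside the context window as a plain Python variable, searched with code and regex, with sub-models fanned out over the matching snippets) can be sketched in a few lines. This is only an illustrative sketch of that idea; the names search_document and analyze_chunk are invented here and are not taken from the paper or the linked GitHub repo.

```python
import re

# Illustrative sketch only: the document lives in an ordinary Python variable
# outside any model's context window, and only short regex-matched snippets
# would ever be handed to a model. Names below are invented for illustration.

document = """
Chapter 1. The quick brown fox jumps over the lazy dog.
Chapter 2. Context windows are a hard limit on how much text a model sees.
Chapter 3. External storage plus targeted retrieval avoids that limit.
""" * 1000  # stand-in for a document far larger than any context window

def search_document(pattern: str, text: str, window: int = 80) -> list[str]:
    """Return short snippets around each regex match instead of the whole text."""
    snippets = []
    for match in re.finditer(pattern, text):
        start = max(0, match.start() - window)
        end = min(len(text), match.end() + window)
        snippets.append(text[start:end])
    return snippets

def analyze_chunk(chunk: str) -> str:
    """Stand-in for a sub-model call that would reason over a single snippet."""
    return f"snippet of {len(chunk)} chars mentions the query"

# The controlling model searches the external variable, then fans out
# sub-calls over a handful of matching snippets.
hits = search_document(r"[Cc]ontext window", document)
answers = [analyze_chunk(h) for h in hits[:3]]
print(f"{len(hits)} matches; {answers}")
```

The point of the sketch is that only the short snippets returned by the regex search would ever be placed into a model's context, regardless of how large the stored document is.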
@supnullpointer·
India is probably the only country whose PR team is so cooked that they ain’t even trying
@supnullpointer·
What about small accounts trying their best @nikitabier ??? I was inactive here for some time and my entire reach is gone
@supnullpointer·
Ppl who get 0 likes and still keep posting are built different.
@supnullpointer·
My typa love language
[image] [image]
@supnullpointer·
my dog: *breathes* me:
@supnullpointer·
Been drinking pretty much only Diet Coke to save water for Claude
@supnullpointer·
[image]
@supnullpointer·
it doesn’t matter how you were raised .. it’s your responsibility to grow up.
@supnullpointer·
if ashes turn to cities and stars into the moon, then why am i inside of my own mind?
@supnullpointer·
i want that “we do everything together” type love fr...
[image]