Pablos
1.1K posts

Pablos
@pablos
Implementing Science Fiction @ Deep Future. VC – Bestseller – Podcast. https://t.co/5GQPgY5lw0
Earth · Joined March 2007
380 Following · 8.6K Followers

gadgetreview.com/reddit-user-un…
Come on Meta. Give us a reason to be proud of you. Even Europe is getting this right.

🚨Nobody is ready for this paper.
Every LLM you use (GPT-4.1, Claude, Gemini, DeepSeek, Llama-4, Grok, Qwen) has a flaw that no amount of scaling has fixed.
They cannot tell old information from new information.
A patient's blood pressure: 120 at triage. 128 ten minutes later. 125 at discharge.
"What's the latest reading?"
Any human: "125, obviously."
Every LLM, once enough updates pile up: wrong. Not sometimes wrong. 100% wrong. Zero accuracy. Complete hallucination. Every model. No exceptions.
The answer sits at the very end of the input. Right before the question. No searching needed.
The model just can't let go of the old values.
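To make the failure mode concrete, here is a minimal probe in the same spirit as the blood-pressure example: pile up key updates, then ask for the latest value. This is an illustrative sketch against an OpenAI-style chat API, not the paper's benchmark harness; the model name, key names, and update counts are placeholder assumptions.

```python
# Minimal sketch of the probe described above: a long stream of key updates,
# then a question about the latest value of one key. Uses the `openai` Python
# package; the model name and key names are placeholders, not the paper's setup.
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

keys = [f"key_{i}" for i in range(20)]
updates = []
for _ in range(30):                      # 30 rounds of updates per key
    for key in keys:
        updates.append((key, random.randint(100, 999)))

latest = dict(updates)                   # ground truth: last write wins
target = keys[0]

history = "\n".join(f"{k} = {v}" for k, v in updates)
prompt = (
    f"{history}\n\n"
    f"What is the current value of {target}? Answer with the number only."
)

response = client.chat.completions.create(
    model="gpt-4.1",                     # any chat model; swap in what you have
    messages=[{"role": "user", "content": prompt}],
)
answer = response.choices[0].message.content.strip()
print(f"model said {answer}, ground truth is {latest[target]}")
```

The answer is always the last assignment in the history, so no retrieval or search is required; the only thing being tested is whether the model can ignore the stale values above it.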
35 models tested by researchers from UVA and NYU. All 35 follow the exact same mathematical death curve. Accuracy drops log-linearly to zero as outdated information accumulates.
No plateau. No recovery. Just a straight line to total failure.
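In symbols, "drops log-linearly to zero" can be read roughly as follows; this is a hedged paraphrase of the claim, not the paper's fitted curve:

```latex
% Hedged paraphrase of "accuracy drops log-linearly to zero", not the paper's
% fitted model: accuracy falls linearly in log(updates) until it hits zero.
\[
  \mathrm{Acc}(n) \;\approx\; \max\!\bigl(0,\; a - b \log n \bigr), \qquad a, b > 0,
\]
% where n is the number of outdated updates stacked on top of the current value.
```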
They borrowed a concept from cognitive psychology called proactive interference: old memories blocking recall of new ones. In humans, this effect plateaus. Our brains learn to suppress the noise and focus on what's current.
LLMs never plateau. They decline until they break completely.
The researchers tried everything (prompt variants sketched below):
"Forget the old values": barely moved the needle
Chain-of-thought: same collapse
Reasoning models: same collapse
Prompt engineering: marginal improvement at best
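For concreteness, here is roughly what the first two mitigations look like as prompt variants. These are illustrative reconstructions, not the researchers' exact prompts:

```python
# Sketch of the mitigations named above, expressed as prompt variants.
# Illustrative reconstructions only, not the paper's actual prompts.

def base_prompt(history: str, key: str) -> str:
    return f"{history}\n\nWhat is the current value of {key}? Answer with the number only."

def forget_prompt(history: str, key: str) -> str:
    # "Forget the old values": an explicit instruction to ignore stale entries.
    return (
        f"{history}\n\n"
        f"Ignore every earlier value of {key}; only the most recent assignment counts.\n"
        f"What is the current value of {key}? Answer with the number only."
    )

def cot_prompt(history: str, key: str) -> str:
    # Chain-of-thought: ask the model to reason step by step before answering.
    return (
        f"{history}\n\n"
        f"Think step by step: scan the updates in order and keep only the last value of {key}.\n"
        f"Then state the current value of {key}."
    )
```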
But here's the finding that should reshape how you think about AI infrastructure:
Resistance to this interference has zero correlation with context window length.
Zero.
It only correlates with parameter count.
Your 128K context window is not memory. It's a junk drawer that the model can't sort through.
The entire AI industry is charging you for longer context. This paper says context length was never the problem.
If you're building agents, memory systems, financial tools, healthcare pipelines, or anything that tracks changing data over time, you are building on top of this flaw.
And almost nobody is talking about it.
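One practical takeaway, as an extrapolation rather than anything the thread prescribes: keep mutable state outside the model and only ever show it the already-resolved current values. A minimal sketch:

```python
# Sketch of one workaround implied above (an extrapolation, not the thread's
# recommendation): keep mutable state outside the model and hand the LLM only
# resolved current values, never the full update history.
from datetime import datetime, timezone

class StateStore:
    """Last-write-wins store for values that change over time."""

    def __init__(self) -> None:
        self._current: dict[str, tuple[datetime, object]] = {}

    def update(self, key: str, value: object) -> None:
        self._current[key] = (datetime.now(timezone.utc), value)

    def snapshot(self) -> str:
        # Only current values reach the prompt; stale history never does.
        return "\n".join(f"{k} = {v}" for k, (_, v) in sorted(self._current.items()))

store = StateStore()
store.update("blood_pressure", 120)   # at triage
store.update("blood_pressure", 128)   # ten minutes later
store.update("blood_pressure", 125)   # at discharge

prompt = f"Current patient readings:\n{store.snapshot()}\n\nWhat is the blood pressure?"
print(prompt)   # the model only ever sees 125
```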


If you're coming to @deeptechweek in NYC, try to join us for Deep Tacos.
deep-tech-week.com/events/af2fbdb…

@AustinA_Way Tried to do this for my AP US History class, but I only had 128k of RAM.

“Study more” is useless advice.
Students will just keep rereading the textbook and doing worksheets. A total waste of time.
So we built one of the most advanced diagnostics in the world.
400+ skills per course. So instead of "review Unit 5," I can tell you the exact 23 skills in Unit 5 you're missing.
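A toy illustration of what a skill-level diagnostic like that could compute; the course structure, skill names, and scoring rule here are invented for illustration only:

```python
# Toy illustration of a skill-level diagnostic like the one described above.
# The questions, skills, and scoring rule are invented for illustration only.

# Each question in the diagnostic is tagged with the skills it exercises.
question_skills = {
    "q1": {"unit5.supply_and_demand", "unit5.elasticity"},
    "q2": {"unit5.elasticity"},
    "q3": {"unit5.market_equilibrium"},
}

student_answers = {"q1": False, "q2": False, "q3": True}  # True = answered correctly

# A skill counts as missing if the student never got a question right that exercises it.
demonstrated = set().union(*(question_skills[q] for q, ok in student_answers.items() if ok))
attempted = set().union(*question_skills.values())
missing = sorted(attempted - demonstrated)

print("Skills to review:", missing)
# -> ['unit5.elasticity', 'unit5.supply_and_demand']
```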

Pablos retweeted

BREAKING: While a new War for Oil erupts in the Middle East
A Physics Paper just quietly dropped TODAY that will eventually make Oil, and the entire current Energy Industry, irrelevant.
Ushering in the era of Zero-Point Energy
@EagleworksSonny
Here is the breakthrough🧵

Pablos retweeted

So they built a reactor that eats its own waste and can't melt down and runs for a thousand years
Cool cool cool
Meanwhile we spent the last decade arguing about whether solar panels make your house look ugly and shutting down perfectly good plants because someone watched Chernobyl on HBO
China just casually solved the two biggest problems in nuclear energy while we were busy debating windmill noise complaints
The future didn't knock; it just walked in and started splitting atoms on the night shift


@farzyness @elonmusk Whoa. Which Tesla do you have to buy to get a button?




