Adrian A. 🌶△ ☁️⚛

15.5K posts


@Adrian_A_x

$AKT @akashnet $VVV @AskVenice $MOR, $YNE, $ATOM, $OSMO, $P2P. I will change my banner when $ATOM goes to either $0.something or $100. Too early or wrong?

Joined March 2017
712 Following · 299 Followers
Pinned Tweet
Adrian A. 🌶△ ☁️⚛@Adrian_A_x·
A new victory for open-source AI is on its way. LLM makers have been competing on slim context/prompt capacity improvements. MIT changed the game: an RLM can store and navigate huge context as Python variables. This paves the road to AI working on problems over weeks or months. @AskVenice @akashnet
Elias Al@iam_elias1

MIT just made every AI company's billion dollar bet look embarrassing. They solved AI memory. Not by building a bigger brain. By teaching it how to read. The paper dropped on December 31, 2025. Three MIT CSAIL researchers. One idea so obvious it hurts. And a result that makes five years of context window arms racing look like the wrong war entirely.

Here is the problem nobody solved. Every AI model on the planet has a hard ceiling. A context window. The maximum amount of text it can hold in working memory at once. Cross that line and something ugly happens — something researchers have a clinical name for. Context rot. The more you pack into an AI's context, the worse it performs on everything already inside it. Facts blur. Information buried in the middle vanishes. The model does not become more capable as you feed it more. It becomes more confused. You give it your entire codebase and it forgets what it read three files ago. You hand it a 500-page legal document and it loses the clause from page 12 by the time it reaches page 400.

So the industry built a workaround. RAG. Retrieval Augmented Generation. Chop the document into chunks. Store them in a database. Retrieve the relevant ones when needed. It was always a compromise dressed up as a solution. The retriever guesses which chunks matter before the AI has read anything. If it guesses wrong — and it does, constantly — the AI never sees the information it needed. The act of chunking destroys every relationship between distant paragraphs. The full picture gets shredded into fragments that the AI then tries to reassemble blindfolded. Two bad options. One broken industry.

Three MIT researchers and a deadline of December 31st. Here is what they built. Stop putting the document in the AI's memory at all. That is the entire idea. That is the breakthrough. Store the document as a Python variable outside the AI's context window entirely. Tell the AI the variable exists and how big it is. Then get out of the way.

When you ask a question, the AI does not try to remember anything. It behaves like a human expert dropped into a library with a computer. It writes code. It searches the document with regular expressions. It slices to the exact section it needs. It scans the structure. It navigates. It finds precisely what is relevant and pulls only that into its active window.

Then it does something that makes this recursive. When the AI finds relevant material, it spawns smaller sub-AI instances to read and analyze those sections in parallel. Each one focused. Each one fast. Each one reporting back. The root AI synthesizes everything and produces an answer. No summarization. No deletion. No information loss. No decay. Every byte of the original document remains intact, accessible, and queryable for as long as you need it.

Now here are the numbers. Standard frontier models on the hardest long-context reasoning benchmarks: scores near zero. Complete collapse. GPT-5 on a benchmark requiring it to track complex code history beyond 75,000 tokens could not solve even 10% of problems. RLMs on the same benchmarks: solved them. Dramatically. Double-digit percentage gains over every alternative approach. Successfully handling inputs up to 10 million tokens, 100 times beyond a model's native context window. Cost per query: comparable to or cheaper than standard massive context calls. Read that again. One hundred times the context. Better answers. Same price.

The timeline of the arms race makes this sting harder. GPT-3 in 2020: 4,000 tokens. GPT-4: 32,000. Claude 3: 200,000. Gemini: 1 million. Gemini 2: 2 million. Every generation, every company, billions of dollars spent, all betting on the same assumption. More context equals better performance. MIT just proved that assumption was wrong the entire time. Not slightly wrong. Fundamentally wrong. The entire premise of the last five years of context window research — that the solution to AI memory was a bigger window — was the wrong answer to the wrong question.

The right question was never how much can you force an AI to hold in its head. It was whether you could teach an AI to know where to look. A human expert handed a 10,000-page archive does not read all 10,000 pages before answering your question. They navigate. They search. They find the relevant section, read it deeply, and synthesize the answer. RLMs are the first AI architecture that works the same way.

The code is open source. On GitHub right now. Free. No license fees. No API costs. Drop it in as a replacement for your existing LLM API calls and your application does not even notice the difference, except that it suddenly works on inputs it used to fail on entirely. Prime Intellect, one of the leading AI research labs in the space, has already called RLMs a major research focus and described what comes next: teaching models to manage their own context through reinforcement learning, enabling agents to solve tasks spanning not hours, but weeks and months.

The context window wars are over. MIT won them by walking away from the battlefield.

Source: Zhang, Kraska, Khattab · MIT CSAIL · arXiv:2512.24601
Paper: arxiv.org/abs/2512.24601
GitHub: github.com/alexzhang13/rlm
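To make the mechanism above concrete, here is a minimal sketch of an RLM-style loop as described in the quoted thread: the document lives in an ordinary Python variable outside the model's context, the model is only told the variable's name and size, and it answers by issuing small search/slice commands whose results are fed back to it. The `llm()` call, the SEARCH/READ/ANSWER command format, and the helper names are illustrative assumptions, not the paper's actual API.

```python
import re

def llm(prompt: str) -> str:
    """Placeholder for a call to any chat-model API (an assumption, not the paper's interface)."""
    raise NotImplementedError

def rlm_answer(document: str, question: str, max_steps: int = 8) -> str:
    # The full document never enters the prompt; the model only learns its size.
    scratchpad = [
        f"A variable `doc` holds a document of {len(document):,} characters.",
        f"Question: {question}",
        "Reply with one command per turn:",
        "  SEARCH <regex>     -> positions and snippets of matches",
        "  READ <start> <end> -> the characters doc[start:end]",
        "  ANSWER <text>      -> final answer",
    ]
    for _ in range(max_steps):
        reply = llm("\n".join(scratchpad)).strip()
        if reply.startswith("ANSWER"):
            return reply[len("ANSWER"):].strip()
        if reply.startswith("SEARCH"):
            pattern = reply[len("SEARCH"):].strip()
            hits = [(m.start(), document[m.start():m.start() + 80])
                    for m in re.finditer(pattern, document)][:10]
            scratchpad.append(f"SEARCH results: {hits}")
        elif reply.startswith("READ"):
            start, end = map(int, reply.split()[1:3])
            # Only the requested slice is pulled into the active context window.
            scratchpad.append(f"doc[{start}:{end}] = {document[start:end]!r}")
        else:
            scratchpad.append("Unrecognized command; use SEARCH, READ, or ANSWER.")
    return "No answer within step budget."
```

A fuller version would also let the model hand the slices it finds to recursive sub-calls of itself, which, per the description above, is where the "recursive" in Recursive Language Model comes from.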

Adrian A. 🌶△ ☁️⚛
@Alina_Lipp_X Predators talk with a smile, not with a frown. Just because you work at the UN / WHO doesn't mean you're not a predator; in fact, probably the contrary: it makes you more responsible... or irresponsible.
Alina Lipp@Alina_Lipp_X·
Masturbation lessons for children from the WHO 🤦‍♀️ can someone please finally stop this sick crap???
Adrian A. 🌶△ ☁️⚛ retweeted
Akash Alpha@akashalpha_·
Three showcases / proof points on how AI startups won't be profitable on the hyperscalers, and how @akashnet provides one of the only solutions. Both builders migrated to the $AKT marketplace to raise profits and reduce their infra costs. Razer saw the same results as well. 👀👇
Lino Defi@Linodefi1

Sakura runs an AI startup in Tokyo. Last month, AWS accounted for 43% of her gross revenue. Her GPU compute bill was $47,000, while her entire engineering payroll was $52,000. She recently migrated her workloads to @akashnet. By using the decentralized GPU marketplace, she accessed the same hardware for 85% less, reducing her costs by $40,000 a month. I visualized the architectural breakdown of why AI founders are migrating to decentralized compute in 2026.
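As a quick sanity check of the figures quoted above (the dollar amounts and the 85% reduction come from the tweet; the script itself is just illustrative arithmetic):

```python
# Figures quoted in the thread above (illustrative arithmetic only)
aws_bill = 47_000          # monthly GPU compute bill on AWS ($)
payroll = 52_000           # monthly engineering payroll ($)
discount = 0.85            # claimed cost reduction on the Akash marketplace

akash_bill = aws_bill * (1 - discount)
monthly_savings = aws_bill - akash_bill

print(f"New compute bill: ${akash_bill:,.0f}/month")        # ~ $7,050
print(f"Monthly savings:  ${monthly_savings:,.0f}/month")   # ~ $39,950, i.e. roughly $40,000
print(f"Compute bill vs payroll before: {aws_bill / payroll:.0%}")  # ~ 90%
```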

Bruno Topola@BrunoTopola·
🔴🟡 Australia. An immigrant from Iraq, working as a security guard at a shopping centre in Sydney, approached a 3-year-old girl. The girl was playing on the playground. Her mum was shopping. The immigrant took the little girl by the hand and led her to a secluded spot. Then he sexually abused her. The immigrant (Mohammad Hassan Al Bayati) got 2.5 years in prison. The case is from 2016. But how relevant it is in today's diverse Europe. #MasowaMigracja #migration
Adrian A. 🌶△ ☁️⚛ retweeted
Roy@SSJCurrency·
I just ran a token scan for $VVV and noticed some interesting data points that are extremely bullish. The token has been soaring organically and it's currently sitting past its prior ATH at a 413M market cap. Needless to say, most holders are in profit, but not as much as you'd think. Regardless, the divergence below is very interesting:
> 7.7% of all DEX buyers have already capitulated for ≈ 13M USD
> 5.8% of all DEX buyers realized profits worth ≈ 4.85M USD
> Not a single wallet holds between 2% and 4% of the supply
This divergence between profit taking and capitulation is a sign that weak hands left the station at a loss despite the token being at an ATH. Unrealized profits are mounting simply because VVV has actual usage and emissions are being cut off aggressively.
For all Curious@fascinatingonX·
🚨 Germany's fusion reactor could power the whole planet by the 2030s and connect to the grid soon.
Romu.@supermanyyoam·
The World Bank ranks Argentina as one of the 3 poorest countries in the world. Milei made it drop 10 places. Keep voting for the right.
Dr. Jebra Faushay@JebraFaushay·
Whose idea was it to make *NSYNC bounce on giant balls as part of their choreography? So embarrassing and cringe.
Adrian A. 🌶△ ☁️⚛
@theconread Brah... the job was for sitting around, getting a coffee with something from time to time... get paid. NOT risk your own life!!! Omaigad they don't pay enough for daaaaat. 😅😂
The Conservative Read@theconread·
Are these 3 DEI hires, who RAN out of the room when the shooter appeared, Secret Service??!!!
Meme King@MemeKingc·
you had ONE job
Adrian A. 🌶△ ☁️⚛ retweeted
Venice@AskVenice·
VVV emissions reduced from 6M to 5M tokens per year, effective today. This is the first of 3 monthly reductions:
- May 1: 6M → 5M
- June 1: 5M → 4M
- July 1: 4M → 3M
Our goal is a net deflationary VVV with native yield, where burns exceed emissions.
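A small sketch of the announced schedule and the deflation condition it targets; the emission figures are the ones listed above, while the burn figure is a made-up placeholder, since the announcement gives no burn number.

```python
# Annualized emissions after each announced cut (from the schedule above)
emission_schedule = {
    "May 1":  5_000_000,   # 6M -> 5M
    "June 1": 4_000_000,   # 5M -> 4M
    "July 1": 3_000_000,   # 4M -> 3M
}

def is_net_deflationary(annual_emissions: float, annual_burns: float) -> bool:
    """Supply shrinks only when burns exceed emissions."""
    return annual_burns > annual_emissions

# Hypothetical burn rate, purely for illustration -- not a figure from the announcement.
assumed_annual_burns = 3_500_000
for date, emitted in emission_schedule.items():
    print(date, "deflationary?", is_net_deflationary(emitted, assumed_annual_burns))
```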
Liberta Cherguia 🇪🇺@MbarkCherguia·
Ok guys, what would you do if you ran into this scary guy in a dark alley? 😂
Adrian A. 🌶△ ☁️⚛ retweeted
Greg Osuri 🇺🇸@gregosuri·
We are excited to share @Razer's recent successful campaign that leveraged Akash's decentralized cloud. This is one of the first large enterprises (1,500+ people) to use Akash, and it lays the foundation for Homenode to leverage Razer's vast fleet of gaming GPUs.
Akash Network@akashnet

Today we're pleased to share how AVA Mini, @Razer's personalized AI companion experience, was brought to life at global scale using Razer's AIKit and Akash. By leveraging Razer AIKit's inference capabilities on Akash's decentralized compute network, managed by AkashML (@akashnetai), the team achieved:
• $0.01 per generated image
• 3.24 second average response time
• Seamless scaling during peak campaign demand
• Zero manual intervention across the campaign lifecycle
Read the complete whitepaper: razer.ai/aikit-akash-av…

Adrian A. 🌶△ ☁️⚛@Adrian_A_x·
@HowToAI_ MIT's Recursive Language Model will eventually shed even more light on this... same, one, reality map... The coming years will be so interesting: so many AI innovations, and they can surely create AGI in under 10 years as these start to converge.
How To AI@HowToAI_·
MIT proved every major AI model is secretly converging on the same "brain." It's called the "platonic representation hypothesis," and it's one of the most mind-blowing papers you'll ever read.

You train a vision model purely on images. You train a language model purely on text. They use completely different architectures. They process completely different data. They should have completely different "brains." But as these models scale up, something impossible is happening. When researchers measure how they organize information, the mathematical geometry is identical. A model that only "sees" images and a model that only "reads" text are measuring the distance between concepts in the exact same way. The models are converging.

The researchers named this after Plato's Allegory of the Cave. Plato believed that everything we experience is just a shadow of a deeper, hidden, perfect reality. The paper argues that AI models are doing the exact same thing. They are looking at the different "shadows" of human data: text, images, audio. And they are independently discovering the exact same underlying structure of the universe to make sense of it.

It doesn't matter what company built the AI. It doesn't matter what data it was trained on. As models get larger, they stop memorizing their specific tasks. They are forced to build a statistical model of reality itself. And there is only one reality to map.

2024, Arxiv
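A minimal sketch of how one can test the claim that two models "measure the distance between concepts in the same way": compare the pairwise-similarity structure each model induces over the same set of concepts. The paper itself uses a related nearest-neighbor kernel-alignment metric; this simpler correlation check, and all the data and names below, are illustrative assumptions.

```python
import numpy as np

def pairwise_cosine(embeddings: np.ndarray) -> np.ndarray:
    """Kernel of pairwise cosine similarities between the row vectors (one row per concept)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T

def representational_alignment(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Correlate two models' pairwise-similarity structure over the same concepts.

    Values near 1.0 mean both models place the same concepts close together,
    regardless of embedding dimension or architecture.
    """
    k_a, k_b = pairwise_cosine(emb_a), pairwise_cosine(emb_b)
    iu = np.triu_indices_from(k_a, k=1)   # compare off-diagonal entries only
    return float(np.corrcoef(k_a[iu], k_b[iu])[0, 1])

# Illustrative placeholders: embeddings of the same 50 concepts from two unrelated models,
# e.g. a vision model fed images of the concepts and a language model fed their names.
rng = np.random.default_rng(0)
vision_embeddings = rng.normal(size=(50, 512))   # stand-in for a vision model
text_embeddings = rng.normal(size=(50, 768))     # stand-in for a language model
print(representational_alignment(vision_embeddings, text_embeddings))  # ~0 for random data
```

With random placeholder embeddings the score sits near zero; the convergence claim in the tweet is that for real models embedding the same concepts, this kind of alignment score climbs as the models scale.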
0HOUR1@0hour1·
Went to Cal Tech, invented a kick stand out of a poop pipe, and went on to fail at being an assassin. He will go down as Obama's retarded son.