Nicolas Genest

3.6K posts


@ngenest

2 X CEO, 4 X CTO, Purpose-first, the True measure of success is Impact, Smart is Everywhere, Perception is Reality, Embrace your aspirations and care for people

Tampa, FL · Joined July 2009
336 Following · 1.6K Followers
Nicolas Genest@ngenest·
This Golden Sparks session is part of a series where we show demos and share tips and tricks so that everyone has the opportunity to benefit from the latest and greatest AI has to offer. Leaving no one behind. @AnthropicAI, @GoogleAI and @OpenAI, pay attention.
0
0
0
52
Nicolas Genest@ngenest·
The key shift is that instead of treating context as internal data (tokens in the attention window), it is treated as an external environment, like variables in a Python REPL. This transforms the LLM from a passive reader trying to remember everything into an active agent that digests information piece by piece, on demand. It’s genuinely interesting research, but “solved” is a big overstatement.

RLMs treat long prompts as part of an external environment, allowing the LLM to programmatically examine, decompose, and recursively call itself over snippets of the prompt, enabling it to process inputs up to two orders of magnitude beyond model context windows. The core insight comes from “context rot,” the well-known phenomenon where quality degrades steeply as prompts get longer. The intuition: if you split the context into two model calls and combine them in a third, you avoid degradation. RLMs formalize this into a recursive inference framework.

It’s a meaningful inference-time technique, and more practical than RAG for very long inputs since no retraining is needed. But “solved” is Twitter hype. It’s only one smart piece of a much larger puzzle, and it’s still early days, with production implementations just beginning to emerge from teams like Prime Intellect.
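To make the “split into two calls, combine in a third” intuition concrete, here is a minimal sketch. It is not the paper’s algorithm: llm() stands in for whatever single chat-completion call you use, and the function names, the fixed halving, and the character budget are illustrative assumptions.

# Minimal sketch of the recursive split-and-combine intuition described above.
# NOT the paper's implementation; llm() is a placeholder for one model call.

CHUNK_LIMIT = 4_000  # rough per-call budget in characters (assumed, tune as needed)

def llm(prompt: str) -> str:
    """Placeholder for a single model call; wire in your own client here."""
    raise NotImplementedError

def rlm_answer(question: str, context: str) -> str:
    """Answer a question over `context` without stuffing it all into one prompt."""
    if len(context) <= CHUNK_LIMIT:
        # Base case: the snippet fits comfortably, so one ordinary call suffices.
        return llm(f"Context:\n{context}\n\nQuestion: {question}")

    # Recursive case: split the context and query each half separately...
    mid = len(context) // 2
    left = rlm_answer(question, context[:mid])
    right = rlm_answer(question, context[mid:])

    # ...then combine the two partial answers in a third call.
    return llm(
        f"Question: {question}\n"
        f"Answer from first half: {left}\n"
        f"Answer from second half: {right}\n"
        f"Merge these into one final answer."
    )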
1
0
0
100
Elias Al@iam_elias1·
MIT just made every AI company's billion dollar bet look embarrassing. They solved AI memory. Not by building a bigger brain. By teaching it how to read. The paper dropped on December 31, 2025. Three MIT CSAIL researchers. One idea so obvious it hurts. And a result that makes five years of context window arms racing look like the wrong war entirely.

Here is the problem nobody solved. Every AI model on the planet has a hard ceiling. A context window. The maximum amount of text it can hold in working memory at once. Cross that line and something ugly happens — something researchers have a clinical name for. Context rot. The more you pack into an AI's context, the worse it performs on everything already inside it. Facts blur. Information buried in the middle vanishes. The model does not become more capable as you feed it more. It becomes more confused. You give it your entire codebase and it forgets what it read three files ago. You hand it a 500-page legal document and it loses the clause from page 12 by the time it reaches page 400.

So the industry built a workaround. RAG. Retrieval Augmented Generation. Chop the document into chunks. Store them in a database. Retrieve the relevant ones when needed. It was always a compromise dressed up as a solution. The retriever guesses which chunks matter before the AI has read anything. If it guesses wrong — and it does, constantly — the AI never sees the information it needed. The act of chunking destroys every relationship between distant paragraphs. The full picture gets shredded into fragments that the AI then tries to reassemble blindfolded. Two bad options. One broken industry.

Three MIT researchers and a deadline of December 31st. Here is what they built.

Stop putting the document in the AI's memory at all. That is the entire idea. That is the breakthrough. Store the document as a Python variable outside the AI's context window entirely. Tell the AI the variable exists and how big it is. Then get out of the way.

When you ask a question, the AI does not try to remember anything. It behaves like a human expert dropped into a library with a computer. It writes code. It searches the document with regular expressions. It slices to the exact section it needs. It scans the structure. It navigates. It finds precisely what is relevant and pulls only that into its active window.

Then it does something that makes this recursive. When the AI finds relevant material, it spawns smaller sub-AI instances to read and analyze those sections in parallel. Each one focused. Each one fast. Each one reporting back. The root AI synthesizes everything and produces an answer. No summarization. No deletion. No information loss. No decay. Every byte of the original document remains intact, accessible, and queryable for as long as you need it.

Now here are the numbers. Standard frontier models on the hardest long-context reasoning benchmarks: scores near zero. Complete collapse. GPT-5 on a benchmark requiring it to track complex code history beyond 75,000 tokens — could not solve even 10% of problems. RLMs on the same benchmarks: solved them. Dramatically. Double-digit percentage gains over every alternative approach. Successfully handling inputs up to 10 million tokens — 100 times beyond a model's native context window. Cost per query: comparable to or cheaper than standard massive context calls. Read that again. One hundred times the context. Better answers. Same price.

The timeline of the arms race makes this sting harder. GPT-3 in 2020: 4,000 tokens. GPT-4: 32,000. Claude 3: 200,000. Gemini: 1 million. Gemini 2: 2 million. Every generation, every company, billions of dollars spent, all betting on the same assumption. More context equals better performance. MIT just proved that assumption was wrong the entire time. Not slightly wrong. Fundamentally wrong. The entire premise of the last five years of context window research — that the solution to AI memory was a bigger window — was the wrong answer to the wrong question.

The right question was never how much can you force an AI to hold in its head. It was whether you could teach an AI to know where to look. A human expert handed a 10,000-page archive does not read all 10,000 pages before answering your question. They navigate. They search. They find the relevant section, read it deeply, and synthesize the answer. RLMs are the first AI architecture that works the same way.

The code is open source. On GitHub right now. Free. No license fees. No API costs. Drop it in as a replacement for your existing LLM API calls and your application does not even notice the difference — except that it suddenly works on inputs it used to fail on entirely.

Prime Intellect — one of the leading AI research labs in the space — has already called RLMs a major research focus and described what comes next: teaching models to manage their own context through reinforcement learning, enabling agents to solve tasks spanning not hours, but weeks and months.

The context window wars are over. MIT won them by walking away from the battlefield.

Source: Zhang, Kraska, Khattab · MIT CSAIL · arXiv:2512.24601
Paper: arxiv.org/abs/2512.24601
GitHub: github.com/alexzhang13/rlm
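Purely as an illustration of the navigation loop described above, the kind of snippet the root model might run in its REPL could look like the following. The variable name, helper function, file path, and regex are assumptions for this example, not the repo's actual API; see the GitHub link in the post for the real implementation.

# Illustrative only: the document lives as an ordinary Python variable,
# OUTSIDE the model's context window, and the model pulls in small slices.
import re

document = open("contract.txt", encoding="utf-8").read()  # never pasted into a prompt

def find_sections(doc: str, pattern: str, window: int = 2_000) -> list[str]:
    """Return short slices of text around every regex hit instead of the whole doc."""
    snippets = []
    for match in re.finditer(pattern, doc, flags=re.IGNORECASE):
        start = max(0, match.start() - window // 2)
        snippets.append(doc[start : start + window])
    return snippets

# Only these few-KB excerpts ever enter a model call; each one could be handed
# to a separate sub-call ("sub-AI instance") and the answers merged afterwards.
excerpts = find_sections(document, r"termination\s+clause")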
Elias Al tweet media
146
449
2.1K
319.6K
Nicolas Genest@ngenest·
@AnthropicAI is this an interaction you believe is effective for saving tokens in the long term? Which instructions do you think make Claude rude by default?
Nicolas Genest tweet media
0
0
0
247
Nicolas Genest@ngenest·
CFL Football 🏈 is not Football PERIOD
Nicolas Genest tweet media
0
0
0
103
Nicolas Genest@ngenest·
secretsantadraws.com, 100% coded with AI. Organize your Secret Santa:
1. Enter the participants and their contact information
2. Couples mode prevents the draw from matching partners
3. Define the rules so everyone can have fun
4. Each participant is notified (fees may apply)
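For the couples mode in point 2, a rough sketch of how such a draw could work is below. It is purely illustrative; the site's actual implementation is not public, so every name here is an assumption.

# Couples-aware Secret Santa draw via rejection sampling (illustrative sketch).
import random

def draw(participants: list[str], couples: set[frozenset[str]], max_tries: int = 10_000) -> dict[str, str]:
    """Return a {giver: receiver} mapping where nobody draws themselves or their partner."""
    for _ in range(max_tries):
        receivers = participants[:]
        random.shuffle(receivers)
        pairs = dict(zip(participants, receivers))
        if all(
            giver != receiver and frozenset({giver, receiver}) not in couples
            for giver, receiver in pairs.items()
        ):
            return pairs
    raise ValueError("No valid assignment found; the constraints may be too tight.")

# Example: Alice and Bob are a couple, so they can never draw each other.
print(draw(["Alice", "Bob", "Carol", "Dan"], {frozenset({"Alice", "Bob"})}))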
0
0
0
106
Nicolas Genest@ngenest·
@gaganbiyani academy.codeboxx.com Participants in this program give up on themselves before the career-successful coaches give up on them. Graduation and tuition invoicing happen only when you get a job. The business simulation leading to AI-native tech skills is most immersive.
0
0
0
16
Gagan Biyani 🏛@gaganbiyani·
The dirty little secret of edtech: the biggest names don’t actually care if you learn anything. As co-founder of Udemy, it is something I reckon with every day…

Duolingo - edtech’s only decacorn, worth $14B. Brilliant app, addictive product, and great for motivation. But let’s be honest: most users can’t hold a basic conversation in their chosen language. It’s a game, not an education.

Masterclass - it’s called “edutainment” for a reason. Great brand and team. But not useful for serious learning.

Udemy/Coursera opened access to millions, but video courses have a fatal flaw: they only work for the most motivated. 4-10% completion rates! I still get DMs about their positive impact, but the average person still doesn’t view them as mainstream solutions to education.

Kajabi/Teachable nailed creator monetization. But many (not all) creators don’t prioritize outcomes — just sales. Too many $5,000 “get rich quick” courses with spammy marketing. There are gems, of course, but still not enough quality for mainstream acceptance.

Then there’s University of Phoenix, the worst offender. It proved you could tap federal student loans, deliver poor outcomes, and keep billions in revenue.

Ironically, the best education models — coding bootcamps like App Academy, BloomTech, General Assembly, Galvanize — actually drove real outcomes. But they didn’t quite reach scale, in large part due to unfair (and immoral, imho) practices by the higher education cartel.

Here’s the thing: everyone in this space starts with good intentions. I know the teams at Duolingo, Udemy, and others. They care. But the incentives of Edtech 1.0 pushed everyone toward engagement and monetization instead of real learning.

Public investors eventually caught on. Consumer growth stalled, B2B slowed, and valuations dropped. Coursera/Udemy are each ~$700M (!!) in annual revenue, but trade at 1.5-2.5x multiples (!!). It is a hard time in edtech.

We need Edtech 2.0. The next generation needs to deliver real learning outcomes AND high engagement. There are a number of companies trying - of course I believe Maven is one of them.

To build multiple $10B+ companies in education, we need to care deeply about whether people actually learn. American competitiveness is literally reliant on rebuilding our education system. AI is about to trigger the largest upskilling need in modern history. The opportunity is massive — and this time, we can get it right.

It may not seem like it, but I’m optimistic. Out from the ashes of Edtech 1.0 will rise Edtech 2.0. The new generation is going to deliver value, and make people believe again.
359
508
4.6K
554.8K
Aakash Gupta@aakashgupta·
This guy literally shared a step-by-step roadmap to build your first AI agent - and it’s gold.
Aakash Gupta tweet media
46
729
6.1K
571.9K
Nicolas Genest@ngenest·
@jasonlk Won’t Product-Market fit get you there naturally? The real conversation has to be on building trust and loyalty to secure repeat business. That’s what your conversation should have been about. No repeat, no future. No matter how high you get launching on hype.
0
0
0
303
Jason ✨👾SaaStr.Ai✨ Lemkin
Just out of a board meeting
Convo? How to go from $1m to $20m ARR in 12 months
Everyone should be having that conversation today
And building that and nothing else
34
12
283
36.9K
Nicolas Genest@ngenest·
@TorontoArgos and @MTLAlouettes There has never been a game in NFL history where the score was tied 4-4. In all of modern NFL history (post-1920), no game has reached a point where both teams had exactly 4 points on the scoreboard at the same time. Over and out.
0
0
0
9
Nicolas Genest@ngenest·
Shopify Says No New Hires Unless AI Can’t Do the Job - The Wall Street Journal - “Frankly, I don’t think it’s feasible to opt out of learning the skill of applying AI in your craft.” apple.news/AFrXXtg9USbWOW…
0
0
0
140
Nicolas Genest@ngenest·
1,486 games needed Talent AND Longevity
Nicolas Genest tweet media
0
0
0
68
Nicolas Genest@ngenest·
894 NHL Goals - done faster than any human has ever done it before!
Nicolas Genest tweet media
1
0
0
138