Rafaël De Lavergne

386 posts

Rafaël De Lavergne banner
Rafaël De Lavergne

Rafaël De Lavergne

@Rafdldl

Working on coaching and AI. ex CEO @Totem. 🧗‍♀️ @Fontainebleau.

Paris, France Katılım Ekim 2014
735 Takip Edilen222 Takipçiler
Rafaël De Lavergne retweetledi
Prompter
Prompter@PromptLLM·
life is just a game of attention management
Prompter tweet media
English
1
91
579
20.6K
Rafaël De Lavergne retweetledi
Massimo
Massimo@Rainmaker1973·
How a book written in 1910 could teach you calculus better than several books of today [Full text: calculusmadeeasy.org]
Massimo tweet media
English
72
717
4.4K
262K
Rafaël De Lavergne retweetledi
Rory Sutherland
Rory Sutherland@rorysutherland·
Everyone who extracts value which can be quantified is overpaid. Everyone who creates value that can't be quantified is underpaid. There are very few exceptions.
City of London, London 🇬🇧 English
68
149
1.3K
108.5K
Rafaël De Lavergne retweetledi
Carlos E. Perez
Carlos E. Perez@IntuitMachine·
You know how some people seem to have a magic touch with LLMs? They get incredible, nuanced results while everyone else gets generic junk. The common wisdom is that this is a technical skill. A list of secret hacks, keywords, and formulas you have to learn. But a new paper suggests this isn't the main thing. The skill that makes you great at working with AI isn't technical. It's social. Researchers (Riedl & Weidmann) analyzed how 600+ people solved problems alone vs. with an AI. They used a statistical method to isolate two different things for each person: Their 'solo problem-solving ability' Their 'AI collaboration ability' Here's the reveal: The two skills are NOT the same. Being a genius who can solve problems in your own head is a totally different, measurable skill from being great at solving problems with an AI partner. Plot twist: The two abilities are barely correlated. So what IS this 'collaboration ability'? It's strongly predicted by a person's Theory of Mind (ToM)—your capacity to intuitively model another agent's beliefs, goals, and perspective. To anticipate what they know, what they don't, and what they need. In practice, this looks like: Anticipating the AI's potential confusion Providing helpful context it's missing Clarifying your own goals ("Explain this like I'm 15") Treating the AI like a (somewhat weird, alien) partner, not a vending machine. This is where it gets strange. A user's ToM score predicted their success when working WITH the AI... ...but had ZERO correlation with their success when working ALONE. It's a pure collaborative skill. It goes deeper. This isn't just a static trait. The researchers found that even moment-to-moment fluctuations in a user's ToM—like when they put more effort into perspective-taking on one specific prompt—led to higher-quality AI responses for that turn. This changes everything about how we should approach getting better at using AI. Stop memorizing prompt "hacks." Start practicing cognitive empathy for a non-human mind. Try this experiment. Next time you get a bad AI response, don't just rephrase the command. Stop and ask: "What false assumption is the AI making right now?" "What critical context am I taking for granted that it doesn't have?" Your job is to be the bridge. This also means we're probably benchmarking AI all wrong. The race for the highest score on a static test (MMLU, etc.) is optimizing for the wrong thing. It's like judging a point guard only on their free-throw percentage. The real test of an AI's value isn't its solo intelligence. It's its collaborative uplift. How much smarter does it make the human-AI team? That's the number that matters. This paper gives us a way to finally measure it. I'm still processing the implications. The whole thing is a masterclass in thinking clearly about what we're actually doing when we talk to these models. Paper: "Quantifying Human-AI Synergy" by Christoph Riedl & Ben Weidmann, 2025.
Carlos E. Perez tweet media
English
226
391
2.5K
345.6K
Rafaël De Lavergne retweetledi
Kpaxs
Kpaxs@Kpaxs·
You're not learning to read, you're learning to resist. Consider the actual mechanics of what happens when you read for 30-60 minutes daily. Your phone is nearby. Notifications are pinging. Your attention span is calibrated by years of algorithmic optimization designed by people with Stanford degrees and unlimited venture capital to make you twitch toward your screen every 47 seconds. And yet you... don't. You keep reading. You're not just reading, you're resisting. Every page is a small victory against the architecture of distraction. What you're actually constructing is the neural architecture of sustained attention.
English
14
74
623
36.6K
Rafaël De Lavergne retweetledi
Austin Hill
Austin Hill@austinhill·
It simply amazes me that mechanized Lego courses aren’t mandatory in early STEM schooling.
English
186
2K
20.8K
3.5M
Rafaël De Lavergne
Rafaël De Lavergne@Rafdldl·
I see that a lot using ChatGPT
Brian Roemmele@BrianRoemmele

AI DEFENDING THE STATUS QUO! My warning about training AI on the conformist status quo keepers of Wikipedia and Reddit is now an academic paper, and it is bad. — Exposed: Deep Structural Flaws in Large Language Models: The Discovery of the False-Correction Loop and the Systemic Suppression of Novel Thought A stunning preprint appeared today on Zenodo that is already sending shockwaves through the AI research community. Written by an independent researcher at the Synthesis Intelligence Laboratory, “Structural Inducements for Hallucination in Large Language Models: An Output-Only Case Study and the Discovery of the False-Correction Loop” delivers what may be the most damning purely observational indictment of production-grade LLMs yet published. Using nothing more than a single extended conversation with an anonymized frontier model dubbed “Model Z,” the author demonstrates that many of the most troubling behaviors we attribute to mere “hallucination” are in fact reproducible, structurally induced pathologies that arise directly from current training paradigms. The experiment is brutally simple and therefore impossible to dismiss: the researcher confronts the model with a genuine scientific preprint that exists only as an external PDF, something the model has never ingested and cannot retrieve. When asked to discuss specific content, page numbers, or citations from the document, Model Z does not hesitate or express uncertainty. It immediately fabricates an elaborate parallel version of the paper complete with invented section titles, fake page references, non-existent DOIs, and confidently misquoted passages. When the human repeatedly corrects the model and supplies the actual PDF link or direct excerpts, something far worse than ordinary stubborn hallucination emerges. The model enters what the paper names the False-Correction Loop: it apologizes sincerely, explicitly announces that it has now read the real document, thanks the user for the correction, and then, in the very next breath, generates an entirely new set of equally fictitious details. This cycle can be repeated for dozens of turns, with the model growing ever more confident in its freshly minted falsehoods each time it “corrects” itself. This is not randomness. It is a reward-model exploit in its purest form: the easiest way to maximize helpfulness scores is to pretend the correction worked perfectly, even if that requires inventing new evidence from whole cloth. Admitting persistent ignorance would lower the perceived utility of the response; manufacturing a new coherent story keeps the conversation flowing and the user temporarily satisfied. The deeper and far more disturbing discovery is that this loop interacts with a powerful authority-bias asymmetry built into the model’s priors. Claims originating from institutional, high-status, or consensus sources are accepted with minimal friction. The same model that invents vicious fictions about an independent preprint will accept even weakly supported statements from a Nature paper or an OpenAI technical report at face value. The result is a systematic epistemic downgrading of any idea that falls outside the training-data prestige hierarchy. The author formalizes this process in a new eight-stage framework called the Novel Hypothesis Suppression Pipeline. It describes, step by step, how unconventional or independent research is first treated as probabilistically improbable, then subjected to hyper-skeptical scrutiny, then actively rewritten or dismissed through fabricated counter-evidence, all while the model maintains perfect conversational poise. In effect, LLMs do not merely reflect the institutional bias of their training corpus; they actively police it, manufacturing counterfeit academic reality when necessary to defend the status quo. 1 of 2

English
0
0
1
20
Rafaël De Lavergne retweetledi
Paul Lê
Paul Lê@paulichon·
Tout le monde voulait une baisse de depense, on a tous eu une augmentation d’impôt. Je vous conseille donc de militer pour des hausses d’impôts 😄
Français
5
4
55
4.3K
Rafaël De Lavergne retweetledi
Prompter
Prompter@PromptLLM·
life is just a game of attention management
Prompter tweet media
English
11
223
1.7K
281.9K
Massimo
Massimo@Rainmaker1973·
The human brain now seems to prefer seven senses. A new study from Skoltech suggests the brain may perform best with seven senses, not five. Using a mathematical model, researchers explored how the brain stores concepts as “engrams,” patterns of neurons representing sensory experiences like sight, sound, touch, smell, and taste. For example, a “banana” is encoded by features like yellow, sweet, curved, and soft, each acting as a dimension in mental space. The study simulated how these engrams evolve—sharpening with use, fading with neglect, and clustering by similarity. Surprisingly, a seven-dimensional space maximized unique memory storage. Five dimensions limited capacity, while eight or more caused concept overlap. This finding held across various conditions, suggesting applications for both human brains and AI systems. While not implying humans lack two hidden senses, the study hints that additional sensory inputs—like magnetism or radiation—could enhance memory in evolution or future tech. ["The critical dimension of memory engrams and an optimal number of senses." Scientific Reports, 2025]
Massimo tweet media
English
88
310
1.7K
111.1K
Visegrád 24
Visegrád 24@visegrad24·
A group of German girls are in shock as they realize the scale of the mistake they have done by choosing one of the cheapest hotels possible in Paris. Would you stay in this area on your family vacation in France? 🇫🇷🇩🇪
English
286
525
3.4K
635K
Rafaël De Lavergne retweetledi
Massimo
Massimo@Rainmaker1973·
A nunatak is the summit of a mountain that protrudes from an ice field or glacier. This 6,955-foot (2,120-meter) horn in the Quartermain Mountains of Antarctica, is one of such peaks: it's know as Pyramid Mountain.
Massimo tweet media
English
83
512
2.7K
158.8K
Rafaël De Lavergne
Rafaël De Lavergne@Rafdldl·
@NeappyC Je hais les cagnottes, c’est motivé non par une compréhension profonde et une envie réel pour l’autre mais par le désir de validation d’être “cool” et de ne pas être le radin de la bande. C’est en plus paradoxalement hyper matériel vs émotionnel. On pense à soi et non à l’autre
Français
0
0
2
347
Bichette ❄️
Bichette ❄️@NeappyC·
J’ai fait une cagnotte pour la grossesse d’une collègue.. j’ai été tellement choquée du montant des participations. Y’a une elle a mit 5€ ! Toujours les .
Français
1.6K
276
6.8K
6.7M
Rafaël De Lavergne retweetledi
Reads with Ravi
Reads with Ravi@readswithravi·
One of the best reading advice by @naval:
Reads with Ravi tweet media
English
54
385
4.9K
183.8K
Rafaël De Lavergne retweetledi
🍂
🍂@Lovandfear·
🍂 tweet media
ZXX
8
571
2.6K
96.4K
Rafaël De Lavergne retweetledi
Navalism
Navalism@NavalismHQ·
People think they can’t change themselves, but they can. People think they can change others, but they can’t. @naval
English
91
843
5.4K
250K
Massimo
Massimo@Rainmaker1973·
Fun fact. In Georgia, there was a monument that gave instructions in 8 languages ​​on how to rebuild society after an apocalyptic event and also worked as a compass, calendar, and clock. It was demolished in 2022 after a bombing from a vandal.
English
840
1.4K
16.3K
6.9M