Richard Valtr
@valtrese
Founder @mewssystems
London · Joined April 2007
3.9K Following · 1.1K Followers
2.9K posts
Richard Valtr
Richard Valtr@valtrese·
Brilliant
Kira (Hindsight Capital)@Klaudnin3

The 2028 Global Intelligence Crisis That Wasn't

A Macro Memo from the Actual June 2028, Not the Fanfic Version

The unemployment rate printed 3.8% this morning, roughly where it's been all year. The market yawned. The S&P 500 is at 7,400, which is somehow both a record high and a disappointment to people who were promised 10,000 by every DCF model with an "AI Upside Case" tab.

We are writing this memo because in February 2026, a widely circulated Substack piece predicted that by this exact date, the S&P would be down 38%, unemployment would be 10.2%, and the mortgage market would be in free fall. It was beautifully written, rigorously structured, and wrong about nearly everything. We feel it is our duty — nay, our privilege — to conduct the post-mortem. In the authors' defense, it was explicitly labeled a "scenario, not a prediction." In our defense, 2,321 people liked it and several macro Twitter accounts made it their entire personality for six months.

How It Actually Started

In late 2025, agentic coding tools did indeed take a step-function jump in capability. The Citrini memo predicted that a competent developer could now "replicate the core functionality of a mid-market SaaS product in weeks." This was true! What the memo failed to mention was that a competent developer could also replicate the core functionality of a mid-market SaaS product in weeks in 2019. The difference was that back then, nobody did it because maintaining software is horrible, and in 2026, nobody did it because maintaining software is still horrible.

The procurement manager at the Fortune 500 who told the vendor he'd been "in conversations with OpenAI about replacing them entirely"? He got his 30% discount, then spent the next eighteen months trying to get his internal AI prototype to handle SSO correctly.
It could write a Shakespearean sonnet about SAML authentication but could not, for the life of it, actually implement SAML authentication without hallucinating an endpoint that didn't exist. He renewed the vendor contract at full price the following year.

The memo predicted ServiceNow's $NOW net new ACV growth would decelerate to 14% as customers cut seats. In reality, ServiceNow reported accelerating growth in 2027 because — and this is the part the doom thesis always misses — the AI agents that companies deployed generated more workflow tickets, not fewer. Every autonomous agent needed monitoring, logging, exception handling, and escalation paths. ServiceNow didn't sell fewer seats. They sold seats to robots.

SERVICENOW Q3 2027: "AI AGENT MANAGEMENT" BECOMES FASTEST-GROWING MODULE; CEO JOKES "OUR BEST CUSTOMERS ARE NOW NON-HUMAN" | Bloomberg, October 2027

The Friction That Refused to Die

The Citrini memo's most elegant argument was that AI agents would eliminate friction, and that trillions in enterprise value depended on friction persisting. Subscriptions that passively renewed, insurance policies nobody re-shopped, delivery apps that exploited laziness — all would be ruthlessly optimized away.

Here's what actually happened with subscriptions: AI agents did start cancelling unused subscriptions on behalf of users. Subscription companies responded by making cancellation flows so Byzantine that the AI agents needed other AI agents to navigate them. An arms race ensued. By Q2 2027, the average subscription cancellation flow involved a 47-step conversational gauntlet with an AI retention specialist. The median consumer's agent spent more tokens trying to cancel a $9.99/month meditation app than the consumer had spent meditating in the entire previous year. Net result on subscription revenue: approximately zero.

The memo predicted agents would disintermediate travel booking platforms.
In practice, when agents assembled "optimal" itineraries, they produced trips that were technically cheaper but involved three layovers, a 4am bus transfer in Ljubljana, and a hotel 45 minutes from the city center with a 4.1-star rating that turned out to be an Airbnb above a nightclub. Consumers used the agent, looked at its itinerary, said "absolutely not," and went back to $BKNG.

It turns out that what humans call "preferences" and what a cost-optimization function calls "irrational friction" are the same thing. People don't want the cheapest flight. They want the one that doesn't leave at 5am. We knew this. We have always known this. We briefly forgot because a Substack told us machines would make us rational.

The DoorDash $DASH Thesis, or "You Underestimate How Lazy People Are"

The memo called DoorDash the "poster child" of habitual intermediation destruction. Agents would compare twenty delivery apps and pick the cheapest. Vibe-coded competitors would flood the market. DoorDash's moat of "you're hungry, you're lazy, this is the app on your home screen" would evaporate.

Counterpoint: have you met people?

The vibe-coded delivery competitors did indeed launch. Dozens of them. They had names like Fetchr, GrubAgent, NomNom AI, and — we are not making this up — "Deliver.sol." They offered lower fees by passing 90-95% through to drivers. They also had no customer service, no restaurant onboarding team, no logistics optimization, no insurance, and no way to handle the moment when a driver ate half your order and marked it "delivered." The apps worked flawlessly in demo videos and catastrophically in the rain on a Friday night in Brooklyn. By Q3 2027, the subreddit r/VibecodeDeliveryHorror had 400,000 subscribers and a pinned post titled "My agent ordered me sushi from a restaurant that closed in 2019."

DoorDash stock is up 35% from the date of the Citrini memo.
The Payments Armageddon That Wasn't

Perhaps the most creative prediction was that AI agents would route around card interchange using stablecoins, destroying Visa $V, Mastercard $MA, and American Express $AXP.

What actually happened: agents tried to pay with stablecoins. Merchants said no. Not because they couldn't accept them, but because the fraud liability framework for stablecoin payments did not exist, and no CFO in America was going to accept payment in magic internet money to save 2% on interchange when the chargeback protections that interchange funded were the only thing standing between them and an army of AI agents submitting fraudulent refund claims.

That's the thing nobody modeled. AI didn't just empower consumers. It empowered fraud. The same agents that could price-optimize your protein bars could also generate synthetic identities, file fake chargebacks, and exploit return policies at scale. Visa and Mastercard's moat turned out not to be friction — it was trust infrastructure. When fraud exploded in early 2027, merchants practically begged to keep paying interchange.

MASTERCARD Q1 2028: NET REVENUES +11% Y/Y; CEO CITES "UNPRECEDENTED DEMAND FOR AI-POWERED FRAUD DETECTION SUITE" AND "RETURN TO CARD RAILS FROM ALTERNATIVE PAYMENT EXPERIMENTS" | Bloomberg, April 2028

Mastercard didn't die. It sold the antidote.

The Mortgage Crisis That Was Actually Just San Francisco Being San Francisco

The memo's most alarming prediction was that the $13 trillion mortgage market would crack because white-collar workers would lose their income and default on their loans.

What actually happened in housing: San Francisco home prices did decline, approximately 8% peak-to-trough. This was treated as a national emergency by San Francisco homeowners and as "Tuesday" by everyone who'd watched San Francisco home prices fall 8% roughly every four years since the city was founded.
The national housing market was fine, because the national housing market has a problem that is far more powerful than AI displacement: there aren't enough houses. The US has been underbuilding for fifteen years. A structural housing shortage does not resolve because some product managers in SOMA lost their jobs. If anything, the modest cooling in tech-heavy metros made housing more affordable for the nurses, teachers, and tradespeople who'd been priced out — people whose jobs, it should be noted, AI has not disrupted in any meaningful way.

The 780-FICO borrowers the memo flagged? Most of them had two-income households, 30-year fixed mortgages locked at 3-4% in 2020-2021, and six months of savings. The ones who lost their jobs found new ones — not always at the same pay, but enough to make a mortgage payment that was locked in at 2021 rates. Turns out a $2,400/month mortgage is pretty easy to service even at $120k instead of $180k, especially when your rate is 3.25% and the alternative is paying $3,500/month in rent.

FANNIE MAE: SERIOUS DELINQUENCY RATE REMAINS AT 0.6%, NEAR ALL-TIME LOWS; "AI DISPLACEMENT CONCERNS HAVE NOT MATERIALIZED IN CREDIT PERFORMANCE" | Fannie Mae Q2 2028 Credit Supplement

The Job Market: Disrupted, Not Destroyed

We are not going to pretend that AI has had zero impact on employment. It has. The labor market is different. Some categories of work have genuinely contracted — particularly rote analytical work, first-draft content generation, and basic code production. But the Citrini memo made the classic futurist error: it modeled job destruction in high resolution and job creation in zero resolution. It said AI "created new jobs" but "for every new role AI created, it rendered dozens obsolete." This sounded profound and was completely made up. Here's what they missed:

1. AI made existing jobs bigger, not extinct. The product manager at Salesforce didn't get replaced by Claude.
She used Claude to do the work of three product managers, got promoted, and now manages a portfolio twice the size. Companies didn't fire 60% of their PMs. They gave the surviving PMs AI tools and expanded their scope. Headcount was flat. Output tripled.

2. The "build it yourself" thesis created more jobs than it destroyed. All those companies that tried to replace their SaaS vendors with internal AI-built tools? They needed people to manage those tools. A new class of "AI operations" roles emerged — not the fake "prompt engineer" jobs from 2023, but genuine systems integration, agent orchestration, and reliability engineering roles. The BLS hasn't even finished categorizing them yet.

3. Humans got weird. The fastest-growing job categories of 2027-2028 were things nobody predicted: AI output auditors, "authenticity consultants" for brands that wanted to prove their content was human-made, in-person experience designers (turns out when everything digital gets commoditized, people pay more for analog), and — our personal favorite — professional "vibe curators" for corporate events, which is just party planning with a $300/hour rate and a LinkedIn title.

The unemployment rate is 3.8%. It was 3.7% when the memo was written. The composition has shifted, but the apocalypse has not arrived.

The Real Feedback Loop They Missed

The Citrini memo described a "negative feedback loop with no natural brake." AI gets better → companies cut workers → workers spend less → economy weakens → companies buy more AI → repeat until civilization collapses.

The natural brake they missed was called "shareholders." When companies cut too aggressively, quality collapsed. The first wave of AI-driven layoffs in 2026 did boost margins. The second wave, in early 2027, started producing disasters. AI-generated customer communications that were subtly unhinged. Product launches with no human gut-check that flopped spectacularly. Legal filings with hallucinated case citations (again).
A major airline's AI-managed pricing engine that accidentally sold 40,000 business-class tickets from New York to London for $12 each before a human noticed.

UNITED AIRLINES Q2 2027: $380M CHARGE RELATED TO "AUTONOMOUS PRICING SYSTEM ERROR"; CEO ANNOUNCES "HUMAN-IN-THE-LOOP" MANDATE FOR ALL REVENUE MANAGEMENT SYSTEMS | Bloomberg, July 2027

Companies re-hired. Not to the same levels, and not the same roles. But the "fire everyone, let the robots handle it" thesis ran directly into the wall of "the robots are confidently wrong 3% of the time and that 3% is extremely expensive." The negative feedback loop had a natural brake, and its name was liability.

India, Actually

The memo predicted India's IT services sector would collapse, the rupee would crash 18%, and the IMF would come knocking.

What actually happened: TCS, Infosys, and Wipro did see growth slow in traditional staff augmentation. They responded by — and stop us if you've heard this before — selling AI services. It turns out that the same cost arbitrage that made Indian developers attractive for manual coding also makes Indian firms attractive for AI implementation, training, and management. They pivoted from "we'll give you 500 developers" to "we'll give you 50 developers and 450 AI agents managed by our platform." The rupee is roughly where it was in February 2026. The IMF has not called.

What We Actually Got Right and Wrong

The bears got right: AI is transforming the economy. Wage growth for certain white-collar categories has stagnated. Inequality has widened. The political tensions around AI are real and growing. Some business models — particularly those built purely on information asymmetry — are under genuine pressure.

The bears got wrong: the speed, the severity, and the linearity.
The Citrini memo extrapolated every trend at its maximum velocity for 28 months and assumed no adaptation, no friction, no regulatory response, no human irrationality, no corporate incompetence, and no second-order effects that cut the other way. In short, they modeled the economy as a physics problem and forgot it's a biological one. Systems adapt. Humans are stubborn. Institutions are slow but not dead. And the most powerful force in the American economy is not artificial intelligence. It's inertia.

Closing

We say this with genuine respect for the original authors: it was a good piece. Thoughtful, well-structured, and asking the right questions. The scenario was worth gaming out. But the scenario assumed a frictionless spherical economy in a vacuum, and we live in a world where a Fortune 500 company once took nine months to change its font.

The canary is still alive. It just learned to use ChatGPT and is now posting on LinkedIn about its "AI-augmented singing journey."

The S&P is at 7,400. The mortgage market is fine. DoorDash still has a 28% take rate. And somewhere, a procurement manager is telling a SaaS vendor he could replace them with AI, while secretly praying they don't call his bluff.

Disclaimer: This is a rebuttal, not a prediction. If the 2028 Global Intelligence Crisis actually happens, please don't forward this back to us.
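The "no natural brake" loop versus the rebuttal's liability brake can be made concrete with a toy model. All numbers here are invented for illustration, and `savings_rate` and `error_cost` are hypothetical parameters: linear savings from automation, set against error costs that compound with the automated share, produce an interior optimum rather than a death spiral.

```python
# Toy illustration (made-up numbers): margin gain from automation is linear,
# but the cost of confidently-wrong output grows faster with the automated
# share, so aggressive cutting stops paying off well before 100% automation.

def net_benefit(automation_share, savings_rate=1.0, error_cost=3.0):
    """Net benefit of automating a given share of work (0..1)."""
    savings = savings_rate * automation_share
    liability = error_cost * automation_share ** 2  # errors compound with scale
    return savings - liability

# Scan automation shares in 1% steps and pick the best one.
best = max(range(0, 101), key=lambda pct: net_benefit(pct / 100))
print(f"optimal automation share: {best}%")  # interior optimum, not 0% or 100%
```

Under these invented parameters the optimum sits around one-sixth of the work automated; the point is only that a quadratic liability term is enough to stop the loop, which is the memo's "brake" argument in one line of algebra.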

Richard Valtr retweeted
James Medlock
James Medlock@jdcmedlock·
Anyways, all this is to say I think a lot of people underestimate the extent to which AI is much better now than it was last year, and a lot of people overestimate the extent to which this necessarily means we're going to get 20% GDP growth or mass unemployment.
Jan Barta
Jan Barta@absurdtrader·
Big congratulations to Mews on the new $300m investment and $2.5b valuation!
Richard Valtr retweeted
Petr Pavel
Petr Pavel@prezidentpavel·
Dear fellow citizens, as President of the Republic I have the exceptional opportunity to address you on the very first day of the new year and to wish you that it be a good one, and that each of you thrive.
Richard Valtr retweeted
Jorge Bravo Abad
Jorge Bravo Abad@bravo_abad·
Do humans learn like transformers? It's a question that sounds almost philosophical, but Pesnot Lerousseau and Summerfield turned it into a rigorous experiment. They trained both humans (n = 530) and small transformer networks on the same rule-learning task, then manipulated a single variable: the statistical distribution of training examples — from fully diverse (every example unique) to highly redundant (the same items repeated over and over).

The result is striking. Both humans and transformers show nearly identical sensitivity to this manipulation. Train on diverse data, and learners generalize rules to novel situations ("in-context learning"). Train on redundant data, and they memorize specific examples ("in-weights learning"). The transition between strategies occurs at the same critical point (Zipf exponent α ≈ 1) in both biological and artificial systems. Neither can easily do both — until you give them a composite distribution mixing diversity and redundancy, at which point both humans and transformers become "double learners."

But here's where they diverge: humans benefit from curricula. Present diverse examples early, and people discover the generalizable rule without losing the ability to memorize later. Transformers, by contrast, suffer catastrophic interference — whatever they learn second overwrites what came first.

The implication for AI and education alike: the structure of training data matters as much as its content. And while transformers may match human learning in surprising ways, they still lack the flexibility that lets us benefit from well-designed curricula.

Paper: nature.com/articles/s4156…
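The paper's manipulated variable, the Zipf exponent of the training distribution, can be sketched with a toy sampler. This is illustrative only, not the authors' code; the item counts and draw sizes are arbitrary:

```python
import random

# Toy sketch (not the paper's code): draw training items from a Zipf-like
# distribution. alpha = 0 gives a fully diverse (uniform) stream; a large
# alpha gives a highly redundant one where a few items dominate. The paper's
# reported critical point between the two learning regimes is alpha ~ 1.

def zipf_sample(n_items, n_draws, alpha, seed=0):
    rng = random.Random(seed)
    weights = [1.0 / (rank + 1) ** alpha for rank in range(n_items)]
    return rng.choices(range(n_items), weights=weights, k=n_draws)

def diversity(samples):
    """Fraction of draws that are unique items."""
    return len(set(samples)) / len(samples)

diverse = zipf_sample(1000, 500, alpha=0.0)    # nearly every example unique
redundant = zipf_sample(1000, 500, alpha=3.0)  # the same few items repeated
print(diversity(diverse), diversity(redundant))
```

Sweeping `alpha` through 1 with a sampler like this is the distributional knob the experiment turns; everything about what the learners then do with those streams is, of course, the hard part of the study.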
Richard Valtr retweeted
Freda Duan
Freda Duan@FredaDuan·
Some deep thinking about the frontier-model business model. All of this is grounded in numbers leaked by The Information, NYT, etc.

🔵 The Core: It's a Compute-Burn Machine

At its heart, the model is brutally simple: almost all costs come from compute — inference, and especially training. Training follows something like a scaling law. Let's assume costs rise ~5x every year, and ROI on training costs is 2x. That creates a weird dynamic:

Year 1 training cost: 1
Year 2 revenue from that model: 2
But Year 2 training cost for the next model: 5
Net: +2 - 5 = -3

Run it forward and it gets worse:

Year 3 revenue: +10
Year 3 training cost: -25
Net: -15

Frontier models, as currently run, are negative-cash-flow snowballs. Every generation burns more cash than the one before. For this to ever flip to positive cash flow, only two things can logically change:

A. Revenue grows much faster than 2x, or
B. Training cost growth slows from 5x a year to something like <2x

Anthropic's CEO Dario Amodei has broken down scenario B ("training costs stop growing exponentially") into two possible realities: youtu.be/GcqQ1ebBqkc

1/ Physical/economic limits: you simply can't train a model 5x bigger — not enough chips, not enough power, or the cost approaches world GDP.
2/ Diminishing returns: you could train a bigger model, but the scaling curve flattens. Spending another 10x stops being worth it.

🔵 What OpenAI and Anthropic's Numbers Reveal

Both companies' leaked financial projections basically validate this framework.

OpenAI: their plan effectively assumes total compute capacity stops growing after 2028. Translation: margins improve because training costs flatten. This is scenario B. theinformation.com/articles/opena…

Anthropic:
1/ They assume the ROI per model increases each year. Spend 1, get back say 5 instead of 2.
2/ Their compute spend growth is also much more muted.
From FY25 to FY28, OpenAI's compute cost growth far exceeds Anthropic's. Using the framework above, they're counting on both the A revenue ramp and B slower cost growth. theinformation.com/articles/anthr…

🔵 $NFLX Is the Closest Analogy

In tech, capital-intensive models are rare, though not unprecedented. $NFLX is a good analogy: for years it had deeply negative cash flow that worsened annually. They had to pour money into content upfront, and those assets depreciated over four years. In many ways it resembles data-center and model-training economics.

Peak cash burn in 2019: -$3B
2020 cash flow: +$2B

Why the sudden swing positive? COVID shut down production. Content spend stopped growing. Cash flow instantly flipped.

🔵 The Endgame: Margins Arrive When Cost Growth Slows

$NFLX didn't stop investing in content entirely — it just stopped *growing* that investment aggressively once it reached ~300M global subscribers. At that scale, stickiness is high, and they only need to maintain their position, not expand content spend 10x a year.

I don't think OpenAI or Anthropic will ever stop training entirely. But they won't need to grow training spend by multiples forever. At some point: ROI per model goes up, or scaling limits kick in, or both. And the moment annual training spend stops growing 5x a year, profit margins show up almost immediately. That's the strange thing about LLM economics: it's a burn machine… until suddenly it isn't.

Sources:
theinformation.com/articles/opena…
theinformation.com/articles/opena…
theinformation.com/articles/opena…
theinformation.com/articles/anthr…

Full article: open.substack.com/pub/robonomics…
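The thread's year-by-year arithmetic can be written out directly. This is a toy model using the thread's own assumed multiples (5x annual cost growth, 2x ROI), not real financials:

```python
# Toy model of the thread's compute-burn arithmetic.
# Assumptions (from the thread, illustrative only):
#   - training cost grows `cost_growth`x per year (thread assumes 5x)
#   - each model returns `roi`x its training cost in revenue the next year

def net_cash_flow(years, cost_growth=5.0, roi=2.0, year1_cost=1.0):
    """Net cash flow per year: last model's revenue minus this year's training bill."""
    flows = []
    cost = year1_cost
    prev_cost = 0.0
    for _ in range(years):
        revenue = roi * prev_cost     # revenue from last year's model
        flows.append(revenue - cost)  # minus this year's training cost
        prev_cost, cost = cost, cost * cost_growth
    return flows

print(net_cash_flow(4))                    # burn deepens every year at 5x growth
print(net_cash_flow(4, cost_growth=1.0))   # flatten training spend: margins appear
```

Running it reproduces the thread's -3 and -15 figures for years 2 and 3, and setting `cost_growth` to 1 shows the "suddenly it isn't a burn machine" flip: every year after the first turns cash-positive.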
Richard Valtr retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
Sharing an interesting recent conversation on AI's impact on the economy.

AI has been compared to various historical precedents: electricity, the industrial revolution, etc. I think the strongest analogy is that of AI as a new computing paradigm (Software 2.0), because both are fundamentally about the automation of digital information processing.

If you were to forecast the impact of computing on the job market in the ~1980s, the most predictive feature of a task/job you'd look at is the extent to which its algorithm is fixed, i.e. are you just mechanically transforming information according to rote, easy-to-specify rules (e.g. typing, bookkeeping, human calculators, etc.)? Back then, this was the class of programs that the computing capability of that era allowed us to write (by hand, manually).

With AI now, we are able to write new programs that we could never hope to write by hand before. We do it by specifying objectives (e.g. classification accuracy, reward functions), and we search the program space via gradient descent to find neural networks that work well against that objective. This is my Software 2.0 blog post from a while ago.

In this new programming paradigm, the new most predictive feature to look at is verifiability. If a task/job is verifiable, then it is optimizable directly or via reinforcement learning, and a neural net can be trained to work extremely well. It's about the extent to which an AI can "practice" something. The environment has to be resettable (you can start a new attempt), efficient (a lot of attempts can be made), and rewardable (there is some automated process to reward any specific attempt that was made).

The more a task/job is verifiable, the more amenable it is to automation in the new programming paradigm. If it is not verifiable, it has to fall out from the neural net's magic of generalization, fingers crossed, or via weaker means like imitation. This is what's driving the "jagged" frontier of progress in LLMs.
Tasks that are verifiable progress rapidly, including possibly beyond the ability of top experts (e.g. math, code, amount of time spent watching videos, anything that looks like puzzles with correct answers), while many others lag by comparison (creative, strategic, tasks that combine real-world knowledge, state, context and common sense).

Software 1.0 easily automates what you can specify. Software 2.0 easily automates what you can verify.
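The resettable/efficient/rewardable triad can be sketched as a minimal "practice" loop. This is a toy illustration with hypothetical names, not any real RL API; the "task" is just guessing a hidden integer, with an automated reward and cheap repeated attempts:

```python
import random

# Illustrative sketch (all names hypothetical): a task is "verifiable" when an
# agent can reset it, attempt it cheaply many times, and get an automated
# reward for each attempt. Here reward is negative distance to a hidden target.

class VerifiableTask:
    def __init__(self, target=42):
        self.target = target

    def reset(self):
        """Resettable: a fresh attempt can be started at any time."""
        pass

    def reward(self, attempt):
        """Rewardable: automated scoring, no human in the loop."""
        return -abs(attempt - self.target)

def practice(task, attempts=1000, seed=0):
    """Random-search 'practice': many cheap attempts plus automated reward."""
    rng = random.Random(seed)
    best, best_r = None, float("-inf")
    for _ in range(attempts):  # efficient: lots of attempts per second
        task.reset()
        guess = rng.randrange(0, 100)
        r = task.reward(guess)
        if r > best_r:
            best, best_r = guess, r
    return best

print(practice(VerifiableTask()))
```

Swap random search for gradient descent or RL and the shape is the same; drop any one of the three properties (no reset, expensive attempts, or no automated reward) and the practice loop, and hence the optimization, stalls.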
Richard Valtr retweeted
Chris Offner
Chris Offner@chrisoffner3d·
The unbridled joy of listening to someone smart who’s not trying to sell you anything.
Dwarkesh Patel@dwarkesh_sp

The @karpathy interview

0:00:00 – AGI is still a decade away
0:30:33 – LLM cognitive deficits
0:40:53 – RL is terrible
0:50:26 – How do humans learn?
1:07:13 – AGI will blend into 2% GDP growth
1:18:24 – ASI
1:33:38 – Evolution of intelligence & culture
1:43:43 – Why self-driving took so long
1:57:08 – Future of education

Look up Dwarkesh Podcast on YouTube, Apple Podcasts, Spotify, etc. Enjoy!

Richard Valtr retweeted
Derek Thompson
Derek Thompson@DKThomp·
Like everything @jburnmurdoch makes, this chart is amazing. The sharp decline in conscientiousness and rise in neuroticism among young people is astonishing. But also of note: literally every age group has gotten less extroverted in the age of the smartphone.
[chart image]
Richard Valtr retweeted
Suhail
Suhail@Suhail·
This is the most inspiring thing I’ve read in AI in the last two years: storage.googleapis.com/deepmind-media… What a beautiful future ahead, just happy to take part in it.
Richard Valtr retweeted
Battery Ventures
Battery Ventures@BatteryVentures·
Exciting news from our portfolio company, @MewsSystems! 🙌 Mews has raised $75 million to fuel expansion in the U.S. and Europe while continuing to innovate in hospitality tech. Congrats to Richard Valtr (@valtrese), Matthijs Welle and the team! Read more: axios.com/pro/retail-dea…
Richard Valtr retweeted
.
.@APresserV2·
Man like @PeterOnSports playing Kendrick's 'Humble' at FT 💀
Richard Valtr retweeted
Ethan Mollick
Ethan Mollick@emollick·
New randomized, controlled trial of students using GPT-4 as a tutor in Nigeria. 6 weeks of after-school AI tutoring = 2 years of typical learning gains, outperforming 80% of other educational interventions. And it helped all students, especially girls who were initially behind
Richard Valtr
Richard Valtr@valtrese·
@paulg I wonder why the link between Wokeness and Marxism is always overplayed (I don't find woke ideology particularly Marxist), while the link to an extreme form of Rawls hardly ever gets mentioned.
Paul Graham
Paul Graham@paulg·
I just published a new essay, and since the algorithm prefers links in replies, I dutifully put it in a thread about why I wrote it. But a lot of people didn't notice the link, so screw that: The Origins of Wokeness: paulgraham.com/woke.html
Richard Valtr retweeted
Alec Stapp
Alec Stapp@AlecStapp·
This story about SpaceX engineers transporting a rocket from Texas to Florida is insanely hardcore
[images attached]
Richard Valtr retweeted
Quantіan
Quantіan@quantian1·
Obviously Chamath knows this, but US tax rates already fit on one page of simple English. The other 6999 pages enumerate every edge case of what counts as income and what doesn’t and what the deductions are, which you still need to define for a flat tax!
Chamath Palihapitiya@chamath

Replace the US Federal Tax Code's 7000 pages and millions of words with a simple flat tax. It could fit into a few pages of simple english, make paying taxes simple and enforcement even simpler.

Richard Valtr retweeted
Crémieux
Crémieux@cremieuxrecueil·
The anti-fluoride crowd usually relies on studies of extremely low quality to make their case. One example that has stuck with me since I saw it was this: this study suggested that maternal fluoride exposure during pregnancy depressed the IQs of their children later on.

There are multiple questionable results in this single graphical display of the study's findings, and they throw the whole thing into question.

First, as you can see, the significant decrement they observed in IQs by maternal urinary fluoride concentrations was only observed in males, and not overall or in females. Why? Completely unclear, and marginally significant (p = 0.02). If we take it seriously, then the effect of a small amount of fluoride for those males was enormous. Each mg/L should reduce boys' IQs by 4.49 points. Males are so fragile!

Second, look on the right now and you'll see that the difference in the IQs of kids in fluoridated and non-fluoridated communities was nonsignificant, but at the same time, the impact of maternal self-reported fluoride intake from beverages was significant. What a curious combination of results! And what a curious p-value for this main effect: 0.04.

Third, did you notice the right plot, showing the alleged impacts of fluoride by maternal self-reported intake, wasn't split by sex, but instead by something that failed to moderate the association? You could argue they did that to maximize the amount of information they showed in the set of graphs. OK, but I doubt it. More realistically, this happened because the sex interaction by maternal self-reported fluoride intake was nonsignificant.

This combination of results is extremely suspicious. It has all the hallmarks of being the result of a p-hacking expedition, and there are no indications that it should be regarded as a trustworthy finding indicating that we should be deeply concerned about fluoride. For one, the first result is a marginally-significant interaction with no biological plausibility.
For two, this statistically unlikely and biologically dubious interaction fails to replicate. For three, these effects are enormous (yet barely significant despite a reasonable N and variance) for such small exposures. And for all the marbles, an effect that should have been present if the authors wanted us to take their reasoning about all of this seriously just wasn't.

And despite demonstrating nothing, this is one of the better anti-fluoride studies.

If you want to read the only study that has rigorously, causally looked into the impacts of fluoride in the typical range of exposures, check out: x.com/cremieuxrecuei…

If you want to read this terrible study, first, guess where it was published. If you guessed @JAMAPediatrics, you're right! The link is here: jamanetwork.com/journals/jamap…

And if you're interested, I have another thread on a recent, awful study published in JAMAPeds, here: x.com/cremieuxrecuei…
[study figure image]
AnechoicMedia@AnechoicMedia_

This is incorrect, the report did not quantify the relationship between fluoride and IQ, and this isn't a real AP headline. The figure "2-5" was included in the source article as being "suggested" by "some studies reviewed in the report" at an unspecified exposure level.

Richard Valtr retweeted
vittorio
vittorio@IterIntellectus·
holy shit, it’s happening. a new paper in science just presented “evo”, an ai model that learns biology like a language, from single dna mutations to entire genomes. it doesn’t just predict biology; it can design it. we’re so close to creating new life i can taste it.