

Daniel Horowitz
164.9K posts

@RMConservative
Senior Editor @TheBlaze Host: CR Podcast https://t.co/nkkw7iupta YouTube: https://t.co/UdpNPp4izE Substack: https://t.co/OuJM4fiDqE

Let the mosques prove me wrong. Condemn the violence. x.com/realdailywire/…

Industry insiders acknowledge that many in the US have negative feelings about the technology. bloomberg.com/news/newslette…
Nick Fuentes OBLITERATES Joe Kent & Tucker Carlson — accusing them of shamelessly grifting on the assassination of Charlie Kirk. Please watch this video. Hate to agree with Nick, but I agree with every single second of this 6-minute rant….

"Massive investment in AI contributed basically zero to US economic growth last year," per Goldman Sachs



🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵



JUST DROPPED: Anthropic's research proves AI coding tools are secretly making developers worse. "AI use impairs conceptual understanding, code reading, and debugging without delivering significant efficiency gains." -- That's the paper's actual conclusion. 17% score drop learning new libraries with AI. Sub-40% scores when AI wrote everything. 0 measurable speed improvement. → Prompting replaces thinking, not just typing → Comprehension gaps compound — you ship code you can't debug → The productivity illusion hides until something breaks in prod Here's why this changes everything: Speed metrics look fine on a dashboard. Understanding gaps don't show up until a critical failure, and when they do, the whole team is lost. Forcing AI adoption for "10x output" is a slow-burning technical debt nobody is measuring. Full paper: arxiv.org/abs/2601.20245