
Logic Lab AI 🧪
@LogicLabAI
287 posts
AI is the most important tech of our lifetime. Too many people are lost. I run the lab that explains it simply.

Anthropic accidentally leaked the source code of its Claude Code CLI tool. A source map file in the public npm package exposed nearly 1,900 TypeScript files and more than 500,000 lines of code. This included internal architecture details and references to unreleased features. The code was publicly downloadable from Anthropic's own storage.

@sglaas In the medium term, we'll all have LLMs running locally. Whether it's Google with its TPUs (already shipped by default in Pixel phones) or Taalas with its hardware models, I don't think inference will stay in the cloud for very long overall.

Huge Anthropic leak just dropped: the entire Claude Code CLI source is now public. A misconfigured .map file in their npm package exposed a direct download link to the full unobfuscated TypeScript codebase in Anthropic's own R2 bucket. Discovered by Chaofan Shou (@Fried_rice), the dump is massive: 1,900 files and 512,000+ lines, including the complete tool system, 50+ slash commands, the multi-agent coordinator, the React/Ink terminal UI, the IDE bridge, the permission engine, and several unreleased features. The full repo is live on GitHub (@nichxbt): github.com/nirholas/claud… Clean mirrors are already up for easy browsing (@baanditeagle): cc-poster.vercel.app and cc-hidden-deploy.vercel.app It's spreading fast; the entire dev community is already tearing through it.
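The mechanism behind this kind of leak is worth spelling out. A bundle's `.map` file is a Source Map v3 JSON document: its `sources` field lists the original file paths or URLs the bundle was compiled from, and its optional `sourcesContent` field can embed the full original text of each file. A minimal sketch (the map below is a hypothetical illustration, not Anthropic's actual file):

```python
import json

# Minimal Source Map v3 document. "sources" lists the original files;
# "sourcesContent", when present, embeds their complete source text.
# File names here are made up for illustration.
source_map = json.loads("""
{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/tools/bash.ts", "src/agents/coordinator.ts"],
  "sourcesContent": ["export const run = () => {};", "export class Coordinator {}"],
  "mappings": "AAAA"
}
""")

# Recover every original file the bundle was built from.
for path, content in zip(source_map["sources"], source_map["sourcesContent"]):
    print(f"{path}: {len(content)} chars")
```

Even when `sourcesContent` is omitted, the `sources` entries can be absolute URLs, so if they point at a publicly readable storage bucket, shipping the map file amounts to publishing a download link for the whole codebase.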

Our 16th Transporter rideshare mission is targeted to launch tomorrow from California and will deliver 119 payloads to orbit → spacex.com/launches/trans…

In this episode of Founder Firesides, YC Managing Partner Jared Friedman talks to Karine Mellata (@karine_exe), co-founder of Variance (@trustvariance), who is coming out of stealth and announcing their $21 million Series A. Variance builds purpose-built AI agents for risk and compliance, automating fraud detection, content review, and identity verification for Fortune 500 companies and platforms like GoFundMe. They discuss why Variance built in the shadows for three years, detecting state-sponsored fraud rings, and the accident that nearly ended the company.

00:49 – The AI That Keeps the Internet Safe
01:28 – Why They Stayed Secret for 3 Years
02:26 – You’ve Used This Without Knowing It
02:57 – How GoFundMe Stops Scams
03:59 – How Scammers Use Big News Events
05:50 – Checking IDs and Businesses Online
07:44 – How the AI Agents Work
09:28 – The Hardest Problem: Bad Data
12:07 – Why This Only Works Now
14:22 – Catching Organized Fraud Groups
16:26 – Tiny Team, Huge Output
20:18 – How They Met at Apple
22:24 – Getting Their First Customer
24:57 – Recovering from Getting Hit by a Truck
29:36 – Sticking to One Big Idea

Yep, that's basically how I became a "doomer". I did a research project to figure out what the counterarguments to x-risk were and realised... there was nothing that made sense! Just people religiously clinging to their cognitive dissonance, with zero rationality. Kind of like a cult when you think about it. Quite horrific...


There are lots of projects that could really help the transition to superintelligence go much better, which almost nobody is working on. With @finmoorhouse, I’ve written up eight ideas that seem especially promising.

Some are about shaping AI systems themselves: independently evaluating AI character traits, benchmarking AI for strategic and philosophical reasoning, auditing models for sabotage and backdoors, and brokering deals with AIs to disclose early forms of misalignment.

Others are about building tools on top of AI. There’s so much low-hanging fruit in tools that improve collective epistemics (e.g. reliability tracking for public figures) and enable coordination (e.g. monitoring and verification tools).

We also sketch out a CSET-style think tank focused on the governance of outer space. And we propose a coalition of concerned ML researchers who commit to coordinated action if AI companies cross clear red lines.

This isn’t a final list by any means, and I'd love to hear about other very concrete projects for handling the intelligence explosion. There’s so much to do! Link in reply.
