Freedom Preetham
15.2K posts
@freedompreetham
Founding Chair, CELL (@cellbiosf). Founder, https://t.co/Iio0YlWj2v | AI | Math | Genomics | Quantum Physics | https://t.co/KSL0H1KvU6

San Francisco, CA · Joined June 2008
3.6K Following · 14.5K Followers
Freedom Preetham reposted
Andrej Karpathy @karpathy
New supply chain attack, this time for npm axios, the most popular HTTP client library with 300M weekly downloads. Scanning my system I found a usage imported via googleworkspace/cli from a few days ago when I was experimenting with a gmail/gcal CLI. The installed version (luckily) resolved to an unaffected 1.13.5, but the project dependency is not pinned, meaning that if I had done this earlier today the code would have resolved to latest and I'd be pwned.

It's possible to personally defend against these to some extent with local settings, e.g. release-age constraints, containers, etc., but I think ultimately the defaults of package management projects (pip, npm, etc.) have to change so that a single infection (usually, luckily, fairly temporary in nature due to security scanning) does not spread through users at random and at scale via unpinned dependencies.

More comprehensive article: stepsecurity.io/blog/axios-com…
Feross @feross

🚨 CRITICAL: Active supply chain attack on axios -- one of npm's most depended-on packages. The latest axios@1.14.1 now pulls in plain-crypto-js@4.2.1, a package that did not exist before today. This is a live compromise. This is textbook supply chain installer malware.

axios has 100M+ weekly downloads. Every npm install pulling the latest version is potentially compromised right now. Socket AI analysis confirms this is malware. plain-crypto-js is an obfuscated dropper/loader that:
• Deobfuscates embedded payloads and operational strings at runtime
• Dynamically loads fs, os, and execSync to evade static analysis
• Executes decoded shell commands
• Stages and copies payload files into OS temp and Windows ProgramData directories
• Deletes and renames artifacts post-execution to destroy forensic evidence

If you use axios, pin your version immediately and audit your lockfiles. Do not upgrade.
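As a concrete illustration of the "pin your version" advice, a package.json sketch using npm's `overrides` field (available since npm 8.3) can force every copy of axios in the dependency tree, including transitive ones, to a fixed release. 1.13.5 is simply the unaffected version mentioned in the thread, used here as an example pin, not advice on which release is safe:

```json
{
  "dependencies": {
    "axios": "1.13.5"
  },
  "overrides": {
    "axios": "1.13.5"
  }
}
```

After installing, `npm ls axios` shows whether anything in the tree still resolves to a different version.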

Freedom Preetham @freedompreetham
@anshulkundaje 🤣🤣🤣 Of course, we got bored of all the wonderful things the 'virtual cells' do.
Freedom Preetham reposted
Andrej Karpathy @karpathy
Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery - Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've become increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Daniel Hnyk @hnykda

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and self-replicate. Link below.
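The litellm_init.pth detail is worth unpacking: CPython's stdlib `site` module executes any line in a site-packages .pth file that begins with `import` at interpreter startup, which is how a single dropped file can run a payload on every Python launch. A minimal audit sketch (the function name and approach are mine, not from the incident write-up; a hit is not automatically malware, since some legitimate packages ship import-line .pth files, but each one deserves review):

```python
from pathlib import Path

def suspicious_pth_lines(directory):
    """Return (filename, line) pairs for executable lines in .pth files."""
    hits = []
    for pth in sorted(Path(directory).glob("*.pth")):
        for line in pth.read_text(errors="replace").splitlines():
            # site.py treats lines starting with "import " (or "import\t")
            # as code to exec at startup; everything else is a path entry.
            if line.startswith(("import ", "import\t")):
                hits.append((pth.name, line))
    return hits
```

Running it over every entry in `site.getsitepackages()` and `site.getusersitepackages()` covers the directories the interpreter actually scans at startup.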

Freedom Preetham @freedompreetham
THE COMPRESSED CONTINUUM

I was talking to a good friend (h/t Brett Waikart) about the split that is forming around the autonomous agent economy. There are maximalists who are willing to go all in now, restructure early, and prepare for what happens when the market hits the inverted end of the yield curve. Then there are the people who want to ride the continuum and slowly get there.

That second crowd is living inside a false comfort. They think they understand how long the continuum will last. They think they have time to observe, calibrate, and still capture meaningful upside. They keep asking themselves some softened version of the same question. How fast can it really be? Maybe 2x faster than expected. Maybe a bit more intense. Maybe still manageable.

No. That is the wrong mental model. This time it is different. This time, the continuum is compressed on a log scale. What looks gradual from far away becomes brutal when you are inside it. You do not get a neat linear transition where the future politely announces itself and waits for your organization, your product, your hiring plan, and your capital strategy to catch up. You get repeated doubling. Again and again. Every couple of months the ground shifts, and people still thinking in linear time call it noise.

2^6 = 64x faster! That is what a year looks like if the underlying compression keeps doubling every couple of months. By the time the slow movers finally convince themselves the transition is real, the market has already repriced the game. The talent has moved. The primitives have changed. The margins have shifted. User expectations have reset. The window they thought they were timing no longer exists in the form they imagined.

Here is my (biased) advice. STOP whatever you are doing and rethink everything through this lens. Product. Team. Capital. Distribution. Interfaces. Timing. Moat. All of it.
Because from here, you either move with maximalist seriousness, or you stay on the continuum long enough to be removed from the market by someone who already understood it was compressed. It's not the same game. #AI #agents
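The "2^6 = 64x" arithmetic can be made explicit. The 2-month doubling period and 12-month horizon are the post's assumptions, not measured rates:

```python
# Repeated doubling compounds as 2**(horizon / period): a process that
# doubles every `doubling_period_months` grows by that factor over
# `horizon_months`. The 2-month period is the post's assumption.
def compression_factor(horizon_months, doubling_period_months):
    return 2 ** (horizon_months / doubling_period_months)

assert compression_factor(12, 2) == 64.0   # the post's "2^6 = 64x"
assert compression_factor(12, 3) == 16.0   # slower doubling: only 16x/year
```

The second assertion shows how sensitive the claim is to the assumed period: stretching the doubling time from 2 to 3 months cuts the yearly factor from 64x to 16x.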
Freedom Preetham @freedompreetham
OpenClaw is moving at a brutal pace and it is starting to scramble the entire application layer. At this rate, a very large share of software will simply get compressed out of existence (maybe 80% of apps will be gone).

I still have deep reservations about Personal Intelligence though. Personal intelligence sits on a very different axis. It needs a different substrate, a different memory model, and a different relationship to the human. It is orthogonal to general intelligence. The company that gets that right will build the next real market moat post LLMs. #AI
Freedom Preetham reposted
Jeff Dean @JeffDean
Congratulations to Charles Bennett and Gilles Brassard for winning this year's @theofficialacm Turing Award! 🎉 They were recognized for their work on quantum information science & quantum cryptography. Google is proud to support the award to recognize groundbreaking CS work.
Association for Computing Machinery @TheOfficialACM

Congratulations to Charles H. Bennett (@IBMResearch) and Gilles Brassard (@UMontreal) on receiving the 2025 ACM A.M. Turing Award! 🔗: awards.acm.org/turing

Freedom Preetham reposted
Anshul Kundaje @anshulkundaje
Great to see the flurry of single-gene knockdown Perturb-seq-like atlases from cell lines, mouse brain, etc. over the last few days. These are undoubtedly very valuable datasets. I just want to re-iterate a few other very important expt. design considerations 1/
Freedom Preetham @freedompreetham
There are too many companies popping up that market biological root cause inference as though it were already a solved science problem. In reality, any system claiming to infer mechanism across functional genomics and physiology is facing an inverse problem on a massively underdetermined multiscale network, where a thin, noisy clinical snapshot is being used to reconstruct latent regulatory state, pathway activity, cell type composition, compensatory feedback, and causal direction. Even the best research groups across the world working on gene regulatory networks, single cell perturbation biology, and metabolic flux still struggle with identifiability, sparse observability, context dependence, and experimental validation, which is why grand claims built on limited clinical data deserve very hard scrutiny. #genomics #biology #clinical #diagnostics
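The underdetermination point can be made concrete with a toy forward model: when a "clinical snapshot" provides fewer measurements than there are latent regulatory variables, distinct internal states produce identical observations, so no fitting procedure can distinguish them. A minimal sketch with hypothetical numbers (the function and states are illustrative, not any real assay):

```python
# Toy forward model: a single observed readout that sums three latent
# pathway activities. Two different latent states, e.g. pathway A up
# vs. pathway B up with compensation elsewhere, produce the exact same
# measurement, so inverting the readout has no unique solution.
def observe(latent_state):
    """Collapse a 3-dimensional latent state into one scalar readout."""
    return sum(latent_state)

state_a = [2.0, 0.0, 1.0]  # hypothetical: pathway A active
state_b = [0.0, 2.0, 1.0]  # hypothetical: pathway B active, same total
assert observe(state_a) == observe(state_b)  # indistinguishable from data
```

Real functional-genomics inverse problems are this situation scaled up by orders of magnitude, with noise, context dependence, and feedback on top, which is the substance of the identifiability concern above.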
Anshul Kundaje @anshulkundaje
@freedompreetham And peer review should not be restricted to 3 random people that some editor picks. Anyone with relevant expertise is a peer reviewer, and commentary and feedback should be a community-wide and continuous process.
Freedom Preetham reposted
Prof. Anima Anandkumar @AnimaAnandkumar
We're excited to release TorchLean, the first fully verified neural network framework in Lean. The Lean community has largely focused on pure mathematics. TorchLean expands this frontier toward verified neural network software and scientific computing. With the recent release of CSlib, we see this as another step toward a fully verified ML stack.

We support these features:
1. Executable IEEE-754 floating-point semantics (and extensible alternative FP models)
2. Verified tensor abstractions with precise shape/indexing semantics
3. Formally verified autograd system for differentiation of NN programs
4. Proof-checked certification/verification algorithms like CROWN (robustness, bounds, etc.)
5. PyTorch-inspired modeling API with eager-style development + export/lowering to a shared IR for execution and verification

Project page: leandojo.org/torchlean.html
Paper: [2602.22631] TorchLean: Formalizing Neural Networks in Lean

Work done with @Robertljg, Jennifer Cruden, Xiangru Zhong, @huan_zhang12 and @AnimaAnandkumar. #MachineLearning #ScientificComputing #Lean
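To give a flavor of what "verified tensor abstractions with precise shape semantics" means, here is a toy, self-contained Lean 4 sketch. This is not the TorchLean API (all names here are invented for illustration); it only shows the general idea of carrying shape in the type, so that shape agreement is enforced by the type checker and shape preservation is provable:

```lean
-- Toy illustration (not the TorchLean API): a 1-D tensor whose length
-- lives in its type, so shape mismatches are type errors.
structure Tensor1D (n : Nat) where
  data : Fin n → Float

-- Elementwise map returns a tensor of the same shape by construction.
def Tensor1D.map {n : Nat} (f : Float → Float) (t : Tensor1D n) : Tensor1D n :=
  ⟨fun i => f (t.data i)⟩

-- Elementwise addition is only well-typed when the shapes already agree.
def Tensor1D.add {n : Nat} (a b : Tensor1D n) : Tensor1D n :=
  ⟨fun i => a.data i + b.data i⟩

-- Pointwise behavior of `map` holds definitionally, so the proof is `rfl`.
theorem map_apply {n : Nat} (f : Float → Float) (t : Tensor1D n) (i : Fin n) :
    (Tensor1D.map f t).data i = f (t.data i) := rfl
```

In a framework like the one described, the same discipline would extend to multi-dimensional shapes, indexing, and the autograd rules, with floating-point semantics modeled explicitly rather than assumed exact.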
Freedom Preetham @freedompreetham
Oh, my favorite people are the ones who say some abstract sh*t like “we’re building a paradigm that redefines how humans interface with possibility,” lock eyes deeply and nod like a courtroom stenographer, and then ask “you know what I mean?” like the burden of meaning is somehow on me. No, I do not know what you mean. And stop staring at me for so long! 🤦‍♂️
Freedom Preetham @freedompreetham
I faced the 3 hardest questions in Quantum Field Theory today. I think I aced it.

1) Can you give a mathematically rigorous, nonperturbative definition of 4D Yang–Mills theory and prove it has a positive mass gap?
Answer: They are eating dogs! They are eating cats!

2) Can you classify all consistent interacting 4D QFTs, especially CFT fixed points, and compute their operator dimensions and OPE coefficients without relying on perturbation theory?
Answer: The DOW! The DOW is above 50,000!!

3) Can you compute real-time and finite-density dynamics in QCD, including the phase diagram at baryon chemical potential, despite the sign problem and without uncontrolled approximations?
Answer: I have recordings of the UFO and the aliens, which I have attached below.

#StateOfUnionPhysics
Freedom Preetham reposted
Claude @claudeai
Introducing Claude Code Security, now in limited research preview. It scans codebases for vulnerabilities and suggests targeted software patches for human review, allowing teams to find and fix issues that traditional tools often miss. Learn more: anthropic.com/news/claude-co…
Freedom Preetham reposted
Thomas Wolf @Thom_Wolf
Shifting structures in a software world dominated by AI. Some first-order reflections (TL;DR at the end):

Reducing software supply chains, the return of software monoliths – When rewriting code and understanding large foreign codebases becomes cheap, the incentive to rely on deep dependency trees collapses. Writing from scratch ¹ or extracting the relevant parts from another library is far easier when you can simply ask a code agent to handle it, rather than spending countless nights diving into an unfamiliar codebase. The reasons to reduce dependencies are compelling: a smaller attack surface for supply chain threats, smaller packaged software, improved performance, and faster boot times. By leveraging the tireless stamina of LLMs, the dream of coding an entire app from bare-metal considerations all the way up is becoming realistic.

End of the Lindy effect – The Lindy effect holds that things which have been around for a long time are there for good reason and will likely continue to persist. It's related to Chesterton's fence: before removing something, you should first understand why it exists, which means removal always carries a cost. But in a world where software can be developed from first principles and understood by a tireless agent, this logic weakens. Older codebases can be explored at will; long-standing software can be replaced with far less friction. A codebase can be fully rewritten in a new language. ² Legacy software can be carefully studied and updated in situations where humans would have given up long ago. The catch: unknown unknowns remain unknown. The true extent of AI's impact will hinge on whether complete coverage of testing, edge cases, and formal verification is achievable. In an AI-dominated world, formal verification isn't optional: it's essential.

The case for strongly typed languages – Historically, programming language adoption has been driven largely by human psychology and social dynamics. A language's success depended on a mix of factors: individual considerations like being easy to learn and simple to write correctly; community effects like how active and welcoming a community was, which in turn shaped how fast its ecosystem would grow; and fundamental properties like provable correctness, formal verification, and striking the right balance between dynamic and static checks, between the freedom to write anything and the discipline of guarding against edge cases and attacks. As the human factor diminishes, these dynamics will shift. Less dependence on human psychology will favor strongly typed, formally verifiable and/or high performance languages.³ These are often harder for humans to learn, but they're far better suited to LLMs, which thrive on formal verification and reinforcement learning environments. Expect this to reshape which languages dominate.

Economic restructuring of open source – For decades, open-source communities have been built around humans finding connection through writing, learning, and using code together. In a world where most code is written, and perhaps more importantly read, by machines, these incentives will start to break down.⁴ Communities of AIs building libraries and codebases together will likely emerge as a replacement, but such communities will lack the fundamentally human motivations that have driven open source until now. If the future of open-source development becomes largely devoid of humans, alignment of AI models won't just matter: it will be decisive.

The future of new languages – Will AI agents face the same tradeoffs we do when developing or adopting new programming languages? Expressiveness vs. simplicity, safety vs. control, performance vs. abstraction, compile time vs. runtime, explicitness vs. conciseness. It's unclear that they will. In the long term, the reasons to create a new programming language will likely diverge significantly from the human-driven motivations of the past. There may well be an optimal programming language for LLMs, and there's no reason to assume it will resemble the ones humans have converged on.

TL;DR:
- Monoliths return – cheap rewriting kills dependency trees; smaller attack surface, better performance, bare-metal becomes realistic
- Lindy effect weakens – legacy code loses its moat, but unknown unknowns persist; formal verification becomes essential
- Strongly typed languages rise – human psychology mattered for adoption; now formal verification and RL environments favor types over ergonomics
- Open source restructures – human connection drove the community; AI-written/read code breaks those incentives; alignment becomes decisive
- New languages diverge – AI may not share our tradeoffs; optimal LLM programming languages may look nothing like what humans converged on

¹ x.com/mntruell/statu…
² x.com/anthropicai/st…
³ wesmckinney.com/blog/agent-erg…
⁴ github.com/tailwindlabs/t…