Andreas Borg

3.1K posts

@_andreasborg

Adjunct Professor @NYUTandon, Coder, Founder CURE5 #CDKL5 #CRISPR OZU

New York / Lisbon · Joined July 2009
988 Following · 608 Followers
Pinned Tweet
Andreas Borg @_andreasborg
Excited to share that CURE5 is partnering with BrainStorm Therapeutics on an ambitious new initiative: EveryStone. As featured in NVIDIA’s latest blog, EveryStone will conduct the most comprehensive repurposed drug screen to date for CDKL5 Deficiency Disorder. #CDKL5
1 reply · 0 reposts · 1 like · 291 views
Andreas Borg reposted
Boris Cherny @bcherny
@stevekrouse 🫶I came up with the initial list, then a bunch of others contributed words also. This list has gone through many iterations!
122 replies · 43 reposts · 3K likes · 94.3K views
Andreas Borg reposted
rahul @ErRahul337
Spot on. The real fix isn't just better personal hygiene; it's making pinning and lockfile commits the default in npm/pip, not an opt-in best practice. Russian roulette with every npm install (especially when an LLM suggests the command) isn't sustainable.

The AI angle: LLMs are now liberally running npm install on our behalf in agentic workflows, often without pinning or review. This attack was temporary, but the next one might not be caught as fast. Package managers need to treat "latest" as untrusted by default: cooldown periods, release-age constraints, or cryptographic pinning baked in.

Practical defenses today:
- Run npm list axios | grep -E '1.14.1|0.30.4' everywhere
- Pin aggressively ("axios": "1.13.5") or use tools like Socket/StepSecurity
- Push for ecosystem defaults: reproducible builds, no auto-latest.

The maintainer hijack plus the phantom dependency makes this one especially nasty.
0 replies · 1 repost · 2 likes · 609 views
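The pinning advice above can be sketched as a small audit script. This is an illustrative sketch, not an npm feature: the `unpinned_deps` helper and its floating-range heuristic are hypothetical, meant only to show what "flag anything that isn't pinned exactly" looks like in practice.

```python
import json
import re

# Illustrative audit in the spirit of the thread: flag any dependency in a
# package.json that uses a floating range ("^", "~", "*", "latest", ">=")
# instead of an exact, pinned version. The helper name and heuristic are
# hypothetical, not part of npm.
FLOATING = re.compile(r"[\^~*xX]|latest|>=?|<=?")

def unpinned_deps(package_json_text: str) -> dict:
    """Return {name: spec} for every dependency that is not pinned exactly."""
    manifest = json.loads(package_json_text)
    deps = {}
    for section in ("dependencies", "devDependencies"):
        deps.update(manifest.get(section, {}))
    return {name: spec for name, spec in deps.items() if FLOATING.search(spec)}

example = '{"dependencies": {"axios": "^1.14.0", "left-pad": "1.3.0"}}'
print(unpinned_deps(example))  # → {'axios': '^1.14.0'}
```

A lockfile makes the same guarantee transitively, which is why committing it matters even more than pinning direct dependencies.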
Andreas Borg reposted
Nav Toor @heynavtoor
🚨BREAKING: Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now. Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.

Researchers at Stony Brook University and Columbia Law School just proved it. They fine-tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case, the kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks. The models started reciting copyrighted books from memory. Not paraphrasing. Not summarizing. Entire pages reproduced verbatim, with single unbroken spans exceeding 460 words, and up to 85 to 90% of entire copyrighted novels word for word.

Then it got worse. The researchers fine-tuned the models on the works of only one author, Haruki Murakami. Just his novels, nothing else. It unlocked verbatim recall of books from over 30 completely unrelated authors. One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time; the fine-tuning just removed the lock. Your book might be in there right now, and you would never know it unless someone looked.

Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.

Then the researchers compared the three models: GPT-4o, Gemini, DeepSeek. Three different companies, three different countries. They all memorized the same books in the same regions, with a correlation of 0.90 or higher. That suggests they all trained on the same pirated data. The paper names the sources directly: LibGen and Books3, over 190,000 copyrighted books obtained from pirate websites.

Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns, not copies, and that no book is stored inside the weights. This paper says that is a lie. The books are still inside, and researchers just pulled them out.
252 replies · 2.8K reposts · 7.1K likes · 419K views
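The study's headline metric, the length of the longest unbroken verbatim span shared with the source book, can be computed with a standard longest-common-substring dynamic program over word tokens. A minimal sketch (the function name and whitespace tokenization are mine, not the paper's):

```python
def longest_verbatim_span(generated: str, source: str) -> int:
    """Length, in words, of the longest word-for-word run appearing in both
    texts -- a classic longest-common-substring DP over token sequences."""
    a, b = generated.split(), source.split()
    best = 0
    prev = [0] * (len(b) + 1)  # prev[j]: common run ending at a[i-2], b[j-1]
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

print(longest_verbatim_span("the cat sat on the mat today",
                            "yesterday the cat sat on a mat"))  # → 4
```

By this measure, a span of 460+ words is far beyond anything chance overlap or paraphrase could produce.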
Andreas Borg reposted
Andre Watson 🧬 @nanogenomic
What does this mean? When designing a peptide to bind a protein, leading methods either generate sequences in a folding-first paradigm, iteratively refining sequences or structures as trajectories, or derive them structurally from a random sequence space.
1 reply · 2 reposts · 23 likes · 7.4K views
Andreas Borg reposted
Garry Tan @garrytan
Brain-computer interfaces are now giving sight back to the blind. A 2 mm chip restored sight in 81% of blind patients. Published in NEJM; FDA reviewing now. Max Hodak left Neuralink to build this. Here is his story: garryslist.org/posts/blind-pe…
55 replies · 208 reposts · 1.6K likes · 110.1K views
Andreas Borg @_andreasborg
It’s a different era: when a software company isn’t playing nice with the open-source community, someone just ships a whole clone of said software instead.
Danila Poyarkov @dan_note

Figma shipped a silent patch specifically to kill figma-use — my open-source tool that did what they wouldn't: an MCP server that creates and modifies designs, JSX export, design linting. Then they scrambled to catch up with their own MCP server. So I spent the weekend recreating @Figma from scratch. OpenPencil: reads and writes .fig files, AI chat with full design tools, P2P collaboration with zero servers, ~7 MB app. No account, no subscription. Three days, one developer, MIT license. openpencil.dev

0 replies · 0 reposts · 1 like · 109 views
Andreas Borg @_andreasborg
Is markdown becoming the source code of the source code? If you can regenerate the same functionality with a new agentic run, is the code even the most valuable asset?
1 reply · 0 reposts · 1 like · 31 views
Andreas Borg reposted
David R. Liu @davidrliu
Below is the story of the first patient treated with a prime-edited therapeutic, developed by @PrimeMedicine in a trial led by Dr. Élie Haddad and his team at CHU Sainte-Justine. This teenager suffered from chronic granulomatous disease (CGD), an immunodeficiency, and now—10 months after treatment—the patient is healthy, stable, and living with a functioning immune system. Tracy Attebury, whose story was previously told by @ginakolata @nytimes, was the second patient treated with a prime-edited therapeutic. cihr-irsc.gc.ca/e/54638.html
29 replies · 167 reposts · 885 likes · 172.4K views
Andreas Borg reposted
Ravi Sharma @ravishar313
I tested models from @AnthropicAI @OpenAI @Google @Zai_org @MiniMax_AI and @Kimi_Moonshot on whether they can create a publication-level view of a protein-ligand binding site, and the results were surprising. TL;DR: Anthropic models did the best and Gemini models did the worst.

Task was simple: "Load 5DEL and create a publication ready view of the ORO and FMN binding site"

Here are the results:
1. Sonnet 4.6 from @AnthropicAI:
- Fast
- Used the tools pretty well
- Created the best view with clear labelling and color choices
- Always felt in control of what it was going for
- Very impressed
Ravi Sharma @ravishar313

I am Open-Sourcing PyMolAI! Meet PyMolAI, an AI agent that can talk to your protein structures. Built on top of PyMOL, PyMolAI lets you interact with your structures in plain language. Whether you're: - Analyzing protein structures - Aligning complexes - Creating publication-ready figures - Or running design workflows PyMolAI interprets your request, executes the necessary PyMOL commands, and manages the workflow for you. It integrates with @OpenBioAI APIs, giving you access to tools like Boltz, ProteinMPNN, and BoltzGen — directly from your PyMOL session. It has local chat history with session syncing, so you can pick up exactly where you left off.

14 replies · 37 reposts · 288 likes · 33.7K views
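The quoted thread describes the agent loop only at a high level: interpret a plain-language request, then execute the corresponding PyMOL commands. PyMolAI's actual implementation isn't shown, so the sketch below is hypothetical; a toy rule table stands in for the LLM step, though `fetch`, `show`, and `color` are genuine PyMOL commands.

```python
import re

# Hypothetical sketch of the pattern PyMolAI describes: translate a
# plain-language request into PyMOL command strings. In the real tool an
# LLM does this mapping; here a toy rule table stands in for it.
RULES = [
    (re.compile(r"load (\w{4})", re.I),          lambda m: f"fetch {m.group(1).upper()}"),
    (re.compile(r"show (sticks|cartoon)", re.I), lambda m: f"show {m.group(1).lower()}"),
    (re.compile(r"color (\w+)", re.I),           lambda m: f"color {m.group(1).lower()}"),
]

def plan(request: str) -> list[str]:
    """Return the PyMOL commands implied by a plain-language request."""
    return [build(m) for pattern, build in RULES
            for m in [pattern.search(request)] if m]

print(plan("Load 5DEL and show sticks for the binding site"))
# → ['fetch 5DEL', 'show sticks']
```

In an actual session these strings would be handed to `pymol.cmd.do(...)`; keeping the planning step separate from execution is also what makes the agent's actions reviewable before they run.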
Andreas Borg reposted
Paul Kohlhaas bio/acc @paulkhls
4/ Traditional pharma would spend $50M and 3 years to reach this point. We did it in a day for the cost of API inference.
3 replies · 1 repost · 33 likes · 2.4K views
Andreas Borg @_andreasborg
Is it true that OpenAI, Google, and xAI have given models without guardrails to a radical extremist organization, namely the Pentagon? You can’t both lecture about x-risk and hand them a loaded gun.
Ricardo @Ric_RTP

The Pentagon just threatened to BLACKLIST one of America's most valuable AI companies. Not Huawei or some Chinese chip maker... It's ANTHROPIC. The company behind Claude. $380 billion valuation.

And the reason is genuinely insane: for months, the Pentagon has been pushing every major AI lab to remove their safety restrictions for military use. The ask is simple: let us use your models for anything that's technically legal. Weapons development, intelligence collection, battlefield operations, mass surveillance of American citizens.

OpenAI said yes. Google said yes. xAI said yes. Anthropic said no. Not to everything, though. They were willing to negotiate. But they held firm on two things: they don't want Claude used to build fully autonomous weapons that fire without a human in the loop, and they don't want it used to mass-surveil American citizens. That's it. That's the line they drew.

But Pete Hegseth's response was to threaten to designate Anthropic a "supply chain risk." Here's why that matters: that label isn't a contract cancellation, a fine, or a strongly worded letter. It means every single company that wants to do business with the US military has to certify it doesn't use Claude anywhere in its operations. Eight of the ten largest companies in America use Claude. Defense contractors, government suppliers, enterprise companies with any federal exposure: all of them would have to cut ties with Anthropic overnight or lose their government contracts.

A senior Pentagon official told Axios: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this." That's a US government official threatening to financially destroy an American company because it doesn't want its AI used to spy on American people.

And it gets WORSE. Last week, Anthropic's head of safeguards research resigned. His parting message: "the world is in peril." Elon Musk, whose xAI already handed the Pentagon a blank check, is now publicly attacking Anthropic, calling Claude anti-human. And the Pentagon official told Axios they're "confident" OpenAI, Google, and xAI will all agree to the "all lawful purposes" standard.

So what you're actually watching right now is every major AI company in America quietly handing the government unlimited access to the most powerful technology ever built. With no guardrails, no limits, no company-imposed restrictions on what it can be used for. One company tried to hold a line, and the government is about to make an example out of it.

If Anthropic folds, it's over. Every lab just learned what happens when you push back, and every restriction, safety policy, and ethical guardrail these companies spent years building gets negotiated away behind closed doors the second the government asks. If they don't fold, a $380 billion company gets made radioactive in its OWN country.

Watch what happens next. Whatever Anthropic decides in the next few weeks sets the precedent for how much control AI companies actually have over their own technology. Turns out the answer might be: none.

1 reply · 0 reposts · 0 likes · 75 views
Andreas Borg reposted
Aakash Gupta @aakashgupta
The AI drug discovery industry just ran a $15 billion experiment proving a 2011 Turing Award winner right.

Judea Pearl has argued for decades that statistical models trained on text learn how we describe the world, not how the world actually works. That distinction sounds philosophical until you watch it destroy capital at scale.

2025 was supposed to be the validation year. AI-designed drugs entered clinical trials backed by billions. The result? Zero FDA approvals. Multiple candidates shelved after Phase II. Several well-funded AI drug companies shut down entirely. One CEO said publicly that AI has delivered “failure after failure” over the last decade.

The failure pattern is exactly what Pearl predicted. These companies trained models on published papers and genomic databases. The models found correlations. The correlations didn’t survive contact with human biology.

Here’s the number that should terrify the industry: $15 billion in announced AI drug discovery partnerships in 2025, with actual upfront payments of about 2% of headline value. That 50:1 ratio between announced deals and real money tells you pharma knows the correlation-mining approach hasn’t cracked clinical success rates beyond the historical 90% failure baseline.

Meanwhile, the companies integrating Pearl’s causal inference into their pipelines are telling a different story. BPGbio ran a Phase Ib oncology trial with 104 patients using Bayesian causal AI models trained on biospecimen data. They identified a metabolic subgroup that responded significantly better. That’s the difference between “this gene correlates with cancer” and “this metabolic pathway causes treatment response in these specific patients.”

The FDA noticed. In January 2025, they announced plans to issue formal guidance on Bayesian methods for clinical trial design. Regulators are moving toward causal frameworks before most AI companies have.

Pearl’s “ladder of causation” maps three levels: association (what correlates), intervention (what happens if we act), and counterfactuals (what would have happened differently). Most AI drug discovery is stuck on rung one. The companies that climb to rung three will compress drug timelines from 10 years to 3. Everyone else will keep generating impressive correlations that collapse in Phase II.

The gap between “learning how we describe biology” and “learning how biology works” costs $2 billion per failed drug. Pearl quantified the problem decades ago. The bill is coming due now.
Bo Wang @BoWang87

Professor Judea Pearl — the pioneer who invented causal reasoning in AI — says scaling won't save us. "Mathematical limitations that are not crossable by scaling up." The brutal truth: LLMs aren’t learning how the world works. They are learning how we describe the world. This resonates with most biologists: Drug discovery is hitting the same wall. We have mountains of genomic data, but most AI models just find patterns in published papers — not in the raw biology itself. They're learning what scientists think causes disease, not what actually does. Pearl's causal revolution? That's how we move from "this gene correlates with cancer" to "this gene causes cancer" — and finally design drugs that work. Until then, we're building very expensive parrots.

74 replies · 394 reposts · 1.9K likes · 238.3K views
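The rung-one vs rung-two distinction can be demonstrated in a few lines of simulation. This toy model is entirely illustrative (not from the thread): a hidden factor Z drives both a marker X and an outcome Y while X itself has no effect, so observation shows a strong X-Y correlation, and intervening on X (the do-operator of rung two) makes it vanish.

```python
import random

random.seed(0)

# Toy confounded system: hidden Z drives both the marker X and the
# outcome Y; X has no causal effect on Y.
def observe(n=10000):
    data = []
    for _ in range(n):
        z = random.random()
        x = z + 0.1 * random.random()   # marker tracks Z
        y = z + 0.1 * random.random()   # outcome also tracks Z
        data.append((x, y))
    return data

def intervene(n=10000):
    data = []
    for _ in range(n):
        z = random.random()
        x = random.random()              # do(X): set X independently of Z
        y = z + 0.1 * random.random()    # Y unchanged -- X does nothing
        data.append((x, y))
    return data

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

print(round(corr(observe()), 2))    # strong association (rung one)
print(round(corr(intervene()), 2))  # near zero under do(X) (rung two)
```

The observational correlation is real and reproducible, yet a drug targeting X would do nothing, which is the failure mode the thread attributes to correlation-mining pipelines.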
Andreas Borg reposted
Bo Wang @BoWang87
Yann LeCun just said something that every AI-in-healthcare researcher should sit with. He basically said: If language were enough to understand the world, you could learn medicine by reading books. But you can’t. You need residency. You need to see thousands of normal cases before you recognize the abnormal one. He also points out something wild — all the public text on the internet is on the order of 10¹⁴ bytes. A 4-year-old processes about that much through vision alone. The world is just… higher bandwidth than text. I think this shift — from language models to world models — is going to matter a lot in healthcare. 🫀
212 replies · 557 reposts · 3.9K likes · 417.4K views
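The 10¹⁴-byte comparison is easy to sanity-check with back-of-envelope arithmetic. The constants below are rough assumptions of mine (waking hours and optic-nerve throughput estimates vary widely in the literature), not figures from LeCun's talk:

```python
# Back-of-envelope check: visual input over 4 years vs. all public text.
SECONDS_AWAKE = 4 * 365 * 12 * 3600   # 4 years at ~12 waking hours/day (assumed)
BYTES_PER_SECOND = 2_000_000          # ~1 MB/s per optic nerve, two eyes (assumed)

visual_bytes = SECONDS_AWAKE * BYTES_PER_SECOND
text_bytes = 10 ** 14                 # public internet text, per the post

print(f"visual: {visual_bytes:.1e} bytes")   # lands around 1e14
print(f"ratio to text corpus: {visual_bytes / text_bytes:.1f}")
```

Under these assumptions the two quantities land within the same order of magnitude, which is the point of the comparison: a toddler's visual stream alone rivals the entire training corpus of a text-only model.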
Andreas Borg reposted
Max Jaderberg @maxjaderberg
The Iso team has cooked something incredible: our new technical report unveils the latest results from our drug design engine, the IsoDDE, progressing far beyond AlphaFold 3. It breaks significant new ground compared to AF3 and similar methods across all key benchmarks. 1/7
34 replies · 118 reposts · 691 likes · 164.7K views