Tim Lantin

810 posts

@timlantin

the constitution of man rewrites the constitutions of states, and the constitution of man is subject to change. bme phd candidate @columbia 🫡🟥

New York, NY · Joined June 2022
442 Following · 893 Followers
Tim Lantin retweeted
roon @tszzl
are you prepared for the violence of everything on earth becoming legible
234 replies · 131 reposts · 3.7K likes · 481.4K views
Tim Lantin @timlantin
this and the monster under the jpm conference venue piece…owl is really filling the vacuum in biotech for roon-style esoterics, on top of actually well-researched long form. doing the most to imbue this industry with a much-needed je ne sais quoi that attracts people and $
owl@owl_posting

a lot of excellent ai-bio startups are based on sufficiently complicated theories of change that slide decks are no longer sufficient to grasp them in their entirety. these days, you must cross your eyes, think really, really hard, and trust the fuzzy bubble that emerges

1 reply · 0 reposts · 6 likes · 907 views
Tim Lantin retweeted
Michelle Lee @michellearning
Welcome to the scientific revolution. 100s of robots. Zero coffee breaks. America’s largest autonomous lab, open today.
163 replies · 396 reposts · 3K likes · 670.8K views
Tim Lantin retweeted
Josie Zayner @josiezayner
Every AI now supports advanced bio reasoning until you try and upload your single 25gb fastq file.
8 replies · 4 reposts · 107 likes · 6.4K views
Tim Lantin retweeted
Avi Roy @agingroy
380 biology AI models exist today. In 2015, fewer than 10. 63% of them train on the same two databases. @BessemerVP just mapped where $18B in AI drug discovery funding actually went.

Three layers are emerging: proprietary biological data, autonomous AI research agents, and closed-loop robotic labs.

Pharma is writing real checks:
- @GSK paid $50M upfront to NOETIK for oncology AI
- @EliLillyandCo pays mid-8 figures annually to @chaidiscovery
- @IsomorphicLabs has $3B+ in deals with Lilly, @Novartis, @JNJInnovation
- @AnthropicAI acquired @CoefficientBio for $400M. The startup was 8 months old.

Today, @novonordisk partnered with @OpenAI to deploy AI across its entire drug pipeline and manufacturing.

90% of drugs still fail in clinical trials. R&D costs double every 9 years. These companies are betting that owning proprietary biology data, not just running better algorithms on public data, is what breaks that cycle.

The question worth asking: if most of these models share the same training foundation, are 380 models actually 380 different ways to find your next drug?
Avi Roy tweet media
15 replies · 36 reposts · 285 likes · 25.1K views
Tim Lantin retweeted
Martin Shkreli @MartinShkreli
my response to an unhinged take:

first, median overall survival doesn't count fat tails. SOME patients will win the lottery and get 3 or 5 more years of life. wish i still had my dad who died of cardiovascular disease!

next, this is how progress happens. i made a post a few weeks ago about how mOS for multiple myeloma has gone from something like 6 months to 5 years over the last 20 years. that's amazing progress. the progress compounds. cheer on progress--it's not hard. our children and their children's children will still get cancer; we owe it to them to give them the best outcomes.

finally, no one is trying to make half a medicine. when you make a molecule you're giving it your all. you don't dial it down and say 'well, 6 months is enough' as if anyone sets this kind of goal. (it's not even scientifically possible.) you make it and see what you get. somewhat invariably, there is no 'cure' lurking that you missed. you hit one pathway, then another, and then another, and then you get a real long-term outcome shift. it's the same story in every tumor type (excluding a few like CML where pharma hit it out of the park). if a 'cure' is so easy, start a drug company and do it yourself!
Jason Locasale@LocasaleLab

Over 50,000 people in the U.S. die from pancreatic cancer every year. After this drug is approved and widely used, that number will remain essentially the same. In absolute terms, they are reporting a median survival shift of around six months. Yet we know resistance inevitably develops, as it does in all cancers subjected to drugs targeting mutations in the RAS/MAPK/PI3K pathway. If the goal is to meaningfully reduce cancer mortality, this does not move the needle.

This is where decades of focus and billions in NIH/NCI funding have concentrated. The National Cancer Institute is funded at roughly $9 billion per year, and a substantial portion of that budget is devoted to oncogenes and what is marketed as targeted therapies. This is then layered on top of a drug development and healthcare model where drugs like this can cost over $100,000 per patient. These are incremental gains at the late metastatic stage, where the biology is already stacked against you. Meanwhile, the two areas that actually determine population-level outcomes—early detection and prevention—remain neglected.

If we are serious about reducing the number of people who die from pancreatic cancer, the priority cannot be continuing to optimize late-stage interventions that predictably yield temporary gains. The goal should be zero deaths. Right now, we are not on a path that gets us there.

It is not surprising that this view is being met with backlash. Much of the criticism is coming from people whose incentives—academic, financial, or institutional—are tied to maintaining the current system in biomedical research and the biotech and pharma sectors that profit from it.

49 replies · 59 reposts · 1.2K likes · 266.3K views
Tim Lantin retweeted
Adam Green @adamlewisgreen
Everyone in bio is trying to build a 'virtual cell'—an AI model that understands cell biology well enough to predict what happens when you intervene. The field thinks you need massive perturbation datasets to train one. Billions of dollars are flowing into Big Perturbation to generate this data. NLP thought the same thing. Big Parallel Corpora and Big Treebank held back the field for years, until GPT-1 figured out unsupervised training on raw text and all that became a footnote. Big Perturbation doesn't want you to know this one simple trick for unsupervised pretraining on observational RNA data...
Adam Green tweet media
1 reply · 4 reposts · 51 likes · 5K views
Tim Lantin retweeted
Hans Mahncke @HansMahncke
The story behind the New York Times’ 1903 claim that human flight was between one and ten million years away is even worse than it looks. Once you understand the backstory, you realize that the New York Times story is not really about flight at all but about how elites and credentialed “experts” mistake their own failures for the boundaries of possibility.

The New York Times did not dismiss the possibility of powered flight at random. There was a very specific reason behind it. At the time, America’s most prominent scientific authority, Smithsonian Secretary Samuel Langley, had been showered with large amounts of taxpayer funding to build an aircraft, the Langley Aerodrome. Despite all the money, institutional backing, and elite prestige, Langley and his team could not get it to fly, culminating in a series of very public failures, the last on December 8, 1903.

So when the New York Times declared that flight was millions of years away, what it was really saying was that if the most credentialed and well-funded “experts” cannot do it, then it cannot be done. A mere nine days later, the elites’ proclamation of impossibility lay in ruins. Two totally unknown bicycle mechanics from Ohio achieved the first powered flight using improvised parts, a few hundred dollars of their own money, and sheer persistence.

The story of flight is, at its core, a story of the triumph of American individualism over elite credentialism. The fact that it was the New York Times that inadvertently delivered the proof is the most fitting conclusion imaginable.
Aaron Ng@localghost

"Man won't fly for a million years" – NYT 1903

446 replies · 4.6K reposts · 20.4K likes · 2.1M views
Tim Lantin @timlantin
old neural wetware evolved for ancient times hasn’t had time to catch up to modern environment and stressors. some neural engineering is in order
Saganism@Saganismm

Look closely. Between these two moments, our species has performed miracles. We have mapped the blueprint of life within our own DNA. We have built “brains” of silicon that can outthink their creators. We have pushed back the darkness of disease. Infant mortality has plummeted, and millions of children who would have been lost to the earth in 1972 are today alive, dreaming, and contributing to the global chorus. We have sent robotic emissaries to the edge of the interstellar dark and peered back at the beginning of time itself through mirrors of gold. Technologically, we are a different species. We are more connected, more informed, and more capable than any ancestor could have imagined in their wildest fever dreams.

And yet, look again. From this distance, the borders remain invisible. You cannot see the “holy” ground over which we spill the blood of our children. You cannot see the walls we build to keep our neighbors out or the ideological trenches we dig to bury our common humanity. Despite our leap from vacuum tubes to artificial intelligence, we remain haunted by the same ancient tribalisms. We use 21st century technology to prosecute Bronze Age grudges. We have changed the climate of our world, but we have yet to change the climate of our hearts. We are still a toddler civilization, playing with matches in a library of irreplaceable wonders.

The contrast is our great paradox. We have the power of gods, but we still possess the temperaments of the territorial primates from which we rose. We have learned to fly between worlds, but we are still struggling to learn how to walk together on this one.

0 replies · 0 reposts · 1 like · 63 views
Tim Lantin retweeted
@goth @goth600
Mutually Assured Cognitive Vigilance
@goth tweet media
27 replies · 22 reposts · 421 likes · 18.7K views
Tim Lantin retweeted
NASA @NASA
Liftoff. The Artemis II mission launched from @NASAKennedy at 6:35pm ET (2235 UTC), propelling four astronauts on a journey around the Moon. Artemis II will pave the way for future Moon landings, as well as the next giant leap — astronauts on Mars.
3.8K replies · 55.5K reposts · 178.6K likes · 14.2M views
Tim Lantin retweeted
Rohil Badkundri @rohilbadkundri
We used AI to predict the failure of a Phase 3 trial before the results were announced. Today, we're publishing 10 more predictions for the future. Thread 🧵
GIF
53 replies · 102 reposts · 742 likes · 246.1K views
Tim Lantin retweeted
Jake Wintermute 🧬/acc @SynBio1
Today we’re launching American Wetware, a design studio for building with biology 🇺🇸💧 I’m doing this together with @thisischristina and @p_maverick_b Our mission is to learn the design language of biology
Jake Wintermute 🧬/acc tweet media
51 replies · 68 reposts · 599 likes · 74.7K views
Tim Lantin retweeted
roon @tszzl
the private sector has been remaking its own versions of NIH, ARPA etc as these public science institutions have seen structural decline and defunding and it will be supercharged by the funding NPV of machine intelligence and its firepower at allocation decisions
Jacob Trefethen@JacobTref

I'm joining the OpenAI Foundation to lead the Life Sciences & Curing Diseases program. We're starting with three areas of grantmaking: * AI for Alzheimer's * Public Data for Health * Accelerating Progress on High-Mortality and High-Burden Diseases Time to get to work!

25 replies · 19 reposts · 323 likes · 114.4K views
Tim Lantin retweeted
0xSero @0xSero
In 72 hours I got over 100k of value:
1. Lambda gave me 5000$ credits in compute
2. Nvidia offered me 8x H100s on the cloud (20$/h) idk for how long but assuming 2 weeks that'd be 5000$~
3. TNG Technology offered me 2 weeks of B200s which is something like 12000$ in compute
4. A kind person offered me 100k in GCP credits (enough to train a 27B if you do it right)
5. Framework offered to mail me a desktop computer
6. We got 14,000$ in donations which will go to buying 2x RTX Pro 6000s (bringing me up to 384GB VRAM)
7. I got over 6M impressions which based on my RPM would be 1500$ over my 500$~ usual per pay period
8. I have gained 17,000~ followers, more than doubling my follower count
9. 17 subscribers on X + 700 on YouTube

The total value of all this approaches at minimum 50,000$~ and closer to 150,000$ if I leverage it all.

What I'll be doing with all this: Eric is an incredibly driven researcher I have been bouncing ideas off of over the last month. He and I have been tackling the idea of getting massive models to fit on relatively cheap memory. The idea is taking advantage of different forms of memory, in combination with expert saliency scoring, to offload specific expert groupings to different memory tiers. For the MoEs I've tested, over my entire AI session history about 37.5% of the model is responsible for 95% of token routing. So we can offload 62.5% of an LLM onto SSD/NVMe/CPU/cheap VRAM; this should theoretically add minimal latency if we can select the right experts. We can combine this with paged swapping to further accelerate prompt processing. If done right, we are looking at very decent performance for massive unquantised & unpruned LLMs. You can get DeepSeek-v3.2-speciale at full intelligence with decent tokens/s as long as you have enough VRAM to host the core 20-40% of the model and enough RAM or SSD to host the rest. Add quantisation to the mix and you can basically have decent speeds and intelligence with just 5-10% of the model's size in VRAM (+ you need some for context). The funds will be used to push this to its limits.

There's also tons of research showing that you can quantise a model drastically, then distill from the original BF16 or make a LoRA to mostly align it back to the original. This will be added to the pipeline too.

All this will be built out here: github.com/0xSero/moe-com… You will be able to take any MoE and shove it in here, and compress it down with only 24GB and enough RAM/NVMe. It'll be slow as hell, but it will work with little tinkering.

Lastly, I will be looking into either a full training run from scratch, or just post-training on an open AMERICAN base model (a research model, an openclaw/nanoclaw/hermes model, or a browser-use model) to prove that this can be done. I will be bad at all of it, and doubt I will get beyond the best small models from 6 months ago, but I want to prove it's no boogeyman impossible task to everyone who says otherwise.

By the end of the year:
1. I will have 1 model I trained in some capacity be in the top 5 at either pinchbench, browseruse, or research.
2. My github will have a master repo which combines all my work into reusable generalised scripts to help you do the same.
3. The largest public comparative dataset for all MoE quantisations, prunes, benchmarks, costs, hardware requirements.

A lot of this will be led by Eric, who I will tag in the next post. I want to say thank you to everyone who has supported me. I have gotten a lot of comments stating:
1. I'm crazy, stupid, or both
2. I'm wasting my time, no one cares about this
3. This is not a real issue

I believe the amount of interest and support I've received says it all.

donate.sybilsolutions.ai
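The tiered-offload idea described in the tweet can be sketched in a few lines. This is a hypothetical illustration, not 0xSero's actual code: the `assign_tiers` helper, the 95% coverage cutoff, and the even RAM/NVMe split of the cold tail are all assumptions made for the sketch. It ranks experts by how often the router selected them and keeps the "hot" set covering most routing events in VRAM.

```python
# Hypothetical sketch of expert-saliency tiering for an MoE model:
# rank experts by observed routing frequency, keep the hot set that
# covers ~95% of routing events in VRAM, and push the cold tail to
# cheaper memory tiers (RAM, then NVMe).
from collections import Counter

def assign_tiers(routing_log, coverage=0.95):
    """routing_log: list of expert ids the router chose per token.
    Returns a dict {expert_id: 'vram' | 'ram' | 'nvme'}."""
    counts = Counter(routing_log)
    total = sum(counts.values())
    ranked = [e for e, _ in counts.most_common()]  # hottest first
    hot, covered = [], 0
    for e in ranked:
        if covered / total >= coverage:
            break
        hot.append(e)
        covered += counts[e]
    cold = ranked[len(hot):]
    tiers = {e: "vram" for e in hot}
    # split the cold tail evenly between RAM and NVMe (arbitrary choice)
    half = len(cold) // 2
    tiers.update({e: "ram" for e in cold[:half]})
    tiers.update({e: "nvme" for e in cold[half:]})
    return tiers
```

With a routing log skewed like the one described (a minority of experts handling ~95% of tokens), only that minority lands in VRAM and everything else is demoted, which is the claimed source of the memory savings.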
0xSero tweet media
219 replies · 271 reposts · 4.1K likes · 168.2K views
Tim Lantin @timlantin
@maxxyung11 i think you’re right. OAI gets automated lab infra + know-how for pennies on the dollar and can capitalize on the imminent biology revolution with a team they’ve already produced something with
0 replies · 0 reposts · 2 likes · 942 views
Maxx Yung @maxxyung11
just want to put it here right now that i think openai will acquire ginkgo within 8 months
16 replies · 0 reposts · 140 likes · 21.1K views
Tim Lantin @timlantin
“Our alpha doesn’t extend to eschatology.” - GSV Sleeper Service, my claw
Tim Lantin tweet media
0 replies · 0 reposts · 1 like · 56 views
Tim Lantin retweeted
stefan @wasserstein_rao
Using claude code to directly control a liquid handling robot is such a crazy experience
15 replies · 24 reposts · 286 likes · 74.4K views