ravi sheth

669 posts

@raviusheth

cofounder kingdom (YC S20); phd columbia. microbiome, pets, genomics, weird animals

New York, NY · Joined May 2010
431 Following · 723 Followers
Jeff Tang@jefftangx·
How much did Meta pay for Manus? $1.55B, according to Manus. Link below.
Alexandr Wang@alexandr_wang

Excited to announce that @ManusAI has joined Meta to help us build amazing AI products! The Manus team in Singapore are world class at exploring the capability overhang of today’s models to scaffold powerful agents. Looking forward to working with you, @Red_Xiao_!

ravi sheth retweeted
François Chollet@fchollet·
When it comes to scientific discovery, one thing LLMs are really good at is getting hobbyists to delude themselves into believing they've made a huge breakthrough on some longstanding problem or a theory of everything
ravi sheth@raviusheth·
@NikoMcCarty @OpenAI Choosing the right problem to solve / metric to optimize for is super important! Cool case study though
Niko McCarty.@NikoMcCarty·
There's a recent blog from @OpenAI where they used GPT-5 to optimize a common biology experiment called Gibson Assembly. I've seen criticisms online from people who say things like, "Who cares? A human totally could have done that," or whatever. And that's true. But I still think this blog is nice for a couple of reasons.

First, faster iterations and more reliable experiments are one of the best ways to accelerate biotechnology progress more broadly! Experiments take much too long, and are often much too unreliable, for scientists to move quickly. Therefore, we should invest more resources toward optimizing and improving common methods that seem "mundane."

Second, this is a simple experimental system in which to test AI; indeed, that's the whole point! Gibson Assembly has been around for nearly two decades, is widely used, and requires only three enzymes. It is therefore a natural fit for AI companies to benchmark their models on biological questions. (The parameter space is not too large!)

To understand what OpenAI actually did, I first need to tell you about Gibson Assembly, a common method biologists use to stitch DNA molecules together. Originally developed in 2009, Gibson is popular because it's dead simple: everything works at one temperature (50°C) and it requires only three enzymes. The DNA molecules to be joined are designed so that each has 15-40 nucleotides at either end that overlap with the other DNA molecule. All the DNA is then added to a tube, and an enzyme, exonuclease, "chews back" several dozen nucleotides from the 5' ends of each molecule, leaving behind long single-stranded "arms." These arms float around in the liquid, collide with a matching arm in another DNA sequence, and hug each other tightly. A second enzyme, DNA polymerase, runs along these touching DNA strands and fills in parts of the arms that don't overlap or are still single-stranded. Finally, DNA ligase seals the "nick" and heals the strands, forming a newly assembled, double-stranded piece of DNA.

OpenAI collaborated with a new biosecurity startup, Red Queen Bio (co-founded by Hannu Rajaniemi, an excellent science fiction writer), to build the evaluation framework. The metric they settled on is called cloning efficiency, which just means this: for a fixed amount of input DNA (like one picogram) transformed into cells, how many colonies successfully grow and contain the correctly assembled DNA molecule? By the end of their blog post, the OpenAI team claims they were able to boost this number 79x relative to a "baseline protocol" from New England Biolabs, or NEB, a common purveyor of the Gibson enzymes.

An important note is that OpenAI says no humans were involved in optimizing the reaction; all the humans did was carry out protocols generated by GPT-5 and upload experimental results back into the model. They repeated this several times, coaxing the model to iterate each time. Their Gibson Assembly was remarkably simple, involving just two DNA molecules: a gene encoding a fluorescent protein and a plasmid to hold the gene.

(The OpenAI team, intriguingly, also set up a robot to automate the Gibson Assembly and transformation, but couldn't get it to work as well as a human. "We compared the robot's work to human-performed experiments at each step. The robot successfully handled the transformation process…When compared directly with human-performed transformations, the robot generated similar quality data with equivalent improvements over baseline, showing early potential for automating and accelerating biological experiment optimization." However, "while the fold-changes between the robot and human experiments were similar, absolute colony counts from the robot were approximately ten-fold lower than manual execution.")

After several rounds of iteration, the model made two notable proposals.

First, it added two additional enzymes to the normal Gibson Assembly reaction: "the recombinase RecA from E. coli, and phage T4 gene 32 single-stranded DNA–binding protein (gp32)." The blog continues: "Working in tandem, gp32 smooths and detangles the loose DNA ends, and RecA then guides each strand to its correct match." This tweak improved the cloning efficiency metric by 14x over the standard NEB protocol.

Second, it made a subtle change to how the assembled DNA molecules were inserted into living cells. Specifically, GPT-5 told the humans to spin down cells in a centrifuge, forming a pellet, prior to transforming them. This is typically not recommended because competent cells are "fragile," but the OpenAI team writes that "the cells tolerated concentration well and the increased molecular collisions boosted transformation efficiency substantially (>30-fold on final validation)."

Now, recall that at the start of this little blog I said I really liked this experiment! (Do not crucify me, ye AI optimists.) But no internet commentary is truly complete without some nitpicking, so here goes.

One criticism is that the largest improvement made by the model was not related to Gibson Assembly at all! It was related to how the DNA gets delivered into cells. And, indeed, prior studies have shown something similar. (This research paper, for example, says that one of the best ways to improve transformation is to concentrate cells beforehand. Fair play to the OpenAI team for linking to this in their blog post.) And if you are a human reading this blog and you are planning to spin down your competent cells before transformation, just be sure to aliquot everything into small tubes first; repeated spins will, over time, kill everything.

Another issue is that adding RecA and gp32 to a Gibson Assembly reaction complicates things quite a bit. For a normal Gibson reaction, everything comes in a single kit from NEB with the enzymes, and the whole experiment is done at one temperature: 50°C! Doing a Gibson Assembly this way would require one to buy purified RecA and gp32, and also to change incubation temperatures to get everything working (RecA and gp32 work best at 37°C). This is more expensive and more complicated, but maybe worthwhile in some cases.

And lastly, the selected metric (how many colonies one gets from a given amount of DNA) doesn't actually seem all that useful in most scenarios. A scientist stitching together two strands of DNA doesn't care if they only get five colonies because, often, they only need ONE colony that works; they can then grow those cells up in large beakers and extract a huge amount of the plasmid. A more useful metric might instead be to increase the total number of unique DNA strands that can be joined together in a single Gibson Assembly reaction, without reducing overall quality.

Still, I liked this blog post as a whole. I'm glad people are optimizing the "small" things, and I don't blame OpenAI for not trying to solve cancer, in its overwhelming magnitude of manifestations, on their first attempt! Gibson Assembly is a much better starting point.
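The cloning-efficiency arithmetic in the thread is simple enough to sketch in a few lines of Python. The function names and the baseline/optimized colony counts below are invented for illustration; only the idea of the metric (correct colonies per fixed amount of input DNA) and the 79x figure come from the post.

```python
def cloning_efficiency(correct_colonies: float, input_dna_pg: float) -> float:
    """Colonies containing the correct assembly, per picogram of input DNA."""
    return correct_colonies / input_dna_pg

def fold_change(optimized: float, baseline: float) -> float:
    """Improvement of an optimized protocol relative to the baseline."""
    return optimized / baseline

# Illustrative numbers only (not from the OpenAI post): a baseline run
# yielding 20 correct colonies per pg vs. an optimized run yielding 1580.
baseline = cloning_efficiency(20, 1.0)
optimized = cloning_efficiency(1580, 1.0)
print(fold_change(optimized, baseline))  # → 79.0
```

Note that, as the thread's last criticism points out, this metric rewards raw colony count for a fixed DNA input, not assembly complexity.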
Sam Rodriques@SGRodriques·
Science is too slow. At Edison, we are integrating AI Scientists into the full stack of research, from basic discovery to clinical trials. We want cures for all diseases by mid-century. We have raised a $70M seed to get started. Join us.

We need cracked software engineers who want to work on finding cures rather than selling ads and generating slop. If you're reading this, you're probably a candidate. We need brilliant AI researchers who want to figure out how AI will accelerate real-world science. We need scientists and researchers with deep expertise in biology, biotech, and pharma who want to figure out how to integrate AI deeply into scientific workflows, from ideation to experimentation, and how to measure success or failure. We need extraordinarily talented generalist operators across BD, sales, product management, and partnerships who can focus on getting our tools into the hands of pharmaceutical companies. If any of these roles sound like you, get in touch.

We are also expanding access to our platform. Our goal is to accelerate science writ large. To that end, we will continue to give academics and students 650 credits/mo indefinitely. I can't promise we'll keep this up forever, but we will try. Kosmos will still cost 200 credits, and the other agents (Analysis, Literature, etc.) will cost 1 or 2 credits. All paid users will have access to our regular agents, like our Analysis agent, Literature agent, and so on, for free via the UI. API access will still be paid, and users without a paid subscription will continue to get 10 credits per month for those agents. Our $200/mo subscription for 650 credits/mo is staying in place for now, but might be phased out at our next major product update.

Along the lines of accelerating science, we're also doing a major release of PaperQA today, our flagship open source literature agent, as part of our commitment to open science.

In the short run, expect major improvements to Kosmos, including the ability to automatically access data, the ability to steer its exploration, and the ability to converse directly with its world model. In the long run, expect exponentially increasing rates of scientific discovery, in biology and elsewhere.

Our round is led by Triatomic Capital, Spark Capital, and a major US institutional biotech investor. We are also joined in this round by existing investors Pillar VC and Susa Ventures, two exceptional early-stage funds who backed us at founding, along with Striker Venture Partners, Hawktail VC, Olive VC, and a host of exceptional angels that includes famous AI researchers, the CEOs of multiple frontier AI labs, and leadership of major biotech and pharma companies.
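For readers doing the credit math in the announcement: a quick back-of-the-envelope sketch. The figures (650 credits/mo for academics, 200 credits per Kosmos run) are from the announcement; the variable names are mine.

```python
# Credit arithmetic from the announcement: 650 credits/mo for academics
# and students; a Kosmos run costs 200 credits; other agents cost 1-2.
MONTHLY_CREDITS = 650
KOSMOS_COST = 200

# Whole Kosmos runs per month, plus credits left over for cheaper agents.
kosmos_runs, leftover = divmod(MONTHLY_CREDITS, KOSMOS_COST)
print(kosmos_runs, leftover)  # → 3 50
```

So the free academic allowance covers three Kosmos runs a month, with 50 credits left for the 1-2 credit agents.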
ravi sheth@raviusheth·
@jonas humans require sleep, seems sensible that agents will need something similar
Jonas Templestein@jonas·
I think it'll be quite normal for agents to do something like human sleep soon, where they review their traces (as well as perhaps simulated ones) and update their own instructions.
Harrison Chase@hwchase17

Longer-running agents are starting to work, and we're starting to see new patterns for debugging and improving them. Part of this is making traces accessible to coding agents so they can diagnose and suggest changes. Wrote a bit about this new paradigm: blog.langchain.com/debugging-deep…
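A minimal sketch of the "agent sleep" pattern Jonas describes, with everything here hypothetical (no real agent-framework API is used): traces are replayed through a reviewer that distills lessons, and the non-empty lessons are folded back into the agent's standing instructions.

```python
def sleep_cycle(traces, instructions, reviewer):
    """One 'sleep' pass: review each trace, distill a lesson from it,
    and append the non-empty lessons to the agent's instructions."""
    lessons = [reviewer(trace) for trace in traces]
    notes = [lesson for lesson in lessons if lesson]
    if not notes:
        return instructions
    return instructions + "\n" + "\n".join(notes)

# Toy reviewer: flag traces that ended in an error.
def reviewer(trace):
    return f"Avoid: {trace['error']}" if trace.get("error") else ""

traces = [{"task": "fetch", "error": "timed out"}, {"task": "parse"}]
updated = sleep_cycle(traces, "You are a careful agent.", reviewer)
print(updated)
```

In practice the reviewer would itself be a model call over the full trace, and the update step would edit rather than merely append, but the loop shape is the same.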

ravi sheth retweeted
Dr. Maria Elena De Obaldia@EllenDeObaldia·
Are you a creative, collaborative scientist looking for your next challenge? We’re seeking awesome colleagues to join us on our quest to design microbial communities for plant-based foods (and other cool projects)! Our beautiful lab space is located in the Brooklyn Navy Yard.🚀
ravi sheth@raviusheth

So excited to share that we have not one, two, but SEVEN open positions to join the Kingdom team - we're hiring across RA and Sci levels to help deepen our scientific investment across microbial cultivation, chemistry, imaging, screening and more.

ravi sheth@raviusheth·
If working with weird microbes and cool toys all day excites you - please get in touch. I feel super lucky to do science with our superstar team every single day, and we're looking for exceptional scientists, engineers and microbe lovers to join us.
ravi sheth@raviusheth·
So excited to share that we have not one, two, but SEVEN open positions to join the Kingdom team - we're hiring across RA and Sci levels to help deepen our scientific investment across microbial cultivation, chemistry, imaging, screening and more.
ravi sheth retweeted
Nicholas Larus-Stone@nlarusstone·
The state of software sales in biotech is abysmal. If you're afraid that I won't like your product when demoing it, then why should I spend hundreds of thousands of dollars on it?
ravi sheth retweeted
Ben Adler@ben_a_adler·
Thrilled to share my first first-author manuscript from my time here in @Doudna_lab, with funding from m-CAFEs SFA! We believe Cas13a will have a bright future across phage biology and engineering. A short 🧵. TL;DR: it works. 1/ biorxiv.org/content/10.110…
ravi sheth@raviusheth·
@srikosuri Brand/connections were obviously pluses but for us at least the office hours were huge. Def depends on partner and nature of business model.
Sri Kosuri@srikosuri·
@raviusheth Over the brand and access to capital? That's great. I talk to a bunch of YC bio founders here in the Bay and am often not impressed w/ the advice pushed on them. Lots of focus on obtaining non-binding LOIs, etc., as a proxy for early traction in biotechs; but it prob depends on the partner.
Sri Kosuri@srikosuri·
It's pretty amazing how fast YC has become the equivalent of an elite university. Its biggest value nowadays lies in your peer cohort (less so virtually), alumni networks, access to people with capital, and the signaling that comes from being admitted.
ravi sheth retweeted
Robert Nelsen@rtnarch·
At the time we seeded Illumina, everyone thought: 1) it was crazy; 2) using optics to do genotyping was very crazy; 3) competing against dominant players like Affy was not possible; 4) you could never make oligos cheap enough or multiplex them.