Vivek Adarsh
@flyingthor

49 posts

Cofounder & CEO, @mithrl_ai. AI Co-Scientist for biologists. Past life, CS PhD @UCSB

San Francisco, CA · Joined April 2017
323 Following · 125 Followers
Vivek Adarsh retweeted
Mithrl Inc.@mithrl_ai·
We had a blast at @UCSF last week, discussing researchers' work and the latest in agentic AI to accelerate discovery and delve deeper into transcriptomics, proteomics, and other research data. It was great to share the tabletop space with fantastic companies and teams as well; what a welcoming community! @idtdna, @Elemental_io, @PackGeneBiotech, @MilliporeSigma, and more. @GarredGarred #AIinLifeSciences #Proteomics #UCSF
Vivek Adarsh@flyingthor·
@jermdemo @mithrl_ai @mithrl_ai is an AI assistant btw, more full-stack/platform than the ones listed above (e.g., raw data to bio insights vis-à-vis a point solution!)
Vivek Adarsh retweeted
Mithrl Inc.@mithrl_ai·
"Regulators don't fear AI. They fear black boxes." - Vivek Adarsh, Co-Founder & CEO, Mithrl. In our #JPM26 panel with healthcare and life sciences innovators, we explored the intersection of AI and biology. The panel considered how to uphold vital principles such as safety, transparency, and evidence-based science while harnessing AI's potential. Hear a snippet of this discussion and connect with us to work together and set the standard for transparent AI technology in Life Sciences, for pharma, biotech, and beyond. #JPM2026 @flyingthor
Vivek Adarsh
Vivek Adarsh@flyingthor·
RT @agihouse_org: 🩺 AI4Healthcare Technical Workshop – 3 days to go ⏳ 7–8 hour guided research & build session at JP Morgan Health Tech Wee…
Vivek Adarsh retweeted
Mithrl Inc.@mithrl_ai·
Attending #JPM26? Don’t miss this JPM Healthcare Satellite Event!
📅 Wed, Jan 14 | 5–8 PM
📍 100 Montgomery St, Ste. 1200, San Francisco
Register: bit.ly/44wDkOZ
Join us for Happy Hour + Panel Discussions on AI & Biopharma Innovation, featuring industry leaders, expert insights, and gourmet food & drinks. Speakers include:
✔️ Vivek Adarsh (@flyingthor), Co-Founder & CEO, @Mithrl
✔️ Mark Budde (@markwbudde), Co-Founder & CEO, @Plasmidsaurus
✔️ Rayyan Sheikh, Discovery & Innovation Research Leader, @Roche
✔️ Morten Sogaard, SVP Innovation, @AstellasUS
✔️ Steven Henck (@DNAbiotech), VP and NGS Fellow, Danaher (@idtdna)
✔️ Jimmy Lin, CSO, @freenome
✔️ Mia Nease, SVP Commercial, @dnanexus
✔️ Gordon Sanghera (@gordon_sanghera), Founder & CEO, @nanopore
✔️ Christopher Szeto (@ChrisSzetoPhD), Snr. Director of Discovery Bioinformatics, @ExelixisInc
✔️ Uli Stilz, Global BioPharma Leader and Board Member
✔️ Hinco Gierman, CSO, Elephas
✔️ Jason T. Gammack (@Jason_Gammack), CEO, @AnsaBio
Moderated by John Cumbers (@johncumbers) (@SynBioBeta). See you in SF! #JPMHealthcareWeek
Vivek Adarsh retweeted
Mithrl Inc.@mithrl_ai·
🚀 What are the value-based outcomes of AI in discovery science? Find out in the latest @mendelspod episode featuring our very own CEO and Co-Founder, Vivek Adarsh, PhD. Theral Timpson sits down with Vivek to explore how Mithrl’s AI platform is turning weeks of bioinformatics analysis into minutes, helping researchers:
✔️ Identify new biomarker candidates in minutes instead of weeks
✔️ Catch costly errors before they happen
✔️ Multiply opportunities for “happy accidents” that can lead to discovery
Vivek also shares the vision behind Mithrl: building a true partner for science that accelerates the journey from raw data to real discovery.
🎧 Listen here: bit.ly/4o0Rte0 @flyingthor #Mendelspod #AIinScience #Bioinformatics #Pharma
Mithrl Inc.@mithrl_ai·
We're at #BIOEurope, where innovation is grounded in the wisdom of the past. If you can call out who is behind our pharma BD colleague, Naomi Thomson, please do! Naomi & CEO Vivek @flyingthor are onsite discussing w/ pharma partners the next competitive edge for pharma discovery.
Vivek Adarsh retweeted
Mithrl Inc.@mithrl_ai·
Excited to kick off #BIOEurope, THE life sciences pharma partnering event of the year w/ 5,800+ attendees! Connect with Mithrl CEO, Vivek Adarsh @flyingthor to discuss how AI can transform pharma R&D, driving higher discovery program ROI & new patent opportunities for R&D teams.
Vivek Adarsh@flyingthor·
Great to see @mithrl_ai featured in this @CNBC piece about AI startups redefining San Francisco's office culture. At Mithrl, in-person collaboration is more than just a preference—it’s core to our mission and culture. We believe that the best ideas and breakthroughs happen when brilliant minds share the same space, whether it’s during brainstorming sessions, over coffee, or in the heat of some healthy competition at our ping-pong table. 🏓 (The leaderboard is fierce, but we’re always looking for challengers!)
Sal Rodriguez 🕷@sal19

A wave of San Francisco early stage startups are going back in office 4+ days a week, and they're competing for deals on office spaces in the city that they couldn't have landed pre-pandemic My latest for CNBC: cnbc.com/2024/12/06/ai-…

Vivek Adarsh@flyingthor·
I'm thrilled to announce @mithrl_ai's oversubscribed $4 Million round, led by @bonfire_vc 🔥 with participation from our previous investors and biotech industry insiders! Thanks @axios for covering this. This achievement is a testament to our exceptionally talented team and the incredible customers who partner with us every day. For all of us, this isn’t just a milestone—it’s an opportunity to push forward in our mission to accelerate scientific breakthroughs. We’ve all seen the toll that illness and slow scientific progress can take on people’s lives. Knowing that science often takes longer than we can afford, we set out to change this. Time saved in scientific research doesn’t just lead to faster results; it leads to real lives saved. Mithrl became our answer to this challenge, driven by our shared belief that scientific progress shouldn’t wait. axios.com/pro/health-tec…
Vivek Adarsh retweeted
Axios Pro
Axios Pro@AxiosPro·
Mithrl raises $4M to speed drug discovery process trib.al/6mVEQZd
Vivek Adarsh retweeted
Arvind Narayanan@random_walker·
In a new AI Snake Oil essay by me and @sayashk, we do a deep dive into AI existential risk probability estimates. We find that these forecasts are just feelings dressed up as numbers, and even the best-run, well-funded, time-intensive forecasting efforts result in a range of probability estimates that spans many orders of magnitude! We are forced to conclude that AI x-risk forecasts are far too unreliable to be useful for policy, and in fact highly misleading. We caution against speculation being laundered through pseudo-quantification. Full essay: aisnakeoil.com/p/ai-existenti… (about 7,000 words). Summary below.

Background

Over a year ago we got deep into the AI x-risk literature. We were skeptical but not dismissive. We wanted to identify valid concerns while rebutting bad arguments on their own terms. We've been especially interested in how policymakers should think about x-risk. Today's essay is the first in a series. We've been circulating private drafts for a while and have incorporated a lot of great feedback. I'm excited that we're finally launching this series of essays today!

Analogy: alien invasion

If the two of us predicted an 80% probability of aliens landing on earth in the next ten years, would you take this possibility seriously? Of course not. You would ask to see our evidence. As obvious as this may seem, it seems to have been forgotten in the AI x-risk debate that probabilities carry no authority by themselves. Probabilities are usually derived from some grounded method, so we have a strong cognitive bias to view quantified risk estimates as more valid than qualitative ones. But it is possible for probabilities to be nothing more than guesses.

The reference class problem

The domains where forecasting has been successful, such as geopolitics, rely on the existence of reasonably good reference classes of past events. A reference class for turmoil in one country is turmoil in another country. Reference classes for AI x-risk are things like animal extinction. Let’s get real. This kind of reference class tells us nothing about the possibility of developing superintelligent AI or losing control over such AI, which are the central sources of uncertainty for AI x-risk forecasting.

Subjective probabilities vary by orders of magnitude

Lacking grounded methods, forecasts are necessarily “subjective probabilities”, that is, guesses based on the forecaster’s judgment. Unsurprisingly, these vary by orders of magnitude. Consider the Existential Risk Persuasion Tournament (XPT) conducted by the Forecasting Research Institute in late 2022, which we think is the most elaborate and well-executed x-risk forecasting exercise conducted to date. It involved various groups of forecasters, including AI experts and forecasting experts. The 75th percentile AI expert estimate and the 25th percentile forecasting expert estimate differ by at least a factor of 100. All of these estimates are from people who have deep expertise on the topic and participated in a months-long tournament where they tried to persuade each other! If this range of forecasts isn’t extreme enough, keep in mind that this whole exercise was conducted by one group at one point in time. We might get different numbers if the tournament were repeated today, if the questions were framed differently, etc.

It's all speculation

What’s most telling is to look at the rationales that forecasters provided, which are extensively detailed in the 754-page report. Forecasters aren’t using quantitative models, especially when thinking about the likelihood of bad outcomes conditional on developing powerful AI. For the most part, forecasters are engaging in the same kind of speculation that everyday people do when they discuss superintelligent AI. Maybe AI will take over critical systems through superhuman persuasion of system operators. Maybe AI will seek to lower global temperatures because it helps computers run faster, and accidentally wipe out humanity. Or maybe AI will seek resources in space rather than on Earth, so we don’t need to be as worried. There’s nothing wrong with such speculation. But we should be clear that when it comes to AI x-risk, forecasters aren’t drawing on any special knowledge, evidence, or models that make their hunches more credible than yours or ours or anyone else’s.

Forecast skill can't be measured

We often hear that forecasting has a great track record and so we should trust it. This makes no sense: why should we trust that someone who is good at forecasting elections or other kinds of events is good at forecasting AI x-risk? Besides the fact that these are completely different kinds of events, there just isn't any real evidence to draw upon for AI x-risk estimation, so being good at finding and weighing evidence is not a skill that helps much here. Besides, the math doesn't work out. We show that if someone is good at forecasting common events but systematically overestimates rare events, we would have to evaluate them on millions of forecasts before this became apparent. Summary of the main argument: none of the three probability estimation methods yields reliable AI x-risk forecasts.

Risk estimates may be systematically inflated

There are many reasons why forecasters might systematically overestimate AI x-risk. The belief that AI can change the world is one of the main motivations for becoming an AI researcher. And once someone enters this community, they are in an environment where that message is constantly reinforced. And if one believes that this technology is terrifyingly powerful, it is perfectly rational to think there is a serious chance that its world-altering effects will be negative rather than positive. And in the AI safety subcommunity, which is a bit insular, the echo chamber can be deafening. Claiming to have a high "p(doom)" seems to have become a way to signal one’s identity and commitment to the cause.

So what should governments do about AI x-risk?

Our view isn’t that they should do nothing. But they should reject the kind of policies that might seem compelling if we view x-risk as urgent and serious, notably: restricting AI development. As we’ll argue in a future essay in this series, not only are such policies unnecessary, they are likely to increase x-risk. Instead, governments should adopt policies that are compatible with a range of possible estimates of AI risk, and are on balance helpful even if the risk is negligible. Fortunately, such policies exist. Governments should also change policymaking processes so that they are more responsive to new evidence. More on all that soon. The full essay is in our newsletter. We plan to publish the rest of the series over the next few weeks. Thank you for reading! aisnakeoil.com/p/ai-existenti…