Robert Y. Chen

3.9K posts

@therealRYC

PGY-3 @UW Psychiatry. Prev MD-PhD @WUSTL. Biomarker and drug discovery for psychiatry, starting with Schizophrenia. Dog Daddy x2 to Taro and Azuki.

Seattle, WA · Joined February 2020
764 Following · 3.8K Followers
Pinned Tweet
Robert Y. Chen@therealRYC·
🚨 The HHS led by Dr. Oz and RFK Jr. just dropped a major policy letter pushing for deprescribing, shared decision-making, and non-pharmacological treatments in mental health care. The key message: meds are essential, but shouldn't be the only tool in the toolbox 🧵
Robert Y. Chen tweet media
4
2
4
559
Pavel Trubetskov@PTrubetskov·
@therealRYC One PHQ-9 point sits inside measurement noise. Thin ground for a policy letter scaled to millions.
1
0
1
5
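Pavel's measurement-noise point can be grounded in standard psychometrics: a score change is only detectable if it exceeds the scale's minimal detectable change (MDC), computed from its test–retest reliability. A minimal sketch — the PHQ-9 values used here (SD ≈ 5.5 points, reliability ≈ 0.84) are illustrative assumptions, not figures from the thread:

```python
import math

def minimal_detectable_change(sd: float, reliability: float, z: float = 1.96) -> float:
    """MDC95 = z * sqrt(2) * SEM, where SEM = sd * sqrt(1 - reliability).

    Changes smaller than the MDC cannot be distinguished from
    measurement error at the 95% confidence level.
    """
    sem = sd * math.sqrt(1 - reliability)
    return z * math.sqrt(2) * sem

# Illustrative (assumed) PHQ-9 parameters: sd = 5.5, reliability = 0.84
mdc = minimal_detectable_change(5.5, 0.84)  # roughly 6 points
```

With these assumed parameters the MDC comes out near 6 points, so a 1-point average shift would indeed sit well inside the noise floor for an individual patient.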
Grace@dignityhnrgrace·
@therealRYC BBB decimates Medicaid over the next several years.
1
0
0
11
Robert Y. Chen@therealRYC·
@dignityhnrgrace How should prescribers get better at this? Are there prescribers who you’ve felt have done a good job?
3
0
1
16
Grace@dignityhnrgrace·
@therealRYC Prescribers as a whole lack any shred of competence in recognizing ADEs. And who defines the length of an “adequate trial”? The current approach completely ignores ADEs, dismissing and gaslighting patients who report them.
1
0
0
13
Robert Y. Chen@therealRYC·
💡 Takeaways

In my clinic and at the hospital I train at, I treat patients suffering from a range of mental health disorders spanning mild to severe impairment. I spend half my time optimizing meds and the other half trying to peel away unnecessary polypharmacy inherited from previous admissions or cowboy polypharmacy.

It's encouraging to see federal backing for deprescribing and shared decision-making. We need to normalize asking: "Do you still need to be on this?" This is an overall net-good.

The caution: the message must match the evidence. And the reality is that the best studies on deprescribing have not been impressive. Neither have lifestyle interventions. The key will be in tailoring the right treatment (or removal of treatment) to the right patient. Blanket statements that deprescribing, or exercise over meds, is always good or always bad, are not backed by evidence. The good news is that this letter is not making those types of statements. In fact, the letter states many times: "When clinically indicated".

Where this goes next → watching how CMS aligns actual reimbursement with these deprescribing and non-pharmacologic treatment goals.

Source: U.S. Department of Health and Human Services. Dear Colleague Letter. May 2026
Read the full letter here: samhsa.gov/sites/default/…
1
0
1
96
Eric Wu@ericwu93·
@therealRYC @james_y_zou @gxl_ai Hi Robert, Paperclip updates automatically with the latest literature as it's made available, depending on the source (e.g., biorxiv makes monthly releases).
1
0
5
258
James Zou@james_y_zou·
Big Update🤩: #paperclip now includes full papers from all of arXiv, PubMed Central and 150 million abstracts!🖇️ You can give your LLM all that knowledge in one line—all optimally indexed for AI agents. Much more thorough and ~100x faster than web search, and free.
James Zou tweet media
43
241
1.7K
124.2K
Robert Y. Chen@therealRYC·
@akseljoonas Wow. I want to give this a try. What’s the best way to figure out cost structure for someone who isn’t AI native, but has a problem to work on? I imagine it could be expensive to test out 50 different training runs.
0
0
0
180
Aksel@akseljoonas·
ml-intern is fully on mobile now. you can launch 8 A100s from your phone.
while on the couch. while commuting. wherever. I just did this while biking.
same sessions as your desktop too: start a run on your laptop, check on it from your phone, it's all there.
the research lab is now just wherever you are
Aksel@akseljoonas

Introducing ml-intern, the agent that just automated the post-training team @huggingface

It's an open-source implementation of the real research loop that our ML researchers do every day. You give it a prompt, it researches papers, goes through citations, implements ideas in GPU sandboxes, iterates, and builds deeply research-backed models for any use case. All built on the Hugging Face ecosystem.

It can pull off crazy things:

We made it train the best model for scientific reasoning. It went through citations from the official benchmark paper, found OpenScience and NemoTron-CrossThink, added 7 difficulty-filtered dataset variants from ARC/SciQ/MMLU, and ran 12 SFT runs on Qwen3-1.7B. This pushed the score from 10% to 32% on GPQA in under 10h. Claude Code's best: 22.99%.

In healthcare settings it inspected available datasets, concluded they were too low quality, and wrote a script to generate 1,100 synthetic data points from scratch for emergencies, hedging, multilingual coverage, etc. Then it upsampled 50x for training. Beat Codex on HealthBench by 60%.

For competitive mathematics, it wrote a full GRPO script, launched training with A100 GPUs on hf.co/spaces, watched rewards climb and then collapse, and ran ablations until it succeeded. All fully backed by papers, autonomously.

How does it work? ml-intern makes full use of the HF ecosystem:
- finds papers on arxiv and hf.co/papers, reads them fully, walks citation graphs, pulls datasets referenced in methodology sections and on hf.co/datasets
- browses the Hub, reads recent docs, inspects datasets and reformats them before training so it doesn't waste GPU hours on bad data
- launches training jobs on HF Jobs if no local GPUs are available, monitors runs, reads its own eval outputs, diagnoses failures, retrains

ml-intern deeply embodies how researchers work and think. It knows what data should look like and what good models feel like. Releasing it today as a CLI and a web app you can use from your phone/desktop.

CLI: github.com/huggingface/ml…
Web + mobile: huggingface.co/spaces/smolage…

And the best part? We also provisioned $1k of GPU resources and Anthropic credits for the quickest among you to use.

12
20
172
20.7K
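The "difficulty-filtered dataset variants" step described above can be sketched generically. This is a hypothetical helper, not ml-intern's actual API: the common idea is to keep only examples a reference model solves at an intermediate rate, since items that are too easy add no gradient signal and items that are too hard give no learning signal.

```python
def difficulty_filter(examples, pass_rates, low=0.2, high=0.8):
    """Keep examples whose reference-model pass rate falls in [low, high].

    examples:   list of training items (questions, prompts, ...)
    pass_rates: fraction of reference-model attempts that succeeded per item
    """
    return [ex for ex, p in zip(examples, pass_rates) if low <= p <= high]

data = ["q1", "q2", "q3", "q4"]
rates = [0.05, 0.5, 0.95, 0.3]
filtered = difficulty_filter(data, rates)  # keeps q2 and q4
```

The thresholds (0.2/0.8) are illustrative; in practice they are tuned per benchmark and per reference model.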
Kexin Huang@KexinHuang5·
I’ve spent the past decade building bio AI models—until now, training them meant years, huge cost, and teams spanning AI, biology, and infra. Not anymore.

Introducing a new capability at Biomni Lab: now any scientist can create, fine-tune, pre-train, and optimize bio foundation models on their own datasets, just by describing what they want. This is powered by a new feature called GPU-as-a-tool, where we let AI agents launch and orchestrate GPU sandboxes.

In the video, we show that you can use a prompt to:
- Fine-tune Borzoi for MPRA regulatory activity prediction
- Fine-tune scGPT on a H1 hESC perturb-seq dataset
- Fine-tune ESM2 for protein subcellular localization prediction
- Pre-train a protein language model from scratch on UniRef
- Build a novel multi-task ADMET model across 22 endpoints

Another big challenge, once you’ve trained or have access to a model, is actually using it productively. Because it is embedded within Biomni Lab, it bridges that gap, letting you “ask the model” to identify and prioritize relevant biological insights directly.

Another exciting direction is lab-in-the-loop: scientists can design models, generate predictions, interpret results, and send them to the wet lab—all within one integrated biology environment.

This is a preview capability and we’re looking for beta testers. Sign up here for early access: forms.gle/1yhCP6Vrc12DaS…
Learn more about opportunities and limitations in our blog: phylo.bio/blog/ai-agents… @phylo_bio
16
74
562
61.9K
Robert Y. Chen reposted
Andrej Karpathy@karpathy·
Judging by my tl there is a growing gap in understanding of AI capability.

The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This is a group of reactions laughing at various quirks of the models, hallucinations, etc. Yes I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability in the latest round of state of the art agentic models of this year, especially OpenAI Codex and Claude Code.

But that brings me to the second issue. Even if people paid $200/month to use the state of the art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus comes along.

So that brings me to the second group of people, who *both* 1) pay for and use the state of the art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work.

It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions.

TLDR the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and I think slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram's reels and *at the same time*, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems.

This part really works and has made dramatic strides because of 2 properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.
staysaasy@staysaasy

The degree to which you are awed by AI is perfectly correlated with how much you use AI to code.

1.2K
2.5K
20.7K
4.3M
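The "verifiable rewards" idea Karpathy names can be made concrete: for code, the reward function is simply whether the model's output passes its unit tests. A minimal generic sketch — not any lab's actual training code — of a binary pass/fail reward:

```python
import os
import subprocess
import sys
import tempfile

def verifiable_reward(candidate_code: str, test_code: str) -> float:
    """Binary RL reward: 1.0 if candidate_code passes test_code, else 0.0.

    Runs the candidate plus its asserts in a subprocess; a zero exit
    code means every assertion held. Contrast with writing quality,
    which has no such mechanically checkable signal.
    """
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "prog.py")
        with open(path, "w") as f:
            f.write(candidate_code + "\n" + test_code + "\n")
        result = subprocess.run([sys.executable, path], capture_output=True)
        return 1.0 if result.returncode == 0 else 0.0

# Passing solution earns reward 1.0; a buggy one earns 0.0.
good = verifiable_reward("def add(a, b):\n    return a + b",
                         "assert add(2, 3) == 5")
```

Real training setups add sandboxing, timeouts, and partial-credit schemes, but the core property is the same: the reward is checkable yes/no, which is what makes these domains so amenable to RL.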
Robert Y. Chen@therealRYC·
Agreed - underestimating people is almost always a mistake. I do it even when I try not to. I can get better at it and own up to it when I make mistakes - it’s the algorithm for progress. Informed consent has changed a lot, thankfully. I’m sure contempt for patients persists in some areas of medicine. I would call that bad medicine. Hopefully you don’t encounter that on any sort of frequent basis, I certainly haven’t but I am biased because I work at a large academic hospital in the US which is not representative of care more broadly (in the states or abroad).
1
0
0
41
JD@jill_d35·
@therealRYC It's helped me to see the very low bar informed consent started from in the 1950s, and also the contempt for patients, which persists in some areas of medicine to this day. It's a big mistake to underestimate people imho.
1
0
2
41
JD@jill_d35·
@therealRYC Was that supposed to be an apology for accusing me of rage baiting? Bad faith assumptions are unhelpful. I'm reading something very interesting about the early days of clinical trials and informed consent that you may or may not be interested in. x.com/i/status/20385…
JD@jill_d35

#InformedConsent a thread. I'm currently reading Rebecca Skloot's 'The immortal life of Henrietta Lacks' and I learned a very important fact last night about informed consent.

2
0
1
59