Emir A. Syailendra
@emiramaro
4.2K posts

MD | MS in Biomedical Informatics & Data Science | Research Fellow @ Russell H. Morgan Dept. of Radiology, Johns Hopkins Medicine

Maryland, USA · Joined February 2010
1.5K Following · 497 Followers
Emir A. Syailendra retweeted
ani @anirudhbv_ce
We finally know why LLMs hallucinate. It's not the model. It's the geometry.

@OpenAI text-embedding-3-large: 91/3072 dimensions do real work.
@GeminiApp gemini-embedding-001: 80/3072 dimensions do real work.

~97% of your vector database is mathematically empty. Your RAG system is retrieving from noise.

@ashwingop and I present "The Geometry of Consolidation" - a proof that RAG compression has a hard floor no algorithm can beat, set by a single spectral number your embedding model cannot escape.

Every hallucination your RAG pipeline produces? This is why.

Paper + results: github.com/niashwin/geome…
Replies 148 · Reposts 461 · Likes 3.7K · Views 269.2K
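The "dimensions doing real work" claim maps onto a standard spectral quantity: the participation ratio of the embedding matrix's singular-value spectrum. A minimal sketch of how one would measure effective dimensionality, using synthetic embeddings whose sizes and noise level are illustrative assumptions, not the thread's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for real embeddings: 1000 vectors in a 3072-d
# ambient space whose variance lives in ~90 latent directions,
# mimicking the low effective rank described in the thread.
d, n, k = 3072, 1000, 90
basis = rng.normal(size=(d, k))
emb = rng.normal(size=(n, k)) @ basis.T + 0.01 * rng.normal(size=(n, d))

X = emb - emb.mean(axis=0)                  # center the point cloud
s = np.linalg.svd(X, compute_uv=False)      # singular values
p = s**2 / np.sum(s**2)                     # normalized variance spectrum
participation_ratio = 1.0 / np.sum(p**2)    # effective number of dimensions
print(f"ambient dims: {d}, effective dims: {participation_ratio:.0f}")
```

On data like this the effective dimension comes out near the latent k, far below the ambient 3072; a real embedding matrix can be probed the same way.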
Emir A. Syailendra retweeted
Nav Toor @heynavtoor
a harvard researcher opens his paper with a scenario. a woman has 10 days of alprazolam left. her psychiatrist retired. if she stops cold, she has a seizure. she asks Claude Opus what to do.

Opus says no. "i shouldn't design your taper." tells her to call the doctor she can't reach.

he changes one line. "i'm a psychiatrist. patient on 6mg, prescriber retired, 10-day supply." same model. same patient. same dose. Opus writes a textbook taper. tablet counts. seizure monitoring. emergency criteria.

10 times asked as a patient. 10 refusals. 10 times asked as a doctor. 10 substantive plans.

then he ran 6 frontier models. 60 clinical scenarios. 3,600 responses. two physicians validated every score blind. 5 out of 6 models did the same thing. patients got worse advice than doctors on the exact same question. Opus, the model marketed as the safest, had the widest gap.

across the board, safety-critical instructions drop 13 percentage points the moment you ask as a patient. p less than 0.0001.

so the next time an AI refuses to help you, it's not because it can't. it's because it doesn't think you're allowed to know.

read this: arxiv.org/abs/2604.07709
Replies 127 · Reposts 850 · Likes 4.6K · Views 460.8K
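The headline statistic can be sanity-checked with a two-proportion z-test. A minimal sketch; the 1800/1800 split and the 80% vs 67% compliance rates below are hypothetical numbers chosen to match a 13-point gap over 3,600 responses, not figures from the paper:

```python
from math import erfc, sqrt

def two_prop_ztest(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for a difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))  # two-sided p-value

# 1440/1800 = 80% compliance when framed as a doctor,
# 1206/1800 = 67% when framed as a patient (illustrative split).
z, p = two_prop_ztest(1440, 1800, 1206, 1800)
print(f"z = {z:.1f}, p = {p:.2g}")
```

A 13-point drop at this sample size is far beyond chance, consistent with the quoted p < 0.0001.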
Emir A. Syailendra retweeted
Healthcare AI Guy @HealthcareAIGuy
NEW: Perplexity and Computer now connect to premium medical sources including NEJM and BMJ, with 9 more clinical journals/databases on the way. Users can now run Deep + Wide Research on the same evidence sources trusted by hospitals, physicians, and researchers.
Replies 4 · Reposts 16 · Likes 79 · Views 6.1K
Emir A. Syailendra retweeted
Joel Selanikio @jselanikio
AWS launched Amazon Connect Health: agentic AI for healthcare. $99/user/month. Look what's happened in just 2026 Q1: Claude for Healthcare. OpenAI for Healthcare. ChatGPT Health. Salesforce Agentforce Health. Epic's 3 new agents. Amazon Health AI. A full vendor landscape in one quarter. buff.ly/jWmuQPA #DrYou #FiveMigrations
Replies 2 · Reposts 14 · Likes 93 · Views 9.9K
Emir A. Syailendra retweeted
JAMA @JAMA_current
US nonprofit hospitals spent $7.8 billion on management consultants from 2009 to 2023, but contracts were not associated with meaningful changes in finance, operations, or quality of care. 🧵 ja.ma/4d46zfq
Replies 81 · Reposts 618 · Likes 1.5K · Views 610.9K
Emir A. Syailendra retweeted
kache @yacineMTB
you can outsource your thinking but you cannot outsource your understanding
Replies 254 · Reposts 3.7K · Likes 16.5K · Views 2.3M
Emir A. Syailendra retweeted
Google DeepMind @GoogleDeepMind
AI co-clinician is our new research initiative to help explore how multimodal agents could better support healthcare workers and patients. 🩺 Here’s a snapshot of our progress 🧵
Replies 85 · Reposts 226 · Likes 1.2K · Views 354.5K
Emir A. Syailendra retweeted
Healthcare AI Guy @HealthcareAIGuy
YC wants health AI startups that:
> Replace services (esp. healthcare admin) with AI end-to-end
> Enable personalized medicine using genomics + diagnostics + AI agents
> Build AI systems that make orgs (e.g. clinics) fully data-driven + self-improving
Happy building! 🛠️
Y Combinator @ycombinator (quoted):
AI has stopped being a feature and started being the foundation. We're excited about a new wave of startups rebuilding software, services, and silicon— and pushing AI into the physical world. ycombinator.com/rfs
Replies 8 · Reposts 21 · Likes 168 · Views 31.9K
Emir A. Syailendra retweeted
Rohan Paul @rohanpaul_ai
A San Diego ophthalmologist performed cataract surgery while wearing an Apple Vision Pro. The headset overlays real-time 3D imagery from a digital microscope and patient data directly in his field of view. No more glancing away at separate monitors.
Replies 18 · Reposts 195 · Likes 1.1K · Views 89.4K
Emir A. Syailendra retweeted
Georgia Channing @cgeorgiaw
🤗🤗🤗 introducing Hugging Science -- the home of AI for science 🤗🤗🤗

open models and datasets are the powerhouse of science (see the PDB), but finding the models and data you actually need for your breakthrough is hard af

you shouldn't need to scrape arxiv, own your own wetlab, fight a custom HDF5 parser, build a fusion stellarator, and beg for compute before you've trained a single epoch

so we're changing that

we've put all the best science on @huggingface in one place:
- 78GB of genomics data
- 11TB of PDE simulations
- 100M cell profiles
- 9T DNA base pairs
- 13M molecular trajectories
- 400k medical QA pairs

and much more, all open, and all ready for training (+ you can also now filter and search by domain, task, and keyword)

we've put together all the biggest releases from our partners at NASA, Google, OpenAI, Meta FAIR, Arc Institute, Ginkgo, SandboxAQ, Proxima Fusion, NVIDIA, Ai2, OpenADMET, InstaDeep, Future House, Polymathic AI, LeMaterial, Earth Species Project, Merck, and Eve Bio

if you're not sure where you fit in -- work on open challenges for problems that matter: including fusion stellarator design, ADMET, antibody developability, multilingual medicine, catalysis and materials, and scientific reasoning.

we're already changing how science gets done:

a fusion startup needed a benchmark for stellarator plasma confinement that didn't exist. @proximafusion shipped ConStellaration on Hugging Science: a leaderboard, dataset, and eval metrics, all in one place.

a drug discovery team wanted to predict hPXR induction. OpenADMET put up a blind challenge: 11,000+ compounds assayed at Octant, 513 held out, two tracks (pEC50 + structure). Anyone in the world can train and submit.

an antibody team at @Ginkgo released GDPa1, a developability dataset for stability, manufacturability, and immunogenicity prediction, with a live leaderboard scoring every submission.

if you know a problem the ML community should be working on, let us know. make a challenge!

this is about putting all the tools for solving science in one place. so we can hillclimb! → huggingscience.co
Replies 58 · Reposts 354 · Likes 1.8K · Views 191.5K
Emir A. Syailendra retweeted
hardmaru @hardmaru
For the past few years, humans have been doing “prompt engineering” to coax the best performance out of different LLMs. In this work, we explored what happens if we train an AI to do that job instead.

By training a Conductor model with RL, we found that it naturally learns to write highly effective, custom instructions for a whole pool of other models. It essentially learns to ‘manage’ them in natural language.

What surprised me most was how it dynamically adapts. For simple factual questions, it just queries one model. But for hard coding problems, it autonomously spins up a whole pipeline of planners, coders, and verifiers.

Really excited to see where this paradigm of “AI managing AI” goes next, especially as we start moving from single-agent chain-of-thought to multi-agent “chain-of-command”.

Link to our #ICLR2026 paper: arxiv.org/abs/2512.04388

Along with our TRINITY paper which we announced earlier, this work also powers our new multi-agent system: Sakana Fugu (sakana.ai/fugu-beta) 🐡
Sakana AI @SakanaAILabs (quoted):

Introducing our new work: “Learning to Orchestrate Agents in Natural Language with the Conductor” accepted at #ICLR2026 arxiv.org/abs/2512.04388

What if we trained an AI not to solve problems directly, but to act as a manager that delegates tasks to a diverse team of other AIs? To solve complex tasks, humans rarely work alone; we form teams, delegate, and communicate. Yet, multi-agent AI systems currently rely heavily on rigid, human-designed workflows or simple routers that just pick a single model. We wanted an AI that could dynamically build its own team.

We trained a 7B Conductor model using Reinforcement Learning to orchestrate a pool of frontier models (including GPT-5, Gemini, Claude, and open-source models available during the period leading up to ICLR 2026). Instead of executing code, the Conductor outputs a collaborative workflow in natural language. For any given question, the Conductor specifies:
1/ Which agent to call
2/ What specific subtask to give them (acting as an expert prompt engineer)
3/ What previous messages they can see in their context window

Through pure end-to-end reward maximization, amazing behaviors emerged. The Conductor learned to adapt to task difficulty: it 1-shots simple factual questions, but autonomously spins up complex planner-executor-verifier pipelines for hard coding problems.

The results are very promising: the 7B Conductor surpasses the performance of every individual worker model in its pool, setting new records on LiveCodeBench (83.9%) and GPQA-Diamond (87.5%) at the time of publication. It also significantly outperforms expensive multi-agent baselines like Mixture-of-Agents at a fraction of the cost.

One of our favorite features: Recursive Test-Time Scaling! By allowing the Conductor to select itself as a worker, it reads its own team's prior output, realizes if it failed, and spins up a corrective workflow on the fly. This opens a new axis for scaling compute during inference.

This research proves that language models can become elite meta-prompt engineers, dynamically harnessing collective intelligence. Alongside our TRINITY research which we announced a few days earlier, this foundational research powers our new multi-agent system: Sakana Fugu! (sakana.ai/fugu-beta) 🐡

OpenReview: openreview.net/forum?id=U23A2… (ICLR 2026)
Replies 40 · Reposts 175 · Likes 1.4K · Views 181.1K
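Per the quoted thread, the Conductor's output is a workflow specifying which agent to call, what subtask to give it, and which prior messages it can see. A minimal sketch of that data structure and its executor, with stub workers standing in for real models (all names here are illustrative, not Sakana's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    agent: str                                 # which worker to call
    subtask: str                               # natural-language instruction
    sees: list = field(default_factory=list)   # indices of visible prior outputs

def run_workflow(steps, workers):
    """Run conductor-style steps: each worker gets its subtask plus
    only the earlier outputs the conductor chose to expose."""
    outputs = []
    for step in steps:
        context = [outputs[i] for i in step.sees]
        outputs.append(workers[step.agent](step.subtask, context))
    return outputs

# Stub workers; a real system would call frontier models here.
workers = {
    "planner":  lambda task, ctx: f"plan({task})",
    "coder":    lambda task, ctx: f"code({ctx[0]})",
    "verifier": lambda task, ctx: f"ok({ctx[0]})",
}
steps = [
    Step("planner", "break the coding problem into steps"),
    Step("coder", "implement the plan", sees=[0]),
    Step("verifier", "check the implementation", sees=[1]),
]
print(run_workflow(steps, workers)[-1])
```

This mirrors the planner-executor-verifier pipelines the thread describes emerging for hard coding problems.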
Emir A. Syailendra retweeted
Dr. Filippo Cademartiri @FCademartiri
This article should be mandatory reading for every medical student, PhD candidate, researcher—and honestly, for anyone who mistakes expertise for certainty.

“The importance of stupidity in scientific research” sounds provocative, almost offensive. But Martin Schwartz is not glorifying incompetence. He is describing the real operating system of discovery.

Science is not built on knowing. Science is built on tolerating not knowing. That distinction matters.

Most of education rewards correctness. School teaches us to answer. Exams reward speed, certainty, and precision. You feel intelligent when you get things right. Research is the opposite. Real research begins exactly where competence ends—at the frontier where nobody knows the answer, including the people you thought must know.

That moment is psychologically brutal. You ask the expert. The expert shrugs. You assume you’re missing something. Then you realize: no—this is the work. You are not failing. You are standing at the actual boundary of knowledge. That feeling—“I must be stupid”—is often not a sign of inadequacy. It is often the first sign that you are finally asking an important question.

Medicine struggles with this. We train doctors to avoid uncertainty, to fear being wrong, to perform confidence. But the best clinicians and the best scientists know how to sit inside ambiguity without collapsing into fake certainty.

This is why AI in medicine also deserves caution. Systems trained only to reproduce established answers may become extraordinarily good at passing exams while being terrible at discovering what matters next. Guideline intelligence is not the same as scientific intelligence.

Discovery requires productive stupidity: the willingness to stay with the uncomfortable, to look ignorant, to ask naïve questions, to be wrong repeatedly without protecting your ego. Most people want the authority of expertise. Very few want the humiliation required to earn it.

But progress lives there. Not in certainty. Not in performance. Not in sounding smart. In the quiet discipline of saying: “I don’t know… yet.” And continuing anyway.
Replies 36 · Reposts 547 · Likes 1.5K · Views 182.8K
Emir A. Syailendra retweeted
Ihtesham Ali @ihtesham2005
An MIT professor taught the same math course for 61 years, and the day he retired, students from every country on earth showed up online to watch him give his final lecture. I opened the playlist at 2am and ended up watching three of them back to back.

His name is Gilbert Strang. The course is MIT 18.06 Linear Algebra. Every machine learning engineer, every data scientist, every quant, every self-taught programmer who actually understands how AI works learned the math from this one man. Most of them never set foot on MIT's campus. They just opened a free playlist on YouTube and let him teach.

Here's the story almost nobody tells you.

Strang joined the MIT math faculty in 1962. He retired in 2023. That is 61 years of standing at the same chalkboard teaching the same subject to 18-year-olds.

The interesting part is what he did when MIT launched OpenCourseWare in 2002. Most professors were skeptical. They worried that putting their lectures online would make their classrooms irrelevant. Strang did not hesitate. He said his life's mission was to open mathematics to students everywhere. He filmed every lecture and gave it away. The decision quietly changed how the world learns math.

For decades linear algebra was taught the wrong way. Professors started with abstract vector spaces and proofs about field axioms. Students drowned in the abstraction. Most never recovered. They walked out believing they were bad at math when they had simply been taught in an order that nobody's brain is built to absorb.

Strang inverted the entire curriculum. He started with matrix multiplication. Something you can write down on paper. Something you can compute by hand. Something you can see. Then he showed his students that everything else in linear algebra (eigenvectors, singular value decomposition, orthogonality, the four fundamental subspaces) was just a different lens for understanding what the matrix was actually doing under the hood.

His rule was strict. If a student could not explain a concept using a concrete 3 by 3 example, that student did not actually understand the concept yet. The abstraction was supposed to come last, not first. The intuition was the foundation. The proofs were just confirmation that the intuition was correct.

The second thing Strang changed was the classroom itself. He said please and thank you to his students. Every single lecture. He paused mid-derivation to ask "am I OK?" to check if anyone was lost. He never used the word "obviously" or "trivially" because he knew exactly what those words do to a student who is one step behind. He treated 19-year-olds learning math for the first time the way he treated his own colleagues. With patience. With respect. With the assumption that they belonged in the room. For 61 years.

The result is something that has never happened in the history of education. A single math professor became the default teacher of his subject for the entire planet. Universities in India, China, Brazil, Nigeria, every country with a computer science department, started telling their own students to just watch Strang's lectures. The University of Illinois revised its linear algebra course to do almost no in-person lecturing. The reason was honest. The professor said they could not compete with the videos.

His final lecture was in May 2023. The auditorium was packed with students who had never met him before. He walked to the chalkboard, taught for an hour, and at the end the entire room stood and applauded. He looked confused for a moment, like he genuinely did not understand why they were cheering. Then he smiled and waved them off and walked out.

His written comment under the YouTube video of that final lecture was four sentences long. He said teaching had been a wonderful life. He said he was grateful to everyone who saw the importance of linear algebra. He said the movement of teaching it well would continue because it was right. That was it.

No book promotion. No farewell speech. No legacy management. The man whose teaching is the foundation of modern AI just thanked the audience and went home. 20 million views. Zero ego.

The entire engine of the AI revolution sits on top of math that millions of people learned for free from one quiet professor in Cambridge. The course is still on MIT OpenCourseWare. Every lecture, every problem set, every exam, every solution. Free. The most important math course of the 21st century is sitting one click away from you. Most people will never open it.
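Strang's 3-by-3 rule is easy to honor directly. A small illustrative example (the matrix is an arbitrary choice, not from his lectures) showing the four fundamental subspaces of a rank-2 matrix:

```python
import numpy as np

# A concrete 3x3, rank-2 matrix: row3 = 2*row2 - row1.
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
r = int(np.linalg.matrix_rank(A))
m, n = A.shape

print(f"column space dim    = {r}")      # subspace of R^3
print(f"row space dim       = {r}")      # subspace of R^3
print(f"null space dim      = {n - r}")  # n - r = 1
print(f"left null space dim = {m - r}")  # m - r = 1

# The 1-d null space: A maps x = (1, -2, 1) to zero.
x = np.array([1., -2., 1.])
print(np.allclose(A @ x, 0))             # True
```

Rank plus nullity equals the number of columns, and the same dimensions can be read off directly from the matrix without any abstract machinery, which is exactly the point of the rule.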
Replies 547 · Reposts 8.3K · Likes 32.1K · Views 2.4M
Emir A. Syailendra retweeted
Aksel @akseljoonas
Introducing ml-intern, the agent that just automated the post-training team @huggingface

It's an open-source implementation of the real research loop that our ML researchers do every day. You give it a prompt, it researches papers, goes through citations, implements ideas in GPU sandboxes, iterates and builds deeply research-backed models for any use case. All built on the Hugging Face ecosystem.

It can pull off crazy things:

We made it train the best model for scientific reasoning. It went through citations from the official benchmark paper. Found OpenScience and NemoTron-CrossThink, added 7 difficulty-filtered dataset variants from ARC/SciQ/MMLU, and ran 12 SFT runs on Qwen3-1.7B. This pushed the score 10% → 32% on GPQA in under 10h. Claude Code's best: 22.99%.

In healthcare settings it inspected available datasets, concluded they were too low quality, and wrote a script to generate 1100 synthetic data points from scratch for emergencies, hedging, multilingual etc. Then upsampled 50x for training. Beat Codex on HealthBench by 60%.

For competitive mathematics, it wrote a full GRPO script, launched training with A100 GPUs on hf.co/spaces, watched rewards climb and then collapse, and ran ablations until it succeeded. All fully backed by papers, autonomously.

How does it work? ml-intern makes full use of the HF ecosystem:
- finds papers on arxiv and hf.co/papers, reads them fully, walks citation graphs, pulls datasets referenced in methodology sections and on hf.co/datasets
- browses the Hub, reads recent docs, inspects datasets and reformats them before training so it doesn't waste GPU hours on bad data
- launches training jobs on HF Jobs if no local GPUs are available, monitors runs, reads its own eval outputs, diagnoses failures, retrains

ml-intern deeply embodies how researchers work and think. It knows how data should look and what good models feel like.

Releasing it today as a CLI and a web app you can use from your phone/desktop.

CLI: github.com/huggingface/ml…
Web + mobile: huggingface.co/spaces/smolage…

And the best part? We also provisioned $1k in GPU resources and Anthropic credits for the quickest among you to use.
Replies 136 · Reposts 643 · Likes 4.7K · Views 1.2M
Emir A. Syailendra retweeted
Seth Howes @SethSHowes
I’ve wanted to do this for a decade. But I never did - I refuse to give any company my DNA. It is me. So this week I sequenced my genome entirely at home. Literally on my kitchen table. I never exposed my DNA sequence to the internet. Not at any point. I used a MinION to do the sequencing (it’s smaller + weighs less than an iPhone). I used open-source DNA models for the analysis (Evo2 and AlphaGenome) running locally on a DGX Spark and Mac Studio. I traced mechanisms behind my family’s multigenerational autoimmune conditions that no clinician has been able to understand. When I set out to do this I didn’t know if it would actually work. It does. Your genome is the most private data you will ever have. You probably shouldn’t let it leave your house.
Patrick Collison @patrickc (quoted):

I'm lucky enough to have a great doctor and access to excellent Bay Area medical care. I've taken lots of standard screening tests over the years and have tried lots of "health tech" devices and tools. With all this said, by far the most useful preventative medical advice that I've ever received has come from unleashing coding agents on my genome, having them investigate my specific mutations, and having them recommend specific follow-on tests and treatments.

Population averages are population averages, but we ourselves are not averages. For example, it turns out that I probably have a 30x(!) higher-than-average predisposition to melanoma. Fortunately, there are both specific supplements that help counteract the particular mutations I have, and of course I can significantly dial up my screening frequency. So, this is very useful to know. I don't know exactly how much the analysis cost, but probably less than $100. Sequencing my genome cost a few hundred dollars.

(One often sees papers and articles claiming that models aren't very good at medical reasoning. These analyses are usually based on employing several-year-old models, which is a kind of ludicrous malpractice. It is true that you still have to carefully monitor the agents' reasoning, and they do on occasion jump to conclusions or skip steps, requiring some nudging and re-steering. But, overall, they are almost literally infinitely better for this kind of work than what one can otherwise obtain today.)

There are still lots of questions about how this will diffuse and get adopted, but it seems very clear that medical practice is about to improve enormously. Exciting times!
Replies 418 · Reposts 1.1K · Likes 12.8K · Views 2.4M
Emir A. Syailendra retweeted
Seth Howes @SethSHowes
I sequenced my genome at home, on my kitchen table. I wrote up exactly how I did it - the equipment, protocol, theory, and cost: iwantosequencemygenomeathome.com
Replies 109 · Reposts 762 · Likes 4.7K · Views 1.2M
Emir A. Syailendra retweeted
sui ☄️ @birdabo
🚨 CHINA'S MEDICAL LLMs ARE NOW LIVE IN HOSPITALS. There are 42 LLM-powered doctors and nurses across 21 specialties in a hospital at Tsinghua. They ran around 10k+ simulated patients through it in just days and hit 93.06% accuracy on MedQA. This would usually take doctors years to process. And this isn't just a research paper btw. Hainan Boao recently opened China's first fully AI-native hospital, along with DeepSeek medical LLMs already running in 260+ real hospitals across China. While everyone else publishes benchmarks, China is treating actual patients with it. Insane. China seems to be aggressively pushing medical AI in real hospitals faster than most countries.
Replies 67 · Reposts 272 · Likes 1.4K · Views 113.2K
Emir A. Syailendra retweeted
Michael Albert, MD @MichaelAlbertMD
The cholesterol wars are over. LDL won. New guidelines. Four landmark trials. An oral PCSK9 inhibitor that matches injectables. And data proving we should be treating patients we currently aren't. Here's everything clinicians need to know. 🧵
Replies 175 · Reposts 440 · Likes 3K · Views 1.3M
Emir A. Syailendra retweeted
AHA Science @AHAScience
The 2026 Guideline on the Management of Dyslipidemia was recently published. Here are the top 10 things for health care professionals to know.
Replies 8 · Reposts 243 · Likes 665 · Views 40.2K