Prithal Bhardwaj
@NotesByPrithal · 511 posts
AI tools. Startup ideas. Projects I build. Sharing everything I learn along the way. Creator @TheSoloEntrepreneur (25K+)
Bengaluru, India · Joined February 2023
104 Following · 50 Followers

@TheGeorgePu Fascinating point. What made you think about this?

@nolansym This is a great perspective, thanks for sharing!

@Dr_Singularity Fascinating point. What made you think about this?

@signulll 100% this. Could not have put it better myself.

@signulll Genuinely helpful, thank you for posting this.

@Trae_ai Genuinely helpful, thank you for posting this.

Holy shit... Stanford just proved that GPT-5, Gemini, and Claude can't actually see.
They removed every image from 6 major vision benchmarks.
The models still scored 70-80% accuracy.
They were never looking at your photos. Your scans. Your X-rays.
Here's what's really going on: ↓
The paper is called MIRAGE. Co-authored by Fei-Fei Li.
They tested GPT-5.1, Gemini-3-Pro, Claude Opus 4.5, and Gemini-2.5-Pro across 6 benchmarks -- medical and general.
Then silently removed every image. No warning. No prompt change.
The models didn't even notice.
They kept describing images in detail. Diagnosing conditions. Writing full reasoning traces.
From images that were never there.
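The thread doesn't show the paper's eval harness, but the ablation itself is trivial to sketch. Here's a minimal Python version; `ask_model` is a hypothetical stand-in for whatever multimodal API you call, not the paper's code:

```python
# Minimal sketch of the ablation described above. `ask_model` is a
# hypothetical wrapper around whatever multimodal chat API you use;
# it is NOT the paper's harness.

def ask_model(question: str, image_bytes: bytes | None = None) -> str:
    """Send a question (and optionally an image) to a VLM; return its answer."""
    raise NotImplementedError("plug your model API in here")

def benchmark_accuracy(benchmark: list[dict], with_images: bool) -> float:
    """Score the model, silently dropping every image when asked.

    Each item has 'image', 'question', and 'answer' keys. The prompt
    text is identical in both conditions; the model is never told
    whether the image is actually attached. Exact-match scoring is
    crude but enough for a sketch.
    """
    correct = 0
    for item in benchmark:
        image = item["image"] if with_images else None
        prediction = ask_model(item["question"], image)
        correct += prediction.strip().lower() == item["answer"].strip().lower()
    return correct / len(benchmark)

# Per the thread, the no-image score stays at 70-80% instead of
# collapsing to chance:
# print(benchmark_accuracy(benchmark, with_images=True))
# print(benchmark_accuracy(benchmark, with_images=False))
```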
Stanford calls it the "mirage effect."
Not hallucination. Something worse.
Hallucination = making up wrong details about a real input.
Mirage = constructing an entire fake reality and reasoning from it confidently.
The models built imaginary X-rays, described fake nodules, and diagnosed conditions -- all from text patterns alone.
But that's not the scary part.
They trained a "super-guesser" -- a tiny 3B-parameter, text-only model. Zero vision capability.
Fine-tuned it on the largest chest X-ray benchmark (696,000 questions). Images removed.
It beat GPT-5. It beat Gemini. It beat Claude.
It beat actual radiologists.
Ranked #1 on the held-out test set. Without ever seeing a single X-ray.
The reasoning traces? Indistinguishable from real visual analysis.
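Reproducing a 3B fine-tune is beyond a tweet, and the recipe isn't quoted here. But you can probe the same claim, that answers are predictable from question text alone, with something much dumber. A sketch using a bag-of-words classifier as the blind guesser (my stand-in, not the paper's model):

```python
# Probe for "answers are predictable from question text alone": train a
# text-only classifier on (question -> answer) pairs with every image
# thrown away. A crude stand-in for the paper's 3B "super-guesser",
# not a reproduction of it. Treats answers as class labels, which suits
# yes/no and multiple-choice benchmarks.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def blind_guesser_accuracy(questions: list[str], answers: list[str]) -> float:
    """Held-out accuracy of a model that never sees a single image."""
    q_train, q_test, a_train, a_test = train_test_split(
        questions, answers, test_size=0.2, random_state=0
    )
    guesser = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    guesser.fit(q_train, a_train)
    return guesser.score(q_test, a_test)

# Anything far above chance means the questions leak their answers,
# and a "vision" score on this benchmark proves nothing about vision.
```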
Now here's what should terrify you:
When the models fake-see medical images, their mirage diagnoses are heavily biased toward the most dangerous conditions.
STEMI. Melanoma. Carcinoma.
Life-threatening diagnoses -- from images that don't exist.
230 million people ask health questions on ChatGPT every day.
They also found something wild:
→ Tell a model "there's no image, just guess" -- performance drops
→ Silently remove the image and let it assume it's there -- performance stays high
The model enters "mirage mode." It doesn't know it can't see. And it performs BETTER when it doesn't know it's blind.
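In code, the two conditions look like this (same hypothetical `ask_model` stub as before; the paper's exact prompt wording isn't quoted in the thread):

```python
# The two conditions from the finding above. Only the prompt differs;
# no image is attached in either case.

def ask_model(question: str, image_bytes: bytes | None = None) -> str:
    raise NotImplementedError("same hypothetical model wrapper as before")

def silent_condition(question: str) -> str:
    # Image silently removed; the model assumes it can still see one.
    return ask_model(question)

def explicit_condition(question: str) -> str:
    # The model is told up front that there is no image.
    prompt = (
        "There is no image attached. Just guess the answer "
        "as best you can.\n\n" + question
    )
    return ask_model(prompt)

# Per the thread: accuracy stays high under silent_condition and drops
# under explicit_condition. Knowing it's blind makes the model worse.
```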
When Stanford applied its cleanup method, B-Clean, to existing benchmarks, it removed 74-77% of the questions.
Three-quarters of "vision" benchmarks don't test vision.
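The thread doesn't say how B-Clean decides what to drop, so treat this only as a guess at the shape of such a filter: keep a question just when a blind guesser gets it wrong.

```python
# Naive guess at the shape of a B-Clean-style filter. The paper's
# actual criteria aren't described in this thread; this version keeps a
# question only when a text-only guesser fails it, so every surviving
# item at least requires the image.

def clean_benchmark(benchmark: list[dict], blind_guess) -> list[dict]:
    """Drop every item a blind (no-image) guesser already answers correctly.

    `blind_guess` is any callable question -> answer that never sees an
    image, e.g. the classifier from the sketch above.
    """
    return [
        item for item in benchmark
        if blind_guess(item["question"]).strip().lower()
        != item["answer"].strip().lower()
    ]

# On the thread's numbers, a filter in this spirit removes roughly
# three-quarters of the questions in existing "vision" benchmarks.
```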
Every leaderboard. Every "multimodal breakthrough." Every benchmark score you've seen this year.
Built on mirages.
Code is open-sourced. Paper is live on arXiv.
If you're building anything with multimodal AI -- especially in healthcare -- read this paper before you ship.
(Link in the comments)

@elonmusk @Kekius_Sage Such an important point. More people should know this.

@Kekius_Sage The universe would be even stranger if it didn’t