Ehsan Adeli

1.1K posts


@eadeli

Prof. at @Stanford @StanfordMed @StanfordPSY; @StanfordSVL, @StanfordAILab. AI, Computer Vision, Neuroscience, Healthcare, Medical Image Analysis.

Stanford, CA · Joined July 2009
733 Following · 2K Followers
Ehsan Adeli retweeted
Anshul Kundaje@anshulkundaje·
OpenAI is full of sellouts. The leaders of this company & apparently many of the top brass are more than willing to compromise on anything to get their contracts & profits. I think it's time to make choices. This company cannot be trusted to be a responsible player in this space.
Joshua Achiam@jachiam0

The right way to make decisions about how AI can or can't be used in military contexts is through the democratic process, the legislative process, and recognized legal authorities. Contracts with the private sector aren't the right place to set defense policy and priorities.

Ehsan Adeli retweeted
Kiana Ehsani@ehsanik·
I don’t think I have ever felt this many mixed emotions in one day! Refreshing the news… Pray for a free Iran! Pray for the safety of civilians!
Ehsan Adeli@eadeli·
What’s the right space to diffuse in, pixels or latents? Why not both! Latent Forcing (led by @BaadeAlan) orders generation coarse-to-fine: DINO features first, pixels later, like a chain-of-thought in latent space. Faster convergence + higher quality, end-to-end. Order matters!!
Alan Baade@BaadeAlan

What's the right space to diffuse in: Raw Data or Latents? Why not both! In Latent Forcing, we order a joint diffusion trajectory to reveal Latents before Pixels, leading to improved convergence while being lossless at encoding and end-to-end at inference. w/ @drfeifei+... 1/n

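A rough way to picture the "latents before pixels" ordering: give the latent stream an earlier position on the joint diffusion trajectory than the pixel stream, so semantic structure is resolved first. The sketch below is only an illustration under that reading; the model interface, encode_dino, and latent_offset are assumed names, not the released Latent Forcing code.

```python
# Minimal sketch of the "latents before pixels" ordering described above.
# Hypothetical names throughout (model, encode_dino, latent_offset): this is
# NOT the released Latent Forcing code, only an illustration of placing latent
# tokens earlier on a joint diffusion trajectory than pixel tokens, so that
# semantic features are denoised (revealed) before low-level detail.
import torch

def training_step(model, images, encode_dino, latent_offset=0.3):
    B = images.shape[0]
    latents = encode_dino(images)        # semantic features, e.g. DINO, shape (B, D)
    pixels = images.flatten(1)           # raw pixel targets, shape (B, C*H*W)

    # One "global" time per example in [0, 1); latents sit earlier on the
    # trajectory, i.e. they are less noisy than pixels at the same global time.
    t = torch.rand(B, device=images.device)
    t_latent = (t - latent_offset).clamp(min=0.0)
    t_pixel = t

    noisy_latents = (1 - t_latent[:, None]) * latents + t_latent[:, None] * torch.randn_like(latents)
    noisy_pixels = (1 - t_pixel[:, None]) * pixels + t_pixel[:, None] * torch.randn_like(pixels)

    # A single network jointly denoises both streams, conditioned on both times,
    # so pixel denoising can attend to the (earlier, cleaner) latent stream.
    pred_latents, pred_pixels = model(noisy_latents, noisy_pixels, t_latent, t_pixel)
    loss = ((pred_latents - latents) ** 2).mean() + ((pred_pixels - pixels) ** 2).mean()
    return loss
```

At sampling time the same offset means the latent stream reaches low noise first, so the remaining pixel steps condition on nearly clean semantics, which is the coarse-to-fine ordering the thread describes.
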
Ehsan Adeli retweeted
Alan Baade@BaadeAlan·
+@eadeli +@jcjohnss Generation is easier in some orders than others: buildings are planned before construction. For latent versus pixel diffusion, we ask: what if only early diffusion timesteps should be compressed, while later timesteps can maintain low-level detail?
Ehsan Adeli retweeted
Stanford HAI@StanfordHAI·
If AI can’t estimate speed, distance, and size, robots cannot be reliably safe. QuantiPhy measures that weakness directly, and charts which models are improving fastest. Read more: hai.stanford.edu/news/ai-cant-d…
Ehsan Adeli retweeted
Chieh-Ju Chao MD, FACC@ChiehJuChao1·
Standard natural language metrics fail in echo AI. Meet EchoGraph. 🏥🤖 We show EchoGraph F1 has 2.8x higher error sensitivity than RadGraph F1 and explains more variance (R^2: 0.803 vs. 0.578), for safer echo AI. Read our @npjDigitalMed paper for more details: rdcu.be/d7v2W
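For context on why a graph-based score is more error-sensitive than n-gram metrics: RadGraph-style F1 compares structured entity and relation tuples extracted from the generated and reference reports, so a single wrong clinical value costs real score. The sketch below is a generic illustration, not the EchoGraph implementation; the tuples and the extraction step are assumptions.

```python
# Generic graph-overlap F1, in the spirit of RadGraph/EchoGraph F1 but NOT the
# paper's implementation: compare structured (entity/relation) tuples extracted
# from a generated report against those from a reference report. The tuples
# below are hand-written; the real metrics use a trained extraction model.
def graph_f1(pred_tuples: set, ref_tuples: set) -> float:
    if not pred_tuples and not ref_tuples:
        return 1.0
    tp = len(pred_tuples & ref_tuples)
    precision = tp / len(pred_tuples) if pred_tuples else 0.0
    recall = tp / len(ref_tuples) if ref_tuples else 0.0
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

ref = {("LVEF", "measurement"), ("LVEF", "has_value", "30%")}    # reference report
pred = {("LVEF", "measurement"), ("LVEF", "has_value", "60%")}   # generated report
print(graph_f1(pred, ref))  # 0.5: one wrong clinical value halves the score,
                            # while word-overlap metrics would barely notice.
```
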
Tiange Xiang@xxtiange·
‼️VLMs/MLLMs do NOT yet understand the physical world from videos‼️ In our recent work, we found that even the most advanced AI models still lag behind humans in one key aspect: reasoning about the kinematic properties of objects from videos.
Takeaways:
1. ChatGPT 5.1 leads overall among 21 advanced VLMs, followed by Gemini 2.5 Pro/Flash.
2. Grok 4.1 delivers impressive performance at the lowest API cost.
3. Qwen3-VL is the top-performing open-source model.
Read here: quantiphy.stanford.edu 🧵1/N
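Benchmarks for quantitative physical estimates typically grade free-form answers by relative error rather than exact string match. The following is purely illustrative and not the QuantiPhy scoring code; parse_number and the 25% tolerance are assumptions.

```python
# Illustrative scoring for numeric physical estimates (speed, distance, size).
# NOT the QuantiPhy code: just one common way such benchmarks grade free-form
# answers, i.e. extract a number and accept it if the relative error is within
# a tolerance. parse_number and the 25% tolerance are assumptions.
import re

def parse_number(answer: str):
    """Pull the first numeric value out of a model's free-form answer."""
    m = re.search(r"-?\d+(?:\.\d+)?", answer.replace(",", ""))
    return float(m.group()) if m else None

def is_correct(answer: str, ground_truth: float, rel_tol: float = 0.25) -> bool:
    value = parse_number(answer)
    return value is not None and abs(value - ground_truth) <= rel_tol * abs(ground_truth)

# Example: the object in the clip moves at roughly 12 m/s.
print(is_correct("The car is moving at about 10 m/s.", 12.0))  # True (within 25%)
print(is_correct("Maybe 40 m/s?", 12.0))                        # False
```
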
Ehsan Adeli retweeted
Yue Zhao@__yuezhao__·
Discrete or continuous tokens? Or even tokenizer-free? The visual modeling debate rages on, but for now, let me introduce L24SQ, a provably optimal, regularizer-free quantizer with a large codebook (~200k), achieving SoTA reconstruction-compression tradeoff and generative power!
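For readers new to the discrete-vs-continuous token debate, plain vector quantization looks like the sketch below: each continuous token snaps to its nearest entry in a codebook (sized here to mirror the ~200k mentioned above). This is generic VQ for context only, not the L24SQ construction.

```python
# Plain vector quantization for context only, NOT the L24SQ construction from
# the tweet: each continuous token is replaced by its nearest codebook entry,
# with a straight-through estimator so the encoder still receives gradients.
# The ~200k codebook size is chosen only to mirror the scale mentioned above.
import torch

def quantize(z, codebook):
    """z: (N, D) continuous tokens; codebook: (K, D) learned entries."""
    dists = torch.cdist(z, codebook)       # (N, K) pairwise distances
    idx = dists.argmin(dim=1)              # discrete token ids
    z_q = codebook[idx]                    # quantized vectors
    z_q = z + (z_q - z).detach()           # straight-through gradient trick
    return z_q, idx

codebook = torch.randn(200_000, 16)        # illustrative sizes
tokens = torch.randn(64, 16)
z_q, codes = quantize(tokens, codebook)
print(codes.shape, z_q.shape)              # torch.Size([64]) torch.Size([64, 16])
```
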
Ehsan Adeli retweeted
Peter Richtarik@peter_richtarik·
I am an AC for ICLR 2026. One of the papers in my batch was just withdrawn. The authors wrote a brief response, explaining why the reviewers failed at their job. I agree with most of their comments. The authors gave up. They are fed up. Just like many of us. I understand. We pretend the emperor has clothes, but he is naked. Here is the final part of their withdrawal notice. I took the liberty to make it public, to highlight that what we are doing with AI conference reviews these last few years is, basically, madness.

---

Comment: We thank the reviewers for their time. However, upon reading the reviews for our paper, it became immediately apparent that the four "reject" ratings are not based on good-faith academic disagreement, but on a critical failure to read the submitted paper. The reviews are rife with demonstrably false claims that are directly contradicted by the text. The core justifications for rejection rely on asserting that key components are "missing" when they are explicitly detailed in the manuscript. Some specific examples follow (and many are outright fabricated claims):

Claim: Harder tasks like GSM8K are missing. Fact: GSM8K results are in many tables, like Table 2 (Section 4.2) and Appendix G.

Claim: The method does not use per-layer ranks. Fact: This is the entire point of our method. The reviewer clearly mistook our method for the baselines (Section 2, Table 1).

Claim: The GP kernel is not specified. Fact: It is specified in Appendix E (Table 6).

Claim: There is no ablation of the method's three stages. Fact: Section 4.4 ("Ablation Study") and Appendix J are dedicated to this.

Reviewers have a fundamental responsibility to read and evaluate the work they are assigned. The nature of these errors is so fundamental, so systemic in overlooking explicit content, that it goes far beyond what "limited time" or "oversight" can explain. This work has gone through several rounds of revision over the last year. In earlier submissions, the paper usually received borderline or weak-accept scores. Numerous signs strongly suggest that some reviewers are relying entirely on AI tools to automatically generate peer reviews, rather than fulfilling their fundamental responsibility of personally reading and evaluating manuscripts. We strongly protest this. This is a gross disrespect to the authors. It is a flagrant desecration of the reviewer's sacred duty. It fundamentally undermines the integrity of the entire peer-review process.

Given that the reviews are not based on the actual content of our paper, we have decided to withdraw the submission. We leave this comment so that future readers of the OpenReview page are aware that the items described as "missing" are already present in the submitted manuscript. These negative reviews for this submission are factually unsound and do not reflect the content of the paper. We cannot and will not accept an assessment that is not based on the work we actually submitted.
Ehsan Adeli@eadeli·
Learn about "spatial intelligence" from the world’s best! Explore the evolution of AI from symbolic reasoning to LLMs. And learn why the next transformative leap lies in spatial intelligence, enabling machines to perceive, model, and act within the physical world.
Fei-Fei Li@drfeifei

AI’s next frontier is Spatial Intelligence, a technology that will turn seeing into reasoning, perception into action, and imagination into creation. But what is it? Why does it matter? How do we build it? And how can we use it? Today, I want to share with you my thoughts on building and using world models to unlock spatial intelligence in this essay below. 1/n

Ehsan Adeli@eadeli·
"A society should be judged not by how it treats its outstanding citizens but by how it treats its criminals." --Dostoyevsky I think this means we have the best country. We give them leadership positions in the government.
Ehsan Adeli retweeted
Bailey Trang@baileytrangn·
📣 I am thrilled to announce that our paper has been accepted to 𝗡𝗲𝘂𝗿𝗜𝗣𝗦 𝟮𝟬𝟮𝟱! This would not have been possible without the support and guidance of my advisors, @eadeli, Fei-Fei Li, Tal Arbel, and the dedication of amazing collaborators. San Diego, here we come! 🏖️
Ehsan Adeli@eadeli·
If you are at #ICCV2025, check out our paper UniEgoMotion, a unified model for egocentric motion analysis (Reconstruction, Forecasting & Generation). It gives your smart glasses the power to understand, predict, and even generate your body’s 3D motion. @chaitanya100100, @jcniebles
Chaitanya Patel@chaitanya100100

[1/n] What if your smart glasses could reconstruct, forecast, and even generate 3D human motion from egocentric view? Introducing UniEgoMotion — our new unified model for egocentric motion tasks.

Ehsan Adeli retweeted
Tiange Xiang@xxtiange·
Excited to release 💫GaussianVerse 1.5💫: a large-scale, high-quality dataset of diverse 3D Gaussian fittings, built as a gold-standard reference for 3D generative research.
🌎Start here: gaussianverse.stanford.edu
📜The dataset was first introduced in our ICCV paper GaussianAtlas: cs.stanford.edu/~xtiange/proje… (Come by our poster #1519 on the 22nd if you’re around!)
🧐What’s new in v1.5?
1. Significantly improved fitting algorithm: our data now have higher quality and fidelity.
2. Exhaustive OT in the sphere-offsetting step (for every possible point pair), yielding better global structure.
3. >250K fittings released, 40K more than reported in the paper.
🤔Bonus: How to use GaussianVerse? Check out the page and you will find some leads!
A huge shout-out to the collaborators who made this release possible! @drfeifei @eadeli @RyanNeverWrong
If you use GaussianVerse in your work, please share your findings! We’re collecting feedback and are open to collaborations.
#3D #3DVision #AIGC #GenerativeAI #NeRF #ComputerVision #ICCV
Ehsan Adeli retweeted
Azade Farshad@azadef·
Our next speaker is Stefan Roth @stefanroth Join us at room 316C for a very interesting talk on unbiased scene graph generation 🎉 #ICCV2025 @ICCVConference
Azade Farshad@azadef

Heading to @ICCVConference in Hawaii? 🏝️ Join us at the 3rd #SG2RL Workshop on Scene Graphs and Graph Representation Learning! Hear from our amazing lineup of speakers: @tolga_birdal @stefanroth @BusamBenjamin @anfurnari, Yuta Nakashima Schedule: sites.google.com/view/sg2rl
