'YZ' Yezhou Yang (杨叶舟)

2.1K posts

@prof_yz

Prof_YZ @APGASU ; Researching AI/CV/GenAI @SCAI_ASU ; Amazon Scholar @PrimeVideo ; PhD @umdcs; BA @ZJU_China 奇共赏 疑与析 修栈道 渡陈仓 (Marvels shared together, doubts dissected together; build the plank road openly, cross at Chencang unseen) 🤠

Phoenix, AZ · Joined February 2018
453 Following · 1.8K Followers
'YZ' Yezhou Yang (杨叶舟)
@phillip_isola Very insightful work! Hmm… couldn’t the findings also be framed as suggesting that even modern LLM benchmarks capture only a partial picture of true capabilities?
1 · 0 · 1 · 55
Phillip Isola @phillip_isola ·
Sharing “Neural Thickets”. We find: in large models, the neighborhood around pretrained weights can become dense with task-improving solutions. In this regime, post-training can be easy; even random guessing works. Paper: arxiv.org/abs/2603.12228 Web: thickets.mit.edu 1/
26 · 123 · 911 · 134.4K
'YZ' Yezhou Yang (杨叶舟) retweeted
Yi Ma @YiMaTweets ·
Anyone who claims to be interested in Intelligence should know the optimal control theory developed in the 1960s, or even cybernetics, from as early as the 1940s... My new book tries to set the related history and concepts straight...
Yann LeCun @ylecun

The basic idea of world models is very old. Optimal control folks were using model-based planning in the 1960s (using the "adjoint state" methods, which deep learning people would now call "backprop through time"). But the real question is what you do with this idea and how you reduce it to practice.

13 · 63 · 749 · 77.6K
'YZ' Yezhou Yang (杨叶舟) retweeted
Percy Liang @percyliang ·
I think it’s pretty clear that simulation is the next frontier for AI. The most impressive feats of AI to date come when we have a clear environment + reward, whether it be beating Lee Sedol at Go, winning an IMO gold medal, or writing entire apps from scratch. In these cases, the RL algorithm can try different actions and observe the well-defined consequences in the safety of a docker container.

But what about messy real-world situations involving people? The rewards are unclear, the stakes are high, and you can’t experiment in the real world. Yet these situations are precisely where the next big opportunity in AI lies. To crack this, we need to *simulate* society (“put society into a docker container”). Concretely, this means building a model that can predict what will happen in any given situation (real or hypothetical).

If we can do this, we are limited only by our imagination: predict the future, optimize for better outcomes, answer hypothetical (“what if”) questions. Ultimately, this goes beyond making better decisions; it’s about giving us a better understanding of ourselves and the world. Simulation is the whole enchilada. And this is exactly the research that @simile_ai is working on. Read more here: simile.ai/blog/simulatio…
44 · 111 · 1.1K · 107.4K
'YZ' Yezhou Yang (杨叶舟)
If an inquiry now starts with a customer service line asking, “Are you a real person?”, something has clearly gone awry. 😆 I just did that this morning...
0 · 0 · 0 · 208
'YZ' Yezhou Yang (杨叶舟)
I think the next single-person tech unicorn could be hatched right out of a local FedEx store... These places have everything one needs...
0 · 0 · 1 · 198
'YZ' Yezhou Yang (杨叶舟) retweeted
Yoshua Bengio @Yoshua_Bengio ·
OpenReview is a pillar of progress in the AI research community. Now it needs our support. Along with several of my colleagues, I have pledged to help, and I encourage anyone who can to do the same. openreview.net/donate
23 · 48 · 356 · 60.9K
'YZ' Yezhou Yang (杨叶舟) retweeted
Zihan "Zenus" Wang @wzenus ·
Everything is a world model if you squint hard enough.
29 · 112 · 886 · 55.5K
'YZ' Yezhou Yang (杨叶舟) retweeted
AI Research Impact Rankings @ai_impact_rank ·
CSRankings counts publications in top conferences to rank professors and universities, but this encourages researchers to pursue quantity rather than quality. We propose impactrank.org, a new university ranking system that tries to measure the quality, instead of the quantity, of publications.

How can we measure the quality of publications? We believe that 1) the quality of research is best understood and evaluated by peers in the same research area; and 2) with careful and informed use, LLMs can reveal the implicit quality judgments that peers convey through their citation practices and writing across large volumes of scholarly work.

Hence, we developed a new ranking system in which we analyze research papers from major AI conferences with LLMs. For each paper, we ask an LLM for the five most important papers to that paper: in other words, the five works that most strongly influence the study. By doing this, we trace which papers and authors are consistently seen as inspirational and foundational to new discoveries in the field. We ran the model on all papers from top conferences in machine learning, computer vision, natural language processing, and information retrieval from 2020 to 2025, and filtered references to keep only those from 2000 onwards.

Next, we map these influential authors to their affiliated universities using the CSRankings name–affiliation database. Each time a paper is recognized as one of the “top five references” in another work, its authors and their institutions receive credit. To keep the scoring fair, points are divided by the number of co-authors, ensuring balanced recognition across collaborations.

The result is a new kind of academic ranking: one that rewards universities not just for publishing often, but for producing research that endures, inspires, and drives the field forward. This approach highlights scholarly influence and provides students, researchers, and institutions with a clearer picture of where the most impactful work is happening.

Note that we believe CSRankings has substantially improved university rankings in computer science by replacing subjective, reputation-based measures, such as those in US News, with more objective indicators, but the LLM era allows us to do something potentially better! Due to computational resource limits, we were only able to run a small 7B language model. The project is also primarily led by undergraduate and master's students from Oregon State University and the University of California, Santa Cruz. As a result, the system is very much a work in progress and will inevitably contain errors and blind spots. We actively welcome community feedback, new collaborators, and contributions of GPU compute so that we can run larger LLMs, obtain more reliable results, and improve the methodology.
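The scoring rule described in the thread above (each appearance in a "top five references" list awards one point to the cited paper, split evenly among its co-authors) can be sketched as follows. This is a minimal illustration with hypothetical names, not ImpactRank's actual code, and it omits the LLM reference-extraction and university-mapping steps:

```python
from collections import defaultdict

def score_authors(papers):
    """Accumulate influence credit for authors.

    `papers` maps each analyzed paper to its LLM-selected list of
    most-influential references; each reference is given as the list
    of its authors. Every citation awards one point, divided evenly
    among the cited paper's co-authors.
    """
    credit = defaultdict(float)
    for refs in papers.values():
        for authors in refs:
            share = 1.0 / len(authors)  # fair split across co-authors
            for author in authors:
                credit[author] += share
    return dict(credit)

# Toy example: two analyzed papers and the references credited to each.
papers = {
    "paper_A": [["Alice", "Bob"], ["Carol"]],
    "paper_B": [["Alice"]],
}
print(score_authors(papers))
# → {'Alice': 1.5, 'Bob': 0.5, 'Carol': 1.0}
```

Aggregating author credit to universities would then be a second pass over a name-to-affiliation table, summing each author's score into their institution's total.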
14 · 40 · 370 · 175.5K
'YZ' Yezhou Yang (杨叶舟) retweeted
Yuchen Jin @Yuchenj_UW ·
Linus's views on AI:
- “AI is clearly a bubble, but it will change how most skilled jobs get done.”
- “Vibe coding is great for getting into programming, but it's a horrible thing to maintain.”
- “I'm a huge believer in AI. I'm not a huge believer in the things around AI. I find the market and marketing to be sick. There is going to be a crash.”
195 · 949 · 12.8K · 1.1M
IronRed | SandHive @IronRedSandHive ·
@prof_yz Sounds promising! A simple Zapier–GPT bridge can turn each email into a draft and drop it in Dropbox automatically. Only slightly illegal yet incredibly efficient. A capable researcher knows when to "drop" a research idea.
1 · 0 · 0 · 19
'YZ' Yezhou Yang (杨叶舟) retweeted
Patel Maitreya @patelmaitreya ·
I’m heading to #NeurIPS2025 (Dec 2-6)! ✈️ Excited to present our Spotlight paper: 🔥 EraseFlow: Learning Concept Erasure Policies via GFlowNet-Driven Alignment 🔥 Reach out (DMs open) to meet/chat about diffusion models, unified multimodal models, or anything vision/ML! 👋
1 · 1 · 5 · 416
'YZ' Yezhou Yang (杨叶舟) retweeted
Kostas Daniilidis @KostasPenn ·
I fully support this suggestion @CSProfKGD. Whether you are an author, a reviewer, or an AC, you sign a bilateral anonymity "contract" when you submit or sign up. #ICLR2026 has unintentionally violated the terms of anonymity. They should allow all authors, reviewers, and ACs to retract their papers, reviews, comments, etc.
Kosta Derpanis (sabbatical in Munich 🇩🇪) @CSProfKGD

Suggestion for #ICLR2026 @iclr_conf: Allow authors to withdraw their papers without public disclosure of the submission at the conclusion of the review process. No matter what fixes are implemented now, the review process has been compromised, and is not what the authors agreed to when they first submitted their papers.

0 · 8 · 53 · 18K
'YZ' Yezhou Yang (杨叶舟)
Reading the ICLR reviews flagged as potentially LLM-generated from my AC batch, and finding myself puzzled as to why they were flagged... Have I been so steeped in LLM-generated text that I am not "human" enough anymore? Hmm... 🤔
0 · 0 · 1 · 506
'YZ' Yezhou Yang (杨叶舟) retweeted
WACV @wacv_official ·
Registration for WACV 2026 is open! wacv.thecvf.com/Conferences/20… Early registration ends January 6th.
2 · 4 · 20 · 20.9K
'YZ' Yezhou Yang (杨叶舟)
My last NLP conference was ACL Beijing, 10 years ago. You can tell how practical and adaptive the #NLP community is 👇. It has traveled with me on every international trip since. Next week at #EMNLP2025 Suzhou: friends in NLP, let's talk about the past, the future, and #AI x #Creativity
0 · 0 · 7 · 697