Feng Liu

208 posts


@AlexFengLiu1

Machine Learning Researcher | Senior Lecturer (US Associate Professor) @UniMelb. Visiting Scientist @RIKEN_AIP_EN. Focusing on Statistical Trustworthy ML.

Melbourne, Australia · Joined November 2018
582 Following · 465 Followers

Pinned Tweet
Feng Liu@AlexFengLiu1·
Excited to share our ICML 2026 Hypothesis Testing Workshop in Seoul this July! @icmlconf 🎉

This workshop aims to bring together researchers developing modern hypothesis testing methodology and applying it to machine learning problems such as robustness, distribution shift, security, medicine, and LLM evaluation. In other words, if you care about how we make ML claims rigorous, this workshop is for you.

We now have four confirmed speakers: Arthur Gretton @ArthurGretton, Yao Xie @yaoxie21851119, Bo Li @uiuc_aisecure, and Yisong Yue @yisongyue.

The organizing team includes Xiuyuan Cheng (Duke), Feng Liu @AlexFengLiu1, Lester Mackey @LesterMackey, Shayak Sen @shayaksen, Danica J. Sutherland @d_j_sutherland, and Nathaniel Xu (UBC).

📌 Submission deadline: 10 May 2026
📌 Notification: 26 May 2026
📌 Camera-ready: 17 June 2026
📌 Workshop date: July 10 or 11, 2026 (TBA)

🚩 More information below!
🔗 Website: testing.ml
🔗 Submission portal: openreview.net/group?id=ICML.…

We’re also recruiting PC members/reviewers.
🔗 Reviewer interest form: docs.google.com/forms/d/e/1FAI…

🏁 Please feel free to share this with colleagues, collaborators, and students who may be interested. #ICML #ICML26
Feng Liu@AlexFengLiu1·
Deep learning under distribution shift often relies on maximizing "disagreement discrepancy" to bound errors without target labels. But are we actually optimizing it? In our new #ICLR2026 paper, we prove existing surrogate losses are mathematically flawed (not Bayes consistent). 🧵👇

Optimization here is a tug-of-war: the model wants to agree on source data but disagree on target data. We show prior surrogates pull with the wrong strength. In certain regions, perfectly optimizing the surrogate completely fails to optimize the true objective! 🛑

💡 The fix: we introduce the first provably Bayes-consistent surrogate. By pairing a standard cross-entropy agreement loss with a novel, symmetric disagreement loss, we restore the balance and mathematically close the optimality gap. ⚖️

In practice, our consistent surrogate yields:
📈 Larger (better) disagreement discrepancy in ~80% of tested scenarios
🛡️ Strong robustness against adversarial target data
🕵️ Promising improvements in statistical power for harmful-shift detection

Come say hi at our poster!
📍 Pavilion 4, P4-#4101
📅 Sat, Apr 25 | AM – PM
🔗 Paper: openreview.net/forum?id=VwCyR…
w/ @NGMarchant, Andrew Cullen, and Sarah Erfani @iclr_conf
#ICLR #MachineLearning #LearningTheory #DistributionShift
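For readers unfamiliar with the term: the "disagreement discrepancy" of a critic h' with respect to a model h is commonly defined as the critic's disagreement rate with h on (unlabeled) target data minus its disagreement rate on source data. A minimal sketch of that quantity, assuming hard label predictions; the surrogate losses studied in the paper are not reproduced here:

```python
import numpy as np

def disagreement_discrepancy(h_src, critic_src, h_tgt, critic_tgt):
    # Empirical disagreement discrepancy of a critic h' w.r.t. a model h:
    # disagreement rate on target data minus disagreement rate on source data.
    # A critic trained to agree on source but disagree on target makes this
    # large, which in turn yields a pessimistic bound on target error.
    dis_src = np.mean(np.asarray(h_src) != np.asarray(critic_src))
    dis_tgt = np.mean(np.asarray(h_tgt) != np.asarray(critic_tgt))
    return dis_tgt - dis_src
```

Finding a critic that maximizes this quantity is the hard part; the paper's contribution concerns the surrogate loss used in that search.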
Feng Liu@AlexFengLiu1·
Evaluating machine unlearning (MU) remains a fundamental challenge, as existing methods typically require retraining reference models or performing membership inference attacks. These paradigms often rely on prior access to training configurations or supervision labels, rendering them impractical for real-world deployment.

To address this, we shift the focus from individual sample analysis toward evaluating whether a subset exhibits training-induced internal dependencies as a collective signal. This approach is grounded in the insight that, since model parameters are shaped by their training data, the output representations of a trained subset will exhibit significant statistical dependence. We propose Split-half Dependence Evaluation (SDE), which uses the Hilbert–Schmidt Independence Criterion (HSIC) to assess these dependencies, evaluating unlearning without retraining or auxiliary classifiers.

In controlled experiments, SDE effectively distinguishes whether a given subset is part of the training dataset. Furthermore, evaluations on various unlearning algorithms demonstrate that SDE provides robust verification of unlearning success even in settings where existing evaluations fail to provide conclusive evidence. @iclr_conf #iclr #iclr26 #machinelearning #machineUnlearning #ai
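As a rough illustration of the dependence test underlying this idea, here is a biased HSIC estimator with RBF kernels. Splitting each representation vector into two halves is one plausible reading of "split-half" and is an assumption here, not necessarily the paper's exact procedure:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    # Biased HSIC estimator: trace(K H L H) / (n - 1)^2, H = centering matrix.
    n = X.shape[0]
    K, L = rbf_kernel(X, sigma), rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def split_half_dependence(reps):
    # SDE intuition (illustrative split): if a subset was trained on, the two
    # halves of its representations co-vary; after true unlearning they should not.
    d = reps.shape[1] // 2
    return hsic(reps[:, :d], reps[:, d:])
```

A higher score for a candidate subset than for held-out data would suggest the subset still carries training-induced dependence.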
Feng Liu@AlexFengLiu1·
At ICLR 2026, we will present our work, CARPRT, a training-free, black-box method for improving zero-shot vision-language models through class-aware prompt reweighting. Unlike existing approaches that assign global weights to prompts, CARPRT computes class-aware prompt relevance directly from unlabeled data. This simple yet effective strategy better captures prompt–class dependencies, leading to consistent gains across diverse datasets and backbones. Notably, CARPRT requires no training, gradients, or model access, making it a plug-and-play solution for real-world applications. Extensive experiments show that modeling class-aware prompt importance is key to unlocking the full potential of prompt ensembling. @ICLR #ICLR #ICLR26 #VLM #BlackBox
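A rough sketch of class-aware prompt reweighting in a CLIP-style zero-shot setup. The weighting rule below (a per-class softmax over prompts of the average image–text similarity on the unlabeled pool) is a placeholder for illustration, not CARPRT's actual formula, and all names are hypothetical:

```python
import numpy as np

def class_aware_prompt_ensemble(img_emb, txt_emb):
    # img_emb: (n_images, d) L2-normalized embeddings of *unlabeled* images.
    # txt_emb: (n_prompts, n_classes, d) embeddings of each prompt/class pair.
    sims = np.einsum('nd,pcd->npc', img_emb, txt_emb)   # image-text similarities
    # Placeholder class-aware weight: softmax over prompts of the average
    # similarity to the unlabeled pool, computed separately for each class.
    avg = sims.mean(axis=0)                              # (n_prompts, n_classes)
    w = np.exp(avg) / np.exp(avg).sum(axis=0, keepdims=True)
    logits = np.einsum('npc,pc->nc', sims, w)            # weighted ensemble score
    return logits.argmax(axis=1)                         # zero-shot predictions
```

Note that, in line with the tweet, this requires no labels, gradients, or access to model internals, only embeddings.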
Yisong Yue@yisongyue·
Participation in peer review at @NeurIPSConf (or @icmlconf, @iclr_conf, @CVPR, @COLM_conf, etc.) can be considered providing a "service" under U.S. sanctions law. U.S. law generally prohibits providing services to designated sanctioned individuals or entities, including cases where the process is effectively providing a service to a sanctioned institution. Violations can lead to significant fines and compliance overhead; willful violations can carry criminal exposure to the organizers and board members. Note that the "informational materials" exemption (Berman Amendment) likely does not apply here, based on legal advice.
Feng Liu@AlexFengLiu1·
@mengyer However, OFAC can in fact permit academic services involving parties on the sanctions list; it has stated this clearly since 2004. The reason is simple: academic information should be freely distributed. So it may be that only NeurIPS wants sanctions this strict. See the letter here: home.treasury.gov/news/press-rel…
Feng Liu@AlexFengLiu1·
@jiqizhixin @yazhe_li If you look at the file OFAC released in 2004, you can see that OFAC did not even ban publications produced by people in sanctioned countries. Is the NeurIPS Foundation going even further than official US guidance?? See the file here: home.treasury.gov/news/press-rel…
机器之心 JIQIZHIXIN@jiqizhixin·
Breaking: Academic freedom no more. The NeurIPS Foundation has announced it will no longer accept submissions from US-sanctioned institutions.
Feng Liu@AlexFengLiu1·
@yuxiangw_cs @PandaAshwinee But some advanced models will simply ignore the watermark and give normal reviews. Guardrails can prevent these injections.
Yu-Xiang Wang@yuxiangw_cs·
@PandaAshwinee I don’t know how advanced the approach they adopted is… ICML hasn’t revealed the details yet. But those phrases don’t have to be weird to be effective. They just have to each provide a small statistical advantage.
Yu-Xiang Wang@yuxiangw_cs·
AI watermarking in action at #ICML's avant-garde peer-review experiments this year! Quite a few casualties in my SAC batch (an example below, hopefully appropriately redacted).
Feng Liu@AlexFengLiu1·
@SharonYixuanLi Thanks for the great efforts made to improve the reviewing process!
Sharon Li@SharonYixuanLi·
#ICML2026 has released the review assignments. This may have been the most complex matching process ICML has executed to date, both in scale and in the number of constraints. The matching balances expertise, senior/junior reviewer composition for each paper, LLM reviewing policy compatibility, geographic diversity, and other fairness and load constraints, all while operating at unprecedented scale. Huge thanks to the organizing team and to our reviewers for stepping up. If you’re curious what large-scale constrained optimization looks like in the wild… well, you’re reviewing it. 😄
Feng Liu reposted
Arthur Gretton@ArthurGretton·
At #NeurIPS ? Visit our posters! 🧵 Demystifying Spectral Feature Learning for Instrumental Variable Regression: #2600, Wed 11am Regularized least squares learning with heavy-tailed noise is minimax optimal: #3012, Wed 4:30pm ✨spotlight✨ 1/2
Feng Liu@AlexFengLiu1·
Will the current discussions be deleted too? @iclr_conf
Feng Liu@AlexFengLiu1·
We are facing the most serious privacy-leakage issue in ML history, and it may persist for a while. We are already desperate for high-quality reviews; if we cannot even protect reviewers' privacy, how many declined reviewer invitations will we receive in the future?
Feng Liu reposted
Peter Richtarik@peter_richtarik·
I am an AC for ICLR 2026. One of the papers in my batch was just withdrawn. The authors wrote a brief response explaining why the reviewers failed at their job. I agree with most of their comments. The authors gave up. They are fed up. Just like many of us. I understand. We pretend the emperor has clothes, but he is naked. Here is the final part of their withdrawal notice. I took the liberty of making it public, to highlight that what we are doing with AI conference reviews these last few years is, basically, madness.

---

Comment: We thank the reviewers for their time. However, upon reading the reviews for our paper, it became immediately apparent that the four "reject" ratings are not based on good-faith academic disagreement, but on a critical failure to read the submitted paper. The reviews are rife with demonstrably false claims that are directly contradicted by the text. The core justifications for rejection rely on asserting that key components are "missing" when they are explicitly detailed in the manuscript. Some specific examples (many of the claims are simply fabricated):

Claim: Harder tasks like GSM8K are missing. Fact: GSM8K results appear in many tables, such as Table 2 (Section 4.2) and Appendix G.
Claim: The method does not use per-layer ranks. Fact: This is the entire point of our method. The reviewer clearly mistook our method for the baselines (Section 2, Table 1).
Claim: The GP kernel is not specified. Fact: It is specified in Appendix E (Table 6).
Claim: There is no ablation of the method's three stages. Fact: Section 4.4 ("Ablation Study") and Appendix J are dedicated to this.

Reviewers have a fundamental responsibility to read and evaluate the work they are assigned. The nature of these errors is so fundamental, so systemic in overlooking explicit content, that it goes far beyond what "limited time" or "oversight" can explain. This work has gone through several rounds of revision over the last year. In earlier submissions, the paper usually received borderline or weak-accept scores. Numerous signs strongly suggest that some reviewers are relying entirely on AI tools to automatically generate peer reviews, rather than fulfilling their fundamental responsibility of personally reading and evaluating manuscripts.

We strongly protest this. It is a gross disrespect to the authors, a flagrant desecration of the reviewer's duty, and it fundamentally undermines the integrity of the entire peer-review process. Given that the reviews are not based on the actual content of our paper, we have decided to withdraw the submission. We leave this comment so that future readers of the OpenReview page are aware that the items described as "missing" are already present in the submitted manuscript. These negative reviews are factually unsound and do not reflect the content of the paper. We cannot and will not accept an assessment that is not based on the work we actually submitted.
Feng Liu reposted
ICLR@iclr_conf·
Our reviewers are wrapping things up 💻 The reviews will be released by the end of today (11/11, AOE)!!! Thanks for your patience 😊