Chong Liu

88 posts

@ChongLiuCS

Assistant Professor of CS @UAlbany @SUNY. Machine learning, Optimization, Drug discovery, LLM math reasoning. Area Chair @ICLR_conf’26.

New York, USA · Joined January 2018
458 Following · 1.1K Followers
Chong Liu@ChongLiuCS·
Thank you Zhuokai for introducing our RLMesh work, recently accepted by @AISTATS_conf 2026! Paper link: arxiv.org/pdf/2603.02066
Zhuokai Zhao@zhuokaiz

Working on something closer to scientific computing and AI for science was a refreshing change of scene from the usual LLM grind. We are excited to introduce our #AISTATS2026 paper, RLMesh, which uses RL to adaptively select mesh points for training neural PDE surrogates.

The backstory: partial differential equations (PDEs) govern an enormous range of physical phenomena, including fluid dynamics, heat transfer, subsurface flow, and weather systems. Solving them numerically is foundational across science and engineering, but also expensive: classical solvers discretize the domain onto a fine mesh, and the cost scales steeply with resolution.

This is why neural surrogates like Fourier Neural Operators (FNOs), pioneered by @zongyili_nyu, have been so exciting: they learn the solution operator directly from data and can produce predictions dramatically faster at inference time. However, training these surrogates requires thousands of expensive solver runs on fine grids, and the standard approach trains on the full solution field for every instance, even in regions where the solution is smooth and carries little learning signal.

So our key motivation was: what if the surrogate doesn't need to train on the full solution field, and what if we could learn which points to train on?

We frame mesh point selection as a reinforcement learning (RL) problem. For each PDE instance, an RL agent sequentially picks a small budget of grid locations where the solver should be queried. The policy learns to place points where they actually matter:
- For Burgers' equation, it learns to focus on shock fronts;
- For Darcy flow, it learns to concentrate along high-contrast permeability channels and boundary layers.

In practice, the RL policy takes the PDE input and sequentially selects mesh points, the solver provides solutions at those sparse locations, and the FNO trains on the accumulated non-uniform data.
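For intuition, the sequential select-then-query loop described above can be sketched on a toy 1D problem. Everything here is an illustrative assumption, not the paper's implementation: the "policy" is a greedy gradient-magnitude heuristic standing in for the learned RL agent, and `solver` is a cheap function with a sharp front standing in for an expensive PDE solve.

```python
import numpy as np

def solver(x):
    """Toy stand-in for an expensive PDE solve: a solution with a sharp front at x = 0.5."""
    return np.tanh(40.0 * (x - 0.5))

def select_points(x_grid, u_input, budget):
    """Sequentially pick `budget` grid indices, favoring steep regions of the input.

    This greedy scoring rule is a hypothetical stand-in for the learned RL policy:
    it is meant only to show *where* an adaptive selector tends to place its budget.
    """
    score = np.abs(np.gradient(u_input, x_grid))
    chosen = []
    for _ in range(budget):
        i = int(np.argmax(score))
        chosen.append(i)
        score[i] = -np.inf  # never pick the same point twice
    return np.array(sorted(chosen))

x = np.linspace(0.0, 1.0, 201)
u_in = solver(x)                              # input field the policy conditions on
idx = select_points(x, u_in, budget=20)
sparse_x, sparse_u = x[idx], solver(x[idx])   # "solver" queried only at chosen points

# With a front-focused selector, most of the budget should cluster near x = 0.5,
# mirroring the shock-front behavior described for Burgers' equation.
near_front = float(np.mean(np.abs(sparse_x - 0.5) < 0.1))
```

A surrogate (the FNO, in the paper's setting) would then train on the accumulated `(sparse_x, sparse_u)` pairs rather than the full field.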
One design choice worth highlighting: we can't retrain the FNO after every RL episode to compute a reward signal, because that would make the training loop impossibly slow. So we use a kernel ridge regression proxy instead; it retrains in under a second while maintaining ~0.99 correlation with actual FNO error trends.

Across Burgers (shock dynamics), Darcy flow (discontinuous coefficients), and Lorenz-96 (a chaotic lattice dynamical system), RLMesh consistently outperforms all heuristic baselines (uniform, random, gradient-based, variance-based, intensity-based) under identical query budgets. On Burgers, we reach a given accuracy with roughly 33–50% fewer solver calls. In wall-clock simulation time the gap widens further: ~40 s to reach an RMSE that baselines need 80–150 s or more to match. (In our main accuracy curves, we use an oracle uniform-grid solver to isolate the effect of point selection; for wall-clock simulation time, we use a non-uniform solver to reflect the real savings.)

One thing I find conceptually appealing: prior active learning work for neural PDE surrogates focuses on which instances to simulate, always on a full grid. We're asking a complementary question on an orthogonal axis: where within each instance should we query? In principle, combining both could push data efficiency even further.

On a personal note, I really enjoyed this collaboration with Yang Meng (@justinmeng19), Yuxin's (@yuxinch) new PhD student who drove much of the effort, and the rest of the team @propitious1235 @ChongLiuCS @WillettBecca.

Find out more: arxiv.org/pdf/2603.02066
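The kernel-ridge-regression proxy mentioned above can be sketched as follows. This is a toy illustration under my own assumptions (an RBF kernel with a hand-picked lengthscale and ridge parameter), not the paper's implementation: fitting KRR on the selected sparse points is a single small linear solve, which is why it is cheap enough to run inside every episode, and its error on a dense reference grid can serve as the reward signal.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.02):
    """Pairwise RBF kernel between two 1-D point sets (lengthscale is an assumption)."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def krr_proxy_error(x_train, y_train, x_ref, y_ref, lam=1e-6):
    """Fit kernel ridge regression on the selected sparse points
    (one small linear solve), then score it on a dense reference grid."""
    K = rbf_kernel(x_train, x_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)
    pred = rbf_kernel(x_ref, x_train) @ alpha
    return float(np.sqrt(np.mean((pred - y_ref) ** 2)))

# Dense reference solution with a sharp front (toy stand-in for a solver run).
x_ref = np.linspace(0.0, 1.0, 401)
y_ref = np.tanh(40.0 * (x_ref - 0.5))

# Proxy error under a tight vs. a generous query budget: the proxy penalizes
# selections too sparse to support a faithful reconstruction.
x_small = np.linspace(0.0, 1.0, 10)
x_large = np.linspace(0.0, 1.0, 100)
err_small = krr_proxy_error(x_small, np.tanh(40.0 * (x_small - 0.5)), x_ref, y_ref)
err_large = krr_proxy_error(x_large, np.tanh(40.0 * (x_large - 0.5)), x_ref, y_ref)
```

In an RL loop, the negated proxy error (or its improvement over a uniform baseline) would be the episode reward, avoiding any FNO retraining inside the loop.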

Chong Liu@ChongLiuCS·
Two papers on accelerated Bayesian optimization and accelerated PDE surrogate learning have been accepted by the @AISTATS_Conf 2026 conference! @UAlbany @CNSE Department of Computer Science
Chong Liu@ChongLiuCS·
Attending the @NeurIPSConf 2025 conference! 🤝 I'm organizing the "AI Virtual Cells and Instruments: A New Era in Drug Discovery and Development" (AI4D3-2025) workshop this Saturday, December 6! lnkd.in/gZxaBr9G
Chong Liu retweeted
Open Review@openreviewnet·
Initial Analysis of OpenReview API Security Incident
Chong Liu retweeted
Tuo Zhao@tourzhao·
The problem with “LLM reviews” isn’t the LLM—it’s people outsourcing their thinking. If you let an LLM decide your opinions, your review will be hollow. Real reviewing still requires a human brain. The only thing the model should do is clean up your wording. (1/4)
Chong Liu@ChongLiuCS·
I need two emergency reviews for #ICLR2026 (need to be completed by Friday, Nov 7th) on linear bandits and federated optimization, respectively. If you are interested, DM me with your CV/Google Scholar page!
Chong Liu@ChongLiuCS·
Flying to Atlanta to attend @INFORMS 2025! I'm organizing the "Preference Learning in Large Language Models" session (TE69). I'll also give a talk in the "Bayesian Optimization" session (TC26). I'll be in Atlanta until Wednesday, so feel free to grab a coffee and let's chat! 😊
Chong Liu retweeted
The University of Chicago@UChicago·
Remembering Chen Ning Yang, PhD'48, the Nobel Prize-winning physicist who redefined our understanding of the nature of particles. Yang wrote his Ph.D. dissertation on the concepts of symmetry and natural phenomena, themes of his field-defining research. ms.spr.ly/6016t63fq
Chong Liu@ChongLiuCS·
Our @NeurIPSConf 2025 AI for Drug Discovery and Development (#AI4D3) workshop submission deadline has been extended to Sun Sep 7 (AoE)! We look forward to your work! See you in San Diego this December! #NeurIPS2025 #AI #DrugDiscovery
NeurIPS AI4D3 Workshop@AI4D3

(1/6) We’re thrilled 🎉 to launch the #NeurIPS2025 Workshop on AI Virtual Cells and Instruments: A New Era in Drug Discovery & Development (AI4D3-2025) in San Diego, CA on December 6 or 7!🥳 🔗Workshop site: ai4d3.github.io
