Ellen Vitercik
@vitercik

19 posts

Assistant Professor @Stanford Management Science & Engineering and Computer Science | Machine Learning & Optimization

Stanford, CA · Joined December 2025
237 Following · 140 Followers
Ellen Vitercik @vitercik ·
@DimitrisPapail @boazbaraktcs Yes, I have a no-laptops/devices policy for my PhD-level discussion classes and highly recommend it! I think it's demoralizing for a student to give a seminar presentation to an audience of students distracted on their laptops.
Boaz Barak @boazbaraktcs ·
Tempted to announce that my AI safety course will: 1. Have mandatory attendance. 2. Expect projects to be of research-paper quality. 3. Not satisfy any departmental requirements. 4. Award no grade higher than A-, to get the right kind of self-selection. (3 & 4 would be new.)
Nika Haghtalab @nhaghtal ·
This week I was promoted to the rank of Associate Professor at @Berkeley_EECS ! In a remarkable show of enthusiasm, the committee apparently tore a hole in spacetime to make me an Associate Professor 9 months ago!
Ellen Vitercik @vitercik ·
Proud to share that my awesome PhD student Yingxi Li has received an Amazon AI PhD Fellowship. Congratulations, Yingxi, and thank you, @AmazonScience @amazon!
Ellen Vitercik @vitercik ·
@_vaishnavh I have a spreadsheet I maintain where I use exactly this idea to estimate my "load" on the reviewing system! Then I make sure the number of papers I review each year is at least my load.
Vaishnavh Nagarajan @_vaishnavh ·
To clarify, my proposal is a system where, if a group of authors submit K papers, they should guarantee N*K reviews (where N is say, 3 or 4).
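The proposed accounting can be sketched as a short calculation (my own illustration; the function name and example numbers are not from the thread — the only assumption taken from it is that K papers owe N*K reviews, split among co-authors):

```python
# Estimate one author's share of the reviewing debt their submissions
# create: each paper requires n_reviews_per_paper reviews, and that
# burden is split evenly among the paper's co-authors.

def reviewing_load(papers, n_reviews_per_paper=3):
    """papers: list with one entry per submitted paper,
    giving that paper's number of co-authors."""
    return sum(n_reviews_per_paper / n_authors for n_authors in papers)

# An author on three papers with 2, 3, and 4 co-authors:
load = reviewing_load([2, 3, 4])
print(round(load, 2))  # 3/2 + 3/3 + 3/4 = 3.25
```

Under this scheme, an author "breaks even" by completing at least that many reviews per cycle.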
Vaishnavh Nagarajan @_vaishnavh ·
Curious why conferences don't have a system where the authors of every paper together guarantee to provide N reviews (and they can distribute the load amongst themselves). This way wouldn't we tax authors in proportion to the number of papers they burden the system with?
Ellen Vitercik @vitercik ·
Last week, I spoke at Juna AI about our recent research on machine learning for optimization, which aims to leverage historical structure to make large, discrete decision-making faster and more robust. Two themes:

1. LLMs for “cold-start” integer programming solver configuration: we generate cutting-plane separator settings after testing only a handful of candidate configs, often with large speedups vs. solver defaults. (CPAIOR’25; w/ @LawlessOpt, Yingxi Li, Anders Wikum, @madeleineudell)

2. Decision-making under uncertainty: calibrated uncertainty estimates allow us to move beyond “trust-the-predictor” optimization, with applications to scheduling. (ICML’25 spotlight; w/ @judyhshen, Anders Wikum)

Slides in the first reply.
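The "handful of candidate configs" idea in theme 1 can be sketched as a toy selection loop (my own illustration, not the CPAIOR'25 method; the configuration names and runtimes below are made up — in practice the runtimes would come from actually running a solver on historical instances):

```python
# Given a small set of candidate cutting-plane separator configurations,
# evaluate each on a few historical instances and keep the one with the
# best average runtime.

# Stand-in runtimes in seconds, one entry per historical instance.
runtimes = {
    "default":         [12.0, 30.0, 18.0],
    "aggressive-cuts": [ 9.0, 11.0, 14.0],
    "no-cuts":         [20.0, 45.0, 16.0],
}

def avg(times):
    return sum(times) / len(times)

# Pick the configuration minimizing average runtime over the test set.
best = min(runtimes, key=lambda cfg: avg(runtimes[cfg]))
print(best)  # aggressive-cuts
```

The point of "cold start" is that only a handful of candidates ever need to be run, rather than searching a huge configuration space.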
Ellen Vitercik @vitercik ·
New at ITCS this week: we obtain an O(1)-competitive algorithm for online metric matching in d-dimensional Euclidean metrics (d > 2) using only a single sample from each request distribution.

Mingwei Yang is presenting our paper: “Smoothed Analysis of Online Metric Matching with a Single Sample: Beyond Metric Distortion,” Yingxi Li, Ellen Vitercik, Mingwei Yang (arXiv and talk in first reply).

Setting: n servers (e.g., rideshare drivers) are available in advance, and n requests (e.g., riders) arrive online. Each request must be matched immediately to an available server, paying the distance in an underlying metric.

Contributions:
• We move beyond i.i.d.: each request is drawn from its own distribution (under a mild smoothness condition).
• No distributional knowledge: we use one sample per request distribution.
• O(1) competitive ratio for Euclidean metrics when d > 2.

Conceptual idea: we sidestep the usual Ω(log n) barrier from probabilistic metric embeddings by bounding our algorithm’s cost directly on a deterministic embedding and relating it to OPT via majorization arguments.
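The online setting above can be made concrete with a naive greedy baseline (my own sketch of the problem setup, NOT the paper's single-sample algorithm; the coordinates are arbitrary examples):

```python
import math

# Online metric matching: servers are fixed in advance; requests arrive
# one by one, and each must be matched immediately and irrevocably to a
# still-available server, paying the Euclidean distance. Greedy simply
# takes the nearest free server.

def greedy_online_matching(servers, requests):
    available = list(range(len(servers)))
    matching, total_cost = [], 0.0
    for r in requests:
        # Closest free server under Euclidean distance.
        j = min(available, key=lambda i: math.dist(servers[i], r))
        available.remove(j)
        matching.append(j)
        total_cost += math.dist(servers[j], r)
    return matching, total_cost

servers = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
requests = [(0.1, 0.1), (0.9, 0.1), (0.2, 0.8)]
m, cost = greedy_online_matching(servers, requests)
print(m, round(cost, 3))  # [0, 1, 2] 0.566
```

Greedy can be badly non-competitive on adversarial inputs; the result above is about what becomes possible when each request distribution is smooth and one sample from it is available in advance.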
Ellen Vitercik @vitercik ·
@LawlessOpt and I are excited to present our #AAAI2026 tutorial on “LLMs for Optimization: Modeling, Solving, and Validating with Generative AI.”

When: Tuesday, Jan 20, 2026, 8:30am–12:30pm (Singapore time)
Where: Garnet 216 (Singapore EXPO)

Optimization is central to planning, scheduling, and decision-making, but deploying solvers requires deep expertise. Our tutorial covers how LLMs can support the end-to-end optimization pipeline (model formulation, solver configuration, and model validation) and highlights open research directions.

Tutorial page (agenda + reading list): conlaw.github.io/llm_opt_tutori… (Connor’s intro slides are shown here.)

Thanks to @leo_bix and @madeleineudell for helping put the proposal together. CC @RealAAAI
Ellen Vitercik @vitercik ·
Structural reasoning—order, hierarchy, connectivity—is a prerequisite for reliable multi-step decision-making. DSR-Bench (leads: @yuhe441 and Yingxi Li) probes this without tools via data-structure operations and surfaces clear failure modes in SOTA LLMs.
Yu He @yuhe441

[0/n] Can LLMs *actually* reason about structure—order, hierarchy, connectivity, and how parts fit together? 🧩 We introduce DSR-Bench: a data-structure benchmark designed to test this *without tools*. Even SOTA LLMs still struggle in the hardest settings.
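The probing idea can be illustrated with a toy check (my own illustration, not DSR-Bench itself; the operation format and the hypothetical model answer are invented): execute a sequence of data-structure operations, then compare a model's predicted final state against ground truth computed programmatically.

```python
# Ground truth for a stack-operation probe: replay the operations
# exactly, so a model's free-text answer can be scored against the
# true final state.

def run_stack_ops(ops):
    """ops: list of ("push", value) or ("pop",) tuples.
    Returns the stack's final contents, bottom to top."""
    stack = []
    for op in ops:
        if op[0] == "push":
            stack.append(op[1])
        elif op[0] == "pop" and stack:
            stack.pop()
    return stack

ops = [("push", 1), ("push", 2), ("pop",), ("push", 3)]
ground_truth = run_stack_ops(ops)
model_answer = [1, 3]  # hypothetical parsed LLM output
print(ground_truth, model_answer == ground_truth)  # [1, 3] True
```

Because the ground truth is computed, not annotated, probes like this scale to long operation sequences where structural errors compound.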

Ellen Vitercik @vitercik ·
4) Theoretical Guarantees
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods (@cmcaram et al., NeurIPS’23)
- Approximation Algorithms for Combinatorial Optimization with Predictions (Antoniadis et al., ICLR’25)
Ellen Vitercik @vitercik ·
3) Math Optimization
- Using LLMs to Model and Solve Optimization Problems at Scale (@aliteshnizi et al., arXiv’25)
- Contrastive Predict-and-Search for Mixed Integer Linear Programs (Huang et al., ICML’24)
- Differentiable Integer Linear Programming (Geng et al., ICLR’25)
Ellen Vitercik @vitercik ·
Excited to share the materials (slides + reading list) from my Stanford seminar “AI for Algorithmic Reasoning & Optimization.” We covered formal frameworks for LLM reasoning, GNNs for combinatorial optimization, ML for math optimization, and theoretical guarantees. Link in first reply.