Ryan Cory-Wright
@RyanCoryWright
192 posts
Assistant Professor @ImperialBiz | Prev: Postdoc @IBMResearch PhD @ORCenter @MIT | Optimization+Machine Learning+Renewable Energy | Runner | Kiwi 🇳🇿

London, UK · Joined April 2020
498 Following · 827 Followers
Pinned Tweet
Ryan Cory-Wright@RyanCoryWright·
New paper 🚨 optimization-online.org/2025/01/improv… “Improved Approximation Algorithms for Low-Rank Problems Using Semidefinite Optimization” (w/ Jean Pauphilet) Inspired by Goemans-Williamson’s success in binary quadratic optimization, we generalize to semi-orthogonal and low-rank matrices.
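For context on the Goemans-Williamson reference above: their max-cut algorithm solves an SDP relaxation and then rounds the solution with a random hyperplane; the paper generalizes this idea beyond the binary case to semi-orthogonal and low-rank matrices. Below is only the classical binary rounding step, as a self-contained pure-Python sketch (the SDP solve is assumed already done, the unit embedding vectors given; function and variable names are mine):

```python
import random

def hyperplane_round(vectors):
    """Goemans-Williamson rounding: given unit vectors v_i from an SDP
    relaxation of max-cut, assign vertex i the sign of <v_i, g> for a
    random Gaussian vector g."""
    d = len(vectors[0])
    g = [random.gauss(0.0, 1.0) for _ in range(d)]
    return [1 if sum(vi * gi for vi, gi in zip(v, g)) >= 0 else -1
            for v in vectors]

# Toy feasible SDP solution: a 4-cycle embedded so that adjacent vertices
# are orthogonal and opposite vertices are antipodal.
vecs = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
signs = hyperplane_round(vecs)
```

Antipodal vectors land on opposite sides of any hyperplane, so opposite vertices of the cycle always end up on different sides of the cut.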
Ryan Cory-Wright@RyanCoryWright·
@mmaaz_98 I feel like when they say "it won't work" they mean that they don't think it is implementable within the time limit they have. And "it won't work" is what they have been trained to say in that situation. You can usually get around it by using a more specific prompt.
Maaz@mmaaz_98·
One of the funniest things the coding agents do is refusing to implement something because they think it won’t work. I’m trying to see if it’ll work, it’s called research!
Ryan Cory-Wright retweeted
(((ل()(ل() 'yoav))))👾
there is a much more fundamental problem than 50 accepted neurips papers with made up citations. and this problem is over 5200 accepted neurips papers.
Ryan Cory-Wright retweeted
Neil Zeghidour@neilzegh·
Me defending my O(n^3) solution to the coding interviewer.
Ryan Cory-Wright retweeted
Katherine Boyle@KTmBoyle·
Here’s a writing tip that will make me sound as old as I am, but it has saved me from costly mistakes. Never put your name on something you didn’t write. This rule will stop you from signing petitions or statements you may come to regret. But it will also save you from shortcuts that will lead to AI editing or authoring your works. It may seem convenient to have the machine write your most useless content, but that habit will accelerate over time. You will spend years developing your voice. A machine can silence it in days with a few clicks. Make it a habit not to lend your name to anyone else’s words or to think you can borrow words because they’re free. They’re never free.
Ryan Cory-Wright@RyanCoryWright·
The paper answers the question: "when we make a new discovery that is not explained by existing theory, what background knowledge were we missing?" in a systematic way using algebraic geometry (primary decompositions). Relevant with the advent of many new discoveries from AI
Ryan Cory-Wright@RyanCoryWright·
I will be presenting this in Building B Level 2 B201 on 26 Oct at INFORMS in Atlanta. Join us!
[image attached]
Ryan Cory-Wright@RyanCoryWright·
Updated paper 🚨 Our paper "Improved Approximation Algorithms for Low-Rank Problems Using Semidefinite Optimization" now contains an approximation algorithm for orthogonality constrained quadratic optimization logarithmic in no. columns. Check it out ⬇️ optimization-online.org/2025/01/improv…
Ryan Cory-Wright@RyanCoryWright

New paper 🚨 optimization-online.org/2025/01/improv… “Improved Approximation Algorithms for Low-Rank Problems Using Semidefinite Optimization” (w/ Jean Pauphilet) Inspired by Goemans-Williamson’s success in binary quadratic optimization, we generalize to semi-orthogonal and low-rank matrices.

Ryan Cory-Wright@RyanCoryWright·
@NateSilver538 Strong disagree. Attention is All You Need was published at NeurIPS (the top ML conference, with a good review process here: papers.nips.cc/paper/2017/fil…) and has nearly 200k citations. Journalists spent most of the last 20 years giving Andrew Wakefield the time of day.
Nate Silver@NateSilver538·
Academic journals might be a lost cause but they'd probably be better if you had some non-academic practitioners serving as reviewers. Journalists have their problems too but they have much better bullshit detectors, for instance.
Ryan Cory-Wright@RyanCoryWright·
@mmaaz_98 Exactly! Add in “oh I make a good salary now so I deserve to go to a fancy gym etc.” and it can go fast. I feel like everyone who makes 100k+ should be required to make a spreadsheet of their expenses before they can complain. Usually it’s inflated expectations haha
Maaz@mmaaz_98·
@RyanCoryWright Yup for sure, I lived in a little studio for like $1600/mo but that was a COVID-era deal. Then I moved into a 2 bedroom w my friend and the place was much nicer, and I still only paid $1600/mo. But I’d likely want my own place after I graduate and not a little studio haha
Ryan Cory-Wright@RyanCoryWright·
@profkuang Also look at Bertsimas and Popescu (2005)—for a fixed A, b, Sigma, mu, you can compute lower and upper bounds on the probability by solving a polynomial optimization problem. Cost would quickly increase with the dimension though.
Ryan Cory-Wright@RyanCoryWright·
@profkuang No worries! We were recently looking at something related (expectations where you try to relate Sigma to xx'/||x||_2^2 to within a constant factor via the semidefinite order) in lemma EC.3 of this paper. Could be related. arxiv.org/abs/2501.02942
Kuang Xu@ProfKuang·
Can I get your help, X probability trolls? I have a matrix A, a vector b, and a multi-dimensional normal vector X ~ N(mu, \Sigma). I want some kind of large-deviations-principle bound on P(A x \leq b), where \leq is understood entry-wise. If dim(b)=1, this is just a large deviation with a 1-d Gaussian, but when dim(b) > 2, it's measuring the probability of the normal vector satisfying some linear constraints. Looking for a (tight) lower bound. It feels like the kind of problem where there are classical results yielding a rate function that depends on some geometric properties of A, b, mu, Sigma? @miniapeur @aryehazan @michaelchchoi any hints?
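For the thread above: P(A x ≤ b) for Gaussian x has no closed form in general, but it is cheap to estimate by Monte Carlo, which gives a baseline to sanity-check any candidate large-deviations bound. A pure-Python sketch (function name and example are mine; x = mu + L z with z ~ N(0, I), so Sigma = L L^T):

```python
import random

def mc_prob_Ax_le_b(A, b, mu, L, n_samples=50_000, seed=0):
    """Monte Carlo estimate of P(A x <= b) entry-wise, where
    x = mu + L z, z ~ N(0, I) and Sigma = L L^T (L e.g. a Cholesky factor)."""
    rng = random.Random(seed)
    d = len(mu)
    hits = 0
    for _ in range(n_samples):
        z = [rng.gauss(0.0, 1.0) for _ in range(d)]
        x = [mu[i] + sum(L[i][j] * z[j] for j in range(d)) for i in range(d)]
        if all(sum(A[k][i] * x[i] for i in range(d)) <= b[k]
               for k in range(len(b))):
            hits += 1
    return hits / n_samples

# Example: x ~ N(0, I_2); by symmetry P(x1 <= 0, x2 <= 0) = 1/4.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]
mu = [0.0, 0.0]
L = [[1.0, 0.0], [0.0, 1.0]]
p = mc_prob_Ax_le_b(A, b, mu, L)
```

With 50,000 samples the standard error here is about 0.002, so the estimate should land well within a few hundredths of 1/4; for tail probabilities far from mu, plain Monte Carlo degrades and an analytical bound of the kind asked for becomes the useful tool.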
Ryan Cory-Wright@RyanCoryWright·
@ErnestRyu Interesting! I found with o3, ChatGPT was quite bad, but with o3-pro there was a significant improvement in proving small things, especially if you ask it to verify numerically (it self-corrects if it finds a mistake via the numerics). Similar to advising PhD students!
Ernest Ryu@ErnestRyu·
@RyanCoryWright Gemini sometimes gives me interesting proof ideas that pan out or locates a relevant theorem that I did not know. ChatGPT also does this, but less frequently.
Ernest Ryu@ErnestRyu·
Two cents on AI getting International Math Olympiad (IMO) Gold, from a mathematician. Background: Last year, Google DeepMind (GDM) got Silver in IMO 2024. This year, OpenAI solved problems P1-P5 for IMO 2025 (but not P6), and this performance corresponds to Gold. (1/10)
Ryan Cory-Wright@RyanCoryWright·
@ErnestRyu What precisely do you mean by outperforms? Like it's wrong less often, or it's more creative? Asking because I have been experimenting with using ChatGPT to prove things, but it seems to make a lot of mistakes, even with GPT-pro
Ernest Ryu@ErnestRyu·
5. In my experience using LLMs for math research, Gemini outperforms ChatGPT. We will see if the next-gen models (which seem to be what OpenAI and GDM are using for IMO) perform at research-level math. (5/10)
Maaz@mmaaz_98·
What are the most “academic” industry research places?