
Bo Jensen
@MrBoJensen
Works at Optibus; previously developed high-performance optimization software at CPLEX/IBM, MOSEK, and SULUM. Opinions are my own.
Strib, Denmark · Joined November 2010
292 Following · 1.2K Followers
Pinned Tweet
Bo Jensen retweeted

This may be the most important thing that has happened in mathematical optimization solvers since I left IBM and CPLEX in 2020. A scalable method for LP. (I did not contribute to it)
developer.nvidia.com/blog/accelerat…
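
The scalable method in question (PDLP, as discussed later in this thread) is a restarted primal-dual hybrid gradient (PDHG) algorithm; its appeal for GPUs is that each iteration needs only matrix-vector products, with no factorization. Below is a minimal NumPy sketch of plain PDHG for an LP of the form min c'x s.t. Ax >= b, x >= 0; the tiny example problem is made up, and the production method adds presolve, diagonal scaling, adaptive step sizes, and restarts on top of this:

```python
# Vanilla PDHG for  min c'x  s.t.  Ax >= b, x >= 0, via the saddle point
# min_{x>=0} max_{y>=0}  c'x - y'(Ax - b).  Illustrative sketch only.
import numpy as np

def pdhg_lp(c, A, b, iters=20000):
    m, n = A.shape
    # Step sizes: tau * sigma * ||A||^2 < 1 is the standard convergence condition.
    norm_A = np.linalg.norm(A, 2)
    tau = sigma = 0.9 / norm_A
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # Primal gradient step followed by projection onto x >= 0.
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
        # Dual gradient step on the extrapolated primal point, projected onto y >= 0.
        y = np.maximum(y + sigma * (b - A @ (2 * x_new - x)), 0.0)
        x = x_new
    return x, y

# Tiny made-up example:  min -x1 - 2*x2  s.t.  x1 + x2 <= 4,  x2 <= 3,  x >= 0
c = np.array([-1.0, -2.0])
A = -np.array([[1.0, 1.0], [0.0, 1.0]])   # flip <= constraints into >= form
b = -np.array([4.0, 3.0])
x, y = pdhg_lp(c, A, b)
print(x)   # roughly [1, 3]
```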

And now there are two more… (at least XOPT and LEOPT are new to me; Chinese optimization solvers are expanding 🙂)
Bo Jensen @MrBoJensen
I left CPLEX and the traditional optimization biz two years ago and didn't expect much change in this area. Boy, was I wrong. There seems to be a new solver in the HM benchmarks every time I check 🙃 - Exciting times for optimization software indeed!

@ed_uchoa @d_rehfeldt I agree this is missing, but I don't think it's suggested as a replacement for either barrier or simplex; it should rather be viewed as an additional option.

@MrBoJensen @d_rehfeldt This is really great! However, a critical dimension is missing here. Simplex is great at reoptimizing after some rows or columns are added, which is what happens in MIP algorithms like B&C and B&P. Are FOMs good at reoptimizing?
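
For context on the reoptimization point: after a row (e.g. a cut) is added, the optimal basis of the parent LP stays dual feasible, so dual simplex typically re-solves in a handful of iterations rather than from scratch. A small gurobipy sketch of that behavior; the model and the added cut are made up for illustration:

```python
# Illustrative gurobipy sketch of simplex reoptimization after adding a row,
# as happens at every node of branch-and-cut.  Model and cut are hypothetical.
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("warmstart-demo")
m.Params.Method = 1          # dual simplex, so the previous basis is reused
x = m.addVars(3, lb=0.0, name="x")
m.setObjective(x[0] + 2 * x[1] + 3 * x[2], GRB.MAXIMIZE)
m.addConstr(x[0] + x[1] + x[2] <= 10, name="cap")
m.optimize()
print("first solve:", m.IterCount, "simplex iterations")

# Add a cutting plane; the old basis is still dual feasible, so the re-solve
# usually needs only a few dual simplex iterations instead of a full solve.
m.addConstr(x[1] + x[2] <= 4, name="cut")
m.optimize()
print("re-solve:", m.IterCount, "simplex iterations")
```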


GPU LP solver comparable to state-of-the-art commercial solvers - Exciting times - Looking forward to a closer read. arxiv.org/abs/2311.12180 - Thanks to @d_rehfeldt for the reference.

📢 First post of the year! I will be the @InformsOS Vice-Chair of Computational Optimization and Software until 2025! Excited to boost visibility of #computational #optimization ⚙️in and beyond @INFORMS! #orms
connect.informs.org/optimizationso…


@timoberthold @d_rehfeldt @gurobi There is also a C version; I assume they don't use a sequential baseline: arxiv.org/pdf/2312.14832…

@MrBoJensen @d_rehfeldt Wait… I was excited when reading this, but then saw @gurobi is run single-threaded here. That spoils the whole paper imho. Comparing GPUs and CPUs is inevitably comparing 🍎 and 🍊, but crippling the baseline by making it sequential (and then bragging about parallelization)??

@timoberthold @d_rehfeldt @gurobi I have not had time to read it carefully yet; I missed that part and agree it's an odd comparison.

@MrBoJensen @d_rehfeldt Interesting. I will be _that guy_, though, and point out that, at least in my experience, I always tend to bump into requirements to write MIP/BIP formulations. Thus solving LPs fast is nice, but the real trouble has always been the exponentiality embedded in MIP.

@PeeterMeos @d_rehfeldt Work on LP is exciting. How much time do you think a commercial MILP solver spends solving LPs? Furthermore, I know several large-scale applications in which solving the root LP is troublesome due to memory consumption in barrier (cases in which simplex sucks). What then?

@MFischetti I agree that changing parameters can act like seed changes and help in high-variability cases such as these. But as I remember the last CPLEX numbers at HM, I don't think that fully explains such good results.

I have the same doubts. It seems that they use parameter tuning to trigger #performance_variability (kind of like changing the random seed), and then they choose the best run a posteriori and report the associated parameters.
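
Performance variability here means that mathematically neutral changes (a permutation, a random seed, an otherwise irrelevant parameter) can swing MIP solve times by large factors, so reporting the best run selected a posteriori overstates a tuner. A hedged gurobipy sketch of how one might measure that spread, assuming a placeholder instance file "model.mps":

```python
# Measure MIP performance variability across random seeds (gurobipy sketch;
# "model.mps" is a placeholder instance).  Picking the best of these runs
# a posteriori is roughly the criticism raised above.
import gurobipy as gp

times = []
for seed in range(5):
    m = gp.read("model.mps")      # hypothetical instance file
    m.Params.Seed = seed          # only the random seed changes between runs
    m.Params.TimeLimit = 600
    m.optimize()
    times.append(m.Runtime)

print("runtimes per seed:", times)
print("worst/best spread:", max(times) / max(min(times), 1e-9))
```
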
Nathan Brixius @natebrix
@MrBoJensen @MFischetti Things I didn't understand after a quick read: 1. Is the time to run MindOpt itself included in the results, or is it just the CPLEX time? 2. Is MindOpt applied to each problem, or "trained" on the whole set? 3. If the latter, shouldn't there be a holdout?

@SparseRunner Seems they pulled it back; I can't find it in the MIPLIB results anymore, but the paper is still there: arxiv.org/pdf/2312.13527…

@wangmengchang I wonder how much real customer value comes from tuning on these instances.

@wangmengchang I don't think it's a surprise that parameters have a huge impact, but I am surprised that one can parameter-tune a solver that (I assume) has had little (read: no) development for years without writing new presolve tricks on top.

@ryanjoneil Seems like such a long time ago 🤗 A lot of lessons and new skills learned 🙃

