Bo Jensen
@MrBoJensen

5.6K posts

Works at Optibus; previously developed high-performance optimization software at CPLEX/IBM, MOSEK, and SULUM. Opinions are my own.

Strib, Denmark · Joined November 2010
292 Following · 1.2K Followers
Pinned Tweet
Bo Jensen @MrBoJensen
Haven't looked much at the HM benchmarks since I left CPLEX. I now see that a Chinese company is leading the MIP benchmark by doing parameter tuning on CPLEX... Still laughing at my own prediction that nothing would happen in this solver segment 😂😂
Bo Jensen @MrBoJensen
@JFPuget I am sure you will do fine. Also, obviously, you don't have to do them :-) Those kinds of interviews are a (trained) skill of their own. Personally I hope never to be dependent on them, because I am really bad at it.
JFPuget 🇫🇷🇺🇦🇨🇦🇬🇱 @JFPuget
I don't think I'd pass any SWE interview these days based on what I read on twitter. Yet I developed various SOTA code over the years. But they were in commercial software, not open source. And I focus on things like memory locality and other efficiency concerns.
Bo Jensen @MrBoJensen
@JFPuget Danglish has the reverse effect, I have learned 🙃
Bo Jensen @MrBoJensen
@ed_uchoa @d_rehfeldt I agree this is missing, but I don't think it's suggested to replace either barrier or simplex; it should rather be viewed as an additional option.
Eduardo Uchoa @ed_uchoa
@MrBoJensen @d_rehfeldt This is really great! However, a critical dimension is missing here. Simplex is great at reoptimizing after some rows or columns are added, which is what happens in MIP algorithms like B&C and B&P. Are FOMs good at reoptimizing?
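Uchoa's point about reoptimization can be seen in miniature in a cutting-plane loop: rows are added one at a time and the LP is re-solved after each addition. Dual simplex can warm-start from the previous basis at each step, which is exactly what first-order methods lack a cheap analogue of. A hypothetical Kelley-style sketch (maximizing x + y over the unit disk via tangent cuts), using `scipy.optimize.linprog`, which solves each LP cold:

```python
import numpy as np
from scipy.optimize import linprog

# Outer approximation: we only "know" the disk through a separation
# oracle that returns a violated tangent cut at the current LP optimum.
c = [-1.0, -1.0]                 # linprog minimizes, so negate max x + y
A_ub, b_ub = [], []
bounds = [(0, 1), (0, 1)]        # initial relaxation: the unit box

for _ in range(30):
    res = linprog(c, A_ub=A_ub or None, b_ub=b_ub or None, bounds=bounds)
    x = res.x
    if np.linalg.norm(x) <= 1 + 1e-6:   # feasible for the disk: done
        break
    a = x / np.linalg.norm(x)           # tangent cut a.x <= 1 at projection
    A_ub.append(a.tolist())
    b_ub.append(1.0)

print(f"objective ~ {-res.fun:.4f} after {len(b_ub)} cuts (true: {np.sqrt(2):.4f})")
```

In a production B&C code, each of these re-solves would reuse the previous basis; here every iteration pays the full cost of solving from scratch, which is the gap Uchoa is asking about for FOMs.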
Bo Jensen @MrBoJensen
GPU LP solver comparable to state-of-the-art commercial solvers - exciting times - looking forward to a closer read. arxiv.org/abs/2311.12180 - Thanks to @d_rehfeldt for the reference.
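For context, GPU LP solvers in this line of work are built on first-order primal-dual methods (the PDLP family) rather than simplex or barrier. A toy sketch of the core PDHG iteration on a 2-variable LP, illustrative only and with none of the restarts, scaling, or preconditioning the real solvers depend on:

```python
import numpy as np

# min c.x  s.t.  Ax = b, x >= 0, via primal-dual hybrid gradient (PDHG):
# a projected gradient step on x, then an ascent step on the dual y
# evaluated at the extrapolated point 2*x_new - x.
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

tau = sigma = 0.4            # step sizes; need tau * sigma * ||A||^2 < 1
x, y = np.zeros(2), np.zeros(1)
for _ in range(50_000):
    x_new = np.maximum(0.0, x - tau * (c - A.T @ y))
    y = y + sigma * (b - A @ (2 * x_new - x))
    x = x_new

print(x, c @ x)              # optimum here is x = (0, 1), objective -2
```

Every iteration is matrix-vector products and elementwise maxima, which is why the method maps onto GPUs so much more naturally than a basis factorization does.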
T-mo B-hold @timoberthold
@MrBoJensen @d_rehfeldt Wait....I was excited when reading this but then saw @gurobi is run single-threadedly here. That spoils the whole paper imho. Comparing GPUs and CPUs is inevitably comparing 🍎 and 🍊, but crippling the baseline by making it sequential (and then bragging about parallelization)??
Peeter Meos @PeeterMeos
@MrBoJensen @d_rehfeldt Interesting. I will be _that guy_ though to point out that at least in my experience I always tend to bump into requirements to write MIP/BIP formulations. Thus solving LPs fast is nice, but the real trouble has always been with the exponentiality embedded into MIP.
Bo Jensen @MrBoJensen
@PeeterMeos @d_rehfeldt Work on LP is exciting. How much time do you think a commercial MILP solver spends solving LPs? Furthermore, I know several large-scale applications in which solving the root LP is troublesome due to memory consumption in barrier (cases in which simplex sucks). What then?
Bo Jensen @MrBoJensen
Does anyone have insight into why it was taken down from the HM site? 😇
Bo Jensen @MrBoJensen
@MFischetti I agree that changing parameters can act like seed changes and help in high-variability cases such as these. But as I remember the last CPLEX numbers at HM, I don't think that fully explains such good results.
Matteo Fischetti @MFischetti
I have the same doubts. It seems that they use parameter tuning to trigger #performance_variability (kind of like changing the random seed) and then choose the best run a posteriori and report the associated parameters.
Nathan Brixius @natebrix
@MrBoJensen @MFischetti Things I didn't understand after a quick read: 1. Is the time to run MindOpt itself included in the results, or is it just the CPLEX time? 2. Is MindOpt applied to each problem separately, or "trained" on the whole set? 3. If the latter, shouldn't there be a holdout?
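The selection effect Fischetti and Brixius describe is easy to simulate: even when parameter settings have no real effect at all, reporting the per-instance best of many runs makes the "tuned" solver look far faster. A small, entirely hypothetical simulation:

```python
import random
import statistics

random.seed(0)
N_INSTANCES, N_CONFIGS = 50, 20

# Null model: every (instance, config) solve time is just lognormal noise
# around the same base time -- the configs do nothing whatsoever.
times = [[random.lognormvariate(0, 0.8) * 100 for _ in range(N_CONFIGS)]
         for _ in range(N_INSTANCES)]

# Honest protocol: commit to one config (here: config 0) for every instance.
honest = statistics.geometric_mean(row[0] for row in times)

# A-posteriori protocol: per instance, report the best config tried.
cherry = statistics.geometric_mean(min(row) for row in times)

print(f"fixed config geomean:  {honest:.1f}")
print(f"best-of-{N_CONFIGS} geomean:   {cherry:.1f}")
# The "tuned" numbers look dramatically faster, though tuning did nothing.
```

This is exactly why Brixius's third question matters: without a holdout set, per-instance tuning cannot be distinguished from this kind of cherry-picking.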
Bo Jensen @MrBoJensen
@wangmengchang I wonder how much real customer value comes from tuning on these instances.
Bo Jensen @MrBoJensen
@wangmengchang I don't think it's a surprise that parameters have a huge impact, but I am surprised that one can parameter-tune a solver that (I assume) has had little (read: no) development for years, without writing new presolve tricks on top.
Ryan O'Neil @ryanjoneil
Picking up LaTeX again is just like riding a bike. A brutal, angry bike.
Bo Jensen @MrBoJensen
This week a short function that was thought to be O(N), but was in reality O(N^2), finally hit its worst-case complexity after running for eons. Boom. Impressive.
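A classic way this happens (purely illustrative, not the actual function): a loop that reads as a single pass hides a linear-time operation inside each step, and the quadratic blowup stays invisible until an input with many distinct items finally arrives.

```python
# Looks linear: one pass over the input. But `item not in seen` is itself
# an O(k) scan of a list, so the function is O(N^2) overall.
def dedup_slow(items):
    seen = []
    for item in items:
        if item not in seen:       # linear scan -> quadratic total
            seen.append(item)
    return seen

# Same pass, but membership checks against a set are O(1) on average.
def dedup_fast(items):
    seen, out = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out
```

On small or duplicate-heavy inputs both versions feel instant, which is how the slow one can run "for eons" in production before its worst case ever shows up.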
Bo Jensen @MrBoJensen
@ryanjoneil Seems so long ago 🤗 A lot of lessons and new skills learned 🙃