**By Hans Mittelmann (mittelmann at asu.edu)**

For many years our benchmarking effort included the solvers CPLEX, Gurobi, and XPRESS. Following an action by Gurobi at the 2018 INFORMS Annual Meeting, this came to an end: IBM and FICO demanded that the results for their solvers be removed. See here for more details. The resulting void was filled by other developers.

In August 2024, Gurobi decided to withdraw from the benchmarks as well, and its results have been removed. See the note at the bottom of the MIPLIB benchmark.

**Note that a link to the logfiles is provided at the top of each benchmark!**

**See this graphical tool for visualization of the results, including a virtual best (ensemble).**

* Concorde-TSP with different LP solvers (3-3-2023)
* LPfeas Benchmark (find a PD feasible point) (8-21-2024)
* LPopt Benchmark (find optimal basic solution) (8-21-2024)
* Large Network-LP Benchmark (commercial vs free) (8-21-2024)
* MILP Benchmark - MIPLIB2017 (8-20-2024)
* MILP cases that are slightly pathological (8-20-2024)
* Infeasibility Detection for MILP Problems (8-20-2024)
* SQL problems from the 7th DIMACS Challenge (8-8-2002)
* Several SDP codes on sparse and other SDP problems (8-28-2024)
* Infeasible SDP Benchmark (8-24-2023)
* Large SOCP Benchmark (8-21-2024)
* AMPL-NLP Benchmark (8-20-2024)
* Non-commercial convex QP Benchmark (9-16-2021)
* Non-Convex QUBO-QPLIB Benchmark (8-21-2024)
* Binary Non-Convex QPLIB Benchmark (8-21-2024)
* Discrete Non-Convex QPLIB Benchmark (non-binary) (8-21-2024)
* Continuous Non-Convex QPLIB Benchmark (8-21-2024)
* Convex Continuous QPLIB Benchmark (ext) (8-21-2024)
* Convex Discrete QPLIB Benchmark (8-21-2024)