**By Hans Mittelmann (mittelmann at asu.edu)**

For many years our benchmarking effort included the solvers CPLEX, Gurobi, and XPRESS. Following an action by Gurobi at the 2018 INFORMS Annual Meeting, this came to an end: IBM and FICO demanded that results for their solvers be removed, and we then decided to remove those of Gurobi as well. CPLEX appeared in fifteen of the benchmarks, Gurobi and XPRESS in thirteen. See here for more details.

A partial record of previous benchmarks can be obtained from this webpage, along with some additional older benchmarks.

* Concorde-TSP with different LP solvers (3-29-2019)

* Benchmark of Simplex LP solvers (8-24-2019)

* Benchmark of Barrier LP solvers (8-24-2019)

* Large Network-LP Benchmark (commercial vs free) (8-24-2019)

* MILP Benchmark - MIPLIB2017 (8-19-2019)

* MILP cases that are slightly pathological (9-6-2019)

* SQLP problems from the 7th DIMACS Challenge (8-8-2002)

* Several SDP codes on sparse and other SDP problems (5-7-2019)

* Infeasible SDP Benchmark (5-21-2019)

* Large SOCP Benchmark (8-30-2019)

* Non-commercial convex QP Benchmark (9-16-2019)

* Binary Non-Convex QPLIB Benchmark (9-8-2019)

* Discrete Non-Convex QPLIB Benchmark (non-binary) (8-20-2019)

* Continuous Non-Convex QPLIB Benchmark (8-20-2019)

* Convex Continuous QPLIB Benchmark (8-11-2019)

* Convex Discrete QPLIB Benchmark (8-5-2019)