Here we provide information on test runs comparing different solution methods on standardized sets of test problems, running on the same or on different computer systems. Benchmarking nonlinear problems is difficult, since different codes use different termination criteria. Although much effort has been invested in making results comparable, in a critical situation you should try the candidates of your choice on your specific application. Many benchmark results can be found in the literature.
For collections of benchmark problems themselves, see under Testcases. Below we list our own benchmarks as well as a selection of those done by others:
BENCH | Hans Mittelmann's benchmarks |
betaRMP | benchmarking visualization tool for creating plots that concisely compare optimization methods evaluated on large heterogeneous sets of test problems |
COAP | COAP's Software Forum |
General papers related to benchmarking:
Performance Profiles | suggestions for performance metric and associated Matlab&Python scripts |
A Note on Performance Profiles | urging caution when assessing more than two solvers |
Experimental Analysis of Algorithms | recommendations on this closely related topic |
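The performance profiles referenced above (Dolan and Moré) compare solvers by the ratio of each solver's runtime to the best runtime on every problem. A minimal sketch of that metric, with made-up timing data purely for illustration:

```python
import numpy as np

def performance_profile(times, taus):
    """times: (n_problems, n_solvers) array of runtimes (np.inf = failure).
    Returns rho of shape (len(taus), n_solvers): the fraction of problems
    each solver solves within a factor tau of the best solver."""
    times = np.asarray(times, dtype=float)
    best = times.min(axis=1, keepdims=True)   # best runtime per problem
    ratios = times / best                     # performance ratios r_{p,s}
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Hypothetical runtimes, 4 problems x 2 solvers (seconds); inf = failed run.
T = [[1.0, 2.0],
     [3.0, 1.5],
     [2.0, np.inf],
     [4.0, 4.0]]
rho = performance_profile(T, taus=[1.0, 2.0, 4.0])
# rho[0] -> [0.75, 0.5]: solver 1 is fastest (or tied) on 3 of 4 problems.
# rho[-1] -> [1.0, 0.75]: solver 2 never solves the third problem.
```

Plotting rho against tau gives the usual profile curves; the note on performance profiles cited above explains why such curves should be read pairwise rather than across many solvers at once.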
Benchmarks performed by others:
cuPDLP in C | GPU-based PDLP on very large LPs; link to code |
Tuned CPLEX on MIPLIB | Comparison of a tuned CPLEX with other solvers on MIPLIB2017 |
MiniZinc 2023 Challenge | Comparison of CP and MIP solvers on CP problems |
Global MINLP | Global Optimization of Mixed-Integer Nonlinear Programs with SCIP 8 |
QUBO | A new open source exact QUBO/MaxCut solver in comparison to Gurobi |
ML4CO | Machine Learning for Combinatorial Optimization: Results and Insights |
convex MINLP | A review and comparison of solvers for convex MINLP |
OSQP compared | convex QP benchmark |
SAS MILP solvers | a selection of our benchmarks run by SAS |
Semidefinite solver SDPA | overview and comparison to other solvers |
First order SDP solvers | Thorough comparison of four methods |
Algorithms and Software for Convex Mixed Integer Nonlinear Programs | Description and Analysis of six MINLP codes |
Nonsmooth Benchmark | Empirical and Theoretical Comparisons of Several Nonsmooth Minimization Methods and Software |
Benchmarking Derivative-Free Optimization Algorithms | Comparison of NMSMAX, APPSPACK, NEWUOA on smooth, noisy, and piecewise smooth functions |
Performance Assessment of Multi-Objective Optimizers | An Analysis and Review |
Comparison of derivative-free optimization algorithms | by N. Sahinidis et al. |
Discrete gradient method: a derivative free method for nonsmooth optimization | Comparison with DNLP and CONDOR |
Nonlinear Regression | Comparison of codes by Borchers&Mondragon |
MPECs as NLPs | Extensive comparison of SQP and IPM solvers |
CONDOR | a new Parallel, Constrained extension of Powell's UOBYQA algorithm. Experimental results and comparison with the DFO algorithm |
Comparing Continuous Optimisers | testbed, tools, results |
COCONUT Benchmark | various codes on global optimization problems |
MCNF Benchmark | Simplex and cost-scaling codes for large minimum cost flow problems |
KNITRO | KNITRO, SNOPT, LOQO, and filterSQP on the CUTE(AMPL) problems |
LOQO | KNITRO, SNOPT, and LOQO on the CUTE(AMPL) problems |
IPOPT | IPOPT, KNITRO, and LOQO on the CUTEr testset |
LANCELOT | LANCELOT and MINOS on the CUTE problems |
HOPDM | Benchmarks of LP/QP problems for HOPDM-2.30 |
MCP | A Comparison of Algorithms for Large Scale Mixed Complementarity Problems |
TSP | TSPLIB Benchmarks for Concorde |
Global | Comparison of public-domain software for black box global optimization |
Global-stochastic | Comparison of stochastic global optimization methods |
Vehicle Routing Software Survey | Comparison of commercial solvers |
Results for the COPS project at Argonne (Large-Scale Nonlinearly Constrained Optimization Problems) can be found in the paper
COPS | by Bondarenko, Bortz, Dolan and Moré |