Here we provide information on test runs comparing different solution methods on standardized sets of test problems, run on the same or on different computer systems. Benchmarking is a difficult area for nonlinear problems, since different codes use different termination criteria. Although much effort has been invested in making results comparable, in a critical situation you should try the candidates of your choice on your own application. Many benchmark results can be found in the literature.

For collections of the benchmark problems themselves, see under Testcases. Below we list our own benchmarks as well as a selection of those done by others:

BENCH Hans Mittelmann's benchmarks
Performance World Initiative by GAMS World
COAP COAP's Software Forum

General papers related to benchmarking:

Performance Profiles suggestions for a performance metric, with associated Matlab and Python scripts
A Note on Performance Profiles urging caution when assessing more than two solvers
Experimental Analysis of Algorithms recommendations on this closely related topic
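The performance profiles referenced above (in the Dolan–Moré sense) compare solvers by the ratio of each solver's run time to the best run time on each problem, then plot the fraction of problems solved within a factor tau of the best. A minimal sketch of that computation, assuming run times are given as a problems-by-solvers array with failures marked as infinity (the function name and interface here are illustrative, not those of the scripts linked above):

```python
import numpy as np

def performance_profile(times, taus):
    """Dolan-More performance profiles (illustrative sketch).

    times: (n_problems, n_solvers) array of run times;
           mark failures with np.inf.
    taus:  sequence of ratio thresholds, each tau >= 1.
    Returns an array rho of shape (len(taus), n_solvers), where
    rho[i, s] is the fraction of problems that solver s solved
    within taus[i] times the best time achieved on that problem.
    """
    times = np.asarray(times, dtype=float)
    best = times.min(axis=1, keepdims=True)   # best time per problem
    ratios = times / best                     # performance ratios r_{p,s}
    n_problems = times.shape[0]
    return np.array([(ratios <= tau).sum(axis=0) / n_problems
                     for tau in taus])

# Example: 3 problems, 2 solvers; solver B wins on two problems.
rho = performance_profile([[1.0, 2.0],
                           [2.0, 1.0],
                           [4.0, 1.0]],
                          taus=[1.0, 2.0, 4.0])
```

Plotting rho against tau (usually on a log scale for tau) gives the familiar staircase curves; a curve that is higher everywhere dominates. The note urging caution above concerns exactly this comparison when more than two solvers share one plot, since the ratios depend on which solvers are included.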

Benchmarks performed by others:

SAS MILP solvers a selection of our benchmarks run by SAS
MINLPs see end of this talk; nonconvex and convex cases
Semidefinite solver SDPA overview and comparison to other solvers
Algorithms and Software for Convex Mixed Integer Nonlinear Programs Description and Analysis of six MINLP codes
Nonsmooth Benchmark Empirical and Theoretical Comparisons of Several Nonsmooth Minimization Methods and Software
Benchmarking Derivative-Free Optimization Algorithms Comparison of NMSMAX, APPSPACK, NEWUOA on smooth, noisy, and piecewise smooth functions
Performance Assessment of Multi-Objective Optimization Algorithms from CEC-2007 competition; also Matlab/C codes
Comparison of derivative-free optimization algorithms by N. Sahinidis et al.
Discrete gradient method: a derivative-free method for nonsmooth optimization; comparison with DNLP and CONDOR
Nonlinear Regression Comparison of codes by Borchers & Mondragon
MPECs as NLPs Extensive comparison of SQP and IPM solvers
MILP04 Detailed comparison of eight noncommercial MILP solvers
CONDOR a new Parallel, Constrained extension of Powell's UOBYQA algorithm. Experimental results and comparison with the DFO algorithm
Comparing Continuous Optimisers testbed, tools, results
COCONUT Benchmark various codes on global optimization problems
MCNF Benchmark Simplex and cost-scaling codes for large minimum cost flow problems
Simplex benchmark A Comparison of Simplex Method Algorithms (with Matlab scripts)
KNITRO KNITRO, SNOPT, LOQO, and filterSQP on the CUTE (AMPL) problems
LOQO KNITRO, SNOPT, and LOQO on the CUTE (AMPL) problems
IPOPT IPOPT, KNITRO, and LOQO on the CUTEr test set
LANCELOT LANCELOT and MINOS on the CUTE problems
SNOPT SNOPT and MINOS on the CUTE problems
HOPDM Benchmarks of LP/QP problems for HOPDM-2.30
MCP A Comparison of Algorithms for Large Scale Mixed Complementarity Problems.
TSP TSPLIB Benchmarks for Concorde
Global Comparison of public-domain software for black box global optimization
Global-stochastic Comparison of stochastic global optimization methods
Vehicle Routing Software Survey Comparison of commercial solvers

Results for the COPS project at Argonne (Large-Scale Nonlinearly Constrained Optimization Problems) can be found in the following paper:

COPS by Bondarenko, Bortz, Dolan and Moré