
BUNDLE

Authors: Helmberg [Kiwiel, Rendl]
Version: SBmethod 1.1.1, 11/2000
Available: yes, from http://www.mathematik.uni-kl.de/~helmberg/SBmethod/
Key papers: [12,13,15,16]
Features: eigenvalue optimization
Language, Input format: C++; graph
Error computations: norm of subgradient only
Solves: maxcut, FAP, bisection, theta

SBmethod, Version 1.1.1, is an implementation of the spectral bundle method [15,13] for eigenvalue optimization problems of the form

$\min_{y\in \mathbb{R}^m}\;\; a\,\lambda_{\max}\Bigl(C-\sum_{i=1}^{m} A_i y_i\Bigr)+b^Ty.$ (1)

The design variables $y_i$ may be sign constrained, $C$ and the $A_i$ are given real symmetric matrices, $b\in\mathbb{R}^m$ specifies a linear cost term, and $a>0$ is a constant multiplier for the maximum eigenvalue function $\lambda_{\max}(\cdot)$. The code is intended for large scale problems and lets the user exploit structural properties of the matrices such as sparsity and low rank structure (see the manual [12]). Problem (1) is equivalent to the dual of semidefinite programs $\max \{\langle C,X\rangle: X\succeq 0,\ \langle A_i,X\rangle =b_i$ for $i=1,\ldots,m\}$ with constant trace $\mathrm{tr}\,X=a$.
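
To make the oracle behind such a method concrete: SBmethod itself is a C++ code, but the following Python/SciPy sketch (an illustration only; the function name, data layout, and parameters are assumptions, not SBmethod's interface) evaluates $f(y)=a\,\lambda_{\max}(C-\sum_i A_i y_i)+b^Ty$ and a subgradient from the maximal eigenpair, computed by a Lanczos routine as in the method described below.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def oracle(y, C, A, b, a=1.0):
    """Return f(y) = a*lmax(C - sum_i y_i A_i) + b^T y, a subgradient, and
    the eigenvector used.  With v a unit eigenvector for the maximal
    eigenvalue, d lmax / d y_i = -v^T A_i v, so g_i = b_i - a * v^T A_i v
    is a subgradient component.  C and the A[i] are symmetric matrices
    (dense or scipy.sparse); b and y are NumPy vectors.  (Hypothetical
    helper, not SBmethod's API.)
    """
    M = C - sum(yi * Ai for yi, Ai in zip(y, A))
    lmax, V = eigsh(M, k=1, which='LA')   # Lanczos: largest algebraic eigenpair
    v = V[:, 0]
    f = a * lmax[0] + b @ y
    g = np.array([bi - a * (v @ (Ai @ v)) for bi, Ai in zip(b, A)])
    return f, g, v
```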

The code is a subgradient method in the style of the proximal bundle method of [16]. Function values and subgradients of the nonsmooth convex function $f(y):= a\,\lambda_{\max}(C-\sum_{i=1}^{m} A_i y_i)+b^Ty$ are determined by computing the maximum eigenvalue and a corresponding eigenvector with the Lanczos method. Starting from a given point $\hat y=y^0$, the algorithm generates the next candidate $y^+:=\mathrm{argmin}_y\,\hat f(y)+\frac{u}{2}\Vert y-\hat y\Vert^2$, where $\hat f$ is an accumulated semidefinite cutting model minorizing $f$ and the dynamically updated weight $u>0$ keeps $y^+$ near $\hat y$. A descent step $\hat y^+=y^+$ occurs if $f(\hat y)-f(y^+)\ge\kappa\,[f(\hat y)-\hat f(y^+)]$, where $\kappa\in(0,1)$. Otherwise a null step $\hat y^+=\hat y$ is made, but the next model is improved with the new eigenvector computed at $y^+$.
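
The sketch below illustrates one such step. It is not SBmethod's semidefinite model: for simplicity it uses a polyhedral cutting model (the classical proximal bundle variant of [16]) and solves the candidate subproblem through its dual over the unit simplex with SciPy's SLSQP; `oracle_fn(y)` is assumed to behave like the `oracle` sketch above, and `u`, `kappa` are the weight and descent parameter from the text.

```python
from scipy.optimize import minimize

def bundle_step(y_hat, cuts, u, kappa, oracle_fn):
    """One proximal bundle step with a polyhedral cutting model.

    cuts is a list of (fj, gj, yj) with fj = f(yj) and subgradient gj, so
    the model is fhat(y) = max_j [fj + gj^T (y - yj)].  The candidate
    minimizes fhat(y) + (u/2)||y - y_hat||^2; we solve the equivalent dual
    max_{lam in simplex} sum_j lam_j (cj + gj^T y_hat) - ||G^T lam||^2/(2u),
    with y_plus = y_hat - (1/u) G^T lam.
    """
    c = np.array([fj - gj @ yj for fj, gj, yj in cuts])   # cut intercepts cj
    G = np.array([gj for _, gj, _ in cuts])               # rows: subgradients

    def neg_dual(lam):
        glam = G.T @ lam
        return -(lam @ (c + G @ y_hat) - (glam @ glam) / (2.0 * u))

    m = len(cuts)
    res = minimize(neg_dual, np.full(m, 1.0 / m), method='SLSQP',
                   bounds=[(0.0, 1.0)] * m,
                   constraints={'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0})
    y_plus = y_hat - (G.T @ res.x) / u                    # primal candidate
    model_val = np.max(c + G @ y_plus)                    # fhat(y_plus)

    f_old = oracle_fn(y_hat)[0]
    f_new, g_new, _ = oracle_fn(y_plus)
    cuts.append((f_new, g_new, y_plus))                   # enrich the model
    if f_old - f_new >= kappa * (f_old - model_val):      # sufficient decrease
        return y_plus, f_old - model_val                  # descent step
    return y_hat, f_old - model_val                       # null step
```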

The algorithm stops when $f(\hat y)-\hat f(y^+)\le \varepsilon_{\mathrm{term}}\,(\vert f(\hat y)\vert+1)$ for a given termination parameter $\varepsilon_{\mathrm{term}}>0$, which is $10^{-5}$ by default.
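
A toy driver tying the sketches above together with exactly this stopping rule (again an illustration; SBmethod additionally updates the weight $u$ dynamically, which this sketch omits by keeping $u$ fixed):

```python
def solve(y0, oracle_fn, u=1.0, kappa=0.1, eps_term=1e-5, max_iter=500):
    """Iterate bundle steps until the predicted decrease
    f(y_hat) - fhat(y_plus) drops below eps_term * (|f(y_hat)| + 1)."""
    y_hat = np.asarray(y0, dtype=float)
    f0, g0, _ = oracle_fn(y_hat)
    cuts = [(f0, g0, y_hat)]
    for _ in range(max_iter):
        y_hat, predicted = bundle_step(y_hat, cuts, u, kappa, oracle_fn)
        f_val = oracle_fn(y_hat)[0]
        if predicted <= eps_term * (abs(f_val) + 1.0):
            break
    return y_hat, f_val

# Usage with hypothetical problem data C, A (list of symmetric matrices), b:
#   oracle_fn = lambda y: oracle(y, C, A, b, a=1.0)
#   y_opt, f_opt = solve(np.zeros(len(A)), oracle_fn)
```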


Hans D. Mittelmann 2002-08-17