Given a point, the predictor direction is found by solving the following system:
After expanding the left-hand side of (4-7) and ignoring the higher-order terms, one has
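For orientation, the LP analogue of such a predictor (affine-scaling) system can be written as follows; the notation here is our own illustration, not the paper's SOCP formulation:

```latex
% LP analogue of the predictor system (illustrative only):
% given a strictly positive pair (x, s) and multipliers y for
%   min c^T x   s.t.   A x = b,  x >= 0.
\[
\begin{aligned}
A\,\Delta x                        &= b - A x,\\
A^{\mathsf{T}}\Delta y + \Delta s  &= c - A^{\mathsf{T}} y - s,\\
S\,\Delta x + X\,\Delta s          &= -X S e .
\end{aligned}
\]
```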
We describe two algorithms using the predictor direction and the corrector-centering direction mentioned above. All initial data are chosen as in Algorithm 4.2.
Table 2 lists the results for these algorithms as well as for the long-step path-following method, i.e., Algorithm 4.2 combined with Algorithm 4.3, and compares them with all four algorithms on the SOCP problems we had evaluated for the Seventh DIMACS Implementation Challenge.
As in the LP case, Table 2 shows that in our test cases Mehrotra's version of the predictor-corrector method works better than long-step path-following methods. Moreover, the quadratic combination of predictor and corrector directions, i.e., Algorithm 4.5, produces a few very good results, but it is still worse than Algorithm 4.4 overall, and sometimes even worse than the long-step algorithm.
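For concreteness, Mehrotra's predictor-corrector scheme can be sketched in its LP form. This is a minimal dense-algebra illustration in our own notation under simplifying assumptions; it is not the paper's SOCP implementation:

```python
import numpy as np

def step_len(v, dv, frac=0.9995):
    """Largest alpha in (0, 1] keeping v + alpha * dv strictly positive."""
    neg = dv < 0
    if not np.any(neg):
        return 1.0
    return min(1.0, frac * float(np.min(-v[neg] / dv[neg])))

def mehrotra_pc(A, b, c, x, y, s, tol=1e-8, max_iter=50):
    """Mehrotra-style predictor-corrector sketch for the LP
        min c^T x  s.t.  A x = b, x >= 0,
    solved via dense normal equations (illustrative only)."""
    m, n = A.shape
    for _ in range(max_iter):
        mu = x @ s / n
        rp = b - A @ x
        rd = c - A.T @ y - s
        if mu < tol and np.linalg.norm(rp) < tol and np.linalg.norm(rd) < tol:
            break
        d2 = x / s                      # diagonal of D^2 = X S^{-1}
        M = (A * d2) @ A.T              # normal-equations matrix A D^2 A^T

        def solve(rc):
            # Newton direction with complementarity right-hand side rc.
            dy = np.linalg.solve(M, rp - A @ (rc / s - d2 * rd))
            ds = rd - A.T @ dy
            dx = rc / s - d2 * ds
            return dx, dy, ds

        # Predictor: pure affine-scaling direction.
        dxa, dya, dsa = solve(-x * s)
        a_p = step_len(x, dxa)
        a_d = step_len(s, dsa)
        mu_aff = (x + a_p * dxa) @ (s + a_d * dsa) / n
        sigma = (mu_aff / mu) ** 3      # Mehrotra's centering heuristic
        # Corrector-centering direction, reusing the same matrix M.
        dx, dy, ds = solve(sigma * mu - x * s - dxa * dsa)
        x = x + step_len(x, dx) * dx
        a_d = step_len(s, ds)
        y, s = y + a_d * dy, s + a_d * ds
    return x, y, s
```

The corrector reuses the matrix assembled for the predictor, which is why one predictor-corrector iteration costs roughly one factorization plus two back-solves.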
Besides the above two algorithms, we implemented two variants of Algorithm 4.4. First, inspired by [3,8], we implemented a code for the inexact Newton method by adding a small residual term to the right-hand side; we denote this algorithm by inexact_pc. Second, we apply the predictor direction one more time if the reduction it achieves is significant; we denote this method by ppc. Moreover, instead of solving the full system, we use the normal equations in Table 3. For large-scale problems, solving the full linear system is not as efficient as solving the normal equations. Nevertheless, for some problems the condition number of the coefficient matrix of the normal equations becomes much larger than that of the full linear system as the iterate approaches the solution, for example, EX2-75 (see Figure 2).
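The elimination behind the normal equations can be illustrated on the LP analogue. The sketch below uses synthetic data and our own notation, not the paper's Table 3 system for SOCP, and checks that the full system and the normal equations yield the same dual direction:

```python
import numpy as np

np.random.seed(2)
m, n = 4, 10
A = np.random.rand(m, n)
x = np.random.rand(n) + 0.1        # strictly positive primal point
s = np.random.rand(n) + 0.1        # strictly positive dual slacks
rp = np.random.rand(m)             # primal residual (synthetic)
rd = np.random.rand(n)             # dual residual (synthetic)
rc = -x * s                        # complementarity rhs (predictor step)

# Full linear system in (dx, dy, ds):
#   [ A   0   0 ] [dx]   [rp]
#   [ 0  A^T  I ] [dy] = [rd]
#   [ S   0   X ] [ds]   [rc]
K = np.zeros((m + 2 * n, m + 2 * n))
K[:m, :n] = A
K[m:m + n, n:n + m] = A.T
K[m:m + n, n + m:] = np.eye(n)
K[m + n:, :n] = np.diag(s)
K[m + n:, n + m:] = np.diag(x)
sol = np.linalg.solve(K, np.concatenate([rp, rd, rc]))
dy_full = sol[n:n + m]

# Normal equations, obtained by eliminating ds and then dx:
#   (A D^2 A^T) dy = rp - A (rc / s - D^2 rd),   D^2 = X S^{-1}
d2 = x / s
M = (A * d2) @ A.T
dy_ne = np.linalg.solve(M, rp - A @ (rc / s - d2 * rd))

print(np.max(np.abs(dy_full - dy_ne)))   # agreement of the two solves
```

For a well-scaled point like this one the two solves agree to machine precision; the conditioning caveat in the text concerns points near the solution, where the entries of D^2 become extreme.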
| Alg. 4.4 | Alg. 4.5 | Alg. 4.2 | SeDuMi | SDPT3 | MOSEK | LOQO |
We also show the results for using the full system and the normal equations. In the tables, 'f' denotes failure. SDPT3 and LOQO do not handle rotated cone constraints.

Our best method performs quite well compared to the other codes. As the evaluation had shown, MOSEK, which is the only code specialized for convex optimization including SOCP, is the most robust and efficient for such problems. Its performance is comparable to that of our best method because the approaches are very similar, except for a better exploitation of sparsity in MOSEK. SeDuMi also uses a self-dual embedding technique and was optimized mainly for robustness, which is reflected in the uniformly moderate but not lowest iteration counts and the absence of failures. SDPT3 uses an infeasible path-following method and could probably handle larger sparse cases nearly as well as MOSEK. Finally, LOQO is a general NLP code; it treats the SOCP problems as smoothed nondifferentiable NLPs, which in some cases leads to larger iteration counts.