To minimize for r as in (3), a general multi-objective nonlinear optimization algorithm, such as FFSQP in , cannot be utilized. The fact that the objective function is not differentiable at the interpolation nodes did not prevent the application of FFSQP in , because there the objective function had exactly one local maximum between adjacent nodes; this permitted minimizing these maxima between the interpolation points and thus avoiding the points of nondifferentiability. Here, in contrast, the objective function in general has an unknown number of maxima between adjacent nodes.
The following numerical results were obtained with two different algorithms: for small N a discrete differential correction algorithm according to  was used, while for larger N the simulated annealing method of  was applied. Whereas the algorithm used in  in general finds only local extrema, both methods used here will in principle locate the desired global maximum. The first does so in a systematic and guaranteed way, evaluating the error not continuously but on a fine grid; the simulated annealing method cannot be guaranteed to find the global extremum but, when used for an extensive search, produces a reasonable approximation of it.
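Neither the differential-correction code nor the annealing code cited above is reproduced here; the following Python sketch only illustrates the two ingredients just described, namely evaluation of the maximum error on a fine grid rather than continuously, and a generic simulated-annealing loop for global minimization. All names, the toy objective, and the tuning constants are illustrative, not those of the actual implementations.

```python
import math
import random

def sup_error_on_grid(f, r, a=-1.0, b=1.0, m=2001):
    """Approximate ||f - r||_inf by sampling on a fine grid of m points,
    which sidesteps the nondifferentiability of the error at the nodes."""
    pts = [a + (b - a) * i / (m - 1) for i in range(m)]
    return max(abs(f(x) - r(x)) for x in pts)

def anneal(objective, x0, step=0.5, t0=1.0, iters=2000, seed=0):
    """Generic simulated-annealing minimizer: uphill moves are accepted
    with probability exp(-delta/t), and t is cooled geometrically."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = x + rng.uniform(-step, step)
        fy = objective(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= 0.999
    return best, fbest

# Toy multimodal objective; its global minimum lies near x = 2.
obj = lambda x: (x - 2.0) ** 2 + 0.3 * math.sin(8.0 * x)
xmin, fmin = anneal(obj, x0=-3.0)
```

As in the grid-based differential correction, the annealing run cannot certify global optimality, but an extensive search returns a point close to the global minimizer.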
Since the poles are always different from the interpolation points, the denominator never vanishes at the nodes. This eliminates the risk that unattainable points render the problem unsolvable .
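This robustness can be seen directly in the second barycentric form: as x approaches a node x_k, the term w_k/(x - x_k) dominates both sums, so r(x_k) = f_k for any nonzero weights and the denominator cannot vanish at a node. A minimal Python sketch with generic, deliberately arbitrary weights (not the optimized weights of the method described here) makes this concrete:

```python
def bary_eval(x, nodes, fvals, weights):
    """Second barycentric form
        r(x) = (sum_k w_k f_k / (x - x_k)) / (sum_k w_k / (x - x_k)).
    For any nonzero weights the k-th term dominates as x -> x_k, so the
    value at a node is always f_k and no vanishing denominator occurs."""
    num = den = 0.0
    for xk, fk, wk in zip(nodes, fvals, weights):
        if x == xk:          # exact hit on a node: interpolation is built in
            return fk
        t = wk / (x - xk)
        num += t * fk
        den += t
    return num / den

# Arbitrary nonzero weights still interpolate the data (here f(x) = x^2):
nodes = [-1.0, -0.5, 0.0, 0.5, 1.0]
fvals = [x * x for x in nodes]
weights = [1.0, -2.0, 3.0, -1.5, 2.0]
```

Evaluating at, or arbitrarily close to, a node returns the data value, regardless of the weight choice.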
As noted in the introduction, the classical rational interpolant typically exhibits annoying poles in the interval of interpolation when N is small. An example of this fact, apparently proposed by Cordellier  (in a slightly different form), is the piecewise linear function taking the values at the 9 equidistant points on [-1,1]. Figures 1a and 1b show the function with its polynomial interpolant, respectively with its classical rational interpolant in , while Figures 2a and 2b graph our interpolants with 2, respectively 4, attached poles, again together with the function. It is clear that, from an approximation point of view, our interpolant is vastly superior to the polynomial and rational interpolants in [20, p. 281]. Table 1 lists errors and pole locations: the first column gives the number P of poles, the second the maximal error, the third and fourth one pole from each pair of complex conjugate poles, and the fifth the modulus of c_i in (5) for the corresponding poles.
Next we present an example where one can actually watch the distance of the poles to the singularities of the function change as N and P grow. For that purpose we consider the function f(x) = e^(1/(x+1.2)) / (1 + 25x^2) used in [6,4]. Table 3 displays the results for equidistant interpolation points. (If N and P are not small, standard double precision calculations are not sufficient to cope with the bad conditioning of the problem; for that reason, and in contrast with our other results, the numbers in Table 3 were computed in quadruple precision.)
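As a quick plausibility check of why attaching poles pays off for this f, the following Python sketch (not part of the computations reported in the tables) evaluates the ordinary polynomial interpolant in equidistant points via its barycentric weights (-1)^k C(N,k) and measures the error on a fine grid; the Runge-type divergence toward the endpoints is evident already for N = 20.

```python
from math import comb, exp

def f(x):
    return exp(1.0 / (x + 1.2)) / (1.0 + 25.0 * x * x)

def poly_interp_equidistant(fun, n):
    """Polynomial interpolant in n+1 equidistant nodes on [-1, 1], using
    the known barycentric weights (-1)^k * C(n, k) for this node set."""
    nodes = [-1.0 + 2.0 * k / n for k in range(n + 1)]
    w = [(-1.0) ** k * comb(n, k) for k in range(n + 1)]
    fv = [fun(x) for x in nodes]
    def r(x):
        num = den = 0.0
        for xk, fk, wk in zip(nodes, fv, w):
            if x == xk:      # exact node hit: return the data value
                return fk
            t = wk / (x - xk)
            num += t * fk
            den += t
        return num / den
    return r

r = poly_interp_equidistant(f, 20)
grid = [-1.0 + 2.0 * i / 1000 for i in range(1001)]
err = max(abs(f(x) - r(x)) for x in grid)
```

The maximum grid error is far above 1, while the interpolation property itself holds exactly at the nodes.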
For the sake of comparison, Table 2 gives the approximation error of the classical rational interpolant in  with the same number of interpolation points and poles. The barycentric form has been used for determining this classical interpolant as well, by means of the kernel of the matrix (17) in . The comparison shows the clear superiority of our interpolant, especially as long as N and P are small and classical rational interpolation is unreliable.
Table 4 displays the same results for Čebyšev points of the second kind, again to be compared with those of Table 2: the results are similar.
In both tables, it is interesting to watch a pair of optimized poles approach the poles of f at ±i/5 (the zeros of 1 + 25x^2) as N and P increase. Note also how some other pole(s) come to lie in the vicinity of the essential singularity at x = -1.2, without tending toward this point, in accordance with Weierstrass' theorem on the values of a function in the neighborhood of an essential singularity [1, p. 129].
Finally we consider a case with a large derivative in the interior of the interval of interpolation, as motivated by the introduction. Let erf denote the standard error function; for given positive ε, the function to approximate is chosen as f(x) = erf(x/ε), which has a steep gradient near x = 0 when ε is small.
For not too large values of this parameter, the cases with moderate numbers P of poles could be solved relatively easily, while the problem becomes harder as the gradient near x = 0 steepens. In the easier cases everything works perfectly: with two pairs of poles, the error decreases exponentially with N between N = 7 and N = 63, whereas the polynomial error decreases only modestly over the same range. For another value of the parameter and Čebyšev points of the second kind, the algorithm used failed to produce the desired results in two of the cases. In all other cases, however, the numbers in Table 6 show again that the attachment of a small number of poles leads to a significant improvement of the approximation properties of the interpolant.
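Since the parameter values used in the tables are not reproduced above, the following Python sketch assumes, purely for illustration, a scaled error function erf(x/ε) with ε = 0.1. It interpolates this function in the Čebyšev points of the second kind, using the known barycentric weights (-1)^k halved at the two endpoints, and compares the errors for N = 7 and N = 63; even without attached poles, the steep gradient makes the small-N polynomial error large.

```python
from math import erf, cos, pi

EPS = 0.1  # assumed steepness parameter -- not a value from the tables

def g(x):
    return erf(x / EPS)

def cheb2_interp(fun, n):
    """Polynomial interpolant in the n+1 Cebysev points of the second
    kind, via the barycentric weights (-1)^k, halved at the endpoints."""
    nodes = [cos(pi * k / n) for k in range(n + 1)]
    w = [(-1.0) ** k * (0.5 if k in (0, n) else 1.0) for k in range(n + 1)]
    fv = [fun(x) for x in nodes]
    def r(x):
        num = den = 0.0
        for xk, fk, wk in zip(nodes, fv, w):
            if x == xk:      # exact node hit: return the data value
                return fk
            t = wk / (x - xk)
            num += t * fk
            den += t
        return num / den
    return r

grid = [-1.0 + 2.0 * i / 2000 for i in range(2001)]

def max_err(n):
    r = cheb2_interp(g, n)
    return max(abs(g(x) - r(x)) for x in grid)

e7, e63 = max_err(7), max_err(63)
```

With 8 points the steep transition is badly underresolved, while 64 points suffice for a small error.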
Note the decreasing values, as N grows, of the quantity that tests for the presence of the poles. This stems from the fact, noted above, that this quantity is a divided difference of order N: if the derivatives do not grow as fast as the corresponding factorials, divided differences become smaller as their order increases.
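The factorial argument is easy to check numerically. The sketch below (illustrative only) computes the highest-order divided difference of exp on equidistant points; since f[x_0,...,x_N] = f^(N)(ξ)/N! for some ξ in the interval, and all derivatives of exp stay bounded by e on [-1,1], the magnitude falls roughly like 1/N!.

```python
import math

def divided_difference(xs, fs):
    """Highest-order divided difference f[x_0, ..., x_N], computed with
    the standard in-place tableau (each pass raises the order by one)."""
    d = list(fs)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            d[i] = (d[i] - d[i - 1]) / (xs[i] - xs[i - j])
    return d[-1]

# For f = exp on [-1, 1]: f[x_0,...,x_N] = exp(xi)/N!, so the divided
# difference shrinks roughly like 1/N! as the order N grows.
dds = {}
for N in (4, 8, 16):
    xs = [-1.0 + 2.0 * k / N for k in range(N + 1)]
    dds[N] = divided_difference(xs, [math.exp(x) for x in xs])
```

The computed magnitudes drop by several orders of magnitude at each doubling of N, mirroring the behavior of the pole-presence test in the tables.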
For small N, classical rational interpolation (Table 5) has a hard time with the latter example: poles occur in the interpolation interval at least up to N = 15, where our r has already reduced the error to a value the classical interpolant does not even reach with 64 points for as small a denominator degree.
While the generation of some of the pole coordinates in our tables took a long time, these computations were done only to demonstrate the error behavior; for practical applications, more efficient algorithms may well be found.