
Numerical experiments

In this section we report results from applying rational interpolation with optimized zeros of the denominator, as outlined in the previous section, to several typical problems. In all examples the sup-norm $\Vert\cdot\Vert_\infty$ has been estimated by considering the 1000 equally spaced points

\begin{displaymath}{\widehat x}_\ell = - {5\over 4} +{\ell-1\over 999}\,{10\over 4},\qquad \ell = 1,\dots,1000,
\end{displaymath}

on the interval $[-5/4,5/4]$ and computing the maximal absolute value of the error at those ${\widehat x}_\ell$ lying in $[-1,1]$; this way the evaluation points do not change with N.
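In code, this error estimate can be sketched as follows (a minimal Python illustration; the functions `r` and `f` passed in at the end are hypothetical placeholders, not the interpolants of this paper):

```python
import numpy as np

def sup_norm_error(r, f, n_pts=1000):
    """Estimate ||r - f||_inf on [-1, 1] using the fixed grid
    x_l = -5/4 + (l - 1)/999 * 10/4,  l = 1, ..., 1000,
    keeping only the grid points that fall inside [-1, 1]."""
    l = np.arange(1, n_pts + 1)
    x = -5.0 / 4.0 + (l - 1) / (n_pts - 1.0) * (10.0 / 4.0)
    mask = (x >= -1.0) & (x <= 1.0)   # the grid is independent of N
    return np.max(np.abs(r(x[mask]) - f(x[mask])))

# Placeholder example: r = cos, f = cos + 1e-3 * x^2, so the true
# sup-norm error on [-1, 1] is 1e-3 (attained at the endpoints).
err = sup_norm_error(np.cos, lambda x: np.cos(x) + 1e-3 * x**2)
```

Note that since the grid does not contain $\pm 1$ exactly, the estimate slightly underestimates the true sup-norm, as the text's fixed-grid construction implies.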

To minimize $\Vert r - f\Vert _\infty$ for r as in (3), a general multi-objective nonlinear optimization algorithm such as FFSQP [21] cannot be utilized. The fact that the objective function is not differentiable at the interpolation nodes did not prevent the application of FFSQP in [5], because there the objective function had exactly one local maximum between adjacent nodes; this permitted minimizing these maxima between the interpolation points and thus avoiding the points of nondifferentiability. Here, in contrast, the objective function in general has an unknown number of maxima between adjacent nodes.

The following numerical results were obtained with two different algorithms: for small N a discrete differential correction algorithm following [14] was used, while for larger N the simulated annealing method of [9] was applied. While the algorithm used in [5] in general finds only local extrema, both methods used here will in principle locate the desired global maximum. The first does so in a systematic and guaranteed way, evaluating the error not continuously but on a fine grid. The simulated annealing method cannot be guaranteed to find the global extremum but, when used for an extensive search, produces a reasonable approximation of it.
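The grid-based search for the global maximum of the error, on which the discrete approach relies, can be sketched as follows (an illustration of the principle only, not the algorithm of [14]):

```python
import numpy as np

def global_max_on_grid(g, a=-1.0, b=1.0, n=100001):
    """Locate the global maximum of g on [a, b] by exhaustive
    evaluation on a fine grid -- systematic and guaranteed up to
    the grid resolution, since the function is examined on the
    grid rather than continuously."""
    x = np.linspace(a, b, n)
    v = g(x)
    i = np.argmax(v)
    return x[i], v[i]

# Illustration on a function with several local maxima between which
# a local method could get stuck; the global maximum is near x = -0.673.
x_star, v_star = global_max_on_grid(lambda x: np.abs(np.sin(7 * x)) - 0.1 * x)
```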

Since the poles are always distinct from the interpolation points, the denominator never vanishes at the nodes. This eliminates the risk that unattainable points render the problem unsolvable [19]. As a first example, let the function

\begin{displaymath}f(x) = \sin\big(\pi(x-0.5)\big) - {16\over 3 \mathop{\hbox{\rm Log}}2}\, x(x^2 - 1)\mathop{\hbox{\rm Log}}{2x+3\over 4}
\end{displaymath}

be interpolated between five equidistant points in [-1,1]. The value vector is $[1,2,-1,0,1]$ and the point (1,1) is unattainable for the classical rational interpolant in $\Rscr_{2,2}$ [6]. The latter takes the value .2 at x = 1, while the optimal r according to (3) with P = 2 has a maximal absolute error of .01799. The attached poles are located at about $-2.6149 \pm 3.3794 i$, and the modulus of $c_1$ and $c_2$ in (5) is about 48.93.
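The stated value vector can be verified numerically. The sketch below assumes that the truncated argument of the logarithm in the display above is $(2x+3)/4$; this choice is an assumption, but it is the constant ratio consistent with the value 2 at $x=-1/2$:

```python
import math

def f(x):
    """Example-1 test function.  The argument (2x+3)/4 of the
    logarithm is an assumption reconstructed from the value vector."""
    return (math.sin(math.pi * (x - 0.5))
            - 16.0 / (3.0 * math.log(2.0)) * x * (x**2 - 1.0)
            * math.log((2.0 * x + 3.0) / 4.0))

nodes = [-1.0, -0.5, 0.0, 0.5, 1.0]
values = [f(x) for x in nodes]   # should reproduce [1, 2, -1, 0, 1]
```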

 As noted in the introduction, the classical rational interpolant typically exhibits annoying poles in the interval of interpolation when N is small. An example of this, apparently proposed by Cordellier [20] (in a slightly different form), is the piecewise linear function taking the values $[-{1\over 2},-{1\over 4},0,0,0,{1\over 4},0,-{1\over 4},-{1\over 2}]$ at the 9 equidistant points on [-1,1]. Figures 1a and 1b show the function together with its polynomial interpolant, respectively its classical rational interpolant in $\Rscr_{4,4}$, while Figures 2a and 2b graph our interpolants with 2, respectively 4, attached poles, again together with the function. From an approximation point of view, our interpolant is clearly vastly superior to the polynomial and rational interpolants in [20, p. 281]. Table 1 lists errors and pole locations: the first column gives the number P of poles, the second the maximal error, the third and fourth one pole from each pair of complex conjugate poles, and the fifth the modulus $\mathop{\hbox{\rm abs}}(c_i)$ of the $c_i$ in (5) for the corresponding poles.
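The data of this example and the polynomial interpolant of Figure 1a can be reproduced with a few lines of NumPy (a sketch; the error value obtained this way is only indicative, since the paper measures errors on the specific grid described at the beginning of this section):

```python
import numpy as np
from numpy.polynomial import Polynomial

# Example-2 data: 9 equidistant nodes on [-1, 1] and the
# piecewise-linear values from the text.
nodes = np.linspace(-1.0, 1.0, 9)
vals = np.array([-0.5, -0.25, 0.0, 0.0, 0.0, 0.25, 0.0, -0.25, -0.5])

# Degree-8 interpolating polynomial (9 points, so the least-squares
# fit interpolates exactly): the interpolant shown in Figure 1a.
p = Polynomial.fit(nodes, vals, deg=8)

# The underlying piecewise-linear function itself:
f = lambda x: np.interp(x, nodes, vals)

# Error of the polynomial interpolant on a fine grid:
x = np.linspace(-1.0, 1.0, 2001)
err = np.max(np.abs(p(x) - f(x)))
```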

[Figure 1a: the function and its polynomial interpolant.]

[Figure 1b: the function and its classical rational interpolant in $\Rscr_{4,4}$.]

Table 1: Results for Example 2

 P   max error      Re(z_i)     Im(z_i)    abs(c_i)
 0   .479296
 2   .387673E-01     .151498    .302879     3.803
 4   .239559E-01    -.454952    .406143     .4450
                     .202325    .163394    1.148

Table 2: Example 3, errors for classical rational interpolation
N    P   equidist.      Chebyshev
7 2 .1937E01 .5212E01
  4 .3826 .1416
  6 .5998E-02 .8739
15 2 .4990 .2340
  4 .2448E-01 .1014E-02
  6 .3113E-03 .1352E-04
  8 .7052E-06 .3452E-07
31 2 .2355E-01 .6927E-02
  4 .2132E-03 .1692E-06
  6 .1493E-05 .7481E-09
  8 .3160E-08 .1372E-11
63 2 .2500E-04 .9849E-08
  4 .3115E-07 .7477E-14
  6 .6136E-12 .6844E-18

Table 3: Results for Example 3, equidistant points
N    P   max error      Re(z_i)          Im(z_i)         abs(c_i)
7 0 2.00705        
  2 .557636 -.846124 .220082 3.021  
  4 .5086E-03 -1.068126 .321507E-01 .2729  
      -.312498E-03 .200091 10.78  
  6 .871959E-05 -1.094751 .133455E-01 1.142E-03  
      -.295555E-06 .199999 14.14  
      -1.099497 .610652E-01 9.286E-04  
15 0 8.10977        
  2 .108660 -1.41066E-02 .112443 326.6  
  4 .129434E-04 -.324645E-05 .200002 406.2  
      -1.083496 .275526E-01 .1252  
  6 .942704E-06 -.568190E-07 .200000 553.6  
      -1.098424 .226468E-01 8.115E-04  
      -1.133404 .570938E-01 2.311E-04  
  8 .167838E-07 -.552288E-12 .200000 713.7  
      -1.125287 .173849E-01 4.644E-07  
      -1.101591 .103767E-01 4.545E-07  
      -1.122610 .557111E-01 9.178E-07  
31 0 2623.53        
  2 .171958E-02 -1.217508E-06 .200000 2.864E05  
  4 .123776E-06 -1.099067 .228277E-01 1.095E-02  
      -.990894E-10 .200000 3.576E05  
  6 .419991E-07 -.131845E-09 .200000 1.131E06  
      -1.100112 .223975E-01 2.515E-02  
      -.9650369 1.497751 1.692E-04  
  8 .204998E-09 .973004E-15 .200000 7.736E05  
      -1.213014 .411931E-01 9.748E-08  
      -1.147001 .224904E-03 2.780E-07  
      -1.129839 .206229E-01 4.406E-07  
63 0 .152604E09        
  2 .752845E-06 .112984E-15 .200000 1.431E11  
  4 .485318E-10 .391884E-18 .200000 1.832E11  
      -1.113237 .180890E-01 113.5  
  6 .768149E-12 -.207903E-20 .200000 2.453E11  
      -1.130093 .247649E-01 231.8  
      -1.122707 .677363E-02 238.0  

[Figure 2a: the function and our interpolant with 2 attached poles.]

[Figure 2b: the function and our interpolant with 4 attached poles.]

 Next we present an example in which one can actually watch the distance of the poles to the singularities of the function change as N and P grow. For that purpose we consider the function $f(x) = e^{1/(x+1.2)}/(1 + 25x^2)$ used in [6,4]. Table 3 displays the results for equidistant interpolation points. (If N and P are not small, standard double precision is not sufficient to cope with the bad conditioning of the problem. For that reason, and in contrast with our other results, the numbers in Table 3 were computed in quadruple precision.)
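The poles of this f referred to in the discussion of Tables 3 and 4 are the roots of the denominator $1+25x^2$, namely $\pm 0.2i$; the essential singularity sits at $x=-1.2$, where the exponent blows up. A quick check (sketch):

```python
import numpy as np

# Example-3 function: f(x) = exp(1/(x + 1.2)) / (1 + 25 x^2).
# Its poles are the roots of the denominator 25 x^2 + 1:
poles = np.roots([25.0, 0.0, 1.0])   # expected: +-0.2i
```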

For the sake of comparison, Table 2 gives the approximation error of the classical rational interpolant in $\Rscr_{N-P,P}$ with the same number of interpolation points and poles. The barycentric form has been used for determining this classical interpolant as well, by means of the kernel of the matrix (17) in [6]. The comparison shows the clear superiority of our interpolant, especially as long as N and P are small and classical rational interpolation is unreliable.

Table 4 displays the same results for Chebyshev points of the second kind, again to be compared with those of Table 2; the results are similar.

In both tables it is interesting to watch a pair of optimized poles approach the poles of f at $\pm 0.2i$ as N and P increase. Note also how some of the other poles come to lie in the vicinity of the essential singularity at x=-1.2 without tending toward this point, in accordance with Weierstrass' theorem on the values of a function in the neighborhood of an essential singularity [1, p. 129].

 Finally we consider a case with a large derivative in the interior of the interval of interpolation, as motivated in the introduction. With $\mathop{\hbox{\rm erf}}$ denoting the standard error function, and for given positive $\epsilon$, the function to approximate is chosen as [11]

\begin{displaymath}f(x)=\cos\pi x+{{\mathop{\hbox{\rm erf}}(\delta x)}\over{\mathop{\hbox{\rm erf}}(\delta)}},\qquad\delta=\sqrt{\epsilon}.
\end{displaymath}

This function has the values -2 at x=-1 and 0 at x=1, and a steep gradient near x=0 for large $\epsilon$. Figure 3 shows the graph of f for a large value of $\epsilon$.

[Figure 3: graph of f.]
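The endpoint values of f quoted above do not depend on $\delta$ and are easy to check numerically, as is the steep gradient near x=0. The relation $\delta=\sqrt{\epsilon}$ used below is an assumption, since the definition of $\delta$ is truncated in the display above:

```python
import math

def f(x, eps=100.0):
    """Example-4 function.  The relation delta = sqrt(eps) is an
    assumption for the truncated definition in the text."""
    delta = math.sqrt(eps)
    return math.cos(math.pi * x) + math.erf(delta * x) / math.erf(delta)

# f(-1) = cos(-pi) + erf(-delta)/erf(delta) = -1 - 1 = -2
# f( 1) = cos( pi) + erf( delta)/erf(delta) = -1 + 1 =  0
endpoints = (f(-1.0), f(1.0))
jump = f(0.1) - f(-0.1)   # large for large eps: steep gradient near 0
```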

For not too large values of $\epsilon$, the cases with moderate numbers P of poles could be solved relatively easily, while the problem gets harder with increasing $\epsilon$ due to the steep gradient near x=0. For example, with $\epsilon=100$ everything works perfectly: with two pairs of poles, the error decreases exponentially with N, from $4.0\cdot 10^{-3}$ for N=7 to $4.1\cdot 10^{-14}$ for N=63, whereas the polynomial error decreases merely from $1.3\cdot 10^{-1}$ to $2.1\cdot 10^{-6}$. For $\epsilon=10{,}000$ and Chebyshev points of the second kind, the algorithm failed to produce the desired results in two of the cases. In all other cases, however, the numbers in Table 6 show again that the attachment of a small number of poles leads to a significant improvement of the approximation properties of the interpolant.

Note the decreasing values of $\mathop{\hbox{\rm abs}}(c_i)$, the test for the presence of the poles, as N grows. This stems from the fact, noted above, that this quantity is a divided difference of order N: if the derivatives do not grow as fast as the corresponding factorials, divided differences become smaller as their order increases.
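This decay mechanism can be illustrated with the classical recursive divided-difference table: for a function such as $e^x$, whose derivatives stay bounded on [-1,1], the order-n divided difference behaves like $f^{(n)}(\xi)/n!$ and hence decays rapidly with n (a sketch, not tied to the interpolants of this paper):

```python
import math

def divided_difference(f, xs):
    """Newton divided difference f[x_0, ..., x_n] computed with the
    standard in-place recursive table."""
    n = len(xs)
    d = [f(x) for x in xs]
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            d[i] = (d[i] - d[i - 1]) / (xs[i] - xs[i - k])
    return d[-1]

# For exp on [-1, 1], |f[x_0, ..., x_n]| = e^xi / n! for some xi in
# (-1, 1), so the magnitudes shrink fast as the order n grows.
dd = [divided_difference(math.exp,
                         [-1.0 + 2.0 * i / n for i in range(n + 1)])
      for n in (2, 4, 8)]
```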

Table 4: Results for Example 3, Chebyshev points of the second kind
N    P   max error      Re(z_i)          Im(z_i)         abs(c_i)
7 0 1.274540        
  2 1.041588 -.984145 .155790E-01 .2485E-01  
  4 .794517E-03 -1.066884 .326360E-01 .9262E-01  
      -.346010E-03 .199926 .3411  
  6 .573120E-04 -1.077953 .284679E-01 .2300E-02  
      -.175400E-04 .200010 .4741  
      -1.142417 .132298 .2558E-03  
15 0 .237659        
  2 .224479 -.849569 .118225E-01 .2066E-02  
  4 .124278E-04 -.264222E-04 .199990 .1632  
      -1.084673 .271153E-01 .1041E-01  
  6 .565391E-06 -1.139591 .894443E-01 .1234E-04  
      -.589498E-06 .200000 .2234  
      -1.094256 .232891E-01 1.1587  
  8 .331751E-07 -1.125766 .299623E-01 .7444E-05  
      -.153199E-07 .200000 .4492  
      -1.311525 .468433 .6263E-07  
      -1.122292 .170493E-01 .9048E-05  
31 0 .926485E-02        
  2 .179527E-03 -.241920E-02 .189619 .1062E-01  
  4 .290629E-08 -1.104158 .209447E-01 .2457E-04  
      -.125961E-06 .199999 .1456E-01  
  6 .200699E-09 -1.171927 .882604E-01 .1141E-07  
      -.528615E-08 .200000 .2091E-02  
      -1.110073 .190116E-01 .3098E-06  
  8 .169550E-10 -1.099230 .226391 .1820E-12  
      -1.110667 .170531E-01 .4685E-08  
      -1.130098 .601730E-01 .6161E-09  
      -.933120E-11 .200000 .2521E-01  
63 0 .159734E-04        
  2 .698346E-10 -0.554176E-06 .199998 .4071E-04  
  4 .399680E-14 -.133091E-10 .200000 .5286E-04  
      -1.121621 .155754E-01 .1398E-10  

For small N, classical rational interpolation (Table 5) has a hard time with the latter example. Poles occur in the interpolation interval at least up to N=15, where our r has already decreased the error to $5.5\cdot10^{-3}$, a value the classical interpolant does not even reach with 64 points for denominator degrees this small.

Table 5: Example 4, $\epsilon =10^4$, errors for classical rational interpolation
N P max error  
7 2 pole  
  4 1.398  
  6 pole  
15 2 .6274  
  4 pole  
  6 pole  
  8 pole  
31 2 .3719  
  4 .3154  
  6 .2833  
  8 .2659  
63 2 .1388  
  4 .1991E-01  
  6 .5993  
  8 .9756E-01  

While the generation of some of the pole coordinates in our tables took a long time, these computations were done only to exhibit the error behavior; for practical applications more efficient algorithms may well be found.

Table 6: Results for Example 4, $\epsilon =10^4$, Chebyshev points of the second kind
N    P   max error      Re(z_i)          Im(z_i)         abs(c_i)
7 0 .860929        
  2 .585487 .498187E-01 .855217E-01 .9510E-01  
  4 .250594 .963782 .195789 .1388  
      .173890E-02 -.519644E-01 .3671E-01  
  6 .136934 .381364 -.503092 .4074  
      -1.490736 .527043 .1002  
      .218230E-02 -.363523E-01 .1622E-01  
15 0 .731061        
  2 .152567 .445251E-07 .386686E-01 .1594E-01  
  4 .129811E-01 .691377E-05 -.226144E-01 .1851E-03  
      .714004E-04 -.148341 .2528E-02  
  6 .550262E-02 .178129E-01 .334510 .1433E-03  
      .199111E-02 -.105384 .1369E-03  
      .180070E-04 .209114E-01 .2179E-04  
31 0 .527525        
  2 .347874E-01 -.303433E-12 -.251649E-01 .2897E-03  
  4 .609649E-02 -.899892E-12 .994387E-01 .1162E-03  
      -.279530E-12 .207341E-01 .2197E-04  
63 0 .269966        
  2 .612221E-02 .378870E-09 -.208431E-01 .3708E-09  
  6 .808776E-03 .628162E-02 -.190003E-01 .3167E-10  
      -.118652E-08 .694756E-01 .1117E-09  
      -.628161E-02 .190003E-01 .3167E-10  
127 0 .102178        
  2 .2822739E-02 -.263671E-03 .217792E-01 .2285E-15  
  4 .584158E-03 -.674335E-02 -.204741E-01 .1388E-14  
      .674335E-02 .204741E-01 .1110E-14  
  6 .143965E-04 .156860E-01 -.249369E-01 .1221E-14  
      -.156860E-01 .249370E-01 .7772E-15  
      -.106757E-07 .263960E-01 .2224E-15  

Hans Mittelmann