Under construction!
This note experimentally investigates various approximation techniques for smooth functions. We use the arcsin function because it is monotonic and infinitely differentiable, except at the endpoints x = ±1, where its derivative becomes infinite. That is, arcsin appears simple enough that simple approximations should exist, yet it is not trivial.
I am particularly interested in rational approximations. They are more powerful than polynomial methods. For example, for a non-smooth function like abs(x), the best degree-n polynomial approximation converges uniformly but slowly (the error decays only like 1/n), whereas Newman showed that rational approximations of degree n achieve an error on the order of exp(-c*sqrt(n)). Even for smooth functions, like exp(x), which have excellent polynomial approximations, the best rational approximation is more efficient. (Ref: D. J. Newman, Approximation with Rational Functions, CBMS Regional Conference Series in Mathematics, vol. 41, American Mathematical Society.)
Rational approximations are sometimes confused with Pade approximations, which were much studied in the last century. As I understand it, a Pade approximation is a formal transformation from a series, such as a Taylor or Chebyshev expansion, to a ratio of polynomials. Altho this formal transformation cannot add any information to the approximation, experimentally it appears that, for many functions, the Pade transformation of the Taylor or Chebyshev approximation is better than the original. (This implies that there must exist functions whose Pade approximation is worse than the original Taylor or Chebyshev approximation, but presumably we don't use those functions as often.)
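To make the transformation concrete, here is a minimal sketch in Python, using SciPy's scipy.interpolate.pade routine (my illustration; none of this code is from the original experiments). It turns the degree-4 Taylor series of exp(x) into a [2,2] quotient, which at x = 1 is noticeably closer to e than the Taylor polynomial it was built from:

```python
from math import e, factorial
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about 0, ascending powers: 1, 1, 1/2, 1/6, 1/24
taylor = [1.0 / factorial(k) for k in range(5)]

# [2,2] Pade approximant: p and q are numpy poly1d objects, with p/q
# matching the series through degree 4
p, q = pade(taylor, 2)

print(p(1.0) / q(1.0))   # ~2.71429
print(sum(taylor))       # the Taylor polynomial at x=1: ~2.70833
print(e)                 # 2.71828...
```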
Occasional defectiveness is one characteristic of the Pade approximation. That is, for some degree pairs [m,n], the Pade approximation is actually of lower degree. (For example, for an even function like cos, the odd-power coefficients vanish, and the Pade table consists of square blocks of repeated entries.) On a quarter-plane plot giving the Pade approximation for each degree pair [m,n], the defective degree pair points form rectangles.
If we modify a Pade approximation's coefficients to smooth out the error plot, we get a proper rational approximation, with a minimax, equi-oscillating error plot. The only method that I'm aware of for this is the Remez method, which uses a multi-dimensional search to equalize the errors of a non-optimal starting rational approximation.
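To show what equalizing the errors means, here is a minimal Remez exchange sketch in Python, for the simpler polynomial case only (my illustration; the rational case replaces the single linear solve below with the multi-dimensional search just described). Each pass solves for the coefficients plus a leveled error E on n+2 reference points, then moves the references to the extrema of the current error:

```python
import numpy as np

def remez_poly(f, a, b, n, iters=10):
    """Degree-n minimax polynomial to f on [a,b] by Remez exchange (sketch)."""
    k = np.arange(n + 2)
    # Chebyshev-extrema points make a good starting reference
    xs = (a + b) / 2 - (b - a) / 2 * np.cos(np.pi * k / (n + 1))
    grid = np.linspace(a, b, 20001)
    for _ in range(iters):
        # Solve the linear system: sum_j c_j * x_i**j + (-1)**i * E = f(x_i)
        A = np.hstack([np.vander(xs, n + 1, increasing=True),
                       ((-1.0) ** k)[:, None]])
        *c, E = np.linalg.solve(A, f(xs))
        err = f(grid) - np.polyval(c[::-1], grid)
        # Exchange: within each run between sign changes of the error,
        # move the reference point to the largest-magnitude error
        breaks = np.where(np.sign(err[:-1]) != np.sign(err[1:]))[0]
        segs = np.split(np.arange(len(grid)), breaks + 1)
        if len(segs) != n + 2:   # reference set degenerated; give up
            break
        xs = np.array([grid[s[np.argmax(np.abs(err[s]))]] for s in segs])
    return np.array(c), np.abs(err).max()

# Degree 6 = 7 coefficients, matching this page's degrees-of-freedom convention
coeffs, maxerr = remez_poly(np.arcsin, 0.0, 1.0, 6)
print(maxerr)   # ~0.033, consistent with the table at the bottom of this page
```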
Both the power and the difficulty of rational approximations stem from their nonlinearity. Unlike a Taylor or Chebyshev approximation, a rational approximation has no set of basis functions to combine linearly. Also, there seem to be no black-box engineering toolkits for calculating rational approximations; indeed, most publications on them seem to be theoretical.
One theoretical advantage of a rational approximation over splines is this: a sequence of splines loses the information that we are approximating a single, usually entire, function. It would be ugly if this information did not lead to a more efficient approximation.
However, the best approximation would probably result from a combination of partitioning the interval into a few pieces and then doing our best on each piece. There is a methodological evaluation issue here: how to assign costs to the various combinations of more intervals versus higher degrees.
I have been experimenting with rational approximations for several years, with discouraging results. Minimax, the Maple routine implementing the Remez method, fails above fairly low degrees. Increasing Digits (Maple's number of significant digits) and modifying the source so that the program does not give up so soon help only a little. On degrees above about [3,3], Minimax tends to have one of the following outcomes:
- Minimax exits with an error message, such as that the function does not oscillate enough, or that more digits are required.
- Minimax returns, but with a result whose error is not even close to minimax.
- Minimax returns a minimax-error result, but of defective degree.
Given the difficulty of approximating a function over an interval, I tried to simplify the problem by selecting a sequence of points on the function and finding a Thiele rational interpolation thru them (see the sketch after the following list). This was also surprisingly hard:
- If the x-values of the data points are evenly spaced, then formal singularities often appear. That is, if you evaluate the formulae for the numerator and denominator of the approximation, both may be zero. If you simplified the algebraic expression before substituting in, these zero factors would cancel, and all would be fine. However, attempting to simplify the algebraic expression leads to unacceptable intermediate expression swell. My solution was to perturb the x-values.
- The interpolated function often has poles like 1/x. I observe that inserting another point near the pole and re-interpolating usually removes the pole.
- Even when finite, the interpolated function's error is not nearly equi-oscillating. However, it does appear tolerable.
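For concreteness, here is a minimal sketch of Thiele interpolation via inverse differences, in Python (my own reconstruction for illustration, not the code used in the experiments above). The jitter on the sample points is the perturbation trick from the first bullet:

```python
import numpy as np

def thiele_coeffs(xs, fs):
    """Inverse differences a_k of the Thiele continued fraction
    f(x) ~= a0 + (x - x0)/(a1 + (x - x1)/(a2 + ...)).
    """
    n = len(xs)
    p = [list(fs)]                       # p[k][i]: k-th inverse difference
    for k in range(1, n):
        prev, row = p[-1], [None] * n
        for i in range(k, n):
            row[i] = (xs[i] - xs[k - 1]) / (prev[i] - prev[k - 1])
        p.append(row)
    return [p[k][k] for k in range(n)]

def thiele_eval(xs, a, x):
    """Evaluate the continued fraction from the innermost term outward."""
    val = a[-1]
    for k in range(len(a) - 2, -1, -1):
        val = a[k] + (x - xs[k]) / val
    return val

# 7 interpolation points = 7 degrees of freedom; the x-values are jittered
# because evenly spaced points often produce the formal singularities
# (0/0 inverse differences) described above.
rng = np.random.default_rng(1)
xs = np.linspace(0.0, 1.0, 7)
xs[1:-1] += rng.uniform(-1e-3, 1e-3, size=5)
a = thiele_coeffs(xs, np.arcsin(xs))
print(thiele_eval(xs, a, 0.5), np.arcsin(0.5))
```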
All this leads me to the following hypotheses about rational approximations:
- The Remez method does not work for high-degree approximations; at least, it's not easy to make it work. This is not surprising, given that it's a multi-dimensional search. It is disappointing that none of the references to Remez mention this, which leads me to wonder whether anyone has ever used it.
- Alternatively, it's possible that many rational approximations are defective, so that proper approximations do not exist for many degree pairs.
- If the rational approximations do exist, they may intrinsically just be very difficult to find.
- They may also be numerically very sensitive: slight perturbations of the approximation coefficients, or calculation roundoffs, may drastically change the resulting approximate values.
This web page attempts to illustrate some of the above issues with a candidate function, arcsin:
This is a nice demonstration of generally-useful function approximation techniques. Arcsin is not completely simple, since its derivative is infinite at x=1. Nevertheless, neither is it completely pathological.
We will see several polynomial and rational methods, such as these:
The techniques will be combined with
The arithmetic operations used in the approximations will either be
The optimization criterion will be to:
Since the distribution of the error can also be interesting, for most methods, I will also plot the error between arcsin and the approximating function.
This problem was suggested by a question in the comp.graphics.algorithms newsgroup.
Altho arcsin is defined on the interval [-1,1], we will exploit its symmetry (arcsin(-x) = -arcsin(x)) and approximate it over the interval [0,1], unless otherwise stated.
For uniformity, each approximation here will have 7 degrees of freedom, unless otherwise stated.
I am not a professional function approximator; corrections and amplifications by experts are always welcome.
Again, unless otherwise stated, all plots here are error plots, of the difference between the function being approximated and the approximation.
If your processor can do arctan and sqrt as fast as a division, then there is no problem to solve, since arcsin(x) = arctan(x / sqrt(1 - x^2)).
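In code (my note, with a made-up function name), the two-argument arctangent sidesteps the division by zero at x = 1:

```python
import math

def arcsin_via_atan(x):
    # arcsin(x) = atan2(x, sqrt(1 - x^2)); atan2 handles x = +/-1,
    # where the one-argument form would divide by zero
    return math.atan2(x, math.sqrt(1.0 - x * x))

print(arcsin_via_atan(0.5), math.asin(0.5))   # both ~0.5235987755982989
```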
| Available Operations | Best Method | Max Error |
|---|---|---|
| Only +, -, * | Minimax polynomial | 0.033 |
| Also division | The same minimax polynomial (division doesn't help here) | 0.033 |
| One sqrt | [3,3] minimax quotient | 4.7e-7 |
| Two sqrt | Minimax polynomial | 2.8e-7 |
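As a sanity check on entries like these, here is a tiny grid-based error harness in Python (my own sketch; max_error and the Chebyshev candidate are illustrative assumptions, and a Chebyshev interpolant is only near-minimax, so it should land in the same ballpark as the 0.033 rows rather than match them exactly):

```python
import numpy as np

def max_error(candidate, a=0.0, b=1.0, samples=200001):
    """Worst absolute error of a candidate approximation to arcsin on [a,b]."""
    xs = np.linspace(a, b, samples)
    return np.abs(np.arcsin(xs) - candidate(xs)).max()

# Degree 6 = 7 degrees of freedom, matching the page's convention;
# Chebyshev interpolation is a cheap stand-in for the true minimax polynomial.
cheb = np.polynomial.chebyshev.Chebyshev.interpolate(np.arcsin, 6, domain=[0, 1])
print(max_error(cheb))
```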