Conceptual vs Numerical

One of the things that makes numerical computation interesting is that it often reverses the usual conceptual order, using advanced math to compute functions that are introduced much earlier.

Here’s an example I recently ran across [1].

The hyperbolic functions are defined in terms of the exponential function:

    sinh(x) = (exp(x) - exp(-x))/2,    cosh(x) = (exp(x) + exp(-x))/2.

But it may be more efficient to compute the exponential function in terms of hyperbolic functions.

Specifically,

    exp(x) = sinh(x) + sqrt(sinh(x)^2 + 1).

Why would you want to compute the simple thing on the left in terms of the more complicated thing on the right? Because hyperbolic sine is an odd function.
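The identity follows from exp(x) = cosh(x) + sinh(x) together with cosh(x)^2 - sinh(x)^2 = 1: since cosh is positive, cosh(x) = sqrt(sinh(x)^2 + 1). Here is a quick numerical sanity check; the snippet and its test value are my own illustration, not part of the original post.

from math import exp, sinh, sqrt

x = 0.7  # arbitrary test value
lhs = exp(x)
rhs = sinh(x) + sqrt(sinh(x)**2 + 1)
print(lhs, rhs, abs(lhs - rhs))  # the two sides agree to machine precision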

This means that half its power series coefficients are zero: an odd function has only odd powers in its power series. Concretely,

    sinh(x) = x + x^3/3! + x^5/5! + x^7/7! + ...

Suppose you need to compute the exponential function in an environment where you only have a limited number of registers to store constants.

You can get more accuracy out of the same number of registers by using them to store coefficients of the power series for hyperbolic sine rather than the power series for exp.

If we store n coefficients for sinh, we can include powers of x up to 2n – 1.

And since the coefficient of x^(2n) is zero, the error in our Taylor approximation is O(x^(2n+1)).

See this post for more on this trick.
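To make the coefficient counting concrete, here is a minimal sketch, my own illustration rather than anything from the post above, of evaluating the degree-5 sinh polynomial from just three stored constants by applying Horner's rule in x*x. The names SINH_COEFFS and sinh_poly are made up for the example.

from math import sinh

SINH_COEFFS = (1.0, 1/6, 1/120)  # 1/1!, 1/3!, 1/5! -- the n = 3 stored constants

def sinh_poly(x):
    # Horner's rule in x*x builds 1 + x^2/6 + x^4/120 ...
    x2 = x * x
    acc = 0.0
    for c in reversed(SINH_COEFFS):
        acc = acc * x2 + c
    # ... and a final multiply by x gives x + x^3/6 + x^5/120 (odd powers only)
    return acc * x

print(sinh_poly(0.01), sinh(0.01))  # agree closely for small x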

If we stored n coefficients for exp, we could include powers of x up to n - 1, and our error would be O(x^n).

To make things concrete, suppose n = 3 and x = 0.01.

The error in the exp approximation would be on the order of x^3 = 10^-6, but the error in the sinh approximation would be on the order of x^7 = 10^-14.

That is, we could compute exp directly to almost single precision and exp via sinh to almost double precision.
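As a rough sanity check on those estimates (my own back-of-the-envelope arithmetic, not from the original post), the first omitted term of each series dominates the truncation error:

x = 0.01
print(x**3 / 6)     # first term dropped from the exp series, about 1.7e-7
print(x**7 / 5040)  # first term dropped from the sinh series, about 2.0e-18

The second number is below double-precision machine epsilon (about 2.2e-16), so the sinh-based computation ends up limited by rounding rather than truncation.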

(I’m glossing over a detail here. Would you really need to store, for example, the 1 coefficient in front of x in either series? For simplicity in the argument above I’ve implicitly said yes. Whether you’d actually need to store it depends on the details of your implementation.)

The error estimate above uses big-oh arguments.

Let’s do an actual calculation to see what we get.

from math import exp, sinh, sqrt

# Degree-2 Taylor polynomial for exp: three stored coefficients
def exp_taylor(x):
    return 1 + x + 0.5*x**2

# Degree-5 Taylor polynomial for sinh: also three stored coefficients, odd powers only
def sinh_taylor(x):
    return x + x**3/6 + x**5/120

# Recover exp from sinh using exp(x) = sinh(x) + sqrt(sinh(x)^2 + 1)
def exp_via_sinh(x):
    s = sinh_taylor(x)
    return s + sqrt(s*s + 1)

def print_error(approx, exact):
    print(f"Computed: {approx}")
    print(f"Exact: {exact}")
    print(f"Error: {approx - exact}")

x = 0.01
approx1 = exp_taylor(x)
approx2 = exp_via_sinh(x)
exact = exp(x)

print("Computing exp directly:\n")
print_error(approx1, exact)
print()
print("Computing exp via sinh:\n")
print_error(approx2, exact)

This produces

Computing exp directly:

Computed: 1.0100500000000001
Exact: 1.010050167084168
Error: -1.6708416783473012e-07

Computing exp via sinh:

Computed: 1.0100501670841682
Exact: 1.010050167084168
Error: 2.220446049250313e-16

Our errors are roughly what we expected from our big-oh arguments.

More numerical analysis posts:

- Iterative linear solvers
- Gauss-Lobatto quadrature
- Runge-Kutta methods

[1] I saw this identity in Matters Computational: Ideas, Algorithms, Source Code by Jörg Arndt.

