If a function is smooth and has thin tails, it can be well approximated by sinc functions.

These approximations are frequently used in applications, such as signal processing and numerical integration.

This post will illustrate sinc approximation with the function exp(-x²).

The sinc approximation for a function f(x) is given by

    f(x) ≈ Σ_{j=−n}^{n} f(jh) sinc((x − jh)/h)

where sinc(x) = sin(πx)/πx.
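In code, the approximation is just a short sum. Here's a minimal sketch in Python (the function name sinc_approx is mine; NumPy's np.sinc is the normalized sinc, sin(πx)/(πx), matching the definition above):

```python
import numpy as np

def sinc_approx(f, x, n, h):
    """Approximate f at points x by the (2n+1)-term sum
    f(x) ~ sum_{j=-n}^{n} f(jh) sinc((x - jh)/h)."""
    x = np.asarray(x, dtype=float)
    # one shifted, scaled sinc term per sample point jh
    return sum(f(j*h) * np.sinc((x - j*h)/h) for j in range(-n, n + 1))
```

For example, sinc_approx(lambda t: np.exp(-t**2), x, 4, 0.6) evaluates the nine-point approximation of exp(-x²) at the points x.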

Do you get more accuracy from sampling more densely or by sampling over a wider range? You need to do both.

As the number of sample points n increases, you want h to decrease something like 1/√n and the range to increase something like √n.

According to [1], the best trade-off between smaller h and larger n depends on the norm you use to measure approximation error.

If you’re measuring error with the Lebesgue p-norm [2], choose h to be

    h = √(πq / (2n))

where q is the conjugate exponent to p, i.e. 1/p + 1/q = 1.

When p = 2, q is also 2.

For the sup norm, p = ∞ and q = 1.
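As a quick numerical check on the scaling, here's a sketch (the helper name step_size is mine) that takes h = √(πq/(2n)) with q the conjugate exponent of p, and shows h shrinking like 1/√n while the sample range 2nh grows like √n:

```python
import numpy as np

def step_size(n, p=np.inf):
    """Step size h = sqrt(pi*q/(2n)), where q is the conjugate
    exponent of p: 1/p + 1/q = 1, with q = 1 when p is infinite."""
    q = 1.0 if np.isinf(p) else p / (p - 1)
    return np.sqrt(np.pi * q / (2 * n))

for n in (1, 4, 16):
    h = step_size(n)
    print(f"n = {n:2d}  h = {h:.3f}  range 2nh = {2*n*h:.3f}")
```

Quadrupling n halves the step size and doubles the range.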

So the range between the smallest and largest sample will be

    2nh = √(2πqn).

Here are a few plots to show how quickly the sinc approximations converge for exp(-x²).

We’ll look at n = 1 (i.e. three sample points), n = 2 (five sample points), and n = 4 (nine sample points).

For n > 1, the sinc approximations are so good that the plots are hard to distinguish.

So we’ll show the plot of exp(-x²) and its approximation for n = 1, then show the error curves.

And now the error plots.

Note that the vertical scales are different in each subplot.

The error for n = 4 is two orders of magnitude smaller than the error for n = 1.
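The convergence is easy to reproduce numerically. This sketch takes h = √(π/(2n)), the sup-norm choice q = 1 discussed above, and prints the maximum error on [−3, 3] for each n:

```python
import numpy as np

def sinc_approx(f, x, n, h):
    # sum_{j=-n}^{n} f(jh) sinc((x - jh)/h)
    return sum(f(j*h) * np.sinc((x - j*h)/h) for j in range(-n, n + 1))

f = lambda t: np.exp(-t**2)
x = np.linspace(-3, 3, 1201)
for n in (1, 2, 4):
    h = np.sqrt(np.pi / (2*n))   # sup-norm step size, q = 1
    err = np.max(np.abs(f(x) - sinc_approx(f, x, n, h)))
    print(f"n = {n}: max error = {err:.1e}")
```

The errors drop off rapidly as n grows, even though each step only adds a couple of sample points.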

Related posts

- Nyquist sampling theorem
- Double exponential numerical integration

[1] Masaaki Sugihara. Near Optimality of the Sinc Approximation. Mathematics of Computation, Vol. 72, No. 242 (Apr. 2003), pp. 767–786.

[2] The reference above doesn’t use the p-norms on the real line but on a strip in the complex plane containing the real line, and with a weight function that penalizes errors exponentially in the tails.
