Monte Carlo Simulations with Python (Part 1)

by Patrick Hanbury · Jan 25

[Image: Monte Carlo methods can be used to simulate games at a casino (photo courtesy of Pawel Biernacki)]

This is the first of a three-part series on learning to do Monte Carlo simulations with Python.

This first tutorial will teach you how to do a basic “crude” Monte Carlo, and how to use importance sampling to increase precision.

Part 2 will introduce the infamous Metropolis algorithm, and Part 3 will be a specialized piece for budding physicists (we’ll learn how to use Monte Carlo simulations to solve problems in quantum mechanics!).

Monte Carlo methods are widely used heuristic techniques which can solve a variety of common problems, including optimization and numerical integration problems.

These algorithms work by cleverly sampling from a distribution to simulate the workings of a system.

Applications range from solving problems in theoretical physics to predicting trends in financial investments.

The Task

In this introduction, we will develop a Python implementation of Monte Carlo approximations to find a solution to this integral:

I = ∫₀^∞ e^(−x) / (1 + (x − 1)²) dx

I’m borrowing this example and its solution from Sukanta Deb’s Variational Monte Carlo Technique article.

If you’re looking to learn more about simulating quantum mechanical systems using Monte Carlos, definitely check out that article.

You’ll notice that this integral cannot be solved analytically, and therefore we will need to approximate it numerically.

Let’s start with a simple approach to the problem: the Crude Monte Carlo.

The Crude Monte Carlo: Intuition

The Crude Monte Carlo is the easiest technique to understand, so we’ll start here.

But as the name suggests, it’s also the least accurate.

We’ll fix this later though. :)

You may recall from high school calculus the following identity, which shows how to calculate the average value of a function (for a proof, check out this link):

⟨f⟩ = 1/(b − a) · ∫ₐᵇ f(x) dx

Just as we can find the average value of a function by integrating, we can also find the value of an integral by determining the average value of its integrand, f(x).

The Monte Carlo technique is built upon this principle: instead of evaluating an indefinite integral, which can sometimes be impossible, let’s instead estimate the average of the integrand and use that to approximate the integral.

And that’s exactly what we’re going to do! So how do we do that? Well, the easiest way to do it would be to randomly sample input values x from all possible input values.

Let’s say we have a simple linear function, like y = 2x, and we want to find the average value of y in the range [0,2].

[Image: Approximating the average of y = 2x using 10 randomly selected values]

To calculate the average, we’ll just evaluate y at all randomly determined x and average the results.

This process is exactly the Crude Monte Carlo.
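To make that concrete, here’s a quick sketch of the toy example above (the 10 samples are random, so your numbers will differ from run to run):

```python
import random

# Estimate the average value of y = 2x on [0, 2] using 10 random samples.
# The true average is 2, so the integral over [0, 2] should be about 2 * 2 = 4.
samples = [2 * random.uniform(0, 2) for _ in range(10)]
average = sum(samples) / len(samples)

print(f"estimated average:  {average:.3f}")
print(f"estimated integral: {average * 2:.3f}")  # average times the range width
```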

Now let’s see how to use this technique in Python to solve our problem from earlier.

The Crude Monte Carlo: Implementation

You can find all of the code for this tutorial on my GitHub here.

Here’s the start of our code. We really don’t need much.
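Below is a minimal sketch of the imports for our Monte Carlo script (the exact file lives in the GitHub repo):

```python
import random  # random sampling
import math    # math.exp, for defining our integrand

import numpy as np                     # used once, to find the argmin of a list
from matplotlib import pyplot as plt   # visualizing our results as we go
```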

numpy will be used once to find the minimum argument of a list.

We'll use math for defining our functions and random will be used for sampling.

matplotlib will help visualize our results as we go.

Some Helpful Functions

First let’s define a function to generate a random number in a particular range.
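Something along these lines works (a sketch; any uniform sampler will do):

```python
def get_rand_number(min_value, max_value):
    """Return a random number uniformly distributed in [min_value, max_value]."""
    width = max_value - min_value
    return min_value + width * random.uniform(0, 1)
```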

Let’s also define our integrand function, f(x):
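Here’s one way to write it out, matching the integral above:

```python
def f_of_x(x):
    """The integrand: f(x) = e^(-x) / (1 + (x - 1)^2)."""
    return math.exp(-1 * x) / (1 + (x - 1) ** 2)
```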

Perform the Crude Monte Carlo

Implementing the Crude Monte Carlo should be fairly straightforward. Our algorithm looks like this:

1. Get a random input value from the integration range
2. Evaluate the integrand
3. Repeat steps 1 and 2 for as long as you like
4. Determine the average of all these samples and multiply by the range

Let’s see what it looks like in code:
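A sketch of that function follows. One assumption to flag: the integral runs to infinity, so I’m truncating the range at x = 5, where the integrand is effectively zero.

```python
def crude_monte_carlo(num_samples=10000):
    """Crude Monte Carlo estimate of the integral of f(x).

    Note: the upper bound of 5 (rather than infinity) is an assumption;
    the integrand is negligible beyond that point.
    """
    lower_bound = 0
    upper_bound = 5

    running_total = 0.0
    for _ in range(num_samples):
        x = get_rand_number(lower_bound, upper_bound)
        running_total += f_of_x(x)

    # average of the samples, scaled by the width of the integration range
    return (upper_bound - lower_bound) * running_total / num_samples
```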

Performing this approximation with N = 10,000 samples gave me an estimate of 0.6994. Not too far off from Wolfram’s approximation of 0.696092 (which we’ll take to be the Holy Truth).

Determine the Variance of Our Estimation

But how confident are we in our answer? How do I know that 10,000 samples is enough to get a good approximation?

We can quantify our accuracy by finding the variance of our estimations.

The variance is defined to be “the average of the square distances from the mean”.

It can be shown to equal this equation:

σ² = ⟨f²⟩ − ⟨f⟩²

The variance gives us an idea of how much f(x) varies in the domain of x.

It should be constant with the number of samples used, but we can calculate the error in integration by taking the square root of the variance divided by the number of samples.

For more information on variance, check out Professor Adrian Feiguin’s Monte Carlo Integration notebook, or Wolfram’s article on variance.

σ² = (1/N) Σᵢ [(b − a) f(xᵢ)]² − [(1/N) Σᵢ (b − a) f(xᵢ)]²

This is the equation for variance we’ll use in our simulations.

Let’s see how to do this in Python.
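Here’s one way to code it up (a sketch; the exact gist may compute it slightly differently, and I’m reusing the same [0, 5] truncation as above):

```python
def crude_mc_variance(num_samples=10000):
    """Estimate the variance of the crude MC estimator (b - a) * f(X).

    Uses sigma^2 = <((b - a) f)^2> - <(b - a) f>^2, with the truncated
    range [0, 5] assumed in crude_monte_carlo().
    """
    int_max = 5  # (b - a), with a = 0

    running_total = 0.0
    running_total_sq = 0.0
    for _ in range(num_samples):
        x = get_rand_number(0, int_max)
        value = int_max * f_of_x(x)
        running_total += value
        running_total_sq += value ** 2

    mean = running_total / num_samples
    return running_total_sq / num_samples - mean ** 2

variance = crude_mc_variance(10000)
error = math.sqrt(variance / 10000)  # error in the integral estimate
```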

This implementation of the Crude Monte Carlo gave me a variance of 0.266, which corresponds to an error of 0.005.

For a quick, back-of-the-envelope estimate, this isn’t bad at all. But what if we need a lot more precision? We could always just increase the number of samples, but then our computation time will increase as well.

What if, instead of using random sampling, we cleverly sampled from the right distribution of points? We might call this importance sampling.

Importance Sampling: The Intuition

Importance sampling is a method for reducing the variance of a Monte Carlo simulation without increasing the number of samples.

The idea is that instead of randomly sampling from the whole function, let’s just sample from a distribution of points similarly shaped to the function.

Let’s say you have a step function that looks like this:

[Image: Graph of a step function and its “weight function” g(x)]

Shown above, we have a step function that is active on the range [0,2] and inactive on [2,6].

Sampling 10 times might yield estimates like this:

[Image: 10-sample estimation of f(x)]

These samples (which I swear are random) correspond to a most likely distribution of samples, and yield an integral estimation of 1.8.

But, what if instead, we estimate the ratio between our function f(x) and some special weight function g(x) whose value is almost always about 1/2 the value of f(x) for any given x? What if we also bias our samples to appear in the most active ranges of our function (which we’ll find to minimize the error)?

You’ll see that the average of these ratios is a lot closer to the real value of our integral, which is 2.

[Image: Estimation of the step function’s area using importance sampling]

The importance sampling method is used to determine this optimal function g(x).
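Here’s a toy numeric check of that idea, assuming we take g(x) = 1/2 on [0,2] (so it integrates to 1). Sampling from g then just means drawing x uniformly from [0,2], where all of f’s area lives:

```python
import random

def f(x):
    """Step function: 1 on [0, 2], 0 on [2, 6]. True integral = 2."""
    return 1.0 if 0 <= x <= 2 else 0.0

def g(x):
    """Weight function: 1/2 on [0, 2], so it integrates to 1."""
    return 0.5 if 0 <= x <= 2 else 0.0

# Sampling from g is the same as sampling uniformly on [0, 2].
samples = [random.uniform(0, 2) for _ in range(10)]
ratios = [f(x) / g(x) for x in samples]

print(sum(ratios) / len(ratios))  # every ratio is exactly 2 -- zero variance!
```

Because this g is exactly proportional to f, every ratio equals the true integral. With a merely approximate g (as in our real problem), the ratios still vary far less than raw samples of f do.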

The Math

I will provide a quick overview of importance sampling mathematics, but the main purpose of this post is the implementation, so if you desire more mathematical rigor, check out Deb’s article I mentioned earlier.

Let’s see if we can find a g(x) such that:

f(x) / g(x) ≈ k, for some constant k

Basically, we want g(x) to look like a scaled version of f(x).

We’ll also need g(x) to satisfy a few criteria:

1. g(x) is integrable
2. g(x) is non-negative on [a,b]
3. The indefinite integral of g(x), which we’ll call G(x), has a real inverse
4. The integral of g(x) over the range [a,b] must equal 1

In the ideal case, f(x) = k * g(x), where k is a constant.

However, if f(x) = k * g(x) held exactly, then f(x) would be integrable and we would have no need to perform a Monte Carlo simulation; we could just solve the problem analytically! So, we’ll settle for f(x) ≈ k * g(x).

We won’t get a perfect estimate of course, but you’ll find it performs better than our crude estimation from earlier.

We’ll define G(x) as follows, and we’ll also perform a change of variables to r:

G(x) = ∫₀ˣ g(x′) dx′   (definition of G(x))

r = G(x)   (change of variables, where r will be random samples between 0 and 1)

r will be restricted to the range [0,1].

Since the integral of g(x) was defined to be 1, G(x) can never be greater than 1, and therefore r can never be greater than one.

This is important because later, we will randomly sample from r in the range [0,1] when performing the simulation.

Using these definitions, we can produce the following estimation:

I ≈ (1/N) Σᵢ f(xᵢ) / g(xᵢ), where xᵢ = G⁻¹(rᵢ)

This sum is what we will be calculating when we perform the Monte Carlo.

We’ll randomly sample from r in order to produce our estimate.

Simple, right? Don’t be intimidated if this doesn’t make sense at first glance.

I intentionally focused on the intuition and breezed through the math quite a bit.

If you’re confused, or you just want more mathematical rigor, check out the resource I talked about earlier until you believe that final equation.

Importance Sampling: Python Implementation

Ok, now that we understand the math behind importance sampling, let’s go back to our problem from before.

Remember, we’re trying to estimate the following integral as precisely as we can:

I = ∫₀^∞ e^(−x) / (1 + (x − 1)²) dx

Visualizing our Problem

Let’s start by generating a template for our g(x) weight function.

I’d like to visualize my function f(x), so we’ll do that using matplotlib. Plotting this function will help us determine an optimal structure for g(x):
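A quick sketch of the plot (the [0, 6] plotting range is my choice; anything wide enough to show the tail works):

```python
# Plot f(x) so we can see where the integrand is "active".
xs = [x / 100 for x in range(600)]   # x from 0.00 to 5.99
ys = [f_of_x(x) for x in xs]

plt.plot(xs, ys)
plt.title("f(x)")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.show()
```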

Ok, so we can see that our function is mostly active in the rough range of [0, 3-ish] and mostly inactive on the range [3-ish, ∞). So, let’s see if we can find a function template that can be parameterized to replicate this quality.

Deb proposes this weight function template [1]:

g(x) = A e^(−λx)

After we find the ideal values for A and λ, we’ll be able to construct this plot of f(x) and our optimal weight function g(x):

[Image: f(x) plotted alongside the optimal weight function g(x)]

You can see that in many ways g(x) does not ideally replicate the shape of f(x).

This is ok.

A crude g(x) can still work wonders for decreasing your estimation’s variance.

Feel free to experiment with other weight functions g(x) to see if you can find even better solutions.

Parametrize g(x)

Before we can perform the simulation, we will need to find the optimal parameters λ and A.

We can find A(λ) using the normalization restriction on g(x): since the integral of g(x) over [0, ∞) must equal 1, we have

∫₀^∞ A e^(−λx) dx = A/λ = 1,  so  A(λ) = λ

Now, all we need to do is find the ideal λ, and we’ll have our ideal g(x).

To do this, let’s calculate the variance for different λ on the range [0.05, 3.0] in increments of 0.05, and use the λ with the lowest variance.

When using importance sampling, we calculate the variance of the ratio between f(x) and g(x):

σ² = (1/N) Σᵢ [f(xᵢ)/g(xᵢ)]² − [(1/N) Σᵢ f(xᵢ)/g(xᵢ)]²

And we’ll want to use this equation to approximate the integral:

I ≈ (1/N) Σᵢ f(xᵢ)/g(xᵢ), with xᵢ = G⁻¹(rᵢ)

We’ll recalculate the variance for different λ, changing the weight function accordingly each time.

Afterwards, we’ll use our optimal λ to calculate the integral with minimal variance.

The algorithm will look like this:

1. Start at λ = 0.05
2. Calculate the variance
3. Increment λ
4. Repeat steps 2 and 3 until you reach the last λ
5. Pick the λ with the lowest variance; this is your optimal λ
6. Use importance sampling Monte Carlo with this λ to calculate the integral

In code, finding the optimal λ looks like this:
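Here’s a sketch of that sweep, writing out g(x), its inverse CDF, and the variance helper under the A = λ normalization from above:

```python
def g_of_x(x, A, lamda):
    """Weight function template: g(x) = A * e^(-lambda * x)."""
    return A * math.exp(-1 * lamda * x)

def inverse_G_of_r(r, lamda):
    """Map a uniform r in (0, 1] to x ~ g(x).

    G(x) = 1 - e^(-lambda * x), so x = -ln(1 - r)/lambda; since 1 - r is
    also uniform, x = -ln(r)/lambda samples the same distribution.
    """
    return (-1 * math.log(float(r))) / lamda

def importance_sampling_variance(lamda, num_samples=10000):
    """Estimate the variance of the ratio f(x)/g(x) for a given lambda."""
    A = lamda  # normalization condition: A(lambda) = lambda
    running_total = 0.0
    running_total_sq = 0.0
    for _ in range(num_samples):
        r = get_rand_number(0, 1)
        x = inverse_G_of_r(r, lamda)
        ratio = f_of_x(x) / g_of_x(x, A, lamda)
        running_total += ratio
        running_total_sq += ratio ** 2
    mean = running_total / num_samples
    return running_total_sq / num_samples - mean ** 2

# Sweep lambda over [0.05, 3.0] and keep the one with the lowest variance.
lambdas = [0.05 + 0.05 * i for i in range(60)]
variances = [importance_sampling_variance(lamda) for lamda in lambdas]
optimal_lamda = lambdas[np.argmin(variances)]
```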

You’ll see that running this optimization code using 10,000 samples produces a λ value of 1.65 and a variance of 0.0465, which corresponds to an error of 0.0022.

Wow! Using importance sampling allowed us to reduce our error by a factor of 2 with the same number of samples.

Run the Simulation

Now, all we have to do is run the simulation with our optimized g(x) function, and we’re good to go.

Here’s what it looks like in code:
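A sketch of the final run, reusing the helpers and the optimal_lamda from above:

```python
def importance_sampling_mc(lamda, num_samples=10000):
    """Importance-sampled Monte Carlo estimate of the integral."""
    A = lamda
    running_total = 0.0
    for _ in range(num_samples):
        r = get_rand_number(0, 1)
        x = inverse_G_of_r(r, lamda)       # sample x from g(x)
        running_total += f_of_x(x) / g_of_x(x, A, lamda)
    return running_total / num_samples     # average of the ratios

approximation = importance_sampling_mc(optimal_lamda, 10000)
print(f"importance sampling approximation: {approximation:.4f}")
```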

Running this code gave me an approximation of 0.6983, which is much closer to the Wolfram-provided ground truth of 0.696.

Wrapping Things Up

In this tutorial, we learned how to perform Monte Carlo simulations for estimating a definite integral.

We used both the crude method and the importance sampling method, and found that importance sampling provided a significant decrease in the error.

Check back soon for Part 2 of this tutorial, where we’ll go over how to use the Metropolis algorithm in your Monte Carlo simulations! Good luck, and as always, feel free to drop questions and comments :)

References and Further Reading

[1] Variational Monte Carlo Technique by Sukanta Deb
[2] Computational Physics Examples by Adrian Feiguin
[3] Variance by Eric Weisstein (Wolfram MathWorld)
