Kalman Filters: A step-by-step implementation guide in Python

This article will simplify the Kalman Filter for you.

Hopefully it will demystify all the cryptic things you find on Wikipedia when you google Kalman filters.

Garima Nishad · Mar 8

So let’s get started! To understand the Kalman Filter, we need to start with the basics.

In Kalman Filters, the distribution is given by what’s called a Gaussian.

What is a Gaussian though?

A Gaussian is a continuous function over the space of locations, and the area underneath it sums up to 1.

The Gaussian is characterized by two parameters: the mean, often abbreviated with the Greek letter mu (μ), and the width of the Gaussian, called the variance (sigma squared, σ²).

So, our task in Kalman filters is to maintain a mu and a sigma squared as our best estimate of the location of the object we are trying to find.

The exact formula is an exponential of a quadratic function, where we take the exponential of the expression −(x − mu)² / (2 sigma squared).

Now, in the equation below, if x equals mu, the numerator of the exponent becomes 0, and the exponential of 0 is 1.

It turns out we have to normalize this by a constant: 1 over the square root of 2 pi times sigma squared.

The Gaussian equation:

f(x) = (1 / sqrt(2 π σ²)) · exp(−(x − μ)² / (2 σ²))

What is Variance?

The variance is a measure of the Gaussian’s spread (i.e. the spread of the area under the curve).

Larger variances correspond to wider, shorter Gaussians.

Variance is also a measure of uncertainty: if you are trying to find something like the location of a car with the most certainty, you’ll want a Gaussian whose mean is the location of the car and whose uncertainty/spread is as small as possible.
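As a sketch of how this looks in code (the function name `gaussian` and its argument order are my own assumption, mirroring the formula above):

```python
import math

def gaussian(mu, sigma2, x):
    """Density of a 1-D Gaussian with mean mu and variance sigma2, evaluated at x."""
    coefficient = 1.0 / math.sqrt(2.0 * math.pi * sigma2)
    exponent = -0.5 * (x - mu) ** 2 / sigma2
    return coefficient * math.exp(exponent)

# Example: a Gaussian with mean 10 and variance 4, evaluated at x = 8
print(gaussian(10.0, 4.0, 8.0))  # → approximately 0.1210
```

Evaluating at x = mu gives the peak value, 1 / sqrt(2 π σ²), exactly as described above.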

How to shift the mean?

In Kalman filters, we iterate between measurement (the measurement update) and motion (the prediction).

The measurement update uses Bayes’ rule, which here is nothing else but a product, a multiplication of Gaussians.

In the prediction, we use the theorem of total probability, which is a convolution, or simply an addition.

The implementation of the measurement cycle, followed by the prediction cycle, goes as follows. Suppose you’re localizing another vehicle and you have a prior distribution that is a very wide Gaussian.

In this example, the prior left us fairly uncertain about the location, but the measurement told us quite a bit about where the vehicle is.

The final mean gets shifted to lie between the two old means: the mean of the prior and the mean of the measurement.

It’s slightly further on the measurement side because the measurement was more certain as to where the vehicle is than the prior.

The more certain we are, the more we pull the mean in the direction of the certain answer.

How to update the parameters?

Suppose we multiply two Gaussians, as in Bayes’ rule: a prior and a measurement probability.

The prior has a mean of mu and a variance of sigma squared, and the measurement has a mean of nu and a variance of r squared.

Then the new mean, mu prime, is the weighted sum of the old means: mu is weighted by r squared, nu is weighted by sigma squared, and the result is normalized by the sum of the weighting factors:

μ′ = (r² · μ + σ² · ν) / (σ² + r²)

The new variance term, sigma squared prime, is:

σ²′ = 1 / (1/σ² + 1/r²)

Clearly, the prior Gaussian has a much higher uncertainty, so sigma squared is larger, and that means nu is weighted much more heavily than mu.

So, the mean will be closer to the nu than the mu.

Interestingly enough, the variance term is unaffected by the actual means, it just uses the previous variances.
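As a sketch, the measurement update described above can be coded directly from the two formulas (the name `update` and the [mean, variance] return convention are assumptions on my part):

```python
def update(mean1, var1, mean2, var2):
    """Multiply two 1-D Gaussians (prior and measurement); return the new [mean, variance]."""
    new_mean = (var2 * mean1 + var1 * mean2) / (var1 + var2)  # weighted sum of the old means
    new_var = 1.0 / (1.0 / var1 + 1.0 / var2)                 # depends only on the variances
    return [new_mean, new_var]

# Example: prior with mean 10, variance 8; measurement with mean 13, variance 2
print(update(10.0, 8.0, 13.0, 2.0))  # → [12.4, 1.6]
```

Note how the new mean, 12.4, lies closer to the more certain measurement, and the new variance, 1.6, is smaller than either input variance.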

How to implement the Gaussian motion?

The new mean is your old mean plus the motion, often called u.

So, if you move 10 meters in the x direction, u is 10 meters, and the new sigma squared is your old sigma squared plus the variance of the motion Gaussian.

This is all you need to know, it’s just an addition.

The resulting Gaussian in the prediction step just adds these two things up: mu plus u, and sigma squared plus the motion variance r squared.
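A sketch of this prediction step in code (again, the name `predict` and the return convention are my own):

```python
def predict(mean1, var1, mean2, var2):
    """Shift a 1-D Gaussian by a motion Gaussian: the means add and the variances add."""
    new_mean = mean1 + mean2  # mu plus u
    new_var = var1 + var2     # sigma squared plus the motion variance
    return [new_mean, new_var]

# Example: start at mean 10, variance 4; move by u = 12 with motion variance 4
print(predict(10.0, 4.0, 12.0, 4.0))  # → [22.0, 8.0]
```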

So now let’s put everything together.

Let’s write a main program that takes these two functions, update and predict, and feeds in a sequence of measurements and motions.

In the example I’ve chosen:

the measurements are 5., 6., 7., 9., and 10.
the motions are 1., 1., 2., 1., and 1.

This all would work out really well if the initial estimate was 5, but we’re setting it to 0 with a very large uncertainty of 10,000.

Let’s assume: the measurement uncertainty is a constant 4, and the motion uncertainty is a constant 2.
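A runnable sketch of the whole main program, under the assumptions just listed (the helper names `update` and `predict` and the variable names are mine):

```python
def update(mean1, var1, mean2, var2):
    """Measurement update: multiply two 1-D Gaussians; return [mean, variance]."""
    new_mean = (var2 * mean1 + var1 * mean2) / (var1 + var2)
    new_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return [new_mean, new_var]

def predict(mean1, var1, mean2, var2):
    """Prediction: means add, variances add."""
    return [mean1 + mean2, var1 + var2]

measurements = [5.0, 6.0, 7.0, 9.0, 10.0]
motions = [1.0, 1.0, 2.0, 1.0, 1.0]
measurement_sig = 4.0    # constant measurement uncertainty
motion_sig = 2.0         # constant motion uncertainty
mu, sig = 0.0, 10000.0   # initial estimate 0 with very large uncertainty

for measurement, motion in zip(measurements, motions):
    mu, sig = update(mu, sig, measurement, measurement_sig)
    print('update:  ', [mu, sig])
    mu, sig = predict(mu, sig, motion, motion_sig)
    print('predict: ', [mu, sig])
```

Running this prints the alternating update/predict estimates, ending near a mean of 10.999 and a variance of about 4.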

When you run this, your first estimate for the position should basically become 5 (4.99, to be more precise), and the reason is that your initial uncertainty is so large that the estimate is dominated by the first measurement. Your uncertainty shrinks to 3.99, which is slightly better than the measurement uncertainty of 4.

You then predict that you add 1, but the uncertainty increases to 5.99 because we added the motion uncertainty of 2. You update again based on the measurement 6, and you get an estimate of 5.99, which is almost 6. You move 1 again. You measure 7. You move 2. You measure 9. You move 1. You measure 10, and you move a final 1.

And out comes the final result: a prediction of 10.99 for the position, which is your position of 10 moved by 1, and a residual uncertainty of about 4.

Plotting a Gaussian by looping through a range of x values and creating a resulting list of Gaussian values gives the familiar bell-curve graph.

Wanna check it out in code? Click here: Kalman Filter!
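As a sketch, that plot can be produced as follows (the `gaussian` helper is redefined here so the snippet stands alone, and matplotlib is my assumed plotting library):

```python
import math
import matplotlib.pyplot as plt

def gaussian(mu, sigma2, x):
    """1-D Gaussian density with mean mu and variance sigma2."""
    return math.exp(-0.5 * (x - mu) ** 2 / sigma2) / math.sqrt(2.0 * math.pi * sigma2)

mu, sigma2 = 10.0, 4.0
x_values = [mu - 5.0 + 0.1 * i for i in range(101)]      # range of x values around the mean
g_values = [gaussian(mu, sigma2, x) for x in x_values]   # resulting list of Gaussian values

plt.plot(x_values, g_values)
plt.xlabel('x')
plt.ylabel('probability density')
plt.show()
```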