
# Particle Filter/Monte Carlo Approximation

Particle filters estimate a robot's position by simulating many hypotheses of its state. They approximate a Bayesian posterior through sequential importance sampling followed by resampling. Like a histogram filter, a particle filter approximates the posterior using a finite number of parameters. A well-designed particle filter can be faster than a grid-based Markov filter, which makes it applicable to a much broader range of platforms, including mobile platforms with low computing power. The particle filter is similar to a Kalman filter, except that with a large number of samples its estimate approaches the optimal estimate, which can make it more accurate than a typical Kalman filter. The key idea of a particle filter is to represent the posterior by a randomly generated set of state samples. Because a particle filter represents a distribution with a set of samples, the representation is approximate yet nonparametric. Being nonparametric allows the particle filter to represent a much broader space of distributions than other similar filters. A particle filter is also capable of representing a nonlinear transformation of random variables, while most other filters cannot.[1]
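As a minimal illustration of the nonparametric idea (the distribution and numbers here are hypothetical, not from the text): instead of storing the parameters of a distribution, a sample-based filter stores random draws from it, and any statistic of the distribution is recovered from the samples themselves.

```python
import random

random.seed(0)

# Represent a distribution nonparametrically: store M random samples
# from a Gaussian posterior instead of its mean/variance parameters.
M = 100_000
samples = [random.gauss(5.0, 2.0) for _ in range(M)]

# Any statistic of the posterior is approximated from the samples;
# the true mean is 5.0 and the true variance is 4.0.
mean_est = sum(samples) / M
var_est = sum((s - mean_est) ** 2 for s in samples) / M

print(round(mean_est, 2), round(var_est, 2))
```

With enough samples the estimates converge on the true values, which is why a large sample count lets the filter approach an optimal estimate.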

## How It Works:

The particle filter estimates a sequence of states based on measured data. Because particle filters are Bayesian, they follow suit and maintain a posterior probability distribution. The model starts by assuming a set of candidate states (particles) and then comparing them to a set of measured values. Each particle is then multiplied by a weighting factor that depends on how close it is to the measured value. In other words, a particle that is exactly at the measured value will have a large weight and become a large number, whereas a particle far from the measured value will have a very low weight, resulting in a low overall value for that particle.

These newly weighted particles are then resampled using a genetic, survival-of-the-fittest, or roulette-wheel technique, which gradually weeds out the low-value particles and increases the number of high-value particles. This process repeats over and over as the robot moves, producing an approximation of where the robot is on the map that gets more accurate over time. It works because, as time moves forward, the pre-weighting set of particles becomes more and more accurate and the weights become more evenly distributed, which over time eliminates error in position.

One problem with the particle filter is that once the robot stops, if the model continues to run it will eventually throw out all particles except the exact spot where the robot is. This might seem good, but once the robot begins moving again, the filter essentially starts over with a nearly degenerate set of particles, which causes a large error percentage again until it has run long enough to eliminate it.

Below is a copy of code given by Sebastian Thrun in his book “Probabilistic Robotics”. The code is followed by a copied explanation of the code in detail.
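The algorithm in question is the basic particle filter from *Probabilistic Robotics* (Table 4.3). A sketch of it is reproduced here from that listing, with line numbers matching the explanation in the next section; consult the book for the exact original.

```
1:  Algorithm Particle_filter(X_{t-1}, u_t, z_t):
2:      Xbar_t = X_t = ∅
3:      for m = 1 to M do
4:          sample x_t^[m] ~ p(x_t | u_t, x_{t-1}^[m])
5:          w_t^[m] = p(z_t | x_t^[m])
6:          Xbar_t = Xbar_t + ⟨x_t^[m], w_t^[m]⟩
7:      endfor
8:      for m = 1 to M do
9:          draw i with probability ∝ w_t^[i]
10:         add x_t^[i] to X_t
11:     endfor
12:     return X_t
```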

## Thrun Code Explanation:

1. Line 4 generates a hypothetical state ${x_{t}}^{[m]}$ for time t based on the particle ${x_{t-1}}^{[m]}$ and the control $u_{t}$. The resulting sample is indexed by m, indicating that it is generated from the m-th particle in $X_{t-1}$. This step involves sampling from the state transition distribution $p(x_{t} \mid u_{t},x_{t-1})$. To implement this step, one needs to be able to sample from this distribution. The set of particles obtained after M iterations is the filter’s representation of $\bar{bel}(x_{t})$.[2]
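As a sketch of this sampling step, here is a hypothetical 1-D motion model (not from the text) in which the control $u_{t}$ is a commanded displacement corrupted by Gaussian noise:

```python
import random

random.seed(1)

def sample_motion_model(u_t, x_prev, noise_std=0.5):
    """Sample x_t ~ p(x_t | u_t, x_{t-1}) for a 1-D robot whose
    commanded displacement u_t is corrupted by Gaussian noise."""
    return x_prev + u_t + random.gauss(0.0, noise_std)

# Propagate every particle through the motion model (line 4 of the
# algorithm); the resulting set represents the prediction bel-bar(x_t).
particles = [0.0] * 1000
u_t = 2.0
particles = [sample_motion_model(u_t, x) for x in particles]

mean_after = sum(particles) / len(particles)
print(round(mean_after, 2))  # near the commanded displacement of 2.0
```

The particles spread out according to the motion noise, which is exactly how the filter carries forward its uncertainty about the state.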

2. Line 5 calculates for each particle ${x_{t}}^{[m]}$ the so-called importance factor, denoted ${w_{t}}^{[m]}$. Importance factors are used to incorporate the measurement $z_{t}$ into the particle set. The importance factor, thus, is the probability of the measurement $z_{t}$ under the particle ${x_{t}}^{[m]}$, given by ${w_{t}}^{[m]}=p(z_{t} \mid {x_{t}}^{[m]})$. If we interpret ${w_{t}}^{[m]}$ as the weight of a particle, the set of weighted particles represents (in approximation) the Bayes filter posterior $bel(x_{t})$.[2]
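A sketch of this weighting step, assuming a hypothetical 1-D range sensor with Gaussian noise (the measurement model is an assumption for illustration, not the text's):

```python
import math

def importance_factor(z_t, x_t, meas_std=1.0):
    """w_t = p(z_t | x_t) under a Gaussian measurement model: the
    sensor reads the particle's position plus N(0, meas_std^2) noise."""
    diff = z_t - x_t
    return math.exp(-0.5 * (diff / meas_std) ** 2) / (meas_std * math.sqrt(2 * math.pi))

# A particle at the measured value gets a large weight; a distant
# particle gets a weight near zero.
z_t = 3.0
w_close = importance_factor(z_t, 3.0)
w_far = importance_factor(z_t, 8.0)
print(w_close > w_far)  # True
```

This is the "weighting factor" described earlier: agreement with the measurement translates directly into a high weight.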

3. The real “trick” of the particle filter algorithm occurs in lines 8 through 11. These lines implement what is known as resampling or importance sampling. The algorithm draws with replacement M particles from the temporary set $\bar{X}_{t}$. The probability of drawing each particle is given by its importance weight. Resampling transforms a particle set of M particles into another particle set of the same size. By incorporating the importance weights into the resampling process, the distribution of the particles changes: whereas before the resampling step they were distributed according to $\bar{bel}(x_{t})$, after the resampling they are distributed (approximately) according to the posterior $bel(x_{t})=\eta p(z_{t} \mid {x_{t}}^{[m]})\bar{bel}(x_{t})$. In fact, the resulting sample set usually possesses many duplicates, since particles are drawn with replacement. More important are the particles not contained in $X_{t}$: those tend to be the particles with lower importance weights.[2]
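The resampling step in lines 8 through 11 can be sketched as a weighted draw with replacement, which is the roulette-wheel technique mentioned above (`random.choices` performs the weighted draw; the particle values and weights are made up for illustration):

```python
import random
from collections import Counter

random.seed(2)

def resample(particles, weights):
    """Draw len(particles) new particles with replacement, each chosen
    with probability proportional to its importance weight (lines 8-11)."""
    return random.choices(particles, weights=weights, k=len(particles))

# 1000 particles concentrated on four states, one of which carries most
# of the weight: after resampling, that state dominates the new set and
# the lowest-weight state nearly vanishes.
particles = [0, 1, 2, 3] * 250
weights = [0.70, 0.20, 0.09, 0.01] * 250
counts = Counter(resample(particles, weights))
print(counts[0] > counts[3])  # True
```

Because the draw is with replacement, high-weight particles appear many times in the new set, which is the source of the duplicates the explanation mentions.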

From page 99 of the text.

## Our Sample Code Explanation:

Please see the downloadable files and read through the comments. The code is thoroughly commented with explanations of what each part of the code is doing or trying to do.

## Some Discussion on the Code

The code in its current state is known to be buggy and does not work properly every time. This has to do with the random function and how the points are selected for weighting. Sometimes you will get unlucky, and a particle that is in the right location but has an incorrect heading will be given a large enough weight that the filter begins to follow this incorrect data and moves off in the wrong direction. This error, of course, grows as time goes on. Some work has been done tuning the weighting scales; however, the system is still hit-or-miss. That said, the code is well commented and available for download in the following section. We encourage anyone who is interested to download the files and experiment with them. Should you decide to use this code for anything, please make note of its original authors: Dr. Stephen Canfield, Ashford Smith, Bradley Thompson, and Timothy Smith (in order of contribution efforts). Please enjoy.