CS 4610/5335 Particle Filter
Robert Platt Northeastern University
Material adapted from:
1. Lawson Wong, CS 4610/5335
2. Peter Corke, Robotics, Vision and Control
3. Sebastian Thrun, Wolfram Burgard, & Dieter Fox,
Probabilistic Robotics
Robot Localization
In robot localization:
We know the map, but not the robot’s position
Observations may be vectors of range-finder readings
The state space and readings are typically continuous (in effect, a very fine grid), so we cannot store B(X) explicitly
Particle filtering is the main technique
Particle Filtering
Particle Filter Localization
(Monte-Carlo Localization)
1-D example
[Figure: a 1-D grid world with four cells, labeled 1–4, annotated with the values 0.8, 0.2, 0, 0.]
Consider a 1-D grid world
The agent can only move right
Previously, we represented the belief as a vector of numbers, one per state
Now, represent it as a set of particles
B0: {x=1, x=1, x=1, x=2, x=1}
Each particle is a possible world
To find B1, simulate particles forward
B0→1: { sample from P(x’ | x=1), sample from P(x’ | x=1), sample from P(x’ | x=1), sample from P(x’ | x=2), sample from P(x’ | x=1) }
e.g. B0→1: { x’=2, x’=1, x’=2, x’=3, x’=1 }
What distribution does this implicitly represent? (The predictive belief at the next time step.)
Incorporate the observation: weight each particle by P(y | x)
w0→1: {0.9, 0.1, 0.9, 0.1, 0.1} (numbers are P(y = wall | x))
Sample B1 from the weighted distribution
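Putting the whole example together, here is a minimal Python sketch of one filter step on this grid world. The motion model (move right with probability 0.8, stay put otherwise) is an assumption read off the figure; the observation model follows the weights on the slide (P(y = wall | x=2) = 0.9, and 0.1 for x=1 and x=3; the value for x=4 is assumed).

```python
import random

# Initial belief as particles (from the slide): five samples, mostly at x=1
particles = [1, 1, 1, 2, 1]

def sample_motion(x, p_move=0.8):
    """Simulate one particle forward: the agent tries to move right.
    ASSUMPTION: the move succeeds with prob 0.8 and the agent stays
    put otherwise (cell 4 is the last cell, so motion saturates there)."""
    if random.random() < p_move:
        return min(x + 1, 4)
    return x

def obs_likelihood(y_wall, x):
    """P(y | x). From the slide's weights: P(y=wall | x=2) = 0.9 and
    0.1 for the other cells (the value for x=4 is an assumption)."""
    p_wall = 0.9 if x == 2 else 0.1
    return p_wall if y_wall else 1.0 - p_wall

# 1. Simulate each particle forward: B0 -> B0->1
particles = [sample_motion(x) for x in particles]

# 2. Weight each particle by the observation model (y = wall observed)
weights = [obs_likelihood(True, x) for x in particles]

# 3. Resample with replacement from the weighted set to get B1
particles = random.choices(particles, weights=weights, k=len(particles))
print(particles)
```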
Another example
Draw particles from the belief at time t:
Particles: (3,3) (2,3) (3,3) (3,2) (3,3) (3,2) (1,2) (3,3) (3,3) (2,3)
Elapse (Simulate): simulation gives the predictive belief (particles) at time t+1:
Particles: (3,2) (2,3) (3,2) (3,1) (3,3) (3,2) (1,3) (2,3) (3,2) (2,2)
Weight: weighting by the observation likelihood gives the posterior belief (particles) at time t+1:
Particles: (3,2) w=.9, (2,3) w=.2, (3,2) w=.9, (3,1) w=.4, (3,3) w=.4, (3,2) w=.9, (1,3) w=.1, (2,3) w=.2, (3,2) w=.9, (2,2) w=.4
Resample: resampling converts the weighted belief into an unweighted belief at time t+1:
(New) Particles: (3,2) (2,2) (3,2) (2,3) (3,3) (3,2) (1,3) (2,3) (3,2) (3,2)
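The Weight and Resample steps can be run directly on the numbers above; a minimal sketch (the (New) Particles set shown is one of its possible draws):

```python
import random

# Predictive particles and observation weights from the example above
particles = [(3,2), (2,3), (3,2), (3,1), (3,3), (3,2), (1,3), (2,3), (3,2), (2,2)]
weights   = [ 0.9,   0.2,   0.9,   0.4,   0.4,   0.9,   0.1,   0.2,   0.9,   0.4]

# Resample with replacement: high-weight particles tend to be duplicated,
# low-weight particles tend to disappear
new_particles = random.choices(particles, weights=weights, k=len(particles))
```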
Mobile robot localization (Hallway example)
From “Probabilistic Robotics” (Ch. 7–8) by Sebastian Thrun, Wolfram Burgard, & Dieter Fox
Particle filter (sequential Monte-Carlo); approximate HMM inference
Particle filtering
More generally, at every time step:
0. Start with the particle set from the previous step (or the initial belief)
1. Simulate each particle forward: sample from P(x’ | x)
2. Weight each particle by the observation model: compute P(y | x)
3. Now we have a weighted set of particles
– Sample a new unweighted set of particles
(sampling with replacement from the weighted set)
Is this correct? Yes; it reflects the exact filtering equations:
Predict: B’(x’) = Σx P(x’ | x) B(x)
Update: B(x’) ∝ P(y | x’) B’(x’)
– With infinitely many samples, the particle filter converges to the true distribution
What can go wrong? We do not have infinite samples…
– Occasionally we may have to reset by resampling all particles
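A minimal sketch of one generic filter step, assuming the caller supplies `sample_motion` (draws from P(x’ | x)) and `obs_likelihood` (evaluates P(y | x)); both names are hypothetical placeholders:

```python
import random

def particle_filter_step(particles, y, sample_motion, obs_likelihood):
    """One particle-filter step: elapse, weight, resample.

    particles      -- list of states representing the belief at time t
    y              -- observation received at time t+1
    sample_motion  -- function x -> x' sampled from P(x' | x)
    obs_likelihood -- function (y, x') -> P(y | x')
    """
    # 1. Simulate each particle forward (predictive belief at t+1)
    predicted = [sample_motion(x) for x in particles]
    # 2. Weight each particle by the observation model
    weights = [obs_likelihood(y, x) for x in predicted]
    # With finitely many samples the filter can collapse: if all weights
    # are zero, reset by resampling all particles from scratch.
    if sum(weights) == 0:
        raise RuntimeError("all weights zero -- reset the filter")
    # 3. Resample with replacement to get an unweighted particle set
    return random.choices(predicted, weights=weights, k=len(predicted))
```

In practice, resampling at every step discards particle diversity; a common refinement (not on these slides) is to resample only when the effective sample size 1/Σwᵢ² of the normalized weights falls below a threshold.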
Particle Filter Localization
(Monte-Carlo Localization)
Also see: https://www.youtube.com/watch?v=DZT-zNj61Jg