4. Bayesian Tracking
We derive the first recursive state estimation algorithm in this course for a system with a finite state space. The algorithm has two main steps: 1) the prior update, where the state estimate is predicted forward using the process model; and 2) the measurement update, where the prior is combined with observations/measurements using Bayes' theorem. These two steps are at the heart of many optimal, recursive estimation methods, and they will recur in the methods that we develop for other classes of problems throughout this course.
4.1 Problem Statement
Let x(k) ∈ X be the vector-valued state at time k (k = 0, 1, …) that we want to estimate. We assume it is a discrete random variable that can take on only a finite number of values; that is, X is finite.
Let z(k), k = 1, 2, …, be a vector-valued measurement that we can observe. It can be a continuous or a discrete random variable.
Model
x(k) = qk−1(x(k−1), v(k−1)), k = 1, 2, …
z(k) = hk(x(k), w(k)), k = 1, 2, …
where x(0), {v(·)} and {w(·)} are independent with known PDFs. Note that even though a known input is not explicitly included in the model, it can be implicitly modeled by absorbing it into qk−1 and hk.
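As a concrete illustration, here is a minimal Python sketch of this model structure; the names q, h, and the known input sequence u are hypothetical, chosen only to show how a known input can be absorbed into qk−1.

```python
import numpy as np

rng = np.random.default_rng(0)

u = [1, -1, 1]  # hypothetical known input sequence

def q(k, x, v, N=10):
    # Process model x(k) = q_{k-1}(x(k-1), v(k-1)) on X = {0, ..., N-1};
    # the known input u[k-1] is absorbed into the function, as noted above.
    return (x + u[k - 1] + v) % N

def h(k, x, w):
    # Measurement model z(k) = h_k(x(k), w(k)).
    return x + w

# One simulated time step.
x0 = 3
v0 = int(rng.choice([-1, 1]))          # example process-noise sample
x1 = q(1, x0, v0)
z1 = h(1, x1, rng.uniform(-0.5, 0.5))  # example sensor-noise sample
```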
Objective
Let z(1:k) denote the set {z(1), …, z(k)}. We want to calculate f(x(k)|z(1:k)) efficiently. Note that we want to estimate the full conditional probability density function of the state.
Recall that f(x(k)|z(1:k)) is shorthand for fx(k)|z(1:k)(x̄(k)|z̄(1:k)), where:
• x(k) is a random variable,
• x̄(k) is a specific value that x(k) could take,
• z(1:k) is a random variable, and
• z̄(1:k) is the observed value of z(1:k).
4.2 Recursive Estimator Equations
We develop a recursive algorithm, where we compute f (x(k)|z(1:k)) from f (x(k−1)|z(1:k−1)), the observation z(k), and the system model. This way, we do not need to store the entire history of measurements z(1:k).
Assume f(x(k−1)|z(1:k−1)) is known. For k = 1, this is simply f(x(0)|z(1:0)) = f(x(0)), which is known. The recursion consists of the prior update and the measurement update.
Prior update
First step: predict the PDF of x(k) based on past measurements z(1:k−1).
• By the total probability theorem,

f(x(k)|z(1:k−1)) = Σ_{x(k−1)∈X} f(x(k)|z(1:k−1), x(k−1)) f(x(k−1)|z(1:k−1)),

where the second factor, f(x(k−1)|z(1:k−1)), is assumed to be known.
• Note that x(k) and z(1:k−1) are conditionally independent, given x(k−1):
– Given x(k−1), x(k) = qk−1(x(k−1), v(k−1)) is a function of v(k−1) only.
– z(k−1) = hk−1(x(k−1), w(k−1)),
z(k−2) = hk−2(x(k−2), w(k−2)) with x(k−2) = qk−3(x(k−3), v(k−3)), etc., and therefore

z(1:k−1) = FUNCTION(x(k−1), v(0:k−3), w(1:k−1), x(0)),

which is independent of v(k−1).
– Therefore, x(k) and z(1:k−1) are conditionally independent given x(k−1), and

f(x(k)|z(1:k−1), x(k−1)) = f(x(k)|x(k−1)).
The prior update is therefore given by:

f(x(k)|z(1:k−1)) = Σ_{x(k−1)∈X} f(x(k)|x(k−1)) f(x(k−1)|z(1:k−1)),

where f(x(k)|x(k−1)) can be calculated from f(v(k−1)) and qk−1(·,·) using change of variables (although it may not be straightforward to do so).
This is an intuitive result: we use the process model to push our estimate forward in time. Note that conditional independence is crucial.
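To make the prior update concrete, here is a minimal Python sketch for a finite state space, assuming the transition probabilities f(x(k)|x(k−1)) are stored in a hypothetical N×N array trans with trans[i, j] = Prob(x(k) = i | x(k−1) = j):

```python
import numpy as np

N = 4  # small state space for illustration

# Hypothetical transition array: trans[i, j] = f(x(k)=i | x(k-1)=j).
# Each column sums to 1; here the state moves up with prob. 0.7
# and stays put with prob. 0.3.
trans = np.zeros((N, N))
for j in range(N):
    trans[(j + 1) % N, j] = 0.7
    trans[j, j] = 0.3

# Previous posterior f(x(k-1) | z(1:k-1)), here uniform.
posterior_prev = np.full(N, 1.0 / N)

# Prior update: for each i,
#   f(x(k)=i | z(1:k-1)) = sum_j trans[i, j] * posterior_prev[j],
# i.e. a matrix-vector product.
prior = trans @ posterior_prev
```

Viewed this way, the prior update is a matrix–vector product with the transition matrix, which is how it is typically implemented for finite state spaces.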
Measurement update
Second step: combine the prior with the new observation.
• By Bayes' rule,

f(x(k)|z(1:k)) = f(x(k)|z(k), z(1:k−1))
= f(z(k)|x(k), z(1:k−1)) f(x(k)|z(1:k−1)) / f(z(k)|z(1:k−1)).
• Note that z(k) and z(1:k−1) are conditionally independent, given x(k):
– Given x(k), z(k) = hk(x(k), w(k)) is a function of w(k) only.
– z(1:k−1) = FUNCTION(v(0:k−2), w(1:k−1), x(0)), which is independent of w(k).
– Therefore, z(k) and z(1:k−1) are conditionally independent given x(k):

f(z(k)|x(k), z(1:k−1)) = f(z(k)|x(k)),

which can be calculated from f(w(k)) and hk(·,·).
• Finally, f(z(k)|z(1:k−1)) is simply a normalization constant that can be calculated using the total probability theorem:

f(z(k)|z(1:k−1)) = Σ_{x(k)∈X} f(z(k)|x(k)) f(x(k)|z(1:k−1)).
Using the above, the measurement update reads:

f(x(k)|z(1:k)) = f(z(k)|x(k)) f(x(k)|z(1:k−1)) / Σ_{x̄(k)∈X} f(z(k)|x̄(k)) f(x̄(k)|z(1:k−1)).

Summary

Initialization:

f(x(0)|z(1:0)) = f(x(0)).

Prior update (state prediction), k = 1, 2, …:

f(x(k)|z(1:k−1)) = Σ_{x(k−1)∈X} f(x(k)|x(k−1)) f(x(k−1)|z(1:k−1)),

where f(x(k)|x(k−1)) comes from the process model and f(x(k−1)|z(1:k−1)) from the previous iteration.

Measurement update:

f(x(k)|z(1:k)) = f(z(k)|x(k)) f(x(k)|z(1:k−1)) / Σ_{x̄(k)∈X} f(z(k)|x̄(k)) f(x̄(k)|z(1:k−1)),

where f(z(k)|x(k)) is the measurement model and f(x(k)|z(1:k−1)) is the prior.

4.3 Computer Implementation
• Enumerate the state: X = {0, 1, …, N−1}.
• Define am(i,k) := Prob(x(k) = i | z(1:k)), i = 0, …, N−1, an array with N elements that we use to store the posterior f(x(k)|z(1:k)). The subscript "m" indicates that we are using the latest measurement.
• Define ap(i,k) := Prob(x(k) = i | z(1:k−1)), i = 0, …, N−1, to store the prior f(x(k)|z(1:k−1)). The subscript "p" indicates that we are predicting, i.e. using the prior before the latest measurement.
• Algorithm:

Initialization, k = 0:

am(i,0) = Prob(x(0) = i), i = 0, …, N−1.

Recursion, k > 0:

ap(i,k) = Σ_{j=0}^{N−1} Prob(x(k) = i | x(k−1) = j) am(j,k−1), i = 0, …, N−1,

am(i,k) = f(z(k)|x(k) = i) ap(i,k) / Σ_{j=0}^{N−1} f(z(k)|x(k) = j) ap(j,k), i = 0, …, N−1.
Note that Prob(x(k) = i | x(k−1) = j) can be calculated from x(k) = qk−1(x(k−1), v(k−1)) and f(v(k−1)). Similarly, f(z(k)|x(k) = i) can be calculated from z(k) = hk(x(k), w(k)) and f(w(k)).
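The recursion above maps directly to a few lines of code. Below is a minimal Python sketch of the general algorithm; the function name bayes_track and the arguments trans (the N×N array of transition probabilities) and likelihood (a function returning f(z(k)|x(k) = i) for all i) are hypothetical names for the quantities defined above.

```python
import numpy as np

def bayes_track(am, trans, likelihood, z):
    """One step of the Bayesian tracking recursion on a finite state space.

    am         -- array of N posteriors a_m(i,k-1) = Prob(x(k-1)=i | z(1:k-1))
    trans      -- N x N array, trans[i, j] = Prob(x(k)=i | x(k-1)=j)
    likelihood -- function mapping a measurement z to the array
                  [f(z | x(k)=0), ..., f(z | x(k)=N-1)]
    z          -- the new measurement z(k)

    Returns the updated posterior a_m(:, k).
    """
    # Prior update: a_p(i,k) = sum_j trans[i,j] a_m(j,k-1).
    ap = trans @ am
    # Measurement update: multiply by the likelihood and normalize.
    unnormalized = likelihood(z) * ap
    return unnormalized / np.sum(unnormalized)
```

Note that np.sum(unnormalized) plays the role of the normalization constant f(z(k)|z(1:k−1)); if it evaluates to zero, the division fails, which is exactly the failure mode explored in simulation 10 below.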
4.4 Example
Consider an object moving randomly on a circle in discrete steps. We can measure the distance to the object and want to estimate its location.
• The object can move in discrete steps. Define x(k) as the object's location on the circle, x(k) ∈ {0, 1, …, N−1}, with

θ(k) = 2πx(k)/N.
• The dynamics are given by

x(k) = mod(x(k−1) + v(k−1), N),

where mod(·, N) denotes the modulo-N operation; that is, mod(N, N) = 0 and mod(−1, N) = N − 1. The process noise is given by

v(k−1) = +1 with probability p,
         −1 with probability 1 − p.

• The model of the distance measurement is

z(k) = √((L − cos θ(k))² + sin² θ(k)) + w(k),

where w(k), uniformly distributed on [−e, e], is the sensor noise.
• We can now construct the PDFs of the process and sensor models using the change of variables:

f(x(k)|x(k−1)) = p      if x(k) = mod(x(k−1) + 1, N),
                 1 − p  if x(k) = mod(x(k−1) − 1, N),
                 0      otherwise.

f(z(k)|x(k)) = 1/(2e)  if z(k) − √((L − cos θ(k))² + sin² θ(k)) ∈ [−e, e],
               0       otherwise.
• Initialization: f(x(0)) = 1/N for all x(0) ∈ {0, 1, …, N−1}. This captures the state of maximum ignorance.
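A minimal Python sketch of these model PDFs is given below; the names make_trans and make_likelihood are illustrative only, and the arrays they build are exactly the quantities needed by the hypothetical bayes_track helper from Section 4.3.

```python
import numpy as np

def make_trans(N, p):
    # f(x(k)=i | x(k-1)=j): step +1 with prob. p, step -1 with prob. 1-p.
    trans = np.zeros((N, N))
    for j in range(N):
        trans[(j + 1) % N, j] = p
        trans[(j - 1) % N, j] = 1 - p
    return trans

def make_likelihood(N, L, e):
    # Precompute the noise-free distance for every state on the circle.
    theta = 2 * np.pi * np.arange(N) / N
    dist = np.sqrt((L - np.cos(theta)) ** 2 + np.sin(theta) ** 2)
    def likelihood(z):
        # f(z(k) | x(k)=i): uniform sensor noise of width 2e about dist[i].
        return np.where(np.abs(z - dist) <= e, 1.0 / (2 * e), 0.0)
    return likelihood
```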
Simulations
We choose N = 100, i.e. the object moves in steps of 3.6°. Initial location: x(0) = N/4 = 25, i.e. it starts at 90°. For the sensor noise, we use e = 0.50.
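Putting the pieces together, one possible simulation loop reads as follows; this is a sketch reusing the bayes_track, make_trans, and make_likelihood sketches above, with the nominal parameters L = 2.0, p = 0.55 used in the robustness tests.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, p, e = 100, 2.0, 0.55, 0.50

trans = make_trans(N, p)
likelihood = make_likelihood(N, L, e)

x = N // 4                # true initial location, i.e. 90 degrees
am = np.full(N, 1.0 / N)  # estimator initialization: maximum ignorance

for k in range(1, 201):
    # Simulate the true system for one step.
    v = 1 if rng.random() < p else -1
    x = (x + v) % N
    theta = 2 * np.pi * x / N
    z = (np.sqrt((L - np.cos(theta)) ** 2 + np.sin(theta) ** 2)
         + rng.uniform(-e, e))
    # Run one step of the estimator.
    am = bayes_track(am, trans, likelihood, z)
```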
1. L = 2.0, p = 0.50. Bimodal distribution. From the distance measurement, the algorithm cannot distinguish the upper versus the lower half plane.
2. L = 2.0, p = 0.55. One mode decays. On average, the object is more likely to move counterclockwise; thus, the mode that corresponds to clockwise movement decays.
3. L = 0.1, p = 0.55. Works! (Sensor inside the circle.)
4. L = 0.0, p = 0.55. Uniform distribution for all time.
5. L = 2.0, p = 0.9. One mode very rapidly decays. Does it completely disappear?
Test the robustness of this: the true system keeps the nominal parameters L = 2.0, e = 0.50, p = 0.55, but the estimator uses different values p̂ and ê.
6. p̂ = 0.45, ê = e. Of course, the estimator gets it wrong: it assumes that clockwise movement is more likely, but the opposite is actually true.
7. p̂ = 0.50, ê = e. The estimator cannot differentiate between the two modes.
8. p̂ = 0.90, ê = e. Makes an incorrect assumption (wrong process model).
9. p̂ = p, ê = 0.90. Washes things out (assumes larger uncertainty in the measurement).
10. p̂ = p, ê = 0.48. Crashes (division by zero)! What went wrong?
In practice, we never know the model exactly: how can we guard against crashes like the one in simulation 10?
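One common safeguard is sketched below (one possible choice, not the only one): if the normalization constant f(z(k)|z(1:k−1)) evaluates to zero, the measurement is impossible under the assumed model, and instead of dividing by zero the estimator can, for example, skip the measurement update and keep the prior.

```python
import numpy as np

def safe_measurement_update(ap, lik):
    # ap  = prior a_p(:, k); lik = array of f(z(k) | x(k)=i).
    unnormalized = lik * ap
    norm = np.sum(unnormalized)
    if norm == 0.0:
        # The measurement is inconsistent with the assumed model
        # (e.g. e-hat chosen smaller than the true e, as in simulation 10).
        # Fall back to the prior instead of dividing by zero.
        return ap
    return unnormalized / norm
```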
(Try to reproduce these simulation results yourself.)