
13. Estimation wrap-up
Short review of estimation techniques covered in this course.
Model
x(k) = q_{k−1}(x(k−1), v(k−1)), k = 1, 2, …
z(k) = h_k(x(k), w(k)), k = 1, 2, …
• x(k) ∈ X: the vector-valued state at time k (k = 0, 1, …), to be estimated; can be a CRV or a DRV.
• z(k), k = 1, 2, …: the vector-valued measurement; can be a CRV or a DRV.
• x(0), {v(·)} and {w(·)} are independent with known PDFs.
Key assumptions:
• known model
• known noise distributions
• noise is independent (‘nature is indifferent’∗).
∗Contrast this to ‘worst case’ estimators, where we assume that the noise is being ‘chosen’ by an opponent to be the worst possible noise for estimation. This is related to H∞ control, and beyond the scope of this lecture. See Part III of Simon’s book.
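To make the model class concrete, here is a minimal simulation sketch of one hypothetical instance; the particular q, h, and noise distributions are illustrative assumptions, not from the course:

```python
import numpy as np

# Simulate one hypothetical instance of the model class
# x(k) = q_{k-1}(x(k-1), v(k-1)),  z(k) = h_k(x(k), w(k)).
# The particular q, h, and noise PDFs below are illustrative only.

rng = np.random.default_rng(0)

q = lambda x, v: 0.9 * x + v        # process model (here: time-invariant)
h = lambda x, w: x**2 + w           # measurement model (here: nonlinear)

x = rng.normal(0.0, 1.0)            # x(0), drawn from its known PDF
for k in range(1, 6):
    v = rng.normal(0.0, 0.1)        # v(k-1): known PDF, independent
    w = rng.normal(0.0, 0.5)        # w(k):   known PDF, independent
    x = q(x, v)                     # state evolves (hidden)
    z = h(x, w)                     # the estimator only observes z(k)
    print(k, z)
```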

Optimal estimator
• compute x̂: estimate of state x (can be ML, MAP, MMSE, …)
• use prior knowledge (of x(0))
• use the measurement sequence z(k)
Using Bayes’ rule, we can write down the general solution, but it is generally intractable to solve. We thus developed different strategies to approximate it, and looked at special cases where we could solve it exactly.
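For concreteness, the general solution is the recursive Bayes filter. Stated here in standard form for a continuous state (with f denoting a PDF and z(1:k) the measurements z(1), …, z(k)), each step is a prediction followed by a measurement update:

f(x(k) | z(1:k−1)) = ∫ f(x(k) | x(k−1)) f(x(k−1) | z(1:k−1)) dx(k−1)   (prediction)
f(x(k) | z(1:k)) = f(z(k) | x(k)) f(x(k) | z(1:k−1)) / f(z(k) | z(1:k−1))   (update)

Here f(x(k) | x(k−1)) and f(z(k) | x(k)) are induced by q_{k−1}, h_k, and the known noise PDFs. The integral and the normalization constant generally have no closed form, which is what makes the problem intractable in general.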
Exact solutions to the Bayesian estimation problem
We derived two algorithms that exactly compute the PDF of the state estimate, conditioned on all of the measurements.
Bayesian tracking: if the state evolves over a finite, discrete state space X, the PDFs are finite-dimensional (that is, each is a finite set of numbers). This means that we can write down an algorithm that computes the full PDF in finite memory and computation time.
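A minimal sketch of such a Bayesian tracking algorithm, for a hypothetical target on N discrete cells; the motion and sensor models are illustrative assumptions:

```python
import numpy as np

# Bayesian tracking on a finite state space: the belief (the full
# conditional PDF) is just a vector of N numbers, updated exactly.

N = 10                                  # number of discrete states
prior = np.full(N, 1.0 / N)             # uniform prior over x(0)

# Transition matrix: P[i, j] = Prob(x(k) = i | x(k-1) = j).
# Here: stay put or step right (on a ring), each with probability 1/2.
P = 0.5 * np.eye(N) + 0.5 * np.roll(np.eye(N), 1, axis=0)

def meas_likelihood(z, n, p_correct=0.8):
    """Prob(z(k) = z | x(k) = i) for each i: a noisy position sensor."""
    lik = np.full(n, (1.0 - p_correct) / (n - 1))
    lik[z] = p_correct
    return lik

def bayes_track(belief, z):
    """One prediction + measurement-update step; returns the full PDF."""
    predicted = P @ belief                      # prior update
    posterior = meas_likelihood(z, len(belief)) * predicted
    return posterior / posterior.sum()          # normalize

belief = prior
for z in [3, 4, 4, 5]:                          # made-up measurements
    belief = bayes_track(belief, z)
print(belief)                                    # exact conditional PDF
```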
The Kalman filter: for linear systems, where the initial state, the process noise, and the measurement noise have Gaussian distributions, we showed that the state estimate is also a Gaussian, and that the Kalman filter exactly tracks its distribution.
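A minimal scalar sketch of the Kalman filter for this linear-Gaussian case; the system parameters, prior, and measurements are illustrative assumptions:

```python
# Minimal scalar Kalman filter sketch for the linear-Gaussian model
# x(k) = A x(k-1) + v(k-1),  z(k) = H x(k) + w(k).
# A, H, Q, R, the prior, and the measurements are illustrative only.

A, H = 1.0, 1.0                     # linear process and measurement
Q, R = 0.1, 0.5                     # variances of v(k) and w(k)
x_hat, P = 0.0, 1.0                 # mean and variance of x(0)

def kf_step(x_hat, P, z):
    # Prediction (prior update)
    x_p = A * x_hat
    P_p = A * P * A + Q
    # Kalman gain: the linear update minimizing the posterior variance
    K = P_p * H / (H * P_p * H + R)
    # Measurement update (a posteriori estimate)
    x_m = x_p + K * (z - H * x_p)
    P_m = (1.0 - K * H) * P_p
    return x_m, P_m

for z in [0.9, 1.1, 1.05]:          # made-up measurements
    x_hat, P = kf_step(x_hat, P, z)
print(x_hat, P)                     # exact conditional mean and variance
```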
We also saw strategies for extracting point estimates (such as ML, MAP, or MMSE estimates) from the estimator’s PDFs.
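For a discrete (or discretized) posterior PDF, extracting the MAP and MMSE point estimates is a one-liner each; the states and probabilities here are made up for illustration:

```python
import numpy as np

# Extracting point estimates from a discrete posterior PDF.

states = np.array([0.0, 1.0, 2.0, 3.0])
pdf = np.array([0.1, 0.5, 0.3, 0.1])      # posterior f(x | z(1:k))

x_map = states[np.argmax(pdf)]            # MAP: most probable state
x_mmse = np.sum(states * pdf)             # MMSE: conditional mean
print(x_map, x_mmse)
```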
Approximate solutions to the Bayesian estimation problem
We derived several algorithms that approximate the solution of the Bayesian estimation problem in different ways.
Kalman filtering: we looked at the Kalman filter as a method for tracking the mean and variance of a state estimate, and derived the Kalman filter gain as the linear update rule that minimizes the trace of the estimate covariance. We saw a few instantiations using this gain:
The Kalman filter (KF): for linear systems with arbitrary noise distributions, we saw that the KF exactly tracks the mean and variance of the estimate.
The extended Kalman filter (EKF): for ‘mildly’ nonlinear (but differentiable) systems, we linearized the prediction and measurement equations to get the EKF. We noted that the filter often works well in practice, even though we gave no theoretical guarantees (it can ‘diverge’). It only computes approximations of the conditional mean and variance (a sketch follows this list).
The unscented Kalman filter (UKF): another nonlinear application of the Kalman gain. We used the Unscented Transform to predict how the process and measurement equations transform the random quantities, and then applied the Kalman gain. This is typically more accurate than the EKF approach, as it captures the nonlinearities to a higher order in their Taylor expansion. However, just like the EKF, the filter is in danger of ‘diverging’, that is, underestimating its uncertainty so that it does not converge to the true value of the state. The UKF, too, only computes approximations of the conditional mean and variance.
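As an illustration of the EKF idea, here is a minimal scalar sketch, assuming made-up models x(k) = q(x(k−1)) + v(k−1) and z(k) = h(x(k)) + w(k) with additive noise; q, h, and all numbers are hypothetical, not course data:

```python
import numpy as np

# Minimal scalar EKF sketch: propagate the mean through the nonlinear
# models, and the variance through their linearizations (Jacobians).

q = lambda x: np.sin(x)          # nonlinear process model (assumed)
h = lambda x: x**2               # nonlinear measurement model (assumed)
dq = lambda x: np.cos(x)         # Jacobians (here: scalar derivatives)
dh = lambda x: 2 * x
Q, R = 0.1, 0.5                  # noise variances (assumed)

def ekf_step(x_hat, P, z):
    # Prediction: mean through q, variance via linearization at x_hat
    x_p = q(x_hat)
    A = dq(x_hat)
    P_p = A * P * A + Q
    # Measurement update with h linearized at the predicted mean
    H = dh(x_p)
    K = P_p * H / (H * P_p * H + R)
    x_m = x_p + K * (z - h(x_p))
    P_m = (1.0 - K * H) * P_p
    return x_m, P_m
```

And a minimal sketch of the Unscented Transform at the heart of the UKF, in its basic symmetric scalar form (the weight scheme and the parameter kappa are one common choice, assumed here for illustration). Propagating sigma points through the nonlinearity replaces the EKF’s linearization:

```python
import numpy as np

# Scalar unscented transform: approximate the mean and variance of
# y = g(x) by pushing weighted sigma points through g.

def unscented_transform(g, mean, var, kappa=2.0):
    n = 1                                    # scalar state
    spread = np.sqrt((n + kappa) * var)
    sigma = np.array([mean, mean + spread, mean - spread])
    w = np.array([kappa, 0.5, 0.5]) / (n + kappa)   # weights sum to 1
    y = g(sigma)                             # push points through g
    y_mean = np.sum(w * y)
    y_var = np.sum(w * (y - y_mean) ** 2)
    return y_mean, y_var

print(unscented_transform(np.sin, 0.5, 0.2))
```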
The particle filter (PF): an application of Monte Carlo sampling to state estimation, where we use a large number of randomly drawn particles to approximate the underlying distributions. This requires no restrictions on the system. The quality of the PDF approximation depends on the number of particles used, and we saw that the filter is in danger of ‘sample impoverishment’. The particle filter was the only stochastic estimator we covered: that is, for given problem data, the estimate itself is random.
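A minimal bootstrap particle filter sketch; the models, noise levels, and resampling scheme are illustrative assumptions:

```python
import numpy as np

# Bootstrap particle filter sketch for
# x(k) = q(x(k-1), v(k-1)),  z(k) = h(x(k)) + w(k).
# All models and numbers below are made up for illustration.

rng = np.random.default_rng(0)
N = 1000                                         # number of particles

q = lambda x, v: np.sin(x) + v                   # process model
h = lambda x: x**2                               # measurement model
sig_v, sig_w = 0.3, 0.5                          # noise std devs

particles = rng.normal(0.0, 1.0, N)              # samples of x(0)

def pf_step(particles, z):
    # Prediction: simulate each particle forward with sampled noise
    particles = q(particles, rng.normal(0.0, sig_v, N))
    # Measurement update: weight by the likelihood of z per particle
    w = np.exp(-0.5 * ((z - h(particles)) / sig_w) ** 2)
    w /= w.sum()
    # Resampling (guards against weight degeneracy; note this is the
    # step that can cause sample impoverishment)
    idx = rng.choice(N, size=N, p=w)
    return particles[idx]

for z in [0.2, 0.3, 0.25]:                       # made-up measurements
    particles = pf_step(particles, z)
print(particles.mean())                          # MMSE-style estimate
```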
For systems with a continuous state space, we can show the relationship between these estimators qualitatively as follows:
[Figure omitted: qualitative comparison of the estimators for continuous state spaces]