
Probabilistic Reasoning over Time: Dynamic Bayesian Networks
CSci 5512: Artificial Intelligence II
February 10, 2022



Announcements
HW1 due today by 11:59 PM CST

Time and Uncertainty
The world changes
Rational agent needs to track and predict
Example: Car diagnosis vs. diabetes management
Consider state and evidence variables over time
X_t = set of unobservable state variables at time t
Example: BloodSugar_t, StomachContents_t, etc.
E_t = set of observable evidence variables at time t
Example: MeasuredBloodSugar_t, FoodEaten_t, etc.
Time can be discrete or continuous
Notation: X_{a:b} = X_a, X_{a+1}, ..., X_{b-1}, X_b

Markov Processes (Markov Chains)
Construct a Bayes net from these variables: Parents?
Markov assumption: X_t depends on a bounded subset of X_{0:t-1}
First-order: P(X_t | X_{0:t-1}) = P(X_t | X_{t-1})
Second-order: P(X_t | X_{0:t-1}) = P(X_t | X_{t-2}, X_{t-1})
Sensor Markov assumption: P(E_t | X_{0:t}, E_{0:t-1}) = P(E_t | X_t)
Stationary process:
Transition model P(X_t | X_{t-1}) fixed for all t
Sensor model P(E_t | X_t) fixed for all t
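As an illustration (not from the lecture), here is a minimal Python sketch of a stationary first-order Markov process with a fixed transition model and sensor model; the two-state weather example and its probabilities are made-up values.

```python
import random

# Hypothetical two-state example; all numbers are made up for illustration.
TRANSITION = {           # TRANSITION[prev][next] = P(X_t = next | X_{t-1} = prev)
    "rain": {"rain": 0.7, "sun": 0.3},
    "sun":  {"rain": 0.3, "sun": 0.7},
}
SENSOR = {               # SENSOR[state][obs] = P(E_t = obs | X_t = state)
    "rain": {"umbrella": 0.9, "no_umbrella": 0.1},
    "sun":  {"umbrella": 0.2, "no_umbrella": 0.8},
}

def sample(dist):
    """Draw one value from a {value: probability} dictionary."""
    r, acc = random.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r <= acc:
            return value
    return value  # guard against floating-point round-off

def simulate(steps, x0="rain"):
    """Sample a state/evidence trajectory from the stationary models."""
    x, trajectory = x0, []
    for _ in range(steps):
        x = sample(TRANSITION[x])   # Markov assumption: depends only on X_{t-1}
        e = sample(SENSOR[x])       # sensor Markov assumption: depends only on X_t
        trajectory.append((x, e))
    return trajectory

print(simulate(5))
```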

Hidden Markov Models
X_t is a single, discrete variable (usually E_t is too)
Domain of X_t is {1, ..., S}
Transition matrix: T_ij = P(X_t = j | X_{t-1} = i), e.g., T = (0.7 0.3; 0.3 0.7)
Sensor matrix: E_ih = P(E_t = h | X_t = i) for each time step
Message passing viewpoint:
Forward-Backward is a sum-product algorithm
Viterbi is a max-product algorithm
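A minimal sketch of these two message-passing recursions for a generic HMM; the function names (forward, viterbi) and the matrix/array interface are illustrative assumptions, not the lecture's code. forward is the sum-product recursion for filtering, viterbi the max-product recursion for the most likely state sequence.

```python
import numpy as np

def forward(T, likelihoods, prior):
    """Sum-product (forward) pass: filtered distributions P(X_t | e_{1:t}).

    T[i, j]           = P(X_t = j | X_{t-1} = i)
    likelihoods[t, i] = P(e_t | X_t = i)
    prior[i]          = P(X_0 = i)
    """
    f = prior.copy()
    messages = []
    for lik in likelihoods:
        f = lik * (T.T @ f)      # predict with T, then weight by the evidence
        f = f / f.sum()          # normalize to get a distribution
        messages.append(f)
    return np.array(messages)

def viterbi(T, likelihoods, prior):
    """Max-product pass: most likely state sequence given e_{1:T}."""
    n_steps, n_states = likelihoods.shape
    logT = np.log(T)
    m = np.log(prior)                       # log-probabilities over X_0
    back = np.zeros((n_steps, n_states), dtype=int)
    for t in range(n_steps):
        scores = m[:, None] + logT          # scores[i, j]: best path ending in i, then i -> j
        back[t] = scores.argmax(axis=0)
        m = scores.max(axis=0) + np.log(likelihoods[t])
    path = [int(m.argmax())]                # best final state, then follow back-pointers
    for t in range(n_steps - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```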

Rain Network
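As a stand-in example, the snippet below plugs the standard textbook rain/umbrella numbers (transition 0.7/0.3, sensor 0.9/0.2; assumed here, not read off the slide's figure) into the forward and viterbi routines sketched above, with a made-up five-day umbrella sequence as input.

```python
import numpy as np

# Assumed umbrella-world parameters; states are (rain, not rain).
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])            # T[i, j] = P(X_t = j | X_{t-1} = i)
prior = np.array([0.5, 0.5])          # P(X_0)

# Example evidence e_1..e_5: umbrella seen on days 1, 2, 4, 5 but not day 3
p_umbrella = np.array([0.9, 0.2])     # P(umbrella | rain), P(umbrella | not rain)
evidence = [True, True, False, True, True]
likelihoods = np.array([p_umbrella if e else 1.0 - p_umbrella for e in evidence])

print(forward(T, likelihoods, prior))   # filtered P(Rain_t | e_{1:t}) for t = 1..5
print(viterbi(T, likelihoods, prior))   # most likely rain sequence (0 = rain, 1 = not rain)
```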

Dynamic Bayesian Networks
Replicated Bayes net: the same slice structure is repeated for every time step
X_t and E_t can contain arbitrarily many variables
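One way to make "replicated Bayes net" concrete is a two-slice template listing each variable's parents in the current and previous slice, which is then unrolled for as many slices as needed; the variable names below are invented for illustration.

```python
# Hypothetical two-slice template (variable names invented for illustration).
SLICE_TEMPLATE = {
    "Rain":         ["prev:Rain"],          # P(Rain_t | Rain_{t-1})
    "BatteryLevel": ["prev:BatteryLevel"],  # P(Battery_t | Battery_{t-1})
    "Umbrella":     ["Rain"],               # P(Umbrella_t | Rain_t)
    "BatteryMeter": ["BatteryLevel"],       # P(Meter_t | Battery_t)
}

def unroll(template, t_max):
    """Build the parent lists of the unrolled network for slices 1..t_max."""
    parents = {}
    for t in range(1, t_max + 1):
        for var, pas in template.items():
            parents[f"{var}_{t}"] = [
                f"{p.split(':')[1]}_{t-1}" if p.startswith("prev:") else f"{p}_{t}"
                for p in pas
            ]
    return parents

print(unroll(SLICE_TEMPLATE, 3))  # e.g. {'Rain_1': ['Rain_0'], 'Umbrella_1': ['Rain_1'], ...}
```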

DBNs vs HMMs
DBNs are more general
Every HMM is a single-variable DBN
Every discrete DBN is an HMM
Example: Network with 20 boolean state variables
HMM: each state variable has all 20 boolean previous-slice variables as parents ⇒ 20 × 2^20 parameters
DBN: each state variable has 3 parents ⇒ 20 × 2^3 parameters
Sparse dependencies ⇒ exponentially fewer parameters
DBN : HMM is analogous to Bayes net : full joint distribution
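A quick check of the counts quoted above, using the slide's counting convention:

```python
# Back-of-the-envelope check (20 boolean state variables per slice)
n_vars = 20
hmm_params = n_vars * 2 ** n_vars   # each variable conditioned on all 20 previous-slice variables
dbn_params = n_vars * 2 ** 3        # each variable has only 3 parents
print(hmm_params)                   # 20 * 2**20 = 20,971,520
print(dbn_params)                   # 20 * 2**3  = 160
```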

DBNs are more general
Every Kalman filter model is a DBN
Few DBNs are Kalman filters
Real world requires non-Gaussian posteriors
Example: What's the battery charge?

Exact Inference in DBNs
Naive method: Unroll the network and run any exact algorithm
Problem: inference cost for each update grows with t
Rollup filtering: add slice t+1, "sum out" slice t
Computational complexity (n state variables per slice, each with domain size d):
Largest factor is O(d^(n+1)), update cost is O(d^(n+2))
Equivalent HMM update cost is O(d^(2n))
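Plugging the boolean example from the previous slides (d = 2, n = 20 state variables per slice) into these costs shows the gap:

```python
# Boolean example: d = 2 (boolean domains), n = 20 state variables per slice
d, n = 2, 20
rollup_update = d ** (n + 2)   # DBN rollup filtering update cost, O(d^(n+2))
hmm_update = d ** (2 * n)      # equivalent HMM update cost, O(d^(2n))
print(rollup_update)           # 2**22 = 4,194,304 (about 4 million)
print(hmm_update)              # 2**40 = 1,099,511,627,776 (about 10^12)
```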

Likelihood Weighting for DBNs
Set of weighted samples approximates the belief state
All samples run in parallel
Problem: LW samples pay no attention to the evidence
Fraction of samples "agreeing" with the evidence falls exponentially with t
Number of samples required grows exponentially with t
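A small simulation (umbrella-style numbers assumed, as in the earlier sketches) illustrating the degeneracy: samples are propagated without consulting the evidence, so the effective number of agreeing samples collapses as t grows.

```python
import random

# Assumed umbrella-style model: P(rain_t | rain_{t-1}) = 0.7, P(umbrella | rain) = 0.9, etc.
def lw_effective_sample_size(n_samples, t_max, evidence_is_umbrella=True):
    states = [random.random() < 0.5 for _ in range(n_samples)]   # True = rain
    weights = [1.0] * n_samples
    for t in range(1, t_max + 1):
        for i in range(n_samples):
            # Propagate ignoring the evidence (this is the weakness of LW)
            stay = 0.7 if states[i] else 0.3
            states[i] = random.random() < stay
            # Weight by the likelihood of the observed evidence
            lik = 0.9 if states[i] else 0.2
            weights[i] *= lik if evidence_is_umbrella else 1.0 - lik
        ess = sum(weights) ** 2 / sum(w * w for w in weights)    # effective sample size
        print(f"t = {t:2d}  effective sample size = {ess:10.1f}")

lw_effective_sample_size(n_samples=10_000, t_max=20)
```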

Particle Filtering
Tracks the high-likelihood regions of the state space
Tracking is done using a population of samples ("particles")
Three main steps (see the code sketch after this slide):
Propagate particles forward using the transition model P(X_{t+1} | X_t)
Weight each sample by the likelihood of the new evidence e_{t+1}
Resample particles in proportion to the weights to get the new population
Applications
Widely used for tracking nonlinear systems, especially in vision
Used for simultaneous localization and mapping (SLAM) in robots
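A minimal sketch of one particle-filter update implementing exactly the three steps above; the model interface (transition_sample, evidence_likelihood) and the umbrella-style numbers in the usage lines are assumptions for illustration.

```python
import random

def particle_filter_step(particles, evidence, transition_sample, evidence_likelihood):
    """One particle filter update.

    particles                 : list of current states (samples of X_t given e_{1:t})
    transition_sample(x)      : draws x_{t+1} ~ P(X_{t+1} | X_t = x)   (assumed interface)
    evidence_likelihood(e, x) : returns P(e_{t+1} | X_{t+1} = x)       (assumed interface)
    """
    # 1. Propagate each particle through the transition model
    propagated = [transition_sample(x) for x in particles]
    # 2. Weight each particle by the likelihood of the new evidence
    weights = [evidence_likelihood(evidence, x) for x in propagated]
    # 3. Resample N particles in proportion to the weights
    return random.choices(propagated, weights=weights, k=len(particles))

# Example usage with the umbrella-style numbers assumed earlier (True = rain):
particles = [random.random() < 0.5 for _ in range(1000)]
step = lambda x: random.random() < (0.7 if x else 0.3)
lik = lambda e, x: (0.9 if x else 0.2) if e else (0.1 if x else 0.8)
particles = particle_filter_step(particles, True, step, lik)
print(sum(particles) / len(particles))   # roughly P(Rain_1 | umbrella_1)
```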

Particle Filtering is Consistent
Assume consistent at time t: N(x_t | e_{1:t}) / N = P(x_t | e_{1:t})
Propagate forward: populations of x_{t+1} are
N(x_{t+1} | e_{1:t}) = Σ_{x_t} P(x_{t+1} | x_t) N(x_t | e_{1:t})
Weight samples by their likelihood for e_{t+1}:
W(x_{t+1} | e_{1:t+1}) = P(e_{t+1} | x_{t+1}) N(x_{t+1} | e_{1:t})
Resample to obtain populations proportional to W:
N(x_{t+1} | e_{1:t+1}) / N = α W(x_{t+1} | e_{1:t+1})
  = α P(e_{t+1} | x_{t+1}) N(x_{t+1} | e_{1:t})
  = α P(e_{t+1} | x_{t+1}) Σ_{x_t} P(x_{t+1} | x_t) N(x_t | e_{1:t})
  = α' P(e_{t+1} | x_{t+1}) Σ_{x_t} P(x_{t+1} | x_t) P(x_t | e_{1:t})
  = P(x_{t+1} | e_{1:t+1})

Summary
Temporal models use state and sensor variables replicated over time
Markov assumptions and stationarity assumption, so we need:
Transition model P(X_t | X_{t-1})
Sensor model P(E_t | X_t)
Tasks are filtering, prediction, smoothing, and finding the most likely sequence
All done recursively with constant cost per time step
Hidden Markov models have a single discrete state variable
Kalman filters allow n state variables and use linear-Gaussian models
Dynamic Bayes nets subsume HMMs and Kalman filters
Exact update is intractable in general
Particle filtering is a good approximate filtering algorithm
