
Probabilistic Reasoning: Approximate Inference
CSci 5512: Artificial Intelligence II
Instructor:
February 1, 2022



Announcements
Homework 1 posted today (due Feb. 10 at 11:59 PM CST)

Inference by Stochastic Simulation
Basic idea:
Draw N samples from a sampling distribution
Compute an approximate posterior probability Pˆ
Show that this converges to the true probability P
Sampling approaches:
Sampling from an empty network
Rejection sampling
Likelihood weighting
Markov chain Monte Carlo (MCMC)

Sampling from an empty network
Consider a Bayesian network over variables X1, …, Xn
The joint distribution factorizes as
P(X1, …, Xn) = ∏_{i=1}^{n} P(Xi | Parents(Xi))
For i = 1, …, n (in topological order, so that Parents(Xi) have already been instantiated):
Draw a sample xi from P(Xi | Parents(Xi))
(x1, …, xn) then forms a sample from the Bayesian network, as in the sketch below
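To make this concrete, here is a minimal Python sketch of the procedure for the sprinkler network (Cloudy, Sprinkler, Rain, WetGrass) used in the examples later in these slides. The CPT values are not given in these slides and are assumed here from the standard textbook version of the network.

import random

# CPTs for the sprinkler network: Cloudy -> Sprinkler, Cloudy -> Rain,
# {Sprinkler, Rain} -> WetGrass. Values are assumed from the standard
# textbook example; they are not stated in these slides.
P_CLOUDY = 0.5
P_SPRINKLER = {True: 0.1, False: 0.5}   # P(Sprinkler=true | Cloudy)
P_RAIN = {True: 0.8, False: 0.2}        # P(Rain=true | Cloudy)
P_WETGRASS = {(True, True): 0.99, (True, False): 0.90,   # P(WetGrass=true |
              (False, True): 0.90, (False, False): 0.0}  #   Sprinkler, Rain)

def prior_sample():
    """Sample every variable in topological order, parents before children."""
    cloudy = random.random() < P_CLOUDY
    sprinkler = random.random() < P_SPRINKLER[cloudy]
    rain = random.random() < P_RAIN[cloudy]
    wetgrass = random.random() < P_WETGRASS[(sprinkler, rain)]
    return {"Cloudy": cloudy, "Sprinkler": sprinkler,
            "Rain": rain, "WetGrass": wetgrass}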


Sampling from an empty network
The probability of generating (x1, …, xn) equals P(x1, …, xn)
Sampling therefore follows the true prior distribution
How to estimate P(x1, …, xn) from samples?
Let N(x1, …, xn) = number of samples equal to (x1, …, xn). Then
lim_{N→∞} Pˆ(x1, …, xn) = lim_{N→∞} N(x1, …, xn)/N = P(x1, …, xn)
Estimates derived from samples are consistent:
Pˆ(x1, …, xn) ≈ P(x1, …, xn)
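A short sketch of this frequency estimate, reusing prior_sample from the earlier sketch (the sample size is an arbitrary illustrative choice):

from collections import Counter

def estimate_joint(n_samples=100_000):
    """Estimate P(x1, ..., xn) by the relative frequency N(x1, ..., xn)/N."""
    counts = Counter()
    for _ in range(n_samples):
        sample = prior_sample()
        counts[tuple(sorted(sample.items()))] += 1   # full assignment as key
    return {event: n / n_samples for event, n in counts.items()}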

Rejection Sampling
Pˆ(X|e) is estimated from the samples that agree with the evidence e
Draw a sample x from the Bayesian network
If x is consistent with e, increment N(x)
Obtain Pˆ(X|e) by normalization
Example: estimate P(Rain | Sprinkler = true) using 100 samples
27 samples have Sprinkler = true
Of these, 8 have Rain = true and 19 have Rain = false
Pˆ(Rain = true | Sprinkler = true) = 8/27 ≈ 0.296
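A minimal sketch of rejection sampling for this kind of query, again reusing prior_sample from the earlier sketch:

def rejection_sample(query_var, evidence, n_samples=100_000):
    """Estimate P(query_var | evidence), discarding samples that disagree
    with the evidence; assumes the evidence is matched at least once."""
    counts = {True: 0, False: 0}
    for _ in range(n_samples):
        sample = prior_sample()
        if all(sample[var] == val for var, val in evidence.items()):
            counts[sample[query_var]] += 1      # sample agrees with e
    kept = counts[True] + counts[False]         # this is N(e)
    return {val: n / kept for val, n in counts.items()}   # normalize

# e.g. rejection_sample("Rain", {"Sprinkler": True})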

Analysis of Rejection Sampling
Rejection sampling estimates the counts N(X, e) and N(e)
The conditional probability estimate is
Pˆ(X|e) = α N(X, e) = N(X, e)/N(e) ≈ P(X, e)/P(e) = P(X|e)
This yields consistent posterior estimates
But P(e) drops off exponentially with the number of evidence variables
What if P(e) is very small? If, say, P(e) = 0.001, only about 1 sample in 1000 is kept
Need a very large number of samples to get reliable estimates

Likelihood Weighting
Fix the evidence variables, and sample only the non-evidence variables
Weight each sample by the likelihood it accords the evidence
Set w = 1. For i = 1, …, n (in topological order):
If Xi is a non-evidence variable, sample xi from P(Xi | Parents(Xi))
If Xi is an evidence variable Ei with observed value ei, set w ← w × P(ei | Parents(Ei))
Then (x, w) forms a weighted sample, as in the sketch below
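A sketch of this procedure for the sprinkler network, built on the CPTs assumed earlier; set_or_sample is a hypothetical helper that either fixes an evidence variable (folding its likelihood into w) or samples it:

def set_or_sample(sample, w, name, p_true, evidence):
    """Fix an evidence variable, scaling w by its likelihood, or sample it."""
    if name in evidence:
        sample[name] = evidence[name]
        w *= p_true if sample[name] else 1.0 - p_true
    else:
        sample[name] = random.random() < p_true
    return w

def weighted_sample(evidence):
    """One topological pass over the network; returns (sample, weight)."""
    sample, w = {}, 1.0
    w = set_or_sample(sample, w, "Cloudy", P_CLOUDY, evidence)
    w = set_or_sample(sample, w, "Sprinkler", P_SPRINKLER[sample["Cloudy"]], evidence)
    w = set_or_sample(sample, w, "Rain", P_RAIN[sample["Cloudy"]], evidence)
    w = set_or_sample(sample, w, "WetGrass",
                      P_WETGRASS[(sample["Sprinkler"], sample["Rain"])], evidence)
    return sample, w

def likelihood_weighting(query_var, evidence, n_samples=100_000):
    """Estimate P(query_var | evidence) by normalizing the sample weights."""
    totals = {True: 0.0, False: 0.0}
    for _ in range(n_samples):
        sample, w = weighted_sample(evidence)
        totals[sample[query_var]] += w
    z = totals[True] + totals[False]
    return {val: t / z for val, t in totals.items()}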

Likelihood Weighting Example
[Figure slides: the sprinkler network, with the weight w accumulated step by step as each variable is processed in turn]
After the first evidence variable is processed: w = 1.0 × 0.1
After the second evidence variable is processed: w = 1.0 × 0.1 × 0.99 = 0.099

Likelihood Weighting Analysis
Sampling probability for the non-evidence component z:
S(z, e) = ∏_{i=1}^{l} P(zi | Parents(Zi))
Sample weight contributed by the evidence component e:
w(z, e) = ∏_{i=1}^{m} P(ei | Parents(Ei))
The weighted sampling probability is therefore
S(z, e) · w(z, e) = ∏_{i=1}^{l} P(zi | Parents(Zi)) · ∏_{i=1}^{m} P(ei | Parents(Ei)) = P(z, e)
i.e., the product of all the CPT entries, which is the true joint probability
Hence likelihood weighting returns consistent estimates
But performance degrades with many evidence variables, since most sample weights become very small

Approximate Inference using MCMC
Construct a Markov chain based on the Bayesian network
“State” of the network = current assignment to all variables
Generate the next state by resampling one variable given its Markov blanket
Sample each variable in turn, keeping the evidence fixed
More general sampling schedules are admissible

The Markov chain
With Sprinkler = true, WetGrass = true, there are four states, one per assignment to (Cloudy, Rain):
[Figure: the four states and the transition probabilities between them]
Wander about for a while, average what you see

MCMC Example
Gibbs sampling is a particular form of MCMC
Start with arbitrary state of non-evidence variables
Generate next state by randomly sampling value for one non-evidence variable Xi
Sampling is conditioned on the current values of the variables in Xi's Markov blanket
The algorithm wanders around the state space, flipping one variable at a time; a sketch follows below
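A sketch of Gibbs sampling for the sprinkler network under the CPTs assumed earlier; mb_prob computes the unnormalized Markov-blanket conditional using the formula given on the Markov Blanket Sampling slide below:

CHILDREN = {"Cloudy": ["Sprinkler", "Rain"], "Sprinkler": ["WetGrass"],
            "Rain": ["WetGrass"], "WetGrass": []}

def cpt_prob(name, state):
    """P(name = state[name] | Parents(name)) under the assumed CPTs."""
    if name == "Cloudy":
        p = P_CLOUDY
    elif name == "Sprinkler":
        p = P_SPRINKLER[state["Cloudy"]]
    elif name == "Rain":
        p = P_RAIN[state["Cloudy"]]
    else:  # WetGrass
        p = P_WETGRASS[(state["Sprinkler"], state["Rain"])]
    return p if state[name] else 1.0 - p

def mb_prob(name, value, state):
    """Unnormalized P(name = value | MB(name)): own CPT times children's CPTs."""
    s = dict(state, **{name: value})
    result = cpt_prob(name, s)
    for child in CHILDREN[name]:
        result *= cpt_prob(child, s)
    return result

def gibbs(query_var, evidence, n_steps=100_000):
    """Estimate P(query_var | evidence) by averaging over the chain's states."""
    state = dict(evidence)
    non_evidence = [v for v in CHILDREN if v not in evidence]
    for v in non_evidence:                  # arbitrary initial state
        state[v] = random.random() < 0.5
    counts = {True: 0, False: 0}
    for _ in range(n_steps):
        for v in non_evidence:              # resample one variable at a time
            p_t = mb_prob(v, True, state)
            p_f = mb_prob(v, False, state)
            state[v] = random.random() < p_t / (p_t + p_f)
        counts[state[query_var]] += 1
    return {val: n / n_steps for val, n in counts.items()}

# e.g. gibbs("Rain", {"Sprinkler": True, "WetGrass": True})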

MCMC Example
Problem: estimate P(Rain | Sprinkler = true, WetGrass = true)
Initialize the non-evidence variables Cloudy and Rain randomly, e.g., to [true, false]
Initial state S0 = [true, true, false, true], ordered [Cloudy, Sprinkler, Rain, WetGrass]
Sample the non-evidence variables repeatedly, in arbitrary order:
Sample Cloudy given MB(Cloudy), i.e., from P(Cloudy | Sprinkler = true, Rain = false), and suppose it returns false
S1 = [false, true, false, true]
Sample Rain given MB(Rain), i.e., from P(Rain | Cloudy = false, Sprinkler = true, WetGrass = true), and suppose it returns true
S2 = [false, true, true, true]

MCMC Example
Count the number of times Rain is true and false in the visited states
Example: visit 100 states
30 have Rain = true, 70 have Rain = false
Pˆ(Rain = true | Sprinkler = true, WetGrass = true) = 30/100 = 0.3

Markov Blanket Sampling
Markov blanket of Cloudy is {Sprinkler, Rain}
Markov blanket of Rain is {Cloudy, Sprinkler, WetGrass}
The probability given the Markov blanket is calculated as
P(xi | MB(Xi)) ∝ P(xi | Parents(Xi)) ∏_{Zj ∈ Children(Xi)} P(zj | Parents(Zj))
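As a worked instance, under the CPT values assumed in the sketches above, resampling Cloudy in state S0 of the earlier example (where Sprinkler = true and Rain = false) gives

P(Cloudy = true | MB(Cloudy)) ∝ P(Cloudy = true) P(Sprinkler = true | Cloudy = true) P(Rain = false | Cloudy = true) = 0.5 × 0.1 × 0.2 = 0.01
P(Cloudy = false | MB(Cloudy)) ∝ 0.5 × 0.5 × 0.8 = 0.20

Normalizing gives P(Cloudy = true | MB(Cloudy)) = 0.01/0.21 ≈ 0.048, so the chain is very likely to flip Cloudy to false, consistent with the transition shown in the MCMC example.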
Main computational problems:
Difficult to tell whether convergence has been achieved
Can be wasteful if the Markov blanket is large
