
CIS 471/571(Fall 2020): Introduction to Artificial Intelligence
Lecture 16: Bayes Nets – Sampling
Thanh H. Nguyen
Source: http://ai.berkeley.edu/home.html

Bayes’ Nets
§ Representation
§ Conditional Independences
§ Probabilistic Inference
§ Enumeration (exact, exponential complexity)
§ Variable elimination (exact, worst-case exponential complexity, often better)
§ Inference is NP-complete
§ Sampling (approximate)
§ Learning Bayes’ Nets from Data

Approximate Inference: Sampling

Sampling
§ Sampling is a lot like repeated simulation
  § Predicting the weather, basketball games, …
§ Basic idea
  § Draw N samples from a sampling distribution S
  § Compute an approximate posterior probability
  § Show this converges to the true probability P
§ Why sample?
  § Learning: get samples from a distribution you don’t know
  § Inference: getting a sample is faster than computing the right answer (e.g., with variable elimination)

Sampling
§ Sampling from a given distribution
  § Step 1: Get a sample u from the uniform distribution over [0, 1)
    § E.g. random() in Python
  § Step 2: Convert this sample u into an outcome for the given distribution
    § Each target outcome is associated with a sub-interval of [0, 1)
    § Sub-interval size is equal to the probability of the outcome
§ Example: P(C)

  C      P(C)
  red    0.6
  green  0.1
  blue   0.3

  § If random() returns u = 0.83, then our sample is C = blue
  § E.g., after sampling 8 times: (a Python sketch of this follows below)
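A minimal Python sketch of the sub-interval idea above, using the P(C) table; the helper name sample_from is illustrative, not from the lecture code.

```python
import random

def sample_from(dist):
    """Sample an outcome from a discrete distribution given as
    {outcome: probability}, by mapping u ~ Uniform[0, 1) onto
    sub-intervals whose sizes equal the outcome probabilities."""
    u = random.random()          # u in [0, 1)
    cumulative = 0.0
    for outcome, p in dist.items():
        cumulative += p
        if u < cumulative:
            return outcome
    return outcome               # guard against floating-point round-off

# P(C) from the table above: red 0.6, green 0.1, blue 0.3.
# The sub-intervals are [0, 0.6) -> red, [0.6, 0.7) -> green, [0.7, 1.0) -> blue,
# so u = 0.83 falls in the last interval and the sample is C = blue.
samples = [sample_from({"red": 0.6, "green": 0.1, "blue": 0.3}) for _ in range(8)]
print(samples)
```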

Sampling in Bayes’ Nets
§ Prior Sampling
§ Rejection Sampling
§ Likelihood Weighting
§ Gibbs Sampling

Prior Sampling

Prior Sampling
Bayes’ net: Cloudy → Sprinkler, Cloudy → Rain; Sprinkler, Rain → WetGrass

P(C):
  +c  0.5
  -c  0.5

P(S | C):
  +c:  +s 0.1   -s 0.9
  -c:  +s 0.5   -s 0.5

P(R | C):
  +c:  +r 0.8   -r 0.2
  -c:  +r 0.2   -r 0.8

P(W | S, R):
  +s, +r:  +w 0.99   -w 0.01
  +s, -r:  +w 0.90   -w 0.10
  -s, +r:  +w 0.90   -w 0.10
  -s, -r:  +w 0.01   -w 0.99

Samples:
  +c, -s, +r, +w
  -c, +s, -r, +w

Prior Sampling
§ For i = 1, 2, …, n
  § Sample xi from P(Xi | Parents(Xi))
§ Return (x1, x2, …, xn)
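A minimal Python sketch of prior sampling on the Cloudy/Sprinkler/Rain/WetGrass net above, with the CPTs hard-coded from the tables; the helper names flip and prior_sample are illustrative.

```python
import random

def flip(p):
    """Return '+' with probability p, '-' otherwise."""
    return '+' if random.random() < p else '-'

def prior_sample():
    """Sample each variable in topological order from P(Xi | Parents(Xi)),
    using the CPTs from the slide."""
    c = flip(0.5)                                   # P(+c) = 0.5
    s = flip(0.1 if c == '+' else 0.5)              # P(+s | c)
    r = flip(0.8 if c == '+' else 0.2)              # P(+r | c)
    p_w = {('+', '+'): 0.99, ('+', '-'): 0.90,
           ('-', '+'): 0.90, ('-', '-'): 0.01}      # P(+w | s, r)
    w = flip(p_w[(s, r)])
    return c, s, r, w

print(prior_sample())   # e.g. ('+', '-', '+', '+'), i.e. +c, -s, +r, +w
```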

Prior Sampling
§ This process generates samples with probability

  S_PS(x1, …, xn) = ∏_i P(xi | Parents(Xi)) = P(x1, …, xn)

  …i.e., the BN’s joint probability
§ Let N_PS(x1, …, xn) be the number of samples of an event x1, …, xn; then, as the total number of samples N → ∞,

  N_PS(x1, …, xn) / N  →  S_PS(x1, …, xn) = P(x1, …, xn)

§ I.e., the sampling procedure is consistent

Example
§ We’ll get a bunch of samples from the BN:
  +c, -s, +r, +w
  +c, +s, +r, +w
  -c, +s, +r, -w
  +c, -s, +r, +w
  -c, -s, -r, +w
§ If we want to know P(W)
  § We have counts <+w:4, -w:1>
  § Normalize to get P(W) = <+w:0.8, -w:0.2>
  § This will get closer to the true distribution with more samples
  § Can estimate anything else, too
  § What about P(C | +w)? P(C | +r, +w)? P(C | -r, -w)?
  § Fast: can use fewer samples if less time (what’s the drawback?)
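A short sketch of turning counts into an estimate, reusing the illustrative prior_sample() from the previous sketch.

```python
from collections import Counter

# Assumes prior_sample() from the sketch above.
samples = [prior_sample() for _ in range(10000)]

# P(W): count +w vs -w over all samples and normalize.
w_counts = Counter(w for (_, _, _, w) in samples)
total = sum(w_counts.values())
print({w: n / total for w, n in w_counts.items()})   # approaches the true P(W)
```

Estimating a conditional query such as P(C | +w) the same way means keeping only the samples with W = +w, which is exactly the rejection sampling idea on the next slides.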

Rejection Sampling

Rejection Sampling
§ Let’s say we want P(C)
  § No point keeping all samples around
  § Just tally counts of C as we go
§ Let’s say we want P(C | +s)
  § Same thing: tally C outcomes, but ignore (reject) samples which don’t have S = +s
  § This is called rejection sampling
  § It is also consistent for conditional probabilities (i.e., correct in the limit)

Samples:
  +c, -s, +r, +w
  +c, +s, +r, +w
  -c, +s, +r, -w
  +c, -s, +r, +w
  -c, -s, -r, +w

Rejection Sampling
§ Input: evidence instantiation
§ For i = 1, 2, …, n
  § Sample xi from P(Xi | Parents(Xi))
  § If xi not consistent with evidence
    § Reject: return – no sample is generated in this cycle
§ Return (x1, x2, …, xn)
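A sketch of rejection sampling for P(C | +s) on the same net, reusing the illustrative flip() helper from the prior-sampling sketch; as in the pseudocode above, a sample is rejected as soon as a sampled value conflicts with the evidence.

```python
from collections import Counter

# Assumes flip() and the CPT values from the prior-sampling sketch above.
def rejection_sample_once(evidence):
    """One attempt: sample variables in topological order and reject
    (return None) as soon as a sampled value conflicts with the evidence."""
    c = flip(0.5)
    if evidence.get('C', c) != c:
        return None
    s = flip(0.1 if c == '+' else 0.5)
    if evidence.get('S', s) != s:
        return None
    r = flip(0.8 if c == '+' else 0.2)
    if evidence.get('R', r) != r:
        return None
    p_w = {('+', '+'): 0.99, ('+', '-'): 0.90,
           ('-', '+'): 0.90, ('-', '-'): 0.01}
    w = flip(p_w[(s, r)])
    if evidence.get('W', w) != w:
        return None
    return c, s, r, w

# Estimate P(C | +s): tally C only over samples that were not rejected.
counts = Counter()
for _ in range(10000):
    sample = rejection_sample_once({'S': '+'})
    if sample is not None:
        counts[sample[0]] += 1
total = sum(counts.values())
print({c: n / total for c, n in counts.items()})
```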

Likelihood Weighting

Likelihood Weighting
§ Problem with rejection sampling:
  § If evidence is unlikely, rejects lots of samples
  § Evidence not exploited as you sample
  § Consider P(Shape | blue):

  Samples of (Shape, Color):
    pyramid, green
    pyramid, red
    sphere, blue
    cube, red
    sphere, green

§ Idea: fix evidence variables and sample the rest
  § Problem: sample distribution not consistent!
  § Solution: weight by probability of evidence given parents

  Samples with Color fixed to blue:
    pyramid, blue
    pyramid, blue
    sphere, blue
    cube, blue
    sphere, blue

Likelihood Weighting
Bayes’ net: Cloudy → Sprinkler, Cloudy → Rain; Sprinkler, Rain → WetGrass

P(C):
  +c  0.5
  -c  0.5

P(S | C):
  +c:  +s 0.1   -s 0.9
  -c:  +s 0.5   -s 0.5

P(R | C):
  +c:  +r 0.8   -r 0.2
  -c:  +r 0.2   -r 0.8

P(W | S, R):
  +s, +r:  +w 0.99   -w 0.01
  +s, -r:  +w 0.90   -w 0.10
  -s, +r:  +w 0.90   -w 0.10
  -s, -r:  +w 0.01   -w 0.99

Samples:
  +c, +s, +r, +w
  …

Likelihood Weighting
§ Input: evidence instantiation
§ w = 1.0
§ for i = 1, 2, …, n
  § if Xi is an evidence variable
    § Xi = observation xi for Xi
    § Set w = w * P(xi | Parents(Xi))
  § else
    § Sample xi from P(Xi | Parents(Xi))
§ return (x1, x2, …, xn), w
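A sketch of this procedure for the same net, again reusing the illustrative flip() helper; the query P(C | +s, +w) is my own example, so Sprinkler and WetGrass are clamped and contribute to the weight.

```python
# Assumes flip() and the CPT values from the prior-sampling sketch above.
def weighted_sample(evidence):
    """Return (sample, weight). Evidence variables are clamped to their
    observed values and multiply P(e_i | Parents(E_i)) into the weight;
    the remaining variables are sampled from P(X_i | Parents(X_i))."""
    weight = 1.0
    # Cloudy
    if 'C' in evidence:
        c = evidence['C']
        weight *= 0.5                                   # P(c) = 0.5 for either value
    else:
        c = flip(0.5)
    # Sprinkler
    p_s = 0.1 if c == '+' else 0.5                      # P(+s | c)
    if 'S' in evidence:
        s = evidence['S']
        weight *= p_s if s == '+' else 1 - p_s
    else:
        s = flip(p_s)
    # Rain
    p_r = 0.8 if c == '+' else 0.2                      # P(+r | c)
    if 'R' in evidence:
        r = evidence['R']
        weight *= p_r if r == '+' else 1 - p_r
    else:
        r = flip(p_r)
    # WetGrass
    p_w = {('+', '+'): 0.99, ('+', '-'): 0.90,
           ('-', '+'): 0.90, ('-', '-'): 0.01}[(s, r)]  # P(+w | s, r)
    if 'W' in evidence:
        w = evidence['W']
        weight *= p_w if w == '+' else 1 - p_w
    else:
        w = flip(p_w)
    return (c, s, r, w), weight

# Estimate P(C | +s, +w): accumulate weight per value of C, then normalize.
totals = {'+': 0.0, '-': 0.0}
for _ in range(10000):
    (c, _, _, _), weight = weighted_sample({'S': '+', 'W': '+'})
    totals[c] += weight
z = sum(totals.values())
print({c: t / z for c, t in totals.items()})
```

Note that samples whose upstream choices fit the evidence poorly simply receive small weights instead of being rejected.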

Likelihood Weighting
§ Sampling distribution if z is sampled and e is fixed evidence:

  S_WS(z, e) = ∏_i P(zi | Parents(Zi))

§ Now, samples have weights:

  w(z, e) = ∏_j P(ej | Parents(Ej))

§ Together, the weighted sampling distribution is consistent:

  S_WS(z, e) · w(z, e) = ∏_i P(zi | Parents(Zi)) · ∏_j P(ej | Parents(Ej)) = P(z, e)

Likelihood Weighting
§ Likelihood weighting is good
  § We have taken evidence into account as we generate the sample
  § E.g., here, W’s value will get picked based on the evidence values of S, R
  § More of our samples will reflect the state of the world suggested by the evidence
§ Likelihood weighting doesn’t solve all our problems
  § Evidence influences the choice of downstream variables, but not upstream ones (C isn’t more likely to get a value matching the evidence)
  § We would like to consider evidence when we sample every variable (this leads to Gibbs sampling)

Gibbs Sampling

Gibbs Sampling
§ Procedure: keep track of a full instantiation x1, x2, …, xn. Start with an arbitrary instantiation consistent with the evidence. Sample one variable at a time, conditioned on all the rest, but keep evidence fixed. Keep repeating this for a long time.
§ Property: in the limit of repeating this infinitely many times the resulting samples come from the correct distribution (i.e. conditioned on evidence).
§ Rationale: both upstream and downstream variables condition on evidence.
§ In contrast: likelihood weighting only conditions on upstream evidence, and hence weights obtained in likelihood weighting can sometimes be very small. Sum of weights over all samples is indicative of how many “effective” samples were obtained, so we want high weight.

Gibbs Sampling Example: P( S | +r)
§ Step 1: Fix evidence
  § R = +r
§ Step 2: Initialize other variables randomly
§ Step 3: Repeat
  § Choose a non-evidence variable X
  § Resample X from P(X | all other variables)
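A self-contained Python sketch of this example; for clarity each resampling step evaluates the full joint rather than exploiting the cancellation described on the next slide, and all helper names are illustrative.

```python
import random

# CPTs from the slides (probability of '+' given the parent values).
P_C = 0.5
P_S = {'+': 0.1, '-': 0.5}                    # P(+s | c)
P_R = {'+': 0.8, '-': 0.2}                    # P(+r | c)
P_W = {('+', '+'): 0.99, ('+', '-'): 0.90,
       ('-', '+'): 0.90, ('-', '-'): 0.01}    # P(+w | s, r)

def bern(p_plus):
    return '+' if random.random() < p_plus else '-'

def joint_factor(c, s, r, w):
    """Unnormalized joint P(c, s, r, w) as a product of CPT entries."""
    pc = P_C if c == '+' else 1 - P_C
    ps = P_S[c] if s == '+' else 1 - P_S[c]
    pr = P_R[c] if r == '+' else 1 - P_R[c]
    pw = P_W[(s, r)] if w == '+' else 1 - P_W[(s, r)]
    return pc * ps * pr * pw

def resample(var, state):
    """Resample one variable from P(var | all other variables) by
    evaluating the joint at var = '+' and var = '-' and normalizing."""
    plus, minus = dict(state, **{var: '+'}), dict(state, **{var: '-'})
    a, b = joint_factor(**plus), joint_factor(**minus)
    state[var] = bern(a / (a + b))

def gibbs_estimate_S_given_plus_r(n_steps=50000):
    state = {'c': '-', 's': '-', 'r': '+', 'w': '-'}     # evidence R fixed to +r
    counts = {'+': 0, '-': 0}
    for _ in range(n_steps):
        resample(random.choice(['c', 's', 'w']), state)  # never resample evidence
        counts[state['s']] += 1
    total = sum(counts.values())
    return {s: n / total for s, n in counts.items()}

print(gibbs_estimate_S_given_plus_r())   # approximates P(S | +r)
```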

Efficient Resampling of One Variable
§ Sample from P(S | +c, +r, -w)
§ Many things cancel out – only the CPTs that mention S remain! (worked out below)
§ More generally: only the CPTs that contain the resampled variable need to be considered, and joined together
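Writing out the cancellation explicitly for this example (a reconstruction, using the chain-rule factorization of this net):

```latex
P(S \mid +c, +r, -w)
  = \frac{P(S, +c, +r, -w)}{\sum_{s} P(s, +c, +r, -w)}
  = \frac{P(+c)\, P(S \mid +c)\, P(+r \mid +c)\, P(-w \mid S, +r)}
         {P(+c)\, P(+r \mid +c)\, \sum_{s} P(s \mid +c)\, P(-w \mid s, +r)}
  = \frac{P(S \mid +c)\, P(-w \mid S, +r)}{\sum_{s} P(s \mid +c)\, P(-w \mid s, +r)}
```

The factors P(+c) and P(+r | +c) cancel, leaving only the CPTs that mention S.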

Bayes’ Net Sampling Summary
§ Prior Sampling: P(Q)
§ Rejection Sampling: P(Q | e)
§ Likelihood Weighting: P(Q | e)
§ Gibbs Sampling: P(Q | e)

Further Reading on Gibbs Sampling*
§ Gibbs sampling produces samples from the query distribution P(Q | e) in the limit of re-sampling infinitely often
§ Gibbs sampling is a special case of more general methods called Markov chain Monte Carlo (MCMC) methods
§ Metropolis-Hastings is one of the more famous MCMC methods (in fact, Gibbs sampling is a special case of Metropolis-Hastings)
§ You may read about Monte Carlo methods – they’re just sampling