Rejection sampling
Use rejection sampling.
Generate samples as before (using prior sampling).
If the sample is such that e holds, use it to build the estimate. Otherwise, ignore (reject) it.
Rejection sampling
function REJECTION-SAMPLING(X, e, bn, N) returns an estimate of P(X|e)
    local variables: N, a vector of counts over X, initially zero
    for j = 1 to N do
        x ← PRIOR-SAMPLE(bn)
        if x is consistent with e then
            N[x] ← N[x] + 1 where x is the value of X in x
    return NORMALIZE(N[X])
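As a concrete illustration, here is a minimal Python sketch of PRIOR-SAMPLE and REJECTION-SAMPLING for the Cloudy/Sprinkler/Rain/WetGrass network used on the later slides. The CPT entries not quoted on the slides (e.g. P(Sprinkler = true | Cloudy = false)) are the usual textbook values, and all function and variable names are illustrative assumptions rather than part of the slides.

import random

# Sprinkler-network CPTs. P(Cloudy=true)=0.5, P(Sprinkler=true|Cloudy=true)=0.1,
# P(Rain=true|Cloudy=true)=0.8 and P(WetGrass=true|Sprinkler=false,Rain=true)=0.9
# match the slides; the remaining entries are assumed textbook values.
P_CLOUDY = 0.5
P_SPRINKLER = {True: 0.1, False: 0.5}                    # P(Sprinkler=true | Cloudy)
P_RAIN = {True: 0.8, False: 0.2}                         # P(Rain=true | Cloudy)
P_WETGRASS = {(True, True): 0.99, (True, False): 0.90,
              (False, True): 0.90, (False, False): 0.0}  # P(WetGrass=true | Sprinkler, Rain)

def prior_sample():
    """Sample one complete event, parents before children (prior sampling)."""
    cloudy = random.random() < P_CLOUDY
    sprinkler = random.random() < P_SPRINKLER[cloudy]
    rain = random.random() < P_RAIN[cloudy]
    wetgrass = random.random() < P_WETGRASS[(sprinkler, rain)]
    return {"Cloudy": cloudy, "Sprinkler": sprinkler, "Rain": rain, "WetGrass": wetgrass}

def rejection_sampling(X, evidence, n_samples):
    """Estimate P(X | evidence): keep only samples consistent with the evidence."""
    counts = {True: 0, False: 0}
    for _ in range(n_samples):
        sample = prior_sample()
        if all(sample[var] == val for var, val in evidence.items()):
            counts[sample[X]] += 1                       # tally the value of X
    total = counts[True] + counts[False]
    return {v: c / total for v, c in counts.items()} if total else None

# Example: estimate P(Rain | Cloudy = true, WetGrass = true)
print(rejection_sampling("Rain", {"Cloudy": True, "WetGrass": True}, 100_000))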
Rejection sampling
More efficient than prior sampling, but:
For unlikely evidence we may have to wait a long time to get enough matching samples, because almost every sample is rejected. Still inefficient.
So, use likelihood weighting instead.
Likelihood weighting
A version of importance sampling.
Fix the evidence variables to their observed values (here, true), so we only sample the relevant events. We then have to weight each sample with the likelihood that it fits the evidence, using the probabilities we already know from the network.
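Concretely, this is the idea behind the WEIGHTED-SAMPLE function shown later: each generated event x gets the weight
w(x) = product, over the evidence variables Ei, of P(Ei = ei | parents(Ei)),
i.e. the probability of each observed evidence value given the sampled values of its parents, multiplied together.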
Likelihood weighting
Consider we have the following network: Cloudy is the parent of both Sprinkler and Rain, which in turn are both parents of WetGrass.
Say we want to establish P(Rain | Cloudy = true, WetGrass = true).
Likelihood weighting
We want P(Rain | Cloudy = true, WetGrass = true). We pick a variable ordering, say:
Cloudy, Sprinkler, Rain, WetGrass, as before.
Set the weight w = 1 and we start. Deal with each variable in order.
Likelihood weighting
Remember, we want P(Rain | Cloudy = true, WetGrass = true). Cloudy is an evidence variable with value true, so:
w ← w · P(Cloudy = true), giving w ← 1 · 0.5 = 0.5
Cloudy = true, Sprinkler = ?, Rain = ?, WetGrass = ?. w = 0.5
Likelihood weighting
Sprinkler is not an evidence variable, so we don't know whether it is true or false. Sample a value just as we did for prior sampling:
P(Sprinkler | Cloudy = true) = ⟨0.1, 0.9⟩
Let's assume this returns false.
w remains the same.
Cloudy = true, Sprinkler = false, Rain = ?, WetGrass = ?. w = 0.5
Likelihood weighting
Rain is not an evidence variable, so we don't know whether it is true or false. Sample a value just as we did for prior sampling:
P(Rain | Cloudy = true) = ⟨0.8, 0.2⟩
Let's assume this returns true.
w remains the same.
Cloudy = true, Sprinkler = false, Rain = true, WetGrass = ?. w = 0.5
Likelihood weighting
WetGrass is an evidence variable with value true, so we set:
w ← w · P(WetGrass = true | Sprinkler = false, Rain = true)
w ← 0.5 · 0.9 = 0.45
Cloudy = true, Sprinkler = false, Rain = true, WetGrass = true.
w = 0.45
Likelihood weighting
So we end with the event [true, false, true, true] and weight 0.45.
To find a probability we tally up all the relevant events, weighted by their weights. The one we just calculated would tally under Rain = true.
As before, more samples means more accuracy.
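Spelling the tally out (this is what NORMALIZE(W[X]) in the function on the next slide computes):
P(Rain = true | e) ≈ (total weight of the samples with Rain = true) / (total weight of all samples),
and similarly for Rain = false.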
Likelihood weighting
function LIKELIHOOD-WEIGHTING(X, e, bn, N) returns an estimate of P(X|e)
    local variables: W, a vector of weighted counts over X, initially zero
    for j = 1 to N do
        x, w ← WEIGHTED-SAMPLE(bn, e)
        W[x] ← W[x] + w where x is the value of X in x
    return NORMALIZE(W[X])
Likelihood weighting
function WEIGHTED-SAMPLE(bn, e) returns an event and a weight
    x ← an event with n elements; w ← 1
    for i = 1 to n do
        if Xi has a value xi in e
            then w ← w × P(Xi = xi | parents(Xi))
            else xi ← a random sample from P(Xi | parents(Xi))
    return x, w
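A minimal Python sketch of WEIGHTED-SAMPLE and LIKELIHOOD-WEIGHTING for the same network may help. It repeats the CPT encoding from the rejection-sampling sketch above; the names and the CPT entries not quoted on the slides are illustrative assumptions.

import random

# Same sprinkler-network CPTs as in the rejection-sampling sketch;
# entries not quoted on the slides are assumed textbook values.
P_CLOUDY = 0.5
P_SPRINKLER = {True: 0.1, False: 0.5}                    # P(Sprinkler=true | Cloudy)
P_RAIN = {True: 0.8, False: 0.2}                         # P(Rain=true | Cloudy)
P_WETGRASS = {(True, True): 0.99, (True, False): 0.90,
              (False, True): 0.90, (False, False): 0.0}  # P(WetGrass=true | Sprinkler, Rain)

def weighted_sample(evidence):
    """Fix evidence variables, sample the others, return (event, weight)."""
    w = 1.0
    event = {}

    def handle(name, p_true):
        # Evidence variable: keep its observed value and multiply in its likelihood.
        # Non-evidence variable: sample it given the already-set parent values.
        nonlocal w
        if name in evidence:
            value = evidence[name]
            w *= p_true if value else (1.0 - p_true)
        else:
            value = random.random() < p_true
        event[name] = value
        return value

    cloudy = handle("Cloudy", P_CLOUDY)
    sprinkler = handle("Sprinkler", P_SPRINKLER[cloudy])
    rain = handle("Rain", P_RAIN[cloudy])
    handle("WetGrass", P_WETGRASS[(sprinkler, rain)])
    return event, w

def likelihood_weighting(X, evidence, n_samples):
    """Estimate P(X | evidence) from weighted counts, then normalize."""
    W = {True: 0.0, False: 0.0}
    for _ in range(n_samples):
        event, w = weighted_sample(evidence)
        W[event[X]] += w
    total = W[True] + W[False]
    return {v: c / total for v, c in W.items()}

# Example: estimate P(Rain | Cloudy = true, WetGrass = true)
print(likelihood_weighting("Rain", {"Cloudy": True, "WetGrass": True}, 100_000))

Reproducing the walkthrough by hand: an event with Cloudy fixed to true, Sprinkler sampled as false, Rain sampled as true and WetGrass fixed to true receives weight 0.5 · 0.9 = 0.45, exactly as on the previous slides.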