Bayesian networks
CISC 6525
Bayesian Networks
Chapter 14
Outline
Syntax
Semantics
Efficient representations
Inference
Bayesian networks
A simple, graphical notation for conditional independence assertions and hence for compact specification of full joint distributions
Syntax:
a set of nodes, one per variable
a directed, acyclic graph (link ≈ “directly influences”)
a conditional distribution for each node given its parents:
P(Xi | Parents(Xi))
In the simplest case, conditional distribution represented as a conditional probability table (CPT) giving the distribution over Xi for each combination of parent values
Independence
Two random variables A and B are independent iff P(A|B) = P(A)
Equivalently, P(A,B) = P(A|B)P(B) = P(A)P(B)
If n Boolean variables are independent, their full joint distribution is
P(X1, X2, …, Xn) = ∏i P(Xi)
= P(X1)P(X2)…P(Xn)
Absolute independence is a strong requirement.
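A small sketch of this factorization, using made-up marginals for three independent Boolean variables: n numbers determine all 2^n joint entries, which must sum to 1.

```python
from itertools import product

# Hypothetical marginals P(Xi = true) for three independent Boolean variables.
p_true = {"X1": 0.3, "X2": 0.6, "X3": 0.9}

def joint(assignment):
    # P(x1, x2, x3) = P(x1) P(x2) P(x3) under absolute independence
    prob = 1.0
    for var, val in assignment.items():
        prob *= p_true[var] if val else 1.0 - p_true[var]
    return prob

# 3 numbers specify all 2^3 = 8 joint entries, which sum to 1.
total = sum(joint(dict(zip(p_true, vals)))
            for vals in product([True, False], repeat=3))
print(round(total, 10))   # 1.0
```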
Conditional independence
Recall dentist example: Toothache, Cavity, Catch.
Joint distribution has 2^3 − 1 = 7 independent entries
But P(Catch | Toothache, Cavity) = P(Catch | Cavity)
And P(Toothache, Catch | Cavity) =
P(Toothache | Cavity) P(Catch | Cavity)
So P(Toothache, Catch, Cavity) =
P(Toothache | Cavity) P(Catch | Cavity) P(Cavity)
Full joint distribution only has 5 independent numbers
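A sketch of the savings: the five conditional numbers below (derived from the chapter's dentist joint table, e.g. P(cavity) = 0.2) reconstruct any of the eight joint entries via the factorization above.

```python
from itertools import product

# Five independent numbers, consistent with the chapter's dentist example.
p_cavity = 0.2
p_tooth_given = {True: 0.6, False: 0.1}   # P(toothache | cavity)
p_catch_given = {True: 0.9, False: 0.2}   # P(catch | cavity)

def joint(toothache, catch, cavity):
    # P(T, C, Cav) = P(T | Cav) P(C | Cav) P(Cav)
    pc = p_cavity if cavity else 1.0 - p_cavity
    pt = p_tooth_given[cavity] if toothache else 1.0 - p_tooth_given[cavity]
    pk = p_catch_given[cavity] if catch else 1.0 - p_catch_given[cavity]
    return pt * pk * pc

print(round(joint(True, True, True), 3))   # 0.6 * 0.9 * 0.2 = 0.108
total = sum(joint(t, c, cav) for t, c, cav in product([True, False], repeat=3))
print(round(total, 10))                    # 1.0
```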
Example
Topology of network encodes conditional independence assertions:
Weather is independent of the other variables
Toothache and Catch are conditionally independent given Cavity
Example
I’m at work, neighbor John calls to say my alarm is ringing, but neighbor Mary doesn’t call. Sometimes it’s set off by minor earthquakes. Is there a burglar?
Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls
Network topology reflects “causal” knowledge:
A burglar can set the alarm off
An earthquake can set the alarm off
The alarm can cause Mary to call
The alarm can cause John to call
Example contd.
Markov Blanket
Each node is conditionally independent of all others given its Markov blanket: its parents, its children, and its children's parents
Compactness
A CPT for Boolean Xi with k Boolean parents has 2^k rows for the combinations of parent values
Each row requires one number p for Xi = true
(the number for Xi = false is just 1-p)
If each variable has no more than k parents, the complete network requires O(n · 2^k) numbers
I.e., grows linearly with n, vs. O(2^n) for the full joint distribution
For burglary net, 1 + 1 + 4 + 2 + 2 = 10 numbers (vs. 2^5 − 1 = 31)
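The counting argument can be checked mechanically; a minimal sketch for the burglary network's structure:

```python
# Each node's CPT needs 2^k numbers, one per combination of its k Boolean parents.
parents = {
    "Burglary": [], "Earthquake": [],
    "Alarm": ["Burglary", "Earthquake"],
    "JohnCalls": ["Alarm"], "MaryCalls": ["Alarm"],
}
sizes = {v: 2 ** len(ps) for v, ps in parents.items()}
print(sum(sizes.values()))        # 1 + 1 + 4 + 2 + 2 = 10
print(2 ** len(parents) - 1)      # full joint: 2^5 - 1 = 31
```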
Semantics
The full joint distribution is defined as the product of the local conditional distributions:
P(X1, …, Xn) = ∏i=1..n P(Xi | Parents(Xi))
e.g., P(j ∧ m ∧ a ∧ ¬b ∧ ¬e)
= P(j | a) P(m | a) P(a | ¬b, ¬e) P(¬b) P(¬e)
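This product can be evaluated directly. The sketch below uses the standard CPT numbers from the chapter's burglary network to compute the joint entry where the alarm sounds and both neighbors call, but there is no burglary or earthquake.

```python
# CPT numbers from the chapter's burglary network.
P_b, P_e = 0.001, 0.002                              # P(Burglary), P(Earthquake)
P_a = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(Alarm | B, E)
P_j = {True: 0.90, False: 0.05}                      # P(JohnCalls | Alarm)
P_m = {True: 0.70, False: 0.01}                      # P(MaryCalls | Alarm)

# P(j, m, a, ¬b, ¬e) = P(j|a) P(m|a) P(a|¬b,¬e) P(¬b) P(¬e)
p = P_j[True] * P_m[True] * P_a[(False, False)] * (1 - P_b) * (1 - P_e)
print(round(p, 6))   # ≈ 0.000628
```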
Constructing Bayesian networks
1. Choose an ordering of variables X1, … ,Xn
2. For i = 1 to n
add Xi to the network
select parents from X1, …, Xi−1 such that
P(Xi | Parents(Xi)) = P(Xi | X1, …, Xi−1)
This choice of parents guarantees:
P(X1, …, Xn) = ∏i=1..n P(Xi | X1, …, Xi−1)
(chain rule)
= ∏i=1..n P(Xi | Parents(Xi))
(by construction)
Example
Suppose we choose the ordering M, J, A, B, E
P(J | M) = P(J)?
No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No
P(B | A, J, M) = P(B | A)? Yes
P(B | A, J, M) = P(B)? No
P(E | B, A ,J, M) = P(E | A)? No
P(E | B, A, J, M) = P(E | A, B)? Yes
Example
Example contd.
Deciding conditional independence is hard in noncausal directions
(Causal models and conditional independence seem hardwired for humans!)
Network is less compact: 1 + 2 + 4 + 2 + 4 = 13 numbers needed
Example 1
Example 2
Compact Conditional Distributions
A CPT grows exponentially with the number of parents,
and is infinite for continuous-valued nodes.
Solution: Canonical Distributions – distributions defined in terms of a small number of parameters.
Canonical Distributions
Boolean functions:
NorthAmerican ⇔ Canadian ∨ US ∨ Mexican
Numerical relationships
∂Level/∂t = inflow + precipitation − outflow − evaporation
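A sketch of both ideas: a deterministic Boolean node needs no stored CPT numbers at all, and the chapter's noisy-OR model needs only one inhibition parameter per parent (the values below are the chapter's fever example).

```python
# Deterministic Boolean function: NorthAmerican <=> Canadian or US or Mexican.
# No CPT entries needed; the value is fully determined by the parents.
def north_american(canadian, us, mexican):
    return canadian or us or mexican

# Noisy-OR: each present cause independently fails to trigger the effect
# with its own inhibition probability q_i, so only one number per parent.
def noisy_or(q, present_causes):
    """P(effect = true | causes); q maps cause -> inhibition probability."""
    p_all_inhibited = 1.0
    for cause in present_causes:
        p_all_inhibited *= q[cause]
    return 1.0 - p_all_inhibited

q = {"cold": 0.6, "flu": 0.2, "malaria": 0.1}   # fever example from the chapter
print(round(noisy_or(q, ["cold", "flu"]), 2))   # 1 - 0.6*0.2 = 0.88
```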
Canonical Distributions
Hybrid Distributions
Discrete (Subsidy, Buys); Continuous (Harvest, Cost)
Continuous variable, discrete+ continuous parents
Discrete variable, continuous parents
Continuous Canonical
Continuous Canonical
Inference in Bayesian networks
Exact Inference
A query P(X | e) can be answered by computing the sums of products of conditional probabilities over the hidden variables y.
P(X | e) = α Σy P(X, e, y)
Exact Inference
P(B | j, m) = α Σe Σa P(B, j, m, e, a)
P(b | j, m) = α Σe Σa P(b) P(e) P(a | b, e) P(j | a) P(m | a)
Computational complexity is O(n · 2^n)
Singly-connected/polytrees, even O(n)
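A minimal enumeration sketch for this query, using the chapter's burglary CPT numbers: sum the full joint over the hidden variables E and A, then normalize.

```python
from itertools import product

# CPT numbers from the chapter's burglary network.
P_b, P_e = 0.001, 0.002
P_a = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
P_j = {True: 0.90, False: 0.05}
P_m = {True: 0.70, False: 0.01}

def pr(p_true, val):
    # probability of a Boolean value, given P(true)
    return p_true if val else 1.0 - p_true

def joint(b, e, a, j, m):
    # full-joint entry as a product of CPT entries
    return (pr(P_b, b) * pr(P_e, e) * pr(P_a[(b, e)], a)
            * pr(P_j[a], j) * pr(P_m[a], m))

# P(B | j, m): sum out the hidden variables E and A, then normalize.
unnorm = {b: sum(joint(b, e, a, True, True)
                 for e, a in product([True, False], repeat=2))
          for b in (True, False)}
posterior = unnorm[True] / (unnorm[True] + unnorm[False])
print(round(posterior, 3))   # 0.284
```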
Inexact Inference
Direct sampling:
Sample each variable in turn, in topological order, to generate an event; estimate probabilities from the sample proportions
Rejection sampling
Reject samples that do not match the evidence
Likelihood weighting (importance sampling)
Fix the evidence variables and sample only the nonevidence variables, weighting each sample by the likelihood of the evidence
Markov Chain Monte Carlo (MCMC, Gibbs Sampling)
Generate the next sample by resampling one variable of the previous sample, conditioned on its Markov blanket
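As a concrete sketch of likelihood weighting on the burglary network (CPT numbers as in the chapter): the evidence j, m is clamped, the nonevidence variables B, E, A are sampled in topological order, and each sample carries the likelihood of the evidence as its weight.

```python
import random

# CPT numbers from the chapter's burglary network.
P_b, P_e = 0.001, 0.002
P_a = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
P_j = {True: 0.90, False: 0.05}
P_m = {True: 0.70, False: 0.01}

def weighted_sample(rng):
    # Sample nonevidence variables in topological order: B, E, A.
    b = rng.random() < P_b
    e = rng.random() < P_e
    a = rng.random() < P_a[(b, e)]
    # Evidence JohnCalls = true, MaryCalls = true is fixed, not sampled;
    # the sample's weight is the likelihood of that evidence.
    w = P_j[a] * P_m[a]
    return b, w

rng = random.Random(0)
totals = {True: 0.0, False: 0.0}
for _ in range(100_000):
    b, w = weighted_sample(rng)
    totals[b] += w
estimate = totals[True] / (totals[True] + totals[False])
print(estimate)   # approaches the exact answer (about 0.284) as samples grow
```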
Summary
Bayesian networks provide a natural representation for (causally induced) conditional independence
Topology + CPTs = compact representation of joint distribution
Generally easy for domain experts to construct