
Last Modified: April 20, 2021
CS 179: Introduction to Graphical Models, Spring 2021
Homework 3
Due Date: Wednesday, April 28th
The submission for this homework should be a single PDF file containing all of the relevant code, figures, and any text explaining your results. When coding your answers, try to write functions to encapsulate and reuse code, instead of copying and pasting the same code multiple times. This will not only reduce your programming efforts, but also make it easier for us to understand and give credit for your work. Show and explain the reasoning behind your work!
Problem 1 (adapted from R. Dechter) (40 points) Consider the Bayesian network shown below.
(a) (8 points) Draw the Markov random field (undirected graph) corresponding to this model and factorization.
(b) (8 points) Compute the marginal probability p(F) by eliminating variables in the order [A, B, C, D, E, G] (here meaning, A first, then B, then C, etc.). Report which functions are combined at each step (e.g., in each bucket of bucket elimination), and show the scope (arguments) of, and the table corresponding to, the function produced (λi). What is the induced width of this order (the largest scope of a function produced during the computation)?
Note: you may check your answer easily using the pyGMs GraphModel.eliminate function, but please compute the intermediate functions manually.
(c) (8 points) Without actually computing the tables, what is the induced width of the ordering [D, G, E, F, B, C, A]? (D first, then G, etc.)
(d) (8 points) Suppose that we observe evidence D = 0, C = 1. First, simplify the graphical model by assigning the values of these variables (this will remove those nodes from the graph); what is the new Markov random field (undirected graph) associated with this conditional model?
(e) (8 points) Now, compute the probability of evidence p(D = 0, C = 1) by eliminating (summing out) the other variables. Select a good order and compute the intermediate functions manually as in part (b); you do not need to output the tables themselves for this part, just the scopes and the final probability. What was the induced width of the order you selected for this problem? Again, you can check your result easily using the pyGMs condition and eliminate functions; see lecture and slides for examples.
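As a sanity check for parts (b) and (c), the induced width of an ordering can be computed from the factor scopes alone, without building any tables: simulate bucket elimination symbolically and track the largest scope produced. Below is a minimal sketch of such a check; the helper function and its name are my own illustration (not part of the assignment or of pyGMs), and the scopes come from the seven CPTs shown below.

```python
# Hypothetical helper (illustration only): simulate bucket elimination on
# factor scopes, tracking the largest scope of any produced function.
def induced_width(scopes, order):
    """scopes: list of sets of variable names; order: elimination order."""
    scopes = [set(s) for s in scopes]
    width = 0
    for var in order:
        # Collect all current functions mentioning `var` (its "bucket").
        bucket = [s for s in scopes if var in s]
        scopes = [s for s in scopes if var not in s]
        # Combining them and summing out `var` yields a function over the
        # union of their scopes, minus `var`.
        new_scope = set().union(*bucket) - {var}
        width = max(width, len(new_scope))
        scopes.append(new_scope)
    return width

# Scopes of the seven CPTs in Problem 1's factorization:
scopes = [{'A'}, {'B'}, {'D'}, {'A','C'}, {'B','C','E'}, {'D','E','F'}, {'E','G'}]
print(induced_width(scopes, ['A','B','C','D','E','G']))      # order from part (b)
print(induced_width(scopes, ['D','G','E','F','B','C','A']))  # order from part (c)
```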
[Figure: Bayesian network over nodes A, B, C, D, E, F, G; the factorization below implies the directed edges A→C, B→E, C→E, D→F, E→F, and E→G.]

A   p(A)
0   0.3
1   0.7

B   p(B)
0   0.6
1   0.4

D   p(D)
0   0.7
1   0.3

A   C   p(C|A)
0   0   0.15
0   1   0.85
1   0   0.25
1   1   0.75

E   G   p(G|E)
0   0   0.10
0   1   0.90
1   0   0.30
1   1   0.70

B   C   E   p(E|B,C)
0   0   0   0.40
0   0   1   0.60
0   1   0   0.45
0   1   1   0.55
1   0   0   0.60
1   0   1   0.40
1   1   0   0.30
1   1   1   0.70

D   E   F   p(F|D,E)
0   0   0   0.25
0   0   1   0.75
0   1   0   0.60
0   1   1   0.40
1   0   0   0.10
1   0   1   0.90
1   1   0   0.20
1   1   1   0.80
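The network is small (seven binary variables, 2^7 = 128 joint configurations), so the answers to parts (b) and (e) can also be verified by brute-force enumeration. Below is a minimal numpy sketch of that check, with the CPTs above entered as arrays (the array and function names are my own); the pyGMs route (GraphModel.eliminate, condition) mentioned in the problem remains the suggested check.

```python
import numpy as np
from itertools import product

# CPTs from the tables above, indexed by their arguments' values.
pA = np.array([0.3, 0.7])                      # p(A)
pB = np.array([0.6, 0.4])                      # p(B)
pD = np.array([0.7, 0.3])                      # p(D)
pC_A = np.array([[0.15, 0.85], [0.25, 0.75]])  # p(C|A), indexed [a, c]
pG_E = np.array([[0.10, 0.90], [0.30, 0.70]])  # p(G|E), indexed [e, g]
pE_BC = np.array([[[0.40, 0.60], [0.45, 0.55]],
                  [[0.60, 0.40], [0.30, 0.70]]])   # p(E|B,C), indexed [b, c, e]
pF_DE = np.array([[[0.25, 0.75], [0.60, 0.40]],
                  [[0.10, 0.90], [0.20, 0.80]]])   # p(F|D,E), indexed [d, e, f]

def joint(a, b, c, d, e, f, g):
    """Joint probability from the factorization of the network."""
    return (pA[a] * pB[b] * pD[d] * pC_A[a, c] *
            pE_BC[b, c, e] * pF_DE[d, e, f] * pG_E[e, g])

# Marginal p(F): sum the joint over all other variables (check for part (b)).
pF = np.zeros(2)
for a, b, c, d, e, f, g in product([0, 1], repeat=7):
    pF[f] += joint(a, b, c, d, e, f, g)
print("p(F) =", pF)

# Probability of evidence p(D=0, C=1) (check for part (e)).
pEvid = sum(joint(a, b, 1, 0, e, f, g)
            for a, b, e, f, g in product([0, 1], repeat=5))
print("p(D=0, C=1) =", pEvid)
```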

Problem 2: Hidden Markov Model (40 points)
Consider a four-state finite state machine, with states Yt ∈ {0, 1, 2, 3}, whose state transition diagram is shown below. We always start (at time t = 0) in state Y0 = 0. In this problem, we will compute marginal probabilities and posterior marginal probabilities.
[Figure: state transition diagram over states {0, 1, 2, 3}. The edge labels recovered from the figure (0.3, 0.5, 1.0, 0.4, 0.5, 0.3, 0.3, 0.7) are consistent with the transitions 0→1 (0.5), 0→2 (0.5), 1→2 (1.0), 2→0 (0.3), 2→2 (0.4), 2→3 (0.3), 3→2 (0.3), 3→3 (0.7), a reading that also matches the test cases in Problem 3.]
(a) (8 points) Create (as a numpy array) the transition matrix T associated with this state transition diagram. For consistency in solutions, please make the first axis correspond to the current time t, and the second axis to the next time (t + 1), i.e., T[a,b] = p(Yt+1 = b | Yt = a). Print the array T. To verify your answer, also calculate, both by hand and using T, the values of
(1) p(Y2 = 0 | Y0 = 0)
(2) p(Y2 = 2 | Y0 = 0)
(i.e., the state marginals at time 2), and verify that the two computations give the same answers.
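A minimal numpy sketch of part (a) follows. Note that the entries of T below assume the reading of the transition diagram given in the figure placeholder above, so double-check them against the actual figure.

```python
import numpy as np

# Transition matrix T[a, b] = p(Y_{t+1} = b | Y_t = a), assuming the diagram
# reading noted above: 0->1 (.5), 0->2 (.5), 1->2 (1.0), 2->0 (.3),
# 2->2 (.4), 2->3 (.3), 3->2 (.3), 3->3 (.7).
T = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.3, 0.0, 0.4, 0.3],
              [0.0, 0.0, 0.3, 0.7]])
assert np.allclose(T.sum(axis=1), 1.0)   # each row must be a distribution

p0 = np.array([1.0, 0.0, 0.0, 0.0])      # we always start in state 0
p2 = p0 @ T @ T                          # distribution of Y_2 given Y_0 = 0
print("p(Y2=0|Y0=0) =", p2[0], "  p(Y2=2|Y0=0) =", p2[2])
```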
(b) (8 points) Our observations Xt will be deterministic given the current state Yt, with Xt = 0 when Yt ∈ {0, 2} and Xt = 1 when Yt ∈ {1, 3}; i.e., Xt indicates whether we are in an odd-numbered state. Create (as a numpy array) the observation matrix O, with O[a,c] = p(Xt = c | Yt = a). Print the array O.
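Since the observation model is deterministic, O contains only 0s and 1s; a minimal sketch, continuing the script above:

```python
# Deterministic observation model: X_t = 0 for even states {0, 2},
# X_t = 1 for odd states {1, 3}; O[a, c] = p(X_t = c | Y_t = a).
O = np.array([[1.0, 0.0],    # state 0 -> X = 0
              [0.0, 1.0],    # state 1 -> X = 1
              [1.0, 0.0],    # state 2 -> X = 0
              [0.0, 1.0]])   # state 3 -> X = 1
print(O)
```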
(c) (16 points) Compute the following probability distributions, either by hand or using your arrays (a forward-filtering sketch follows this list):
(1) p(Y1)
(2) p(Y1 | X1 = 0)
(3) p(Y2)
(4) p(Y2 | X1 = 0, X2 = 1)
(5) p(Y2 | X1 = 0, X2 = 0)
(6) p(Y3 | X1 = 0, X2 = 0, X3 = 1)
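All six distributions can be computed with a single forward (filtering) pass: propagate through T, and whenever an observation is available, weight by the corresponding column of O and renormalize. A minimal sketch, reusing the T, O, and p0 arrays above (the helper name is my own):

```python
def forward_posterior(T, O, p0, xs):
    """Return p(Y_t | X_1..X_t = xs) after t = len(xs) steps; entries of
    xs that are None are treated as unobserved."""
    p = p0.copy()
    for x in xs:
        p = p @ T                    # predict: one transition step
        if x is not None:
            p = p * O[:, x]          # weight by the observation likelihood
            p = p / p.sum()          # renormalize to a posterior
    return p

print(forward_posterior(T, O, p0, [None]))        # (1) p(Y1)
print(forward_posterior(T, O, p0, [0]))           # (2) p(Y1 | X1=0)
print(forward_posterior(T, O, p0, [None, None]))  # (3) p(Y2)
print(forward_posterior(T, O, p0, [0, 1]))        # (4) p(Y2 | X1=0, X2=1)
print(forward_posterior(T, O, p0, [0, 0]))        # (5) p(Y2 | X1=0, X2=0)
print(forward_posterior(T, O, p0, [0, 0, 1]))     # (6) p(Y3 | X1=0,X2=0,X3=1)
```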
(d) (8 points) What is the distribution of states far in the future, e.g., p(Y100)?
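One way to examine part (d) numerically is to propagate the initial distribution many steps and see whether it stops changing; a quick sketch, again reusing T and p0:

```python
# Distribution far in the future: p0 @ T^t for large t.
p99 = p0 @ np.linalg.matrix_power(T, 99)
p100 = p0 @ np.linalg.matrix_power(T, 100)
print("p(Y99)  ~", p99)
print("p(Y100) ~", p100)   # if these agree, the chain has (numerically) converged
```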
Problem 3 (Python): (20 points)
In this problem, we will use the same model as Problem 2, but compute the most likely state sequence, i.e., the (arg) maximum over the Y variables, in Python. Write a python function or script that takes in: the transition probability matrix T, where T[i,j] is the probability of transitioning from state i to state j; an observation probability matrix O (where O[i,k] is the probability of observing value k when in state i); an initial state distribution vector p0; and an observation sequence X = [X1, …, XT], where Xt ∈ {0, …}; and computes the most likely state sequence S = [S1, …, ST] given X using dynamic programming. You may use either numpy arrays or pyGMs factors to represent the messages.
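Below is a minimal numpy sketch of such a function. The name viterbi, the message layout, and the convention that the observation sequence aligns with Y0, Y1, … are my own choices, inferred from the test cases below; T again assumes the diagram reading from Problem 2.

```python
import numpy as np

# Model from Problem 2; T assumes the diagram reading noted there.
T = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.3, 0.0, 0.4, 0.3],
              [0.0, 0.0, 0.3, 0.7]])
O = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
p0 = np.array([1.0, 0.0, 0.0, 0.0])

def viterbi(T, O, p0, xs):
    """Most likely state sequence [Y0, ..., Y_{n-1}] given observations xs."""
    n, k = len(xs), T.shape[0]
    m = np.zeros((n, k))                  # max-product (Viterbi) messages
    back = np.zeros((n, k), dtype=int)    # argmax back-pointers
    m[0] = p0 * O[:, xs[0]]               # include Y0's observation
    for t in range(1, n):
        scores = m[t-1][:, None] * T      # scores[i, j]: best path into j via i
        back[t] = scores.argmax(axis=0)
        m[t] = scores.max(axis=0) * O[:, xs[t]]
    seq = [int(m[-1].argmax())]           # best final state, then backtrack
    for t in range(n - 1, 0, -1):
        seq.append(int(back[t, seq[-1]]))
    return seq[::-1]

print(viterbi(T, O, p0, [0, 0, 1]))       # should print [0, 2, 3]
```

For longer sequences these products underflow quickly; a real implementation would typically store log-messages and add log-probabilities instead.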
You can use the following test cases to help debug:
◦ argmax_S p(Y | X = [0, 0, 1]) → [0, 2, 3]
◦ argmax_S p(Y | X = [0, 1, 0, 0]) → [0, 1, 2, 2]
◦ argmax_S p(Y | X = [0, 0, 0, 0]) → [0, 2, 2, 2]
◦ argmax_S p(Y | X = [0, 1, 0, 0, 0, 1, 0, 0, 1]) → [0, 1, 2, 2, 0, 1, 2, 0, 1]
◦ argmax_S p(Y | X = [0, 0, 1, 1, 1, 0, 0]) → [0, 2, 3, 3, 3, 2, 2]
For your solution, provide a listing of your function, and your predictions on a few more test cases:
(a) argmax_S p(Y | X = [0, 0, 0, 1]) → ?
(b) argmax_S p(Y | X = [0, 0, 0, 1, 1]) → ?
(c) argmax_S p(Y | X = [0, 0, 0, 1, 0]) → ?
(d) argmax_S p(Y | X = [0, 0, 0, 1, 0, 0]) → ?