T3-2021 Exam¶
COMP9418 – Advanced Topics in Statistical Machine Learning
University of New South Wales
7th December, 2021
Before proceeding, please read and acknowledge the following (double-click on this cell and put an X
between the brackets [X]
):
- [ ] I acknowledge that I will complete all of the work I submit for this exam without assistance from anyone else.
Instructions¶
- This exam will last for 12 hours, starting 7/12/2021 at 09:00:00 AEDT and ending 7/12/2021 at 21:00:00 AEDT.
- Questions will be answered from 9 am to 9 pm AEDT. Questions should be posted in the [WebCMS forum]
- You must provide all answers in this Jupyter notebook.
- You must use the cells provided to answer the questions. Use markdown cells for textual answers and code cells for programming answers.
- Submit this exam by give (command line or WebCMS) before the deadline. If WebCMS submission is slow, or if you are submitting in the last hour, please submit using the give command on the CSE servers (via VLAB or ssh). The appropriate command is `give cs9418 exam *.ipynb`. We will not accept late submissions.
- The exam has three parts: multiple-choice questions (20%); questions that require a textual answer (50%); and programming questions in Python (30%).
- This exam is an open book exam. You are permitted to access papers and books as well as the course materials, including slides and solved tutorials. Please, in case of doubt, read the UNSW guidance on open book exams.
- You are not permitted to communicate (email, phone, message, talk, etc.) with anyone during the exam, except COMP9418 staff via email or forum.
- Do not communicate your exam answers after you finish your exam. Some students may have extended time to complete the exam.
- Do not place your exam work in any location accessible to any other person, such as Dropbox and Github.
- Ensure that no other person in your household can access your work.
- Do not disclose your zpass to any other person. If you have revealed your zpass, you should change it immediately.
- We will refer deliberate violations of exam conditions to Student Integrity as serious misconduct.
- This exam has nine questions. The total number of marks is 100.
- Type your student number and name on the next cell.
from DiscreteFactors import Factor
from Graph import Graph
from BayesNet import BayesNet
from GaussianFactor import GaussianFactor
Part 1 [20 marks]¶
Part 1 is composed of four multiple-choice questions of five marks each. To answer each question, double-click the cell with the alternatives and write an X between the `[ ]` of the chosen option.
This is an example before inserting X
- [ ] Alternative one
- [ ] Alternative two
This is an example after inserting X
- [X] Alternative one
- [ ] Alternative two
For all four questions, choose only one among the alternatives.
Question 1 [5 marks]¶
Consider the directed graph $G_D$ (left) and the undirected graph $G_U$ (right). Also, consider the probability distributions $P_D$ and $P_U$ that factorise according to $G_D$ and $G_U$, respectively. Select the correct alternative regarding graph separation and probabilistic independence.
- [ ] $dsep_{G_D}(B,F,D)$ and $sep_{G_U}(B,F,D)$. Therefore, $B \perp D | F$ in both $P_D$ and $P_U$.
- [ ] $dsep_{G_D}(C,\emptyset,D)$ and $sep_{G_U}(C,\emptyset,D)$. Therefore, $C \perp D$ in both $P_D$ and $P_U$.
- [ ] $\neg dsep_{G_D}(B,F,D)$ and $\neg sep_{G_U}(B,F,D)$. Therefore, $B \not\perp D | F$ in $P_D$ and $B \not\perp D | F$ in $P_U$.
- [ ] $\neg dsep_{G_D}(B,F,D)$ and $\neg sep_{G_U}(C,\emptyset,D)$. Therefore, $B \not\perp D | F$ in $P_D$ and $C \not\perp D$ in $P_U$.
- [X] None of the above alternatives.
Question 2 [5 marks]¶
Let’s suppose we apply the Expectation-Maximization (EM) approach to learning parameters from a complete (with no missing values) dataset. Choose the correct alternative:
- [X] EM will converge in a single iteration, and it will return the maximum-likelihood estimates independent of the initial parameter values.
- [ ] EM will converge in one or more iterations, and it will return the maximum-likelihood estimates independent of the initial parameter values.
- [ ] EM will often converge to the maximum-likelihood estimates, but it will depend on the quality of the initial parameter values.
- [ ] EM will not converge to the maximum-likelihood estimates.
- [ ] EM is not applicable to complete data.
Question 3 [5 marks]¶
Suppose we used the Variable Elimination algorithm on a Gaussian Bayesian Network with elimination order $o$. The network has $N$ variables and the width of the network with elimination order $o$ is $w$. What is the time complexity of this algorithm? Choose the option with the tightest upper bound.
- [ ] $N e^w$
- [ ] $N w^3$
- [X] $N w^2$
- [ ] $N w$
- [ ] $N \log w$
Question 4 [5 marks]¶
Consider the following three directed graphs, $G_1$, $G_2$ and $G_3$.
On a dataset $D$, $G_1$ has a log-likelihood of -32.4, $G_2$ has a log-likelihood of -28.3, and $G_3$ has a log-likelihood of -15.2. $D$ contains 64 data points and six binary variables. Rank the three graphs using the Minimum Description Length (MDL) score, from best to worst on the dataset $D$.
- [ ] $G_1, G_2, G_3$
- [X] $G_1, G_3, G_2$
- [ ] $G_2, G_3, G_1$
- [ ] $G_3, G_2, G_1$
- [ ] $G_3, G_1, G_2$
Part 2 [50 marks]¶
Part 2 is composed of three open questions. To answer each question, edit the markdown cell after the question statement and insert your answer.
Question 5 [20 marks]¶
Andy and Lance are two metal detectorists who like to work together. Their hobby consists of using metal detectors to find buried metallic objects (targets). Andy uses an XP metal detector, while Lance operates a Minelab. When searching for a buried target, they both measure their machines’ response before excavating the target. A target can be trash (bottle caps, cans, nails, etc.) or treasure (coins, rings, historical artefacts, etc.).
The XP correctly recognises trash with 90% accuracy but has a lower accuracy of 70% for correctly identifying treasure. The Minelab machine identifies trash and treasure correctly with 80% and 70% accuracy, respectively.
Both detectors are sensitive machines that may require calibration from time to time. The XP is a more robust machine, and the probability of being uncalibrated is only 1%. The Minelab has a higher chance of being uncalibrated, 5%. An uncalibrated device has its accuracy for detecting treasure reduced by 10% while leaving the trash accuracy unmodified.
Given that 99% of the detected objects are trash, what is the probability of a target being a treasure given that both machines read treasure?
- [5 Marks] Show a Bayesian network structure (graph) for this problem.
- [5 Marks] Briefly explain your network.
- [5 Marks] Show the outcome space and network conditional probability tables (CPTs) for all variables. If any information is missing, assume a uniform distribution.
- [5 Marks] What is the answer to this question? Solve it as a programming exercise, i.e., provide a program that computes the solution. Is the numerical solution what you expect? Briefly comment on the resulting probabilities.
# Your answer for 1 - Bayesian network structure
# Define your graph here
G = Graph({
'T': ('X', 'M'),
    'CX': ('X',),
    'CM': ('M',),
'X': (),
'M': (),
})
# To improve visualisation
pos = {
'CX': '1,1!',
'T': '3,1!',
'CM': '5,1!',
'X': '2,0!',
'M': '4,0!',
}
G.show(positions=pos)
Your answer for 2
Variables:
- $T$: Target, it can assume values ‘treasure’ or ‘trash’
- $CX$: Calibration XP, it can assume values ‘calibrated’ or ‘uncalibrated’
- $CM$: Calibration Minelab, it can assume values ‘calibrated’ or ‘uncalibrated’
- $X$: XP reading, it can assume values ‘treasure’ or ‘trash’
- $M$: Minelab reading, it can assume values ‘treasure’ or ‘trash’
Both the XP and Minelab readings are influenced by the target under the coil and by each machine’s calibration state.
# Your answer for 3
outcomeSpace = dict(
T=('treasure', 'trash'),
CX=('calibrated', 'uncalibrated'),
CM=('calibrated', 'uncalibrated'),
X=('treasure', 'trash'),
M=('treasure', 'trash'),
)
BN = BayesNet(G, outcomeSpace)
T_prob = Factor(('T',), outcomeSpace)
T_prob['treasure'] = 0.01
T_prob['trash'] = 0.99
BN.factors['T'] = T_prob
CX_prob = Factor(('CX',), outcomeSpace)
CX_prob['calibrated'] = 0.99
CX_prob['uncalibrated'] = 0.01
BN.factors['CX'] = CX_prob
CM_prob = Factor(('CM',), outcomeSpace)
CM_prob['calibrated'] = 0.95
CM_prob['uncalibrated'] = 0.05
BN.factors['CM'] = CM_prob
X_prob = Factor(('CX', 'T', 'X'), outcomeSpace)
X_prob['calibrated', 'treasure', 'treasure'] = 0.7
X_prob['calibrated', 'treasure', 'trash'] = 0.3
X_prob['calibrated', 'trash', 'treasure'] = 0.1
X_prob['calibrated', 'trash', 'trash'] = 0.9
X_prob['uncalibrated', 'treasure', 'treasure'] = 0.63
X_prob['uncalibrated', 'treasure', 'trash'] = 0.37
X_prob['uncalibrated', 'trash', 'treasure'] = 0.1
X_prob['uncalibrated', 'trash', 'trash'] = 0.9
# Students may also use these probabilities in the case
# they understand that the accuracy should be reduced by
# 10 percentage points
# X_prob['uncalibrated', 'treasure', 'treasure'] = 0.6
# X_prob['uncalibrated', 'treasure', 'trash'] = 0.4
# X_prob['uncalibrated', 'trash', 'treasure'] = 0.1
# X_prob['uncalibrated', 'trash', 'trash'] = 0.9
BN.factors['X'] = X_prob
M_prob = Factor(('CM', 'T', 'M'), outcomeSpace)
M_prob['calibrated', 'treasure', 'treasure'] = 0.7
M_prob['calibrated', 'treasure', 'trash'] = 0.3
M_prob['calibrated', 'trash', 'treasure'] = 0.2
M_prob['calibrated', 'trash', 'trash'] = 0.8
M_prob['uncalibrated', 'treasure', 'treasure'] = 0.63
M_prob['uncalibrated', 'treasure', 'trash'] = 0.37
M_prob['uncalibrated', 'trash', 'treasure'] = 0.2
M_prob['uncalibrated', 'trash', 'trash'] = 0.8
# Students may also use these probabilities in the case
# they understand that the accuracy should be reduced by
# 10 percentage points
# M_prob['uncalibrated', 'treasure', 'treasure'] = 0.6
# M_prob['uncalibrated', 'treasure', 'trash'] = 0.4
# M_prob['uncalibrated', 'trash', 'treasure'] = 0.2
# M_prob['uncalibrated', 'trash', 'trash'] = 0.8
BN.factors['M'] = M_prob
# Your answer for 4
Q = BN.query(('T',), X='treasure', M='treasure')
print(Q)
╒══════════╤══════════╕
│ T        │       Pr │
╞══════════╪══════════╡
│ treasure │ 0.197426 │
├──────────┼──────────┤
│ trash    │ 0.802574 │
╘══════════╧══════════╛
Briefly comment on the results of 4
$P(T=treasure|X=treasure, M=treasure)$ is still fairly low despite both machines indicating it is a treasure. This happens because the prior probability of a target being treasure is very low (1%).
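As a quick hand check of the query, we can marginalise out the calibration variables using the CPTs above:

$P(X{=}\text{treasure} \mid T{=}\text{treasure}) = 0.99 \times 0.7 + 0.01 \times 0.63 = 0.6993$

$P(M{=}\text{treasure} \mid T{=}\text{treasure}) = 0.95 \times 0.7 + 0.05 \times 0.63 = 0.6965$

$P(X{=}\text{treasure} \mid T{=}\text{trash}) = 0.1, \qquad P(M{=}\text{treasure} \mid T{=}\text{trash}) = 0.2$

$P(T{=}\text{treasure} \mid X{=}\text{treasure}, M{=}\text{treasure}) = \frac{0.01 \times 0.6993 \times 0.6965}{0.01 \times 0.6993 \times 0.6965 + 0.99 \times 0.1 \times 0.2} \approx 0.1974$

which agrees with the program output.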
Question 6 [15 marks]¶
Lecture 10 discussed MAP queries. We developed the MPE VE and MAP VE algorithms based on the Variable Elimination (VE) algorithm. In lecture 11, we learned about jointrees. Suppose we want to replace the VE algorithm with the jointree algorithm for computing MAP and MPE queries. Answer the following questions:
- How can we redefine the jointree messages to answer maximum a posteriori queries?
- Will this new algorithm work for both MAP and MPE queries? Briefly explain.
- What are the benefits and limitations of this new approach compared with variable elimination?
Your answer for 1
There are two approaches for this problem.
In the first approach, we can modify the messages to use maximisation instead of summation. The clusters will then store max-marginals instead of beliefs. A message defined in the lectures as
$M_{ij} =\sum_{C_{i}\backslash S_{ij}} \Phi_{i}\prod_{k\neq j}M_{ki}$
is redefined as
$M_{ij} =\max_{C_{i}\backslash S_{ij}} \Phi_{i}\prod_{k\neq j}M_{ki}$
In the second approach, we can compute MAP queries with the original jointree messages and then maximise out variables within a cluster. In this case, the approach is limited to queries in which the MAP variables all belong to the same cluster.
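As an illustration of the redefined message, here is a minimal NumPy sketch (it does not use the course Factor class, and the potential values are made up) contrasting the sum-product and max-product messages over a toy cluster:

```python
import numpy as np

# Hypothetical cluster potential over variables (A, B): rows index A, columns index B.
# Suppose the separator S_ij = {B}, so the message to cluster j is a table over B.
phi_i = np.array([[0.06, 0.24],
                  [0.35, 0.35]])

sum_message = phi_i.sum(axis=0)   # sum-product message: sum out C_i \ S_ij = {A}
max_message = phi_i.max(axis=0)   # max-product message: max out C_i \ S_ij = {A}

print(sum_message)  # [0.41 0.59]
print(max_message)  # [0.35 0.35]
```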
Your answer for 2
The first approach only works for MPE queries. MAP queries require summing out the non-MAP variables before maximising out the MAP variables. Since we pass messages between all clusters, we have no control over the order of the max and sum operations.
The second approach works for MAP queries, but it is limited to the case where the MAP variables all lie in the same cluster.
Your answer for 3
Compared with variable elimination, both approaches allow us to answer multiple queries from a single round of message passing. However, the first approach is limited to MPE queries, while the second is limited to MAP queries whose MAP variables lie in a single cluster.
Question 7 [15 marks]¶
In lecture 15, we performed inference by creating and counting samples. We generated those samples by simulating a Bayesian Network $N$, and we learned that this simulation could be difficult in the presence of evidence. Now, suppose we have a jointree with cluster marginals $P(C_i|\textbf{e})$ and separator marginals $P(S_{ij}|\textbf{e})$ obtained with the execution of the jointree algorithm on the network $N$ and evidence $\textbf{e}$.
- Provide an efficient algorithm for simulating $N$ conditioned on $\textbf{e}$ given the cluster and separator marginals.
- Argue that your algorithm generates a sequence of independent network instantiations $x_1, … , x_n$ where the probability of generating an instance $x_i$ is $P(x_i|\textbf{e})$.
- Discuss the complexity of your algorithm.
Your answer for 1
Input: Jointree $J$ with clusters $\textbf{C}_i$ and separators $\textbf{S}_{ij}$
$~~~~~~~~~$ Evidence $\textbf{e}$
Output: Instantiation $\Sigma$
- $\Sigma \leftarrow \textbf{e}$
- Start a depth-first search from a random cluster $\textbf{C}_i$
- While not all non-evidence variables are sampled
- $~~~~~$$\sigma \leftarrow$ random instantiation of the variables in $\textbf{C}_i\backslash\textbf{e}$ according to $P(\textbf{C}_i|\textbf{e})$
- $~~~~~$$\Sigma \leftarrow \Sigma \cup \sigma$
- $~~~~~$Find in DFS order a neighbouring cluster to $\textbf{C}_i$. Call this new cluster $\textbf{C}_j$
- $~~~~~$Set evidence in $\textbf{C}_j$ according to $\textbf{S}_{ij} = \sigma$
- $~~~~~$$\textbf{e} \leftarrow \textbf{e} \cup S_{ij} = \sigma$
- $~~~~~$Rename $\textbf{C}_j$ as $\textbf{C}_i$
- return $\Sigma$
Remarks:
- We need to visit the clusters in DFS order, or any order that lets us visit them according to the jointree structure. If we pick the clusters in random order, setting evidence is insufficient to provide the correct results: we may have a long chain of influence such as $A \rightarrow B \rightarrow C \rightarrow D \ldots$
- We can generate an instantiation of all variables in $\textbf{C}_i\backslash\textbf{e}$ in a single step (sketched below). The procedure generates a random number in $[0,1]$ and checks which instantiation the random number falls in. Although generating the random number is $O(1)$, finding the corresponding instantiation is $O(\exp(|\textbf{C}_i|))$ with a naive linear search.
- There is no need to follow topological order, as it happens for Bayesian networks. Topological order was used for Bayes nets because nodes store conditional probabilities of variables given parents.
- The clusters store joint marginal probabilities. We can set evidence by zeroing rows that do not conform with evidence and renormalising.
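A minimal Python sketch of the per-cluster sampling step (step 4), assuming the cluster marginal $P(\textbf{C}_i|\textbf{e})$ is available as a dictionary from instantiations (tuples of outcomes) to probabilities; the names and table format here are illustrative, not the course classes:

```python
import numpy as np

def sample_cluster(cluster_marginal):
    # Draw one instantiation of the non-evidence cluster variables via
    # inverse-transform sampling over the (renormalised) marginal table.
    instantiations = list(cluster_marginal.keys())
    probs = np.array([cluster_marginal[x] for x in instantiations], dtype=float)
    probs = probs / probs.sum()           # renormalise after zeroing evidence-inconsistent rows
    u = np.random.rand()                  # random number in [0, 1)
    cumulative = np.cumsum(probs)
    idx = min(int(np.searchsorted(cumulative, u)), len(instantiations) - 1)
    return instantiations[idx]

# Toy cluster over (A, B) with evidence already incorporated
marginal = {('a0', 'b0'): 0.1, ('a0', 'b1'): 0.2, ('a1', 'b0'): 0.3, ('a1', 'b1'): 0.4}
print(sample_cluster(marginal))
```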
Your answer for 2
This algorithm uses the marginals provided by the clusters of the jointree. All marginals are “calibrated”, i.e., computing $P(X)$ results in the same probabilities for any marginal $M$ that contains $X$.
As we know, the jointree has the correct marginals for the associated Bayesian network. Also, the marginals have already incorporated evidence. Therefore, there is no need for rejection sampling, and we do not sample the evidence.
The first cluster is sampled according to $P(\textbf{C}_i|\textbf{e})$. The remaining ones are sampled according to $P(\textbf{C}_j|\textbf{C}_i = \sigma, \textbf{e})$. We carefully add more evidence as we sample the jointree clusters, respecting the jointree structure. This step is important so that we respect long chains of influence between variables.
Your answer for 3
This analysis assumes that the simulation procedure generates only one sample. The jointree $J$ is already computed according to the evidence.
Given that we have $n$ clusters in $J$, the DFS procedure visits each cluster once. During each visit, the algorithm executes steps 4 to 9. The most expensive step is step 4, which requires searching the table that stores $P(\textbf{C}_i|\textbf{e})$. This table has size $O(\exp(|\textbf{C}_i|))$.
Therefore, the complexity is $O(n \exp(w))$, where $w$ is the number of variables in the largest cluster. We could create an indexing structure for searching the instantiation. However, this index does not improve the complexity for a single sample, although it gives a better amortised complexity when generating many samples.
Part 3 [30 marks]¶
Part 3 is composed of two programming questions of 15 marks each. Use the code cell after each question statement to enter your answer. You can use the code of the tutorials in your answers.
Question 8 [15 marks]¶
In Lecture 11, we learned about jointrees. These probabilistic graphical models must obey a property known as running intersection to be considered proper jointrees. Implement a function `RIP(J)` that returns true if the jointree `J` obeys the running intersection property and false otherwise. The argument `J` is a jointree object as defined in the tutorials.
# Write your answer for Question 8 here
def sep(v, w):
    # Build the key used to index the separator between clusters v and w.
    # Separator keys store the smaller cluster name first (e.g. '12').
    if v < w:
        return str(v) + str(w)
    else:
        return str(w) + str(v)

def RIP_dfs(J, v, var):
    # Check the running intersection property for a single variable `var`,
    # starting a depth-first search from cluster `v`.
    colour = {node: 'white' for node in J.graph}
    return RIP_dfs_r(J, v, var, colour)

def RIP_dfs_r(J, v, var, colour, state=True):
    # `state` is True while we are inside the connected region of clusters that
    # must contain `var`; once we cross a separator without `var`, it becomes
    # False and `var` must not appear again anywhere in that subtree.
    colour[v] = 'gray'
    if state:
        if var not in J.clusters[v]:
            return False
    else:
        if var in J.clusters[v]:
            return False
    # Explore every unvisited neighbour; a violation in any branch fails the test.
    for w in J.graph.adj_list[v]:
        if colour[w] == 'white':
            if var in J.separators[sep(v, w)]:
                if not state:
                    # `var` occurs in a separator outside its connected region.
                    return False
                if not RIP_dfs_r(J, w, var, colour, True):
                    return False
            else:
                # Crossing a separator without `var`: it must not reappear beyond it.
                if not RIP_dfs_r(J, w, var, colour, False):
                    return False
    return True

def RIP(J):
    # The jointree satisfies the running intersection property iff the check
    # holds for every variable of every cluster.
    for v in J.clusters:
        for var in J.clusters[v]:
            if not RIP_dfs(J, v, var):
                return False
    return True
############
# Test code
class JoinTree():
def __init__(self, graph, clusters, separators):
self.graph = graph
self.separators = separators
self.clusters = clusters
G = Graph({
'1': ('2', ),
'2': ('1', '3', '4'),
'3': ('2', ),
'4': ('2', '5'),
'5': ('4', ),
})
S = {
'12': ('S', 'V'),
'23': ('V', ),
'24': ('O', ),
'45': ('T', ),
}
C = {
'1': ('S', 'L', 'H', 'V'),
'2': ('O', 'S', 'V'),
'3': ('C', 'V'),
'4': ('B', 'O', 'T'),
'5': ('A', 'T'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
###################
# Hidden test cases
marks = 0
G = Graph({
'1': ('2', ),
'2': ('1', '3', '4'),
'3': ('2', ),
'4': ('2', '5'),
'5': ('4', ),
})
S = {
'12': ('S', 'V'),
'23': ('V', ),
'24': ('O', ),
'45': ('T', ),
}
C = {
'1': ('S', 'L', 'H', 'V'),
'2': ('O', 'S', 'V'),
'3': ('C', 'V'),
'4': ('B', 'O', 'T'),
'5': ('A', 'T'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if RIP(jt):
marks += 1
G = Graph({
'1': ('2', ),
'2': ('1', '3', '4'),
'3': ('2', ),
'4': ('2', '5'),
'5': ('4', ),
})
S = {
'12': ('S', ),
'23': (),
'24': ('O', ),
'45': ('T', ),
}
C = {
'1': ('S', 'L', 'H', 'V'),
'2': ('O', 'S'),
'3': ('C', 'V'),
'4': ('B', 'O', 'T'),
'5': ('A', 'T'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if not RIP(jt):
marks += 1
G = Graph({
'1': (),
})
S = {
}
C = {
'1': ('S', 'L', 'H', 'V'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if RIP(jt):
marks += 1
G = Graph({
'1': ('3', ),
'2': ('4', ),
'3': ('1', '4', '5'),
'4': ('2', '3', '6'),
'5': ('3', ),
'6': ('4', ),
})
S = {
'13': ('D', 'F'),
'24': ('A', 'E'),
'34': ('A', 'F'),
'35': ('A', 'D'),
'46': ('E', 'F'),
}
C = {
'1': ('F', 'D', 'G'),
'2': ('A', 'C', 'E'),
'3': ('A', 'D', 'F'),
'4': ('A', 'E', 'F'),
'5': ('A', 'B', 'D'),
'6': ('E', 'F', 'H'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if RIP(jt):
marks += 1
G = Graph({
'1': ('3', ),
'2': ('4', ),
'3': ('1', '4', '5'),
'4': ('2', '3', '6'),
'5': ('3', ),
'6': ('4', ),
})
S = {
'13': ('D', 'F'),
'24': ('E'),
'34': ('F'),
'35': ('A', 'D'),
'46': ('E', 'F'),
}
C = {
'1': ('F', 'D', 'G'),
'2': ('A', 'C', 'E'),
'3': ('A', 'D', 'F'),
'4': ('E', 'F'),
'5': ('A', 'B', 'D'),
'6': ('E', 'F', 'H'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if not RIP(jt):
marks += 1
G = Graph({
'1': ('3', ),
'2': ('4', ),
'3': ('1', '4', '5'),
'4': ('2', '3', '6'),
'5': ('3', ),
'6': ('4', ),
})
S = {
'13': ('A', 'D', 'F'),
'24': ('E', ),
'34': ('F', ),
'35': ('A', 'D'),
'46': ('E', 'F'),
}
C = {
'1': ('A', 'F', 'D', 'G'),
'2': ('A', 'C', 'E'),
'3': ('A', 'D', 'F'),
'4': ('E', 'F'),
'5': ('A', 'B', 'D'),
'6': ('E', 'F', 'H'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if not RIP(jt):
marks += 1
G = Graph({
'1': ('3',),
'2': ('4'),
'3': ('1', '4', '5'),
'4': ('2', '3', '6'),
'5': ('3',),
'6': ('4',),
})
S = {
'13': ('A', 'D', 'F'),
'24': ('A', 'E'),
'34': ('A', 'F'),
'35': ('A', 'D'),
'46': ('A', 'E', 'F'),
}
C = {
'1': ('A', 'F', 'D', 'G'),
'2': ('A', 'C', 'E'),
'3': ('A', 'D', 'F'),
'4': ('A', 'E', 'F'),
'5': ('A', 'B', 'D'),
'6': ('A', 'E', 'F', 'H'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if RIP(jt):
marks += 1
G = Graph({
'1': ('3',),
'2': ('4'),
'3': ('1', '4', '5'),
'4': ('2', '3', '6'),
'5': ('3',),
'6': ('4',),
})
S = {
'13': ('D', 'F'),
'24': ('A', 'E'),
'34': ('F', ),
'35': ('D', ),
'46': ('A', 'E', 'F'),
}
C = {
'1': ('A', 'F', 'D', 'G'),
'2': ('A', 'C', 'E'),
'3': ('D', 'F'),
'4': ('A', 'E', 'F'),
'5': ('A', 'B', 'D'),
'6': ('A', 'E', 'F', 'H'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if not RIP(jt):
marks += 1
G = Graph({
'1': ('2', ),
'2': ('1', '3'),
'3': ('2', '4'),
'4': ('3', '5'),
'5': ('4', ),
})
S = {
'12': ('A', 'B'),
'23': ('A', 'B'),
'34': ('A', 'B'),
'45': ('A', 'B'),
}
C = {
'1': ('A', 'B'),
'2': ('A', 'B'),
'3': ('A', 'B'),
'4': ('A', 'B'),
'5': ('A', 'B'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if RIP(jt):
marks += 1
G = Graph({
'1': ('2', ),
'2': ('1', '3'),
'3': ('2', '4'),
'4': ('3', '5'),
'5': ('4', ),
})
S = {
'12': ('A', 'B'),
'23': ('A', 'B'),
'34': ('B'),
'45': ('B'),
}
C = {
'1': ('A', 'B'),
'2': ('A', 'B'),
'3': ('A', 'B'),
'4': ('B'),
'5': ('A', 'B'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if not RIP(jt):
marks += 1
G = Graph({
'1': ('2', ),
'2': ('1', '3'),
'3': ('2', '4'),
'4': ('3', '5'),
'5': ('4', ),
})
S = {
'12': ('A', 'B'),
'23': ('A', 'B'),
'34': ('A'),
'45': ('A'),
}
C = {
'1': ('A', 'B'),
'2': ('A', 'B'),
'3': ('A', 'B'),
'4': ('A'),
'5': ('A', 'B'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if not RIP(jt):
marks += 1
G = Graph({
'1': ('2', ),
'2': ('1', '3'),
'3': ('2', '4'),
'4': ('3', '5'),
'5': ('4', ),
})
S = {
'12': ('A', 'B'),
'23': ('A', 'B'),
'34': ('A', 'B'),
'45': ('A', 'B'),
}
C = {
'1': ('A', 'B'),
'2': ('A', 'B'),
'3': ('A', 'B'),
'4': ('A', 'B'),
'5': ('A', 'B'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if RIP(jt):
marks += 1
G = Graph({
'1': ('2', ),
'2': ('1', '3'),
'3': ('2', '4'),
'4': ('3', '5'),
'5': ('4', ),
})
S = {
'12': ('A', 'B'),
'23': ('B'),
'34': ('B'),
'45': ('A', 'B'),
}
C = {
'1': ('A', 'B'),
'2': ('A', 'B'),
'3': ('B'),
'4': ('A', 'B'),
'5': ('A', 'B'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if not RIP(jt):
marks += 1
G = Graph({
'1': ('2', ),
'2': ('1', '3'),
'3': ('2', '4'),
'4': ('3', '5'),
'5': ('4', ),
})
S = {
'12': ('A', 'B'),
'23': ('A'),
'34': ('A'),
'45': ('A', 'B'),
}
C = {
'1': ('A', 'B'),
'2': ('A', 'B'),
'3': ('A'),
'4': ('A', 'B'),
'5': ('A', 'B'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if not RIP(jt):
marks += 1
G = Graph({
'1': ('2', ),
'2': ('1', ),
})
S = {
'12': ('A', 'B', 'C', 'D', 'E', 'F'),
}
C = {
'1': ('A', 'B', 'C', 'D', 'E', 'F'),
'2': ('A', 'B', 'C', 'D', 'E', 'F'),
}
jt = JoinTree(G, C, S)
print(RIP(jt))
if RIP(jt):
marks += 1
print()
print('Total marks: ', marks)
Question 9 [15 marks]¶
Fill in the `likelihood_weighted_sampling` method on the GaussianBayesNet class. The section to be filled in is shown with `... #TODO`. Note that each factor in the Bayes Net will be a `GaussianFactor` object.
The GaussianFactor class is provided below for your convenience.
Note that this question will be autotested, so do not change any code outside of the area specified.
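Recall that in likelihood weighting we clamp the evidence variables to their observed values instead of sampling them, and each sample receives the weight

$w = \prod_{E \in \textbf{e}} p(E = e \mid \text{parents}(E))$,

i.e., the product of the conditional densities of the evidence variables evaluated at the sampled values of their parents.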
# Write your answer for Question 9 here
import numpy as np
class GaussianBayesNet():
def __init__(self, graph, factor_dict=None):
self.graph = graph
self.outcomeSpace = dict()
self.factors = dict()
if factor_dict is not None:
self.factors = factor_dict
def likelihood_weighted_sampling(self, n_samples, **evidence):
'''
This function returns a list of samples drawn randomly from the BayesNet distribution.
Arguments:
`n_samples`: number of samples to generate
`evidence`: a dictionary of observations
returns:
weights: List of weights. E.g. [6.823e-05, 1.1409e-50,]
samples: List of sample dicts. E.g. [{'A':0.3, 'B':-1.7, 'C':1.1},{'A':-0.2, 'B':2.01, 'C':2.1},]
'''
########### Answer below
        # Sample variables in topological order so that every variable's
        # parents are instantiated before it is sampled or weighted.
        order = self.graph.topological_sort()
        samples = []
        weights = []
        for i in range(n_samples):
            sigma = {}
            w = 1
            for var in order:
                if var not in evidence:
                    # Non-evidence variable: sample from its conditional
                    # Gaussian given the already-instantiated parents.
                    sigma[var] = self.factors[var].sample(**sigma)[var]
                else:
                    # Evidence variable: clamp to the observed value and
                    # multiply the weight by its conditional density.
                    sigma[var] = evidence[var]
                    entry = tuple(sigma[v] for v in self.factors[var].domain)
                    w *= self.factors[var].density(np.array(entry))
            samples.append(sigma)
            weights.append(w)
###########
return weights, samples
############
# Test code
graph = Graph({'A':['B'], 'B':['C'], 'C':[]})
gbn = GaussianBayesNet(graph)
gbn.factors['A'] = GaussianFactor(['A'], mu=1, sigma=1**2)
gbn.factors['B'] = GaussianFactor(['B','A'], beta=[3], b_mean=0, b_var=0.2**2)
gbn.factors['C'] = GaussianFactor(['C','B'], beta=[-3], b_mean=1, b_var=0.2**2)
weights, samples = gbn.likelihood_weighted_sampling(5,B=2.7)
print(samples)
print(sum(weights))
## Functions for testing
def get_estimated_mean_cov(domain, weights, samples, evidence):
domain = list(k for k in domain if k not in evidence)
tmp = []
for s in samples:
tmp.append(list([s[key] for key in domain]))
sample_array = np.array(tmp)
sample_weights = (np.array(weights).reshape((-1,1)))
estimated_mean = np.sum((sample_weights/np.sum(sample_weights))*sample_array,axis=0)
estimated_cov = (1/(np.sum(sample_weights)-1))*(sample_weights*(sample_array-estimated_mean)).T@(sample_array-estimated_mean)
return estimated_mean, estimated_cov
def model1():
# Model
evidence = {'B':2.7}
graph = Graph({'A':['B'], 'B':['C'], 'C':[]})
gbn = GaussianBayesNet(graph)
gbn.factors['A'] = GaussianFactor(['A'], mu=1, sigma=1**2)
gbn.factors['B'] = GaussianFactor(['B','A'], beta=[3], b_mean=0, b_var=0.2**2)
gbn.factors['C'] = GaussianFactor(['C','B'], beta=[-3], b_mean=1, b_var=0.2**2)
return gbn, evidence
def test(gbn, evidence):
### Testing
full_factor = GaussianFactor(tuple(), mu=[], sigma=[[]])
for f in gbn.factors.values():
full_factor = full_factor*f
weights, samples = gbn.likelihood_weighted_sampling(1000,**evidence)
# fix domain order
domain_order = list(samples[0].keys())
full_factor = full_factor._extend(domain_order)
true_factor = full_factor.evidence(**evidence)
mean, cov = get_estimated_mean_cov(domain_order, weights, samples, evidence)
sample_size = np.sum(weights)
mu_deviation = np.abs(mean-true_factor.mean())
cov_deviation = np.abs(cov-true_factor.covariance())
# check that all evidence is equal to the observation
tmp = []
for s in samples:
tmp.append(list([s[key] for key in evidence]))
evi_equal = np.all(np.array(tmp) == np.array([e for e in evidence.values()]))
return mu_deviation, cov_deviation, evi_equal
def get_threshold(model):
mu_devs, cov_devs = [],[]
gbn, evidence = model()
for i in range(200):
mu_dev, cov_dev, _ = test(gbn,evidence)
mu_devs.append(mu_dev)
cov_devs.append(cov_dev)
return 1.2*np.max(mu_devs,axis=0), 1.2*np.max(cov_devs, axis=0)
Example output:
[{'A': 0.348216516404394, 'B': 2.7, 'C': -7.188024942397417}, {'A': 0.01839518456395206, 'B': 2.7, 'C': -7.103155084034539}, {'A': 1.1960021809931205, 'B': 2.7, 'C': -6.979657064371987}, {'A': 0.39587510763503886, 'B': 2.7, 'C': -6.978733681701745}, {'A': 0.9996062580578883, 'B': 2.7, 'C': -7.014497964752619}]
[2.656373475856643e-15, 2.118460658582052e-38, 0.00010448721114381634, 7.638040102473492e-13, 0.6533391788451658]
def model2():
# Model
evidence = {}
graph = Graph({'A':['B'], 'B':['C'], 'C':[]})
gbn = GaussianBayesNet(graph)
gbn.factors['A'] = GaussianFactor(['A'], mu=1, sigma=1**2)
gbn.factors['B'] = GaussianFactor(['B','A'], beta=[3], b_mean=0, b_var=0.2**2)
gbn.factors['C'] = GaussianFactor(['C','B'], beta=[-3], b_mean=1, b_var=0.2**2)
return gbn, evidence
def model3():
# Model
evidence = {'A':2.0}
graph = Graph({'A':['B'], 'B':['C'], 'C':[]})
gbn = GaussianBayesNet(graph)
gbn.factors['A'] = GaussianFactor(['A'], mu=1, sigma=1**2)
gbn.factors['B'] = GaussianFactor(['B','A'], beta=[3], b_mean=0, b_var=0.2**2)
gbn.factors['C'] = GaussianFactor(['C','B'], beta=[-3], b_mean=1, b_var=0.2**2)
return gbn, evidence
def model4():
# Model
evidence = {'C':-8.0}
graph = Graph({'A':['B'], 'B':['C'], 'C':[]})
gbn = GaussianBayesNet(graph)
gbn.factors['A'] = GaussianFactor(['A'], mu=1, sigma=1**2)
gbn.factors['B'] = GaussianFactor(['B','A'], beta=[3], b_mean=0, b_var=0.2**2)
gbn.factors['C'] = GaussianFactor(['C','B'], beta=[-3], b_mean=1, b_var=0.2**2)
return gbn, evidence
def model5():
# Model
evidence = {'C':-8.0,'A':1.1}
graph = Graph({'A':['B'], 'B':['C'], 'C':[]})
gbn = GaussianBayesNet(graph)
gbn.factors['A'] = GaussianFactor(['A'], mu=1, sigma=1**2)
gbn.factors['B'] = GaussianFactor(['B','A'], beta=[3], b_mean=0, b_var=0.2**2)
gbn.factors['C'] = GaussianFactor(['C','B'], beta=[-3], b_mean=1, b_var=0.2**2)
return gbn, evidence
def model6():
# Model
evidence = {}
graph = Graph({'D':['E','C'], 'E':['B'], 'B':['A'], 'A':[], 'C':['A']})
gbn = GaussianBayesNet(graph)
gbn.factors['D'] = GaussianFactor(['D'], mu=0, sigma=1**2)
gbn.factors['E'] = GaussianFactor(['E','D'], beta=[0.5], b_mean=0.4, b_var=0.2**2)
gbn.factors['B'] = GaussianFactor(['B','E'], beta=[-0.1], b_mean=1, b_var=0.3**2)
gbn.factors['A'] = GaussianFactor(['A','B','C'], beta=[-0.9, 2], b_mean=1, b_var=0.3**2)
gbn.factors['C'] = GaussianFactor(['C','D'], beta=[-2.3], b_mean=-1, b_var=0.5**2)
return gbn, evidence
def model7():
# Model
evidence = {'D':0.4}
graph = Graph({'D':['E','C'], 'E':['B'], 'B':['A'], 'A':[], 'C':['A']})
gbn = GaussianBayesNet(graph)
gbn.factors['D'] = GaussianFactor(['D'], mu=0, sigma=1**2)
gbn.factors['E'] = GaussianFactor(['E','D'], beta=[0.5], b_mean=0.4, b_var=0.2**2)
gbn.factors['B'] = GaussianFactor(['B','E'], beta=[-0.1], b_mean=1, b_var=0.3**2)
gbn.factors['A'] = GaussianFactor(['A','B','C'], beta=[-0.9, 2], b_mean=1, b_var=0.3**2)
gbn.factors['C'] = GaussianFactor(['C','D'], beta=[-2.3], b_mean=-1, b_var=0.5**2)
return gbn, evidence
def model8():
# Model
evidence = {'A':-2}
graph = Graph({'D':['C','E'], 'E':['B'], 'B':['A'], 'A':[], 'C':['A']})
gbn = GaussianBayesNet(graph)
gbn.factors['D'] = GaussianFactor(['D'], mu=0.1, sigma=1.1**2)
gbn.factors['E'] = GaussianFactor(['E','D'], beta=[0.6], b_mean=0.31, b_var=0.32**2)
gbn.factors['B'] = GaussianFactor(['B','E'], beta=[-0.1], b_mean=1, b_var=0.1**2)
gbn.factors['A'] = GaussianFactor(['A','B','C'], beta=[-0.9, 2], b_mean=1, b_var=0.3**2)
gbn.factors['C'] = GaussianFactor(['C','D'], beta=[-2.3], b_mean=-1, b_var=0.5**2)
return gbn, evidence
def model9():
# Model
evidence = {'A':-2,'E':0.3}
graph = Graph({'D':['C','E'], 'E':['B'], 'B':['A'], 'A':[], 'C':['A']})
gbn = GaussianBayesNet(graph)
gbn.factors['D'] = GaussianFactor(['D'], mu=0.1, sigma=1.1**2)
gbn.factors['E'] = GaussianFactor(['E','D'], beta=[0.6], b_mean=0.31, b_var=0.32**2)
gbn.factors['B'] = GaussianFactor(['B','E'], beta=[-0.1], b_mean=1, b_var=0.1**2)
gbn.factors['A'] = GaussianFactor(['A','B','C'], beta=[-0.9, 2], b_mean=1, b_var=0.3**2)
gbn.factors['C'] = GaussianFactor(['C','D'], beta=[-2.3], b_mean=-1, b_var=0.5**2)
return gbn, evidence
def model10():
# Model
evidence = {'E':0.3,'B':1,'C':-1}
graph = Graph({'D':['C','E'], 'E':['B'], 'B':['A'], 'A':[], 'C':['A']})
gbn = GaussianBayesNet(graph)
gbn.factors['D'] = GaussianFactor(['D'], mu=0.1, sigma=1.1**2)
gbn.factors['E'] = GaussianFactor(['E','D'], beta=[0.6], b_mean=0.31, b_var=0.32**2)
gbn.factors['B'] = GaussianFactor(['B','E'], beta=[-0.1], b_mean=1, b_var=0.1**2)
gbn.factors['A'] = GaussianFactor(['A','B','C'], beta=[-0.9, 2], b_mean=1, b_var=0.3**2)
gbn.factors['C'] = GaussianFactor(['C','D'], beta=[-2.3], b_mean=-1, b_var=0.5**2)
return gbn, evidence
def model11():
# Model
evidence = {'A': -115}
graph = Graph({'D':['C'], 'E':['B','C'], 'B':['A'], 'A':[], 'C':['A'],'F':['A']})
gbn = GaussianBayesNet(graph)
gbn.factors['D'] = GaussianFactor(['D'], mu=0.1, sigma=1.1**2)
gbn.factors['E'] = GaussianFactor(['E'], mu=-5.1, sigma=0.1**2)
gbn.factors['B'] = GaussianFactor(['B','E'], beta=[-2.1], b_mean=1, b_var=0.1**2)
gbn.factors['A'] = GaussianFactor(['A','B','C','F'], beta=[-0.9, 2, 0.05], b_mean=1, b_var=0.3**2)
gbn.factors['C'] = GaussianFactor(['C','D','E'], beta=[-2.3,10], b_mean=-1, b_var=0.5**2)
gbn.factors['F'] = GaussianFactor(['F'], mu=50, sigma=6**2)
return gbn, evidence
def model12():
# Model
evidence = {'A': -115, 'C':-53}
graph = Graph({'D':['C'], 'E':['B','C'], 'B':['A'], 'A':[], 'C':['A'],'F':['A'],'G':['A']})
gbn = GaussianBayesNet(graph)
gbn.factors['D'] = GaussianFactor(['D'], mu=0.1, sigma=1.1**2)
gbn.factors['E'] = GaussianFactor(['E'], mu=-5.1, sigma=0.1**2)
gbn.factors['B'] = GaussianFactor(['B','E'], beta=[-2.1], b_mean=1, b_var=0.1**2)
gbn.factors['A'] = GaussianFactor(['A','B','C','F','G'], beta=[-0.9, 2, 0.05, 3], b_mean=1, b_var=0.3**2)
gbn.factors['C'] = GaussianFactor(['C','D','E'], beta=[-2.3,10], b_mean=-1, b_var=0.5**2)
gbn.factors['F'] = GaussianFactor(['F'], mu=50, sigma=6**2)
gbn.factors['G'] = GaussianFactor(['G'], mu=0, sigma=0.4**2)
return gbn, evidence
def model13():
# Model
evidence = {'G':0.2,'A':-115}
graph = Graph({'D':['C'], 'E':['B','C'], 'B':['A'], 'A':[], 'C':['A'],'F':['A'],'G':['A']})
gbn = GaussianBayesNet(graph)
gbn.factors['D'] = GaussianFactor(['D'], mu=0.1, sigma=1.1**2)
gbn.factors['E'] = GaussianFactor(['E'], mu=-5.1, sigma=0.1**2)
gbn.factors['B'] = GaussianFactor(['B','E'], beta=[-2.1], b_mean=1, b_var=0.1**2)
gbn.factors['A'] = GaussianFactor(['A','B','C','F','G'], beta=[-0.9, 2, 0.05, 3], b_mean=1, b_var=0.3**2)
gbn.factors['C'] = GaussianFactor(['C','D','E'], beta=[-2.3,10], b_mean=-1, b_var=0.5**2)
gbn.factors['F'] = GaussianFactor(['F'], mu=50, sigma=6**2)
gbn.factors['G'] = GaussianFactor(['G'], mu=0, sigma=0.4**2)
return gbn, evidence
def model14():
# Model
evidence = {}
graph = Graph({'D':['C'], 'E':['B','C'], 'B':['A'], 'A':[], 'C':['A'],'F':['A'],'G':['A']})
gbn = GaussianBayesNet(graph)
gbn.factors['D'] = GaussianFactor(['D'], mu=0.1, sigma=1.1**2)
gbn.factors['E'] = GaussianFactor(['E'], mu=-5.1, sigma=0.1**2)
gbn.factors['B'] = GaussianFactor(['B','E'], beta=[-2.1], b_mean=1, b_var=0.1**2)
gbn.factors['A'] = GaussianFactor(['A','B','C','F','G'], beta=[-0.9, 2, 0.05, 3], b_mean=1, b_var=0.3**2)
gbn.factors['C'] = GaussianFactor(['C','D','E'], beta=[-2.3,10], b_mean=-1, b_var=0.5**2)
gbn.factors['F'] = GaussianFactor(['F'], mu=50, sigma=6**2)
gbn.factors['G'] = GaussianFactor(['G'], mu=0, sigma=0.4**2)
return gbn, evidence
def model15():
# Model
evidence = {'E':-4.9}
graph = Graph({'D':['C'], 'E':['B','C'], 'B':['A'], 'A':[], 'C':['A'],'F':['A'],'G':['A']})
gbn = GaussianBayesNet(graph)
gbn.factors['D'] = GaussianFactor(['D'], mu=0.1, sigma=1.1**2)
gbn.factors['E'] = GaussianFactor(['E'], mu=-5.1, sigma=0.1**2)
gbn.factors['B'] = GaussianFactor(['B','E'], beta=[-2.1], b_mean=1, b_var=0.1**2)
gbn.factors['A'] = GaussianFactor(['A','B','C','F','G'], beta=[-0.9, 2, 0.05, 3], b_mean=1, b_var=0.3**2)
gbn.factors['C'] = GaussianFactor(['C','D','E'], beta=[-2.3,10], b_mean=-1, b_var=0.5**2)
gbn.factors['F'] = GaussianFactor(['F'], mu=50, sigma=6**2)
gbn.factors['G'] = GaussianFactor(['G'], mu=0, sigma=0.4**2)
return gbn, evidence
gbn, evidence = model13()
weights, samples = gbn.likelihood_weighted_sampling(50, **evidence)
print(sum(weights))
print(samples)
# This cell takes a long time to run, and shouldn't be used with student code.
# dump thresholds calculated using solution code, for testing student solutions.
# The thresholds show how much random deviation is acceptable.
import pickle
for i, model in enumerate([model1,model2,model3,model4,model5,model6,model7,model8,model9,model10,model11,model12,model13,model14,model15]):
thresholds = get_threshold(model)
pickle.dump(thresholds, open( f"{i}.pkl", "wb" ) )
import traceback
import pickle
# For testing student solutions, should be copied
# along with testing functions above into student
# code, which should be in a subfolder. The outer folder
# should contain the .pkl threshold files.
marks = 0
for i, model in enumerate([model1, model2, model3, model4, model5, model6, model7, model8, model9, model10,model11,model12,model13,model14,model15]):
thresholds = pickle.load(open(f"{i}.pkl", 'rb'))
mu_threshold, cov_threshold = thresholds
gbn, evidence = model()
try:
mu_dev, cov_dev, evi_equal = test(gbn, evidence)
if evi_equal and np.all(mu_dev < mu_threshold) and np.all(cov_dev < cov_threshold):
marks += 1
else:
print(f"#### failed model {i+1}")
full_factor = GaussianFactor(tuple(), mu=[], sigma=[[]])
for f in gbn.factors.values():
full_factor = full_factor*f
weights, samples = gbn.likelihood_weighted_sampling(1000,**evidence)
# fix domain order
domain_order = list(samples[0].keys())
full_factor = full_factor._extend(domain_order)
true_factor = full_factor.evidence(**evidence)
mean, cov = get_estimated_mean_cov(domain_order, weights, samples, evidence)
sample_size = np.sum(weights)
print(f"True domain {true_factor.domain}")
print(f"True mean: {true_factor.mean()}")
print(f"Est mean: {mean}")
print(f"Samples: {samples[0:10]}")
print(f"Evi equal: {evi_equal}")
if(not np.all(mu_dev < mu_threshold)):
print("source: mu")
elif(not np.all(cov_dev < cov_threshold)):
print("source: cov")
print("#######")
except Exception as e:
print(f"Exception in model {i+1}")
traceback.print_tb(e.__traceback__)
print("Total marks Q9",marks)