
COMP9418_Supp_Exam_T3_2021

T3-2021 Supplementary Exam¶
COMP9418 – Advanced Topics in Statistical Machine Learning


The University of New South Wales

12th January, 2022

Before proceeding, please read and acknowledge the following (double-click on this cell and put an X between the brackets [X]):

[ ] I acknowledge that I will complete all of the work I submit for this exam without assistance from anyone else.

Instructions¶
This exam will last for 10 hours, starting 12/1/2022 at 09:00:00 AEDT and ending 12/1/2022 at 19:00:00 AEDT.
Questions will be answered from 9 am to 7 pm AEDT.
You must provide all answers in this Jupyter notebook.
You must use the cells provided to answer the questions. Use markdown cells for textual answers and code cells for programming answers.
Submit this exam by give (command line or WebCMS) before the deadline. If WebCMS submission is slow, or if you are submitting in the last hour, please submit using the give command on the CSE servers (via VLAB or ssh).
The exam has three parts: multiple-choice questions (20%), questions that require a textual answer (50%), and programming questions in Python (30%).
This exam is an open book exam. You are permitted to access papers and books as well as the course materials, including slides and solved tutorials.
You are not permitted to communicate (email, phone, message, talk, etc.) with anyone during the exam, except COMP9418 staff via email or forum.
Do not communicate your exam answers after you finish your exam. Some students may have extended time to complete the exam.
Do not place your exam work in any location accessible to any other person, such as Dropbox and Github.
Ensure that no other person in your household can access your work.
Do not disclose your zpass to any other person. If you have revealed your zpass, you should change it immediately.
We will refer deliberate violations of exam conditions to Student Integrity as serious misconduct.
This exam has ten questions. The total number of marks is 100.
Type your student number and name in the next cell.

Student identification¶

Student ID:

from math import log
from itertools import product, combinations, permutations
from DiscreteFactors import Factor
from Graph import Graph
from BayesNet import BayesNet

Part 1 [20 marks]¶
Part 1 is composed of four multiple-choice questions of five marks each. To answer each question, double-click the cell with the alternatives and write an X between the [ ] of the chosen option.

This is an example before inserting X

[ ] Alternative one
[ ] Alternative two

This is an example after inserting X

[X] Alternative one
[ ] Alternative two

For all four questions, choose only one among the alternatives.

Question 1 [5 marks]¶

Given a probability distribution $P$ over a set of variables $\textbf{X}$, identify a DAG $G_D$ that is a D-MAP for $P$ and a graph $G_I$ that is an I-MAP for $P$. Both $G_D$ and $G_I$ have one node corresponding to each variable in $\textbf{X}$, and their edges are defined as follows:

[ ] $G_D$ is a DAG with the maximum number of edges, and $G_I$ is a DAG with no edges.
[ ] $G_D$ is a DAG with no edges, and $G_I$ is a DAG with the maximum number of edges.
[ ] $G_I$ is a DAG with the maximum number of edges, and $G_D$ does not exist.
[ ] $G_I$ is a DAG with no edges, and $G_D$ does not exist.
[ ] Neither $G_D$ nor $G_I$ exists.

Question 2 [5 marks]¶

Suppose we have a Bayesian network $G$ that contains a node with $k$ parents. What can we say about the width of every complete elimination order for the variables in $G$?

[ ] The elimination orders have width of at least $k$.
[ ] The elimination orders have width equal to $k$.
[ ] The elimination orders have width of at most $k$.
[ ] The elimination orders have width of at least $k + 1$.
[ ] None of the above alternatives.

Question 3 [5 marks]¶

Choose the incorrect alternative regarding the differences between Belief Propagation (BP), Iterative Belief Propagation (IBP – also known as Loopy Belief Propagation) and Generalized Belief Propagation (GBP – also known as Iterative Joingraph Propagation):

[ ] BP is restricted to polytrees while IBP and GBP can be applied to general networks.
[ ] In BP, there is always a root or leaf node with a single neighbour that we can use to start sending messages. In IBP and GBP, such nodes may not exist; however, we initialize all messages with a value, so all nodes are allowed to send an initial message.
[ ] BP provides exact results, that is, the computed marginals (beliefs) are correct. IBP and GBP do not have the same guarantees, and the beliefs are usually approximations.
[ ] BP converges in a single iteration. IBP and GBP need a convergence criterion, so these methods iterate until that criterion is met.
[ ] When a node X sends a message to a node Y, in BP X's message is the product of the incoming messages from all neighbours of X with the factors associated with X. In the case of evidence, it also multiplies by the evidence indicator associated with X.

Question 4 [5 marks]¶

Given the following Markov network $G$:

Which factorization is not valid for $G$?

[ ] $\phi(X_1, X_2, X_3)\phi(X_1, X_3, X_4)\phi(X_3,X_4,X_5)\phi(X_2,X_3,X_5)$.
[ ] $\phi(X_1, X_2)\phi(X_1, X_3)\phi(X_2, X_3)\phi(X_1, X_3, X_4)\phi(X_3,X_4,X_5)\phi(X_2,X_3,X_5)$.
[ ] $\phi(X_1, X_2, X_3, X_4)\phi(X_2, X_3, X_4, X_5)$.
[ ] $\phi(X_1, X_2, X_3, X_5)\phi(X_1, X_3, X_4, X_5)$.
[ ] All factorizations above are valid.

Part 2 [50 marks]¶
Part 2 is composed of four open questions. To answer each question, edit the markdown cell after the question statement and insert your answer.

Question 5 [10 marks]¶

The degeneracy of an undirected graph $G$ is defined as the maximum, over all subgraphs of $G$, of the minimum degree of the subgraph. Degeneracy is a lower bound on the treewidth of $G$.

Show that the degeneracy can be computed by generating a sequence of subgraphs, starting with $G$, and generating each next subgraph by removing a minimum-degree node from the current one. The degeneracy is then the maximum, over all of the generated subgraphs, of their minimum degree.
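As an illustration of the procedure in the statement (not the requested proof), the removal sequence can be sketched in plain Python. The adjacency-dict input format is an assumption of this sketch, not the course's Graph class:

```python
# Sketch: compute degeneracy by repeatedly removing a minimum-degree
# node and tracking the largest degree seen at removal time.
def degeneracy(adj):
    adj = {v: set(ns) for v, ns in adj.items()}  # local, mutable copy
    best = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # a minimum-degree node
        best = max(best, len(adj[v]))            # its degree at removal time
        for u in adj[v]:
            adj[u].discard(v)                    # delete v from the graph
        del adj[v]
    return best
```

For example, a 4-cycle has degeneracy 2, while any tree has degeneracy 1.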

Your answer for Q5

Question 6 [10 marks]¶

Consider a cluster sequence $\textbf{C}_1, \ldots, \textbf{C}_n$ induced by an elimination order $\pi$ on graph $G$. Explain why it is impossible for a cluster $\textbf{C}_i$ to be contained in another cluster $\textbf{C}_j$ that follows it in the sequence ($j > i$); however, it is possible for a cluster $\textbf{C}_i$ to be contained in an earlier cluster $\textbf{C}_j$ ($j < i$).

Your answer for Q6

Question 7 [10 marks]¶

Suppose that we have a jointree with $n$ clusters and width $w$. Show how a MAP query can be solved in $O(n \exp(w))$ time and space if all MAP variables are contained in some jointree cluster.

Your answer for Q7

Question 8 [20 marks]¶

We can detect the presence of the SARS-CoV-2 virus using two types of tests: antigen and PCR. The most relevant difference between these tests is that PCR amplifies the genetic material; therefore, it can detect even the smallest amount of coronavirus material in a sample. PCR tests have a true-positive rate (TPR) of 95% and a true-negative rate (TNR) of 99%. Antigen tests do not amplify the genetic material and are sensitive to the virus load in the body. The antigen test has a TNR of 99%, a TPR of 75% for low-virus-load patients, and a TPR of 93% for high-virus-load patients.

What is the probability that the same patient gets different results from an antigen test and a PCR test? Assume that 10% of the tested population has the SARS-CoV-2 virus, 40% with low virus load and 60% with high virus load.

[5 Marks] Show a Bayesian network structure (graph) for this problem.
[5 Marks] Briefly explain your network.
[5 Marks] Show the outcome space and network conditional probability tables (CPTs) for all variables. If any information is missing, assume a uniform distribution.
[5 Marks] What is the answer to this question? Solve it as a programming exercise, i.e., provide a program that computes the solution.

# Your answer for 8.1 - Bayesian network structure
# Define your graph here
G = Graph({
    ...  # TODO
})
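For part 8.4, a minimal enumeration sketch, independent of the course's Graph/BayesNet classes. The variable names are mine, and it assumes (as is usual for this kind of model) that the two test results are conditionally independent given the patient's status, and that the PCR TPR of 95% applies to both low and high virus loads:

```python
# Status S in {neg, low, high}; probabilities from the question statement.
P_status = {'neg': 0.90, 'low': 0.10 * 0.40, 'high': 0.10 * 0.60}
P_antigen_pos = {'neg': 1 - 0.99, 'low': 0.75, 'high': 0.93}  # antigen TNR/TPRs
P_pcr_pos = {'neg': 1 - 0.99, 'low': 0.95, 'high': 0.95}      # PCR TNR/TPR

# P(tests disagree) = sum_S P(S) [P(A+|S)P(T-|S) + P(A-|S)P(T+|S)]
p_disagree = sum(
    p_s * (P_antigen_pos[s] * (1 - P_pcr_pos[s])
           + (1 - P_antigen_pos[s]) * P_pcr_pos[s])
    for s, p_s in P_status.items()
)
print(p_disagree)
```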
Your answer for 8.2

# Your answer for 8.3
# Define the outcome space here
outcomeSpace = dict(
    ...  # TODO
)

# Define the factors here
factors = dict()
...  # TODO

BN = BayesNet(G, outcomeSpace, factors)

Your answer for 8.4

Part 3 [30 marks]¶
Part 3 is composed of two programming questions of 15 marks each. Use the code cell after each question statement to enter your answer. You can use the code of the tutorials in your answers.

Question 9 [15 marks]¶

A central component of learning structures from data is the computation of the log-likelihood function for a Bayesian network $G$ and a dataset $D$. An important result is that this log-likelihood corresponds to a sum of terms, one for each family in the network:

$$LL(G;D)=-N \sum_{X\textbf{U}} H_D(X|\textbf{U})$$

where $H_D(X|\textbf{U})$ is the conditional entropy for the family $X\textbf{U}$.

Implement a Python function LL(BN, dataset) that computes the log-likelihood of a Bayesian network BN using the dataset dataset. Define an intermediate function entropy(BN, X, dataset) that computes the entropy for the family of variable X.

# Your answer for Question 9
def entropy(BN, X, examples):
    ...  # TODO

def LL(BN, examples):
    ...  # TODO
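The two computations can be sketched independently of the course's BayesNet class, over plain dicts; `cond_entropy` and `log_likelihood` are hypothetical helper names, and each family is passed explicitly as a (child, parents) pair rather than read from a network object:

```python
from math import log
from collections import Counter

# Empirical conditional entropy H_D(X | U) for a family, with the dataset
# given as a list of dicts (the same shape as the test data below).
def cond_entropy(X, parents, dataset):
    N = len(dataset)
    joint = Counter(tuple(e[v] for v in (X, *parents)) for e in dataset)
    par = Counter(tuple(e[v] for v in parents) for e in dataset)
    h = 0.0
    for xu, n_xu in joint.items():
        p_xu = n_xu / N                      # empirical P(x, u)
        p_x_given_u = n_xu / par[xu[1:]]     # empirical P(x | u)
        h -= p_xu * log(p_x_given_u)
    return h

# LL(G; D) = -N * sum over families of H_D(X | U)
def log_likelihood(families, dataset):
    N = len(dataset)
    return -N * sum(cond_entropy(X, U, dataset) for X, U in families)
```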
############
# Test code
G = Graph({
    'A': ['B'],
    'B': ['E'],
    'C': ['E', 'F'],
    'D': ['G'],
    'E': ['G', 'H'],
    'G': ['I'],
})

# Assumed binary outcomes (0, 1) for every variable, matching the CPT
# indices and the dataset below
outcomeSpace = dict(A=(0, 1), B=(0, 1), C=(0, 1), D=(0, 1), E=(0, 1),
                    F=(0, 1), G=(0, 1), H=(0, 1), I=(0, 1))

factors = dict()

A = Factor(('A',), outcomeSpace)
factors['A'] = A

B = Factor(('A', 'B'), outcomeSpace)
B[0, 0] = .95
B[0, 1] = .05
B[1, 0] = .8
B[1, 1] = .2
factors['B'] = B

C = Factor(('C',), outcomeSpace)
factors['C'] = C

D = Factor(('D',), outcomeSpace)
factors['D'] = D

E = Factor(('B', 'C', 'E'), outcomeSpace)
E[0, 0, 0] = .05
E[0, 0, 1] = .95
E[0, 1, 0] = .99
E[0, 1, 1] = .01
E[1, 0, 0] = .8
E[1, 0, 1] = .2
E[1, 1, 0] = .5
E[1, 1, 1] = .5
factors['E'] = E

F = Factor(('C', 'F'), outcomeSpace)
F[0, 0] = .1
F[0, 1] = .9
F[1, 0] = .4
F[1, 1] = .6
factors['F'] = F

# Named G_factor so it does not overwrite the graph object G above
G_factor = Factor(('D', 'E', 'G'), outcomeSpace)
G_factor[0, 0, 0] = .15
G_factor[0, 0, 1] = .85
G_factor[0, 1, 0] = .7
G_factor[0, 1, 1] = .3
G_factor[1, 0, 0] = .8
G_factor[1, 0, 1] = .2
G_factor[1, 1, 0] = .25
G_factor[1, 1, 1] = .75
factors['G'] = G_factor

H = Factor(('E', 'H'), outcomeSpace)
H[0, 0] = .5
H[0, 1] = .5
H[1, 0] = .45
H[1, 1] = .55
factors['H'] = H

I = Factor(('G', 'I'), outcomeSpace)
I[0, 0] = .6
I[0, 1] = .4
I[1, 0] = .55
I[1, 1] = .45
factors['I'] = I

BN = BayesNet(G, outcomeSpace, factors)

dataset = [
    dict(A=1, B=1, C=0, D=1, E=0, F=1, G=1, H=1, I=1),
    dict(A=0, B=1, C=1, D=1, E=1, F=1, G=1, H=0, I=0),
    dict(A=1, B=0, C=1, D=0, E=0, F=0, G=0, H=1, I=0),
    dict(A=1, B=0, C=0, D=1, E=0, F=0, G=0, H=0, I=1),
    dict(A=0, B=1, C=0, D=1, E=0, F=1, G=1, H=1, I=1),
    dict(A=0, B=0, C=1, D=1, E=1, F=1, G=0, H=0, I=1),
    dict(A=1, B=1, C=1, D=0, E=1, F=0, G=1, H=1, I=0),
    dict(A=0, B=1, C=0, D=1, E=1, F=1, G=0, H=0, I=1),
    dict(A=0, B=0, C=0, D=0, E=1, F=0, G=1, H=1, I=0),
    dict(A=0, B=0, C=0, D=1, E=1, F=0, G=0, H=0, I=0),
]

print(LL(BN, dataset))

Question 10 [15 marks]¶

In lecture 14, we discussed a simple approach to create a valid joingraph called the Bethe graph. In summary, the approach is the following:

For each factor $\phi_k$, create a cluster $\textbf{C}_k$ that represents the variables in $\phi_k$.
For each variable $X_i$, create a singleton cluster $\{X_i\}$.
Create an edge $(\textbf{C}_k, X_i)$ if $X_i \in \textbf{C}_k$.

Implement a function Bethe(phi) that returns a graph object with a Bethe graph for the factor list phi.

# Your answer for Question 10
def Bethe(phi):
    ...  # TODO

############
# Test code
# Assumed binary outcomes (0, 1) for every variable, matching the factor
# indices below
outcomeSpace = dict(A=(0, 1), B=(0, 1), C=(0, 1), D=(0, 1), E=(0, 1), F=(0, 1))

phi = dict()

phi_1 = Factor(('A', 'B', 'C'), outcomeSpace)
phi_1[0, 0, 0] = 1
phi_1[0, 0, 1] = 2
phi_1[0, 1, 0] = 1
phi_1[0, 1, 1] = 3
phi_1[1, 0, 0] = 1
phi_1[1, 0, 1] = 5
phi_1[1, 1, 0] = 1
phi_1[1, 1, 1] = 7
phi['phi_1'] = phi_1

phi_2 = Factor(('B', 'C'), outcomeSpace)
phi_2[0, 0] = 7
phi_2[0, 1] = 3
phi_2[1, 0] = 1
phi_2[1, 1] = 9
phi['phi_2'] = phi_2

phi_3 = Factor(('B', 'D'), outcomeSpace)
phi_3[0, 0] = 2
phi_3[0, 1] = 1
phi_3[1, 0] = 5
phi_3[1, 1] = 2
phi['phi_3'] = phi_3

phi_4 = Factor(('D', 'E'), outcomeSpace)
phi_4[0, 0] = 1
phi_4[0, 1] = 1
phi_4[1, 0] = 2
phi_4[1, 1] = 1
phi['phi_4'] = phi_4

phi_5 = Factor(('B', 'E'), outcomeSpace)
phi_5[0, 0] = 10
phi_5[0, 1] = 1
phi_5[1, 0] = 20
phi_5[1, 1] = 5
phi['phi_5'] = phi_5

phi_6 = Factor(('B', 'D'), outcomeSpace)
phi_6[0, 0] = 5
phi_6[0, 1] = 10
phi_6[1, 0] = 10
phi_6[1, 1] = 1
phi['phi_6'] = phi_6

phi_7 = Factor(('B', 'D', 'F'), outcomeSpace)
phi_7[0, 0, 0] = 2
phi_7[0, 0, 1] = 1
phi_7[0, 1, 0] = 3
phi_7[0, 1, 1] = 1
phi_7[1, 0, 0] = 2
phi_7[1, 0, 1] = 5
phi_7[1, 1, 0] = 2
phi_7[1, 1, 1] = 7
phi['phi_7'] = phi_7

G = Bethe(phi)
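The three construction steps can be sketched without the course's Graph and Factor classes; here `bethe_edges` is a hypothetical stand-in that takes a mapping from factor names to their variable scopes and returns a plain adjacency dict:

```python
# Sketch of a Bethe-style joingraph as an adjacency dict whose nodes are
# factor names (the factor clusters C_k) and variable names (the
# singleton clusters {X_i}).
def bethe_edges(factor_scopes):
    adj = {}
    for name, scope in factor_scopes.items():
        adj.setdefault(name, set())      # cluster C_k for factor phi_k
        for X in scope:
            adj.setdefault(X, set())     # singleton cluster {X_i}
            adj[name].add(X)             # edge (C_k, X_i) since X_i in C_k
            adj[X].add(name)
    return adj
```

The resulting graph is bipartite: factor clusters connect only to the singleton clusters of the variables in their scope.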