CS 188 Introduction to Artificial Intelligence
Spring 2016 Final V2
• You have approximately 2 hours and 50 minutes.
• The exam is closed book, closed calculator, and closed notes except your three crib sheets.
• Mark your answers ON THE EXAM ITSELF. If you are not sure of your answer you may wish to provide a brief explanation or show your work.
• For multiple choice questions,
– □ means mark all options that apply
– ○ means mark a single choice
• There are multiple versions of the exam. For fairness, this does not impact the questions asked, only the ordering of options within a given question.
First name
Last name
SID
edX username
First and last name of student to your left
First and last name of student to your right
For staff use only:
Q1. Agent Testing Today!            /1
Q2. Potpourri                       /21
Q3. Bayes Nets and Sampling         /6
Q4. Deep Learning                   /15
Q5. MDPs: Reward Shaping            /11
Q6. Zero Sum MDPs                   /6
Q7. Planning ahead with HMMs        /11
Q8. Naïve Bayes                     /7
Q9. Beyond Ordinary Pruning         /12
Q10. Iterative Deepening Search     /10
Total                               /100
Q1. [1 pt] Agent Testing Today!
It’s testing time! Circle your favorite robot below. We hope you have fun with the rest of the exam!
Any answer was acceptable.
Q2. [21 pts] Potpourri
(a)
(i) [1 pt] Suppose we have a multiclass perceptron with three classes A, B, C and with weights initially set to wA = [1,2], wB = [2,0], wC = [2,−1]. Write out the vectors wA,wB,wC of the perceptron after training on the following two dimensional training example once.
wA = [1,2] wB = [2,0] wC = [2,−1]
The predicted label is y′ = argmaxy∈{A,B,C} wy · [x0,x1] = A, since wA · [x0,x1] = [1,2] · [1,1] = 3 is greater than both wB ·[x0,x1] = [2,0]·[1,1] = 2 and wC ·[x0,x1] = [2,−1]·[1,1] = 1. Since the predicted label y′ = A is equal to the correct label y∗ = A, the weights are not updated, so the weights are the same as the initial ones
(ii) [1 pt] Suppose we have a different multiclass perceptron with three classes A, B, C and with weights initially set to wA = [2,4], wB = [−1,0], wC = [2,−2]. Write out the vectors wA, wB, wC of the perceptron after training on the following two dimensional training example once.
wA = [2,4] wB = [1,−1] wC = [0,−1]
The predicted label is y′ = argmaxy∈{A,B,C} wy ·[x0,x1] = B, since wB ·[x0,x1] = [−1,0]·[−2,1] = 2 is greater than both wA · [x0,x1] = [2,4] · [−2,1] = 0 and wC · [x0,x1] = [2,−2] · [−2,1] = −6. Since the predicted label y′ = B is not equal to the correct label y∗ = C, the weights are updated by subtracting the datum from the weights of the predicted label and by adding the datum to the weights of the correct label:
wB ←wB −[−2,1]=[1,−1] wC ←wC +[−2,1]=[0,−1].
Training example for (i):  x0 = 1,  x1 = 1,  label A
Training example for (ii): x0 = −2, x1 = 1,  label C
(iii) [3 pts] Suppose we have a different multiclass perceptron with three classes A,B,C and with weights initially set to wA = [1,0], wB = [1,1], wC = [3,0]. After training on the following set of training data an infinite number of times, select which of the following options must be True given no additional information. Convergence indicates that the values do not change even within a pass through the data set.
• All of the weight vectors wA, wB, wC converge.
• Only two of the weight vectors wA, wB, wC converge.
• Only one of the weight vectors wA, wB, wC converges. (correct)
• None of the weight vectors wA, wB, wC converge.
• None of the above.
First, notice that the data is not linearly separable, so not all of the weight vectors converge (this is the case for the perceptron). Second, notice that it’s impossible for only two of the weight vectors to converge since every time the prediction is wrong, two of the weights are updated with a datum (this is the case for the multiclass perceptron), all of which are non-zero for this particular data set. Therefore, there are two possibilities left: either only one of the weight vectors converge or none of them converge. To find out which one is the case, you can do a single pass through the data in order to notice a pattern emerge.
For the training example with i = 0, the perceptron incorrectly predicts a label of C, so the weight vectors wA and wC get updated to wA = [2,1] and wC = [2,−1]. Then, for the training examples with i = 1 and i = 2, the perceptron correctly predicts the labels B and C, respectively, so the weight vectors are not updated. Then, for the training example with i = 3, the perceptron incorrectly predicts a label of C, so the weight vectors wA and wC get updated to wA = [1, 0] and wC = [3, 0], which are the values that you originally started with! Therefore, as you train the perceptron on the data set an infinite number of times, the value of the weight vector wA fluctuates between wA = [2, 1] and wA = [1, 0], while the value of the weight vector wC fluctuates between wC = [2, −1] and wC = [3, 0]. Meanwhile, the value of the weight vector wB stays at wB = [1, 1], and therefore it is the only weight vector that converges.
training example i:   0    1    2    3
x0:                   1   −1    1   −1
x1:                   1    1   −1   −1
label:                A    B    C    A
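The cycle described in the solution can be checked with a short simulation. This is a minimal sketch (not part of the exam): it runs the standard multiclass perceptron update on the four training examples above, mapping labels A, B, C to class indices 0, 1, 2, and prints the weights after every example.

    # Multiclass perceptron on the data above; labels A, B, C map to 0, 1, 2.
    data = [((1, 1), 0), ((-1, 1), 1), ((1, -1), 2), ((-1, -1), 0)]
    w = {0: [1, 0], 1: [1, 1], 2: [3, 0]}  # initial wA, wB, wC

    def predict(x):
        # argmax over classes of w_y . x (ties broken by lowest class index)
        scores = {y: w[y][0] * x[0] + w[y][1] * x[1] for y in w}
        return max(scores, key=scores.get)

    for epoch in range(2):
        for x, y_true in data:
            y_pred = predict(x)
            if y_pred != y_true:
                w[y_pred] = [w[y_pred][0] - x[0], w[y_pred][1] - x[1]]
                w[y_true] = [w[y_true][0] + x[0], w[y_true][1] + x[1]]
            print(epoch, x, w)
    # wB never changes from [1, 1]; wA oscillates between [2, 1] and [1, 0] and
    # wC between [2, -1] and [3, 0], returning to the initial values after each pass.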
(b) You are given a constraint graph for a Constraint Satisfaction Problem as follows. The domains of all variables are indicated in the table, and the binary constraints are as follows:
• A > B
• A ≠ C
• C > B
• D > C

A CSP with a solution, with these domains, is: A ≤ B, B ≠ C, D = B. You get solutions with A = 2 and B = 3, as well as solutions with A = 3 and B = 2.
A CSP with no solution with these domains is: A ≠ B, B ≠ D, D ≠ A. There is a cycle between A, B, D and each arc is consistent, but there is no overall consistent solution.
Domains: A ∈ {2, 3},  B ∈ {2, 3},  C ∈ {0, 1, 2},  D ∈ {2, 3}
(c) [3 pts] Your assistant gives you the probability distributions for 4 mysterious binary variables: W, X, Y, and Z. Circle the Bayes net(s) amongst those given, that can represent a distribution that is consistent with the tables below using the fewest edges. If there is more than one such minimal net, circle all of them.
[Six candidate Bayes nets over the variables X, W, Y, Z (arranged in two rows of three) were shown here as the answer options.]
The correct answers are the last two options in the second row. Notice that in the table for P(W|X), the probability of W does not change when X changes. This means that W is independent of X. You can use the values to calculate P(Y,Z) vs P(Y)P(Z), and they are independent. The same holds for X and Z. The minimal Bayes net is the one with the fewest arrows (that is, the most enforced independencies). The last two in the bottom row encode the information necessary for the given distribution: the dependence between X and Y, and the dependence between Z and W.
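A quick way to check the key observation is to compare rows of the conditional tables for this part (reproduced after part (d)'s introduction below). This is a small sketch, not part of the exam; the numbers are taken from the P(W|X) and P(Y|X) tables.

    # P(W=1|X) and P(Y=1|X) from the tables for part (c).
    p_w1_given_x = {0: 0.6, 1: 0.6}   # identical rows => W is independent of X
    p_y1_given_x = {0: 0.7, 1: 0.9}   # rows differ    => Y depends on X

    def independent_of_x(cpt, tol=1e-12):
        vals = list(cpt.values())
        return all(abs(v - vals[0]) < tol for v in vals)

    print(independent_of_x(p_w1_given_x))  # True:  no X-W edge is needed
    print(independent_of_x(p_y1_given_x))  # False: the X-Y dependence must be kept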
(d) Triangle is a rational agent in the world below, where it gains or loses Utility from moving and picking up money. Triangle can move deterministically Up, Down, Left or Right or Stay still. Black squares indicate that the Triangle cannot traverse them. The squares marked with L($100, $0) indicate lotteries of [0.5, $100; 0.5 $0]. Taking a step onto a blank square gives Triangle no utility, but stepping onto a lottery square gives it the utility of the lottery, and the lottery disappears.
Additionally, taking a step in any direction has a probability p of giving Triangle pain in addition to whatever money it might earn upon landing on a spot. If Triangle chooses to stay still, it will not feel pain. The utilities are not discounted in this problem so γ = 1.
In both of the problems below, Triangle’s starting position is as shown in the figure above.
(i) [1 pt] For this part, Triangle’s utility is as follows (where k > 0):
U(pain) = −k; U($m) = m
Tables for part (c) above:

X  P(X)
0  0.75
1  0.25

X  W  P(W|X)
0  0  0.4
0  1  0.6
1  0  0.4
1  1  0.6

X  Y  P(Y|X)
0  0  0.3
0  1  0.7
1  0  0.1
1  1  0.9

Z  W  P(W|Z)
0  0  0.2
0  1  0.8
1  0  0.8
1  1  0.2
What is the expected utility of going to the closest lottery and staying in that spot forever? Express your answer in terms of numerical constants, p, k.
There are 2 moves required to go to the closest lottery, so the expected utility is −2kp+0.5∗100+0.5∗0 = 50−2pk
(ii) [2 pts] Now, Triangle’s utility function is as follows:
U(pain) = −k; U($m) = √m
For what range of k (where k > 0) will Triangle always go to both lotteries? Express your answer in terms of numerical constants and p. If no such range exists, write None in the blank below.
There are 3 possible best options for Triangle: staying put, going to the first lottery, and going to both lotteries. U(both) = −5pk + 10∗0.5 + 10∗0.5, U(first) = −2pk + 10∗0.5, U(neither) = 0. For both to be the optimal strategy, U(both) must be greater than all other options, therefore:
−5pk + 10 > −2pk + 5  and  −5pk + 10 > 0,
so k < 5/(3p) and k < 10/(5p) = 2/p.
Since 2/p > 5/(3p), the range for k is k ∈ (0, 5/(3p)).
(e) [5 pts] For each of the branches in the game tree below, put an ‘X’ on the branches if there exists an assignment of values to leaf nodes, for which that branch could be pruned. The max nodes are upward pointing triangles ( ), the min nodes are downward pointing triangles ( ), and the chance nodes are circles ( ). Assume that the children of a node are visited in left-to-right order.
Explicitly write down “Not possible” below if no branches can be pruned, in which case any ‘X’ marks above will be ignored. Any ‘X’ on the nodes and leaves will be ignored.
[The game tree figure is not reproduced here. Its internal nodes and leaves are numbered 1–47, left to right, and those numbers are used in the explanation below.]
When there is an adversarial min-max set of nodes, then bounds can be imposed on how good or bad a node can be which allows us to prune certain branches. However, we always have to look at the left most branch in order to determine whether pruning is ever going to be possible. Branches immediately under a chance node however can never be pruned because they all need to be looked at in order to compute an expectation.
The numbers inside the nodes have been added to the solution for the sole purpose of referencing them in the explanations that follow. There are 5 branches that can be pruned (from left to right):
The first branch (the one above node 11) can be pruned if the minimizer node 8 has a value that is greater than the value that the minimizer node 2 has so far (i.e. the value of node 3). This is what typically happens in a standard minimax tree since the chance node 6 with a single child can be thought of as not doing anything.
The second branch (the one above leaf 28) can be pruned if the leaf node 27 has a value that is less than the value that the maximizer node 22 has so far (i.e. the value of node 23).
The third branch (the one above node 33) can be pruned if the value accumulated so far at the chance node 14 is less than the value that the root maximizer node 1 has so far. This is because once the value of the chance node 30 is known, the value of the chance node 14 can’t increase.
The fourth branch (the one above leaf 35) can be pruned if leaf 34 has a value that is greater than the value of the chance node 30. Even though the third branch can be pruned, this fourth one should also be marked because there might be cases in which the third branch can’t be pruned but the fourth branch can. However, the rubric doesn’t penalize for not marking the fourth branch if the third branch is marked. This is because the question didn’t explicitly specify to mark branches that are underneath branches that can be pruned.
The fifth branch (the one above the chance node 45) can be pruned if the minimizer node 37 has a value that is less than the value that the root maximizer node 1 has so far.
Q3. [6 pts] Bayes Nets and Sampling
You are given a Bayes net with the following probability tables:
[Bayes net figure: A is a parent of B and C; C and E are parents of D; E and D are parents of F.]
A  P(A)
0  0.75
1  0.25

E  P(E)
0  0.1
1  0.9

A  B  P(B|A)
0  0  0.1
0  1  0.9
1  0  0.5
1  1  0.5

A  C  P(C|A)
0  0  0.3
0  1  0.7
1  0  0.7
1  1  0.3

E  C  D  P(D|E,C)
0  0  0  0.5
0  0  1  0.5
0  1  0  0.2
0  1  1  0.8
1  0  0  0.5
1  0  1  0.5
1  1  0  0.2
1  1  1  0.8

E  D  F  P(F|E,D)
0  0  0  0.6
0  0  1  0.4
0  1  0  0.7
0  1  1  0.3
1  0  0  0.2
1  0  1  0.8
1  1  0  0.7
1  1  1  0.3
You want to know P (C = 0|B = 1, D = 0) and decide to use sampling to approximate it.
(a) [2 pts] With prior sampling, what would be the likelihood of obtaining the sample [A=1, B=0, C=0, D=0,
E=1, F=0]?
• 0.25*0.1*0.3*0.9*0.8*0.7
• 0.75*0.1*0.3*0.9*0.5*0.8
• 0.25*0.9*0.7*0.1*0.5*0.6
• 0.25*0.5*0.7*0.5*0.9*0.2 (correct)
• 0.25*0.5*0.3*0.2*0.9*0.2
• 0.75*0.1*0.3*0.9*0.5*0.2 + 0.25*0.5*0.7*0.5*0.9*0.2
• Other
Prior sampling samples without taking the evidence into account, so the probability of the sample is P(A)P(B|A)P(C|A)P(D|C,E)P(E)P(F|E,D) = 0.25*0.5*0.7*0.5*0.9*0.2.
(b) [2 pts] Assume you obtained the sample [A = 1, B=1, C=0, D=0, E=1, F=1] through likelihood weighting. What is its weight?
• 0.25*0.5*0.7*0.5*0.9*0.8
• 0.25*0.7*0.9*0.8 + 0.75*0.3*0.9*0.8
• 0.25*0.5*0.7*0.5*0.8
• 0
• 0.5*0.5 (correct)
• 0.9*0.5 + 0.1*0.5
• Other
The weight of a sample in likelihood weighting is the product of the probabilities of the evidence variables given their parents: P(D=0|E=1,C=0)*P(B=1|A=1) = 0.5*0.5.
(c) [2 pts] You decide to use Gibbs sampling instead. Starting with the initialization [A = 1, B=1, C=0, D=0, E=0, F=0], suppose you resample F first. What is the probability that the next sample drawn is [A = 1, B=1, C=0, D=0, E=0, F=1]?
• 0.4 (correct)
• 0.6*0.1*0.5
• 0.25*0.5*0.7*0.5*0.1*0.3
• 0.6
• 0
• 0.9*0.5 + 0.1*0.5
• Other
In Gibbs sampling, you resample individual variables conditioned on the rest of the sample. The distribution of F given the rest of the sample is P(F|E=0, D=0), which is 0.4 for F=1 and 0.6 for F=0.
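All three answers can be reproduced directly from the CPTs above. The following is a small sketch, not part of the exam: the dictionaries just transcribe the tables, and each function evaluates the quantity asked for in the corresponding part.

    # CPTs transcribed from the tables above.
    P_A = {0: 0.75, 1: 0.25}
    P_E = {0: 0.1, 1: 0.9}
    P_B_given_A = {(0, 0): 0.1, (1, 0): 0.9, (0, 1): 0.5, (1, 1): 0.5}   # key: (b, a)
    P_C_given_A = {(0, 0): 0.3, (1, 0): 0.7, (0, 1): 0.7, (1, 1): 0.3}   # key: (c, a)
    P_D_given_EC = {(0, 0, 0): 0.5, (1, 0, 0): 0.5, (0, 0, 1): 0.2, (1, 0, 1): 0.8,
                    (0, 1, 0): 0.5, (1, 1, 0): 0.5, (0, 1, 1): 0.2, (1, 1, 1): 0.8}  # key: (d, e, c)
    P_F_given_ED = {(0, 0, 0): 0.6, (1, 0, 0): 0.4, (0, 0, 1): 0.7, (1, 0, 1): 0.3,
                    (0, 1, 0): 0.2, (1, 1, 0): 0.8, (0, 1, 1): 0.7, (1, 1, 1): 0.3}  # key: (f, e, d)

    def prior_sample_prob(a, b, c, d, e, f):
        # Part (a): probability that prior sampling generates this full assignment.
        return (P_A[a] * P_B_given_A[(b, a)] * P_C_given_A[(c, a)] * P_E[e]
                * P_D_given_EC[(d, e, c)] * P_F_given_ED[(f, e, d)])

    def likelihood_weight(a, c, e):
        # Part (b): weight of a likelihood-weighting sample with evidence B=1, D=0.
        return P_B_given_A[(1, a)] * P_D_given_EC[(0, e, c)]

    def gibbs_resample_F(e, d):
        # Part (c): distribution of F given everything else = given its parents E, D.
        return {f: P_F_given_ED[(f, e, d)] for f in (0, 1)}

    print(prior_sample_prob(1, 0, 0, 0, 1, 0))  # 0.25*0.5*0.7*0.5*0.9*0.2
    print(likelihood_weight(a=1, c=0, e=1))     # 0.5*0.5
    print(gibbs_resample_F(e=0, d=0))           # {0: 0.6, 1: 0.4} -> P(F=1) = 0.4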
Q4. [15 pts] Deep Learning
(a) [3 pts] Perform forward propagation on the neural network below for x = 1 by filling in the values in the table. Note that (i), . . . , (vii) are outputs after performing the appropriate operation as indicated in the node.
[Computation graph figure not reproduced: the input x feeds multiplication nodes (∗2, ∗3, ∗4), producing (i), (ii), (iii); their outputs feed the later max and min nodes producing (iv)–(vii).]
Answers: (i) = 2, (ii) = 3, (iii) = 4, (iv) = 5, (v) = 4, (vi) = 3, (vii) = 5
(b) [6 pts] Below is a neural network with weights a,b,c,d,e,f. The inputs are x1 and x2.
The first hidden layer computes r1 = max(c·x1 + e·x2, 0) and r2 = max(d·x1 + f·x2, 0).
The second hidden layer computes s1 = 1/(1 + exp(−a·r1)) and s2 = 1/(1 + exp(−b·r2)).
The output layer computes y = s1 + s2 . Note that the weights a, b, c, d, e, f are indicated along the edges of the neural network here.
Suppose the network has inputs x1 = 1, x2 = −1.
The weight values are a = 1, b = 1, c = 4, d = 1, e = 2, f = 2.
Forward propagation then computes r1 = 2, r2 = 0, s1 = 0.9, s2 = 0.5, y = 1.4. Note: some values are rounded.
Using the values computed from forward propagation, use backpropagation to numerically calculate the following partial derivatives. Write your answers as a single number (not an expression). You do not need a calculator. Use scratch paper if needed.
Hint: For g(z) = 1/(1 + exp(−z)), the derivative is ∂g/∂z = g(z)(1 − g(z)).
∂y/∂a = 0.18   ∂y/∂b = 0   ∂y/∂c = 0.09   ∂y/∂d = 0   ∂y/∂e = −0.09   ∂y/∂f = 0
∂y/∂a = (∂y/∂s1)(∂s1/∂a)
      = 1 · ∂g(a·r1)/∂a
      = r1 · g(a·r1)(1 − g(a·r1)) = r1 · s1(1 − s1)
      = 2 · 0.9 · (1 − 0.9)
      = 0.18

∂y/∂b = (∂y/∂s2)(∂s2/∂b)
      = 1 · ∂g(b·r2)/∂b
      = r2 · g(b·r2)(1 − g(b·r2)) = r2 · s2(1 − s2)
      = 0 · 0.5 · (1 − 0.5)
      = 0

∂y/∂c = (∂y/∂s1)(∂s1/∂r1)(∂r1/∂c)
      = 1 · [a · g(a·r1)(1 − g(a·r1))] · x1 = [a · s1(1 − s1)] · x1
      = [1 · 0.9(1 − 0.9)] · 1
      = 0.09

∂y/∂d = (∂y/∂s2)(∂s2/∂r2)(∂r2/∂d) = (∂y/∂s2)(∂s2/∂r2) · 0
      = 0

∂y/∂e = (∂y/∂s1)(∂s1/∂r1)(∂r1/∂e)
      = 1 · [a · g(a·r1)(1 − g(a·r1))] · x2 = [a · s1(1 − s1)] · x2
      = [1 · 0.9(1 − 0.9)] · (−1)
      = −0.09

∂y/∂f = (∂y/∂s2)(∂s2/∂r2)(∂r2/∂f) = (∂y/∂s2)(∂s2/∂r2) · 0
      = 0
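These derivatives can also be sanity-checked numerically. The following is a sketch, not part of the exam: it implements the forward pass described in part (b) and compares each analytic derivative against a central finite difference.

    import math

    def forward(a, b, c, d, e, f, x1=1.0, x2=-1.0):
        # Network from part (b): ReLU first hidden layer, sigmoid second layer, sum output.
        r1 = max(c * x1 + e * x2, 0.0)
        r2 = max(d * x1 + f * x2, 0.0)
        s1 = 1.0 / (1.0 + math.exp(-a * r1))
        s2 = 1.0 / (1.0 + math.exp(-b * r2))
        return s1 + s2

    weights = dict(a=1.0, b=1.0, c=4.0, d=1.0, e=2.0, f=2.0)
    eps = 1e-6
    for name in sorted(weights):
        hi = dict(weights, **{name: weights[name] + eps})
        lo = dict(weights, **{name: weights[name] - eps})
        grad = (forward(**hi) - forward(**lo)) / (2 * eps)
        print(name, round(grad, 4))
    # Prints approximately: a 0.21, b 0.0, c 0.105, d 0.0, e -0.105, f 0.0.
    # The exam rounds s1 = sigmoid(2) ~ 0.88 to 0.9, which is why it reports
    # 0.18, 0.09 and -0.09 for a, c and e; the zero entries match exactly.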
(c) [6 pts] Below are two plots with horizontal axis x1 and vertical axis x2 containing data labelled × and •. For each plot, we wish to find a function f(x1,x2) such that f(x1,x2) ≥ 0 for all data labelled × and f(x1,x2) < 0 for all data labelled •.
Below each plot is the function f(x1,x2) for that specific plot. Complete the expressions such that all the data is labelled correctly. If not possible, mark “No valid combination”.
First plot: f(x1, x2) = max( (i) + (ii) , (iii) + (iv) ) + (v), where (i) and (iii) are each one of {x1, −x1, 0}, (ii) and (iv) are each one of {x2, −x2, 0}, and (v) is one of {1, −1, 0}; or mark "No valid combination".
There are two possible solutions:
f(x1, x2) = max(x1, −x1) − 1
f(x1, x2) = max(−x1, x1) − 1

Second plot: f(x1, x2) = (vi) − max( (vii) + (viii) , (ix) + (x) ), where (vii) and (ix) are each one of {x1, −x1, 0} and (vi), (viii) and (x) are each one of {x2, −x2, 0}; or mark "No valid combination".
There are four possible solutions:
f(x1, x2) = x2 − max(x1, −x1)
f(x1, x2) = x2 − max(−x1, x1)
f(x1, x2) = −max(x1 − x2, −x1 − x2)
f(x1, x2) = −max(−x1 − x2, x1 − x2)
Q5. [11 pts] MDPs: Reward Shaping
PacBot is in a Gridworld-like environment E. It moves deterministically Up, Down, Right, or Left except that it cannot move onto squares which are blackened. PacBot must move at every step or exit. The reward for any of these actions is always zero. Additionally, from a numbered square, PacBot can choose to exit to a terminal state and collect reward equal to the number on the square. PacBot is not required to exit on a numbered square; it can also move in any direction off that square.
(a) [3 pts] Draw an arrow in each square (including numbered squares) in the following board on the right to indicate the optimal policy PacBot will calculate with the discount factor γ = 0.5 in the board on the left. (For example, if PacBot would move Down from the square in the middle on the left board, draw a down arrow in that square on the right board.) If PacBot’s policy would be to exit from a particular square, draw an X instead of an arrow in that square.
The decision between actions in this MDP at each state is to either take the number on the board or take half
the reward that can be attained from a neighboring square. On the square showing 4, the value for moving
toward it from one square away is 2. Since that’s greater than 1, we move toward the four instead of exiting
from the 1 in the bottom right. From two squares away, we can get reward 1 by moving toward the 4. This
means that from the cell in the second row, second column, we can get reward 1 by moving right and only reward 1/2 by moving left. Finally, at distance more than 2 from the 4, we get reward less than 1, so it is optimal to exit from the squares showing 1 on the left side rather than go toward the 4.
PacBot now operates in a new environment E′ with an additional reward function F(s,a,s′), which is added to the original reward function R(s, a, s′) for every (s, a, s′) triplet.
(b) [4 pts] Consider an additional reward F1 that favors moving toward numbered squares. Let d(s) be defined as the Manhattan distance from s to the nearest numbered square. If s is numbered, d(s) = 0.
F1(s, a, s′) = 0 if s′ is a terminal state;
F1(s, a, s′) = 10 if d(s′) < d(s), i.e. s′ is closer to a numbered square than s is;
F1(s, a, s′) = 0 if d(s′) ≥ d(s).
Fill in the diagram on the right as in (a) to indicate the optimal policy PacBot will calculate with the discount
factor γ = 0.5 and the modified reward function R1′ (s, a, s′) = R(s, a, s′) + F1(s, a, s′) in the board on the left.
Here, from all exit squares, we can step off of the numbered squares and back on to get a reward of 5. This is 5 and not 2.5 because the value at a numbered square is (let s′ be the state resulting from taking the optimal action from s)
V(s) = max_a (R1′(s, a, s′) + γ V(s′))                 (the best choice from s is to move to a non-numbered square)
     = 0 + γ max_{a′} (R1′(s′, a′, s′′) + γ V(s′′))    (the best choice from s′ is to move onto a numbered square)
     = 10γ + γ² V(s′′) = 5 + γ² V(s′′).
So from all numbered states we move off the numbers onto blank states, and from non-numbered states we move onto numbered states to collect the reward of 10. Note that the reward is only achieved for strictly decreasing the distance to the nearest numbered square, so moving back and forth between numbered squares (or squares with d(s) = 1) does not get the reward and is thus not optimal.
(c) [4 pts] Consider a different artificial reward that also favors moving toward numbered squares in a slightly different way:
F2(s, a, s′) = 0 if s′ is a terminal state;
F2(s, a, s′) = 10(d(s) − (1/2) d(s′)) otherwise.
Fill in the diagram on the right as in (a) to indicate the optimal policy PacBot will calculate with the discount factor γ = 0.5 and the modified reward function R2′ (s, a, s′) = R(s, a, s′) + F2(s, a, s′) in the board on the left.
Here we will never get into a cycle because for every reward we could gain by stepping in one direction, that reward is lost when we step back the other way. It can be shown that the sum of F2 rewards along a path from a numbered square to another numbered square all cancel to 0. In fact, along any (multi-step) path from state s to state s′, the sum of F2 rewards along that path will sum to 10(d(s) − (1/2) d(s′)). This means that the F2 rewards do not favor one path over another as long as they both lead to exiting, and they favor exiting eventually over never exiting. Thus we will have a policy that never gets stuck in a cycle and chooses which exit to go to in exactly the same way as in (a). Explicitly computing the Q-values by hand will also show that the optimal policy is the same as that in (a).
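As a check on the cancellation argument, note that F2 can be written in the potential-based shaping form F(s, a, s′) = γΦ(s′) − Φ(s) with Φ(s) = −10·d(s) and γ = 1/2 (this identification is an assumption on our part, not stated in the exam). The discounted sum of shaping rewards along any trajectory then telescopes:
Σ_{t=0}^{T−1} γ^t F(s_t, a_t, s_{t+1}) = Σ_{t=0}^{T−1} (γ^{t+1} Φ(s_{t+1}) − γ^t Φ(s_t)) = γ^T Φ(s_T) − Φ(s_0),
which depends only on the endpoints. Since −Φ(s_0) is the same for every action taken from s_0, and Φ(s_T) = 0 when the agent exits from a numbered square, adding F2 shifts every trajectory's return by the same amount and so leaves the optimal policy unchanged. F1 does not have this form, which is why the policy changes in part (b).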
Q6. [6 pts] Zero Sum MDPs
Consider a Markov Decision Process where it is not just Pacman in the environment, but there is also a ghost. Pacman plays one turn, then the ghost plays one turn, and they continue alternating, each of their actions transitioning the state forward using the same transition function T. At any one time step, only one of Pacman and the ghost can play a turn. Let A be the set of actions Pacman can take and B be the ghost’s action set. The game is infinite horizon, with discount factor γ applied at every turn no matter which agent is taking the turn. |A| is the size of A and |B| is the size of B. R indicates the utility received by Pacman.
(a) [2 pts] Let us first consider the situation where Pacman tries to maximize his expected utility, while the ghost tries to minimize Pacman’s utility, thus playing adversarially. Both Pacman and the ghost try to play optimally and they are aware of this. Given the standard notation for an MDP, choose which of the following updates is the correct one for Q-Value Iteration under this formulation, given that Q∗pac is the infinite horizon Q-function for Pacman.
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ Σ_{b∈B} Σ_{s′′} (T(s′,b,s′′) [R(s′,b,s′′) + γ max_{a′∈A} Q∗pac(s′′,a′)])]
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ max_{a′∈A} Q∗pac(s′,a′)]
• Q∗pac(s,a) = Σ_{s′} R(s,a,s′) + γ max_{a′∈A} Q∗pac(s′,a′)
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ Σ_{b∈B} Σ_{s′′} (T(s′,b,s′′) [R(s′,b,s′′) + γ (1/|B|) max_{a′∈A} Q∗pac(s′′,a′)])]
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ min_{b∈B} Σ_{s′′} (T(s′,b,s′′) [R(s′,b,s′′) + γ max_{a′∈A} Q∗pac(s′′,a′)])]  (correct)
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ max_{b∈B} Σ_{s′′} (T(s′,b,s′′) [R(s′,b,s′′) + γ max_{a′∈A} Q∗pac(s′′,a′)])]
• None of the above.
You can project this game by imagining that the ghost takes the next move, and instead of maximizing utility, the ghost is minimizing utility. The ghost’s optimal value is again decided by imagining you playing optimally, which leads to the inner maximization of the Q function.
(b) [2 pts] For this part, let us suppose that instead of having a ghost which is adversarial, the ghost is a friendly ghost who is also trying to maximize Pacman’s utility. Both Pacman and the ghost know this arrangement, and are aware of the others knowledge. Given the standard notation for an MDP, choose which of the following updates is the correct one for Q-Value Iteration under this formulation, given that Q∗pac is the Q-function for Pacman.
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ min_{b∈B} Q∗pac(s′,b)]
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ Σ_{b∈B} Σ_{s′′} (T(s′,b,s′′) [R(s′,b,s′′) + γ max_{a′∈A} Q∗pac(s′′,a′)])]
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ (1/|B|) Σ_{b∈B} Σ_{s′′} (T(s′,b,s′′) [R(s′,b,s′′) + γ max_{a′∈A} Q∗pac(s′′,a′)])]
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ max_{b∈B} Σ_{s′′} (T(s′,b,s′′) [R(s′,b,s′′) + γ max_{a′∈A} Q∗pac(s′′,a′)])]  (correct)
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ (1/|B|) max_{b∈B} Q∗pac(s′,b)]
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ min_{b∈B} Σ_{s′′} (T(s′,b,s′′) [R(s′,b,s′′) + γ max_{a′∈A} Q∗pac(s′′,a′)])]
• None of the above.
You can project this game by imagining that the ghost takes the next move, and the ghost is maximizing your utility. The ghost’s optimal value is again decided by imagining you playing optimally, which leads to the inner maximization of the Q function, along with the outer maximization where the ghost is trying to help you.
(c) [2 pts] For this part let us suppose that instead of having a ghost which is friendly, the ghost is a confused ghost who takes random actions, with uniform probability in the environment. Given the standard notation for an MDP, choose which of the following updates is the correct one for Q-Value Iteration under this formulation, given that Qpac is the Q-function for Pacman.
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ (1/|B|) max_{b∈B} Σ_{s′′} (T(s′,b,s′′) [R(s′,b,s′′) + γ max_{a′∈A} Q∗pac(s′′,a′)])]
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + Σ_{b∈B} Σ_{s′′} (T(s′,b,s′′) [R(s′,b,s′′) + γ max_{a′∈A} Q∗pac(s′′,a′)])]
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ max_{b∈B} Q∗pac(s′,b)]
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ (1/|B|) Σ_{b∈B} Σ_{s′′} (T(s′,b,s′′) [R(s′,b,s′′) + γ max_{a′∈A} Q∗pac(s′′,a′)])]  (correct)
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ (1/|B|) min_{b∈B} Σ_{s′′} (T(s′,b,s′′) [R(s′,b,s′′) + γ max_{a′∈A} Q∗pac(s′′,a′)])]
• Q∗pac(s,a) = Σ_{s′} T(s,a,s′) [R(s,a,s′) + γ max_{a′∈A} Q∗pac(s′,a′)]
• None of the above.
You can project this game by imagining that the ghost takes the next move, and the ghost is playing randomly, which means that you can take an expectation over all possible actions that the ghost could play. The ghost’s optimal value is decided by imagining you playing optimally, which leads to the inner maximization of the Q function, along with the outer expectation where we average over all random actions that the ghost could take.
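The three correct updates differ only in how the ghost's action is aggregated (min, max, or average). The following is a compact sketch, not from the exam, of one Q-value iteration round; states, A, B, T and R are hypothetical stand-ins for a small problem description.

    def q_update(Q, states, A, B, T, R, gamma, ghost):
        """One round of Q-value iteration for the alternating two-agent game.

        T(s, x, s2) is the transition probability, R(s, x, s2) the reward, and
        ghost is one of min, max, or 'mean', matching parts (a), (b) and (c)."""
        def pac_value(s2):
            # After the ghost moves, Pacman acts optimally: max over its own actions.
            return max(Q[(s2, a2)] for a2 in A)

        def ghost_value(s1):
            # Value of the ghost's turn at state s1, one entry per ghost action b.
            vals = [sum(T(s1, b, s2) * (R(s1, b, s2) + gamma * pac_value(s2)) for s2 in states)
                    for b in B]
            if ghost == 'mean':
                return sum(vals) / len(vals)   # confused ghost: uniform expectation
            return ghost(vals)                 # adversarial: min, friendly: max

        newQ = {}
        for s in states:
            for a in A:
                newQ[(s, a)] = sum(T(s, a, s1) * (R(s, a, s1) + gamma * ghost_value(s1))
                                   for s1 in states)
        return newQ

Passing ghost=min reproduces the adversarial update selected in part (a), ghost=max the friendly update in part (b), and ghost='mean' the confused-ghost update in part (c).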
Q7. [11 pts] Planning ahead with HMMs
Pacman is tired of using HMMs to estimate the location of ghosts. He wants to use HMMs to plan what actions to take in order to maximize his utility. Pacman uses the HMM (drawn to the right) of length T to model the planning problem. In the HMM, X1:T is the sequence of hidden states of Pacman’s world, A1:T are actions Pacman can take, and Ut is the utility Pacman receives at the particular hidden state Xt. Notice that there are no evidence variables, and utilities are not discounted.
[HMM figure: a chain of hidden states ... Xt−1 → Xt → Xt+1 ..., with each action At influencing the transition into Xt and each utility Ut attached to Xt.]
(a) The belief at time t is defined as Bt(Xt) = p(Xt | a1:t). The forward algorithm update has the following form:
Bt(Xt) = (i) (ii)
Complete the expression by choosing the option that fills in each blank ((i) and (ii), 1 pt each).
The correct completion is (i) = Σ_{xt−1} and (ii) = p(Xt | xt−1, at) Bt−1(xt−1), since
Bt(Xt) = p(Xt | a1:t)
       = Σ_{xt−1} p(Xt | xt−1, at) p(xt−1 | a1:t−1)
       = Σ_{xt−1} p(Xt | xt−1, at) Bt−1(xt−1).
(b) Pacman would like to take actions A1:T that maximize the expected sum of utilities, which has the following form:
MEU1:T = (i) (ii) (iii) (iv) (v)
Complete the expression by choosing the option that fills in each blank ((i)–(v), 1 pt each).
The correct completion is
MEU1:T = max_{a1:T} Σ_{t=1}^{T} Σ_{xt} Bt(xt) Ut(xt),
i.e. (i) = max_{a1:T}, (ii) = Σ_{t=1}^{T}, (iii) = Σ_{xt}, (iv) = Bt(xt), (v) = Ut(xt).
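The belief recursion from (a) and the expected-utility sum from (b) translate directly into code. This is a sketch, not part of the exam; p_trans (the model p(x_t | x_{t−1}, a_t)) and U (the utility U_t(x_t)) are assumed interfaces supplied by the caller.

    def belief_update(B_prev, a_t, states, p_trans):
        # B_t(x_t) = sum over x_{t-1} of p(x_t | x_{t-1}, a_t) * B_{t-1}(x_{t-1})
        return {x: sum(p_trans(x, x_prev, a_t) * B_prev[x_prev] for x_prev in states)
                for x in states}

    def expected_sum_of_utilities(action_seq, B1, states, p_trans, U):
        # Evaluates sum_t sum_{x_t} B_t(x_t) * U(t, x_t) for one fixed action sequence;
        # MEU_{1:T} is the maximum of this value over all action sequences a_{1:T}.
        B = dict(B1)                                   # B_1(X_1)
        total = sum(B[x] * U(1, x) for x in states)
        for t, a_t in enumerate(action_seq[1:], start=2):
            B = belief_update(B, a_t, states, p_trans)
            total += sum(B[x] * U(t, x) for x in states)
        return total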
(c) [2 pts] A greedy ghost now offers to tell Pacman the values of some of the hidden states. Pacman needs your help to figure out if the ghost’s information is useful. Assume that the transition function p(xt|xt−1,at) is not deterministic. With respect to the utility Ut, mark all that can be True:
• VPI(Xt−1 | Xt−2) > 0
• VPI(Xt−2 | Xt−1) > 0
• VPI(Xt−1 | Xt−2) = 0
• VPI(Xt−2 | Xt−1) = 0
• None of the above
It is always possible that VPI = 0. We can guarantee that VPI(E | e) is not greater than 0 (and hence equals 0) if E is independent of the parents of U given e.
(d) [2 pts] Pacman notices that calculating the beliefs under this model is very slow using exact inference. He therefore decides to try out various particle filter methods to speed up inference. Order the following methods by how accurate their estimate of BT(XT) is. If different methods give an equivalently accurate estimate, mark them as the same number.
• Exact inference
• Particle filtering with no resampling
• Particle filtering with resampling before every time elapse
• Particle filtering with resampling before every other time elapse
Exact inference will always be more accurate than using a particle filter. When comparing the particle filter resampling approaches, notice that because there are no observations, each particle will have weight 1. Therefore resampling when particle weights are 1 could lead to particles being lost and hence prove bad.
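The effect of resampling with uniform weights can be seen in a tiny simulation. This is a sketch, not part of the exam (the particle count and step count are arbitrary): with no observations every particle keeps weight 1, so resampling only duplicates some particles and drops others, and more frequent resampling loses diversity faster.

    import random

    random.seed(0)

    def surviving_lineages(num_particles, num_steps, resample_every):
        # Track particle identities only. With no evidence every weight is 1, so
        # resampling draws uniformly with replacement and some lineages disappear.
        particles = list(range(num_particles))
        for step in range(1, num_steps + 1):
            # (the time-elapse update would move each particle here; it neither
            #  creates nor destroys lineages, so it is omitted)
            if resample_every and step % resample_every == 0:
                particles = [random.choice(particles) for _ in particles]
        return len(set(particles))

    print("no resampling:       ", surviving_lineages(100, 20, resample_every=0))
    print("resample every other:", surviving_lineages(100, 20, resample_every=2))
    print("resample every step: ", surviving_lineages(100, 20, resample_every=1))
    # Typically prints 100, then a smaller number, then the smallest number:
    # more frequent resampling with uniform weights loses more particles.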
Q8. [7 pts] Naïve Bayes
You are given a naïve Bayes model, shown below, with label Y and features X1 and X2. The conditional probabilities for the model are parametrized by p1, p2 and q.
[Naïve Bayes figure: Y is the parent of both X1 and X2.]

Y  P(Y)
0  1 − q
1  q

X1  Y  P(X1|Y)
0   0  p1
1   0  1 − p1
0   1  1 − p1
1   1  p1

X2  Y  P(X2|Y)
0   0  p2
1   0  1 − p2
0   1  1 − p2
1   1  p2
Note that some of the parameters are shared (e.g. P(X1 = 0|Y = 0) = P(X1 = 1|Y = 1) = p1).
(a) [2 pts] Given a new data point with X1 = 1 and X2 = 1, what is the probability that this point has label Y = 1? Express your answer in terms of the parameters p1, p2 and q (you might not need all of them).

P(Y=1 | X1=1, X2=1) = p1 p2 q / (p1 p2 q + (1 − p1)(1 − p2)(1 − q))

P(Y=1, X1=1, X2=1) = P(X1=1 | Y=1) P(X2=1 | Y=1) P(Y=1) = p1 p2 q
P(Y=0, X1=1, X2=1) = P(X1=1 | Y=0) P(X2=1 | Y=0) P(Y=0) = (1 − p1)(1 − p2)(1 − q)
P(Y=1 | X1=1, X2=1) = P(Y=1, X1=1, X2=1) / P(X1=1, X2=1)
                    = P(Y=1, X1=1, X2=1) / (P(Y=1, X1=1, X2=1) + P(Y=0, X1=1, X2=1))
                    = p1 p2 q / (p1 p2 q + (1 − p1)(1 − p2)(1 − q))

The model is trained with the following data:
sample number:  1  2  3  4  5  6  7  8  9  10
X1:             0  0  1  0  1  0  1  0  1  1
X2:             0  0  0  0  0  0  0  1  0  0
Y:              0  0  0  0  0  0  0  1  1  1
(b) [5 pts] What are the maximum likelihood estimates for p1, p2 and q?
p1 = 3/5,  p2 = 4/5,  q = 3/10
The maximum likelihood estimate of p1 is the fraction of samples in which X1 = Y. In the given training data, samples 1, 2, 4 and 6 have X1 = Y = 0 and samples 9 and 10 have X1 = Y = 1, so 6 out of the 10 samples have X1 = Y and thus p1 = 6/10 = 3/5. Analogously, 8 out of the 10 samples have X2 = Y and thus p2 = 8/10 = 4/5. The maximum likelihood estimate of q is the fraction of samples in which Y = 1, thus q = 3/10.
You can find what these parameters are equal to by maximizing the likelihood of the data with respect to the parameters. First, notice that the probabilities can be written as
P(X1 = x1 | Y = y) = (p1)^{1[x1=y]} (1 − p1)^{1 − 1[x1=y]}
P(X2 = x2 | Y = y) = (p2)^{1[x2=y]} (1 − p2)^{1 − 1[x2=y]}
P(Y = y) = q^{y} (1 − q)^{1 − y},
where 1[x = y] is an indicator function that evaluates to 1 when x is equal to y, and 0 otherwise. Let X be all the data: x1^(i), x2^(i), y^(i), ∀i = 1, ..., 10, where the superscript i denotes the sample number. Then, the likelihood of the data given the parameters is
l(X | p1, p2, q) = P(X | p1, p2, q)
= Π_{i=1}^{10} P(x1^(i), x2^(i), y^(i))
= Π_{i=1}^{10} P(x1^(i) | y^(i)) P(x2^(i) | y^(i)) P(y^(i))
= Π_{i=1}^{10} (p1)^{1[x1^(i)=y^(i)]} (1 − p1)^{1 − 1[x1^(i)=y^(i)]} (p2)^{1[x2^(i)=y^(i)]} (1 − p2)^{1 − 1[x2^(i)=y^(i)]} q^{y^(i)} (1 − q)^{1 − y^(i)}.
We optimize for the parameters by maximizing the likelihood l(X | p1, p2, q), which is equivalent to maximizing the log likelihood ll(X | p1, p2, q):
p1, p2, q = argmax_{p1,p2,q} l(X | p1, p2, q) = argmax_{p1,p2,q} ll(X | p1, p2, q),
ll(X | p1, p2, q) = log l(X | p1, p2, q)
= Σ_{i=1}^{10} 1[x1^(i) = y^(i)] log(p1) + (1 − 1[x1^(i) = y^(i)]) log(1 − p1)
  + 1[x2^(i) = y^(i)] log(p2) + (1 − 1[x2^(i) = y^(i)]) log(1 − p2)
  + y^(i) log(q) + (1 − y^(i)) log(1 − q).
We can find the values that obtain the maximum by setting the partial derivatives of the log likelihood to zero and solving for the parameters:
∂ll/∂p1 = Σ_{i=1}^{10} (1/p1) 1[x1^(i) = y^(i)] − (1/(1 − p1)) (1 − 1[x1^(i) = y^(i)]) = 0  ⇒  p1 = (Σ_{i=1}^{10} 1[x1^(i) = y^(i)]) / 10
∂ll/∂p2 = Σ_{i=1}^{10} (1/p2) 1[x2^(i) = y^(i)] − (1/(1 − p2)) (1 − 1[x2^(i) = y^(i)]) = 0  ⇒  p2 = (Σ_{i=1}^{10} 1[x2^(i) = y^(i)]) / 10
∂ll/∂q = Σ_{i=1}^{10} (1/q) y^(i) − (1/(1 − q)) (1 − y^(i)) = 0  ⇒  q = (Σ_{i=1}^{10} y^(i)) / 10.
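The closed-form estimates and the posterior from part (a) can be checked directly against the training data. A small sketch, not part of the exam:

    # Training data from the table above: (x1, x2, y) per sample.
    data = [(0, 0, 0), (0, 0, 0), (1, 0, 0), (0, 0, 0), (1, 0, 0),
            (0, 0, 0), (1, 0, 0), (0, 1, 1), (1, 0, 1), (1, 0, 1)]

    n = len(data)
    p1 = sum(x1 == y for x1, _, y in data) / n   # fraction with X1 = Y  -> 6/10
    p2 = sum(x2 == y for _, x2, y in data) / n   # fraction with X2 = Y  -> 8/10
    q = sum(y for _, _, y in data) / n           # fraction with Y = 1   -> 3/10
    print(p1, p2, q)

    # Part (a): posterior P(Y=1 | X1=1, X2=1) under the shared-parameter model.
    posterior = p1 * p2 * q / (p1 * p2 * q + (1 - p1) * (1 - p2) * (1 - q))
    print(posterior)  # 0.144 / 0.2 = 0.72 with the estimates above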
Q9. [12 pts] Beyond Ordinary Pruning
Important: For all following parts, assume that the children of a node are visited in left-to-right order. You should not prune on equality (This also applies to any bound on utilities, if any. For example, given all utilities are less than or equal to 10, you should not prune after seeing a node with utility of 10.)
(a) [3 pts] Consider a two-player game in which both players alternate moves and each player seeks to maximize its own utility. At a leaf node s, utilities are represented as a tuple U(s) = U1(s),U2(s), with the i-th component corresponding to the utility of the i-th player.
For the following special cases of two-player games, select all of the following in which pruning is never possible, given just this information about the relationship between utilities U1 and U2. Select “None of the above” if none of the options apply.
• 0 < U1(s), U2(s) < M for all terminal states s, where M is a positive constant  (correct)
• U1(s) + U2(s) = M for all terminal states s, where M ≠ 0 is a constant
• U1(s) = U2(s) for all terminal states s  (correct)
• U1(s) + U2(s) = 0 for all terminal states s
• None of the above
The first option is an interleaving of two independent searches; without the relationship between the two players, no pruning is possible.
The second option is minimax in which all utilities are shifted by M; alpha-beta pruning still applies.
The third option is a search problem since both players share the same objective; since the maximum utility can occur at any terminal node, no pruning is possible.
The fourth option is minimax; alpha-beta pruning applies.
(b) Now we consider a three-player game similarly defined as in part (a). Then at a leaf node s, utilities are represented as a 3-tuple U(s) = U1(s),U2(s),U3(s), where the player going first (at the top of the tree) maximizes U1, the player going second maximizes U2, and the player going last maximizes U3.
(i) [2 pts] Fill in the values at all nodes. Note that all players maximize their own respective utilities.
(ii) [3 pts] Without any further information, select all terminal nodes that can be pruned. Or check “None” if no node can be pruned.
Reminder: A node can be pruned only if the node’s utilities can have no effect on the utilities at the root, irrespective of the node’s utilities, and the utilities of nodes not yet visited by the left-to-right depth-first traversal.
a b c d e f None
Note that without any further assumptions, we do not know whether any utility value is bounded. Before visiting any leaf node, we don’t know whether that node has U3(·) = ∞ (no pruning on equality). Therefore, no node can be pruned.
(iii) [4 pts] Now we are given that for all terminal states s the following holds true:
• Ui(s) ≥ 0 ∀i = 1, 2, 3
• Σ_{i=1}^{3} Ui(s) ≤ 9
Select all terminal nodes that can be pruned. Or check “None” if no node can be pruned.
a b c d e f None
Nodes a, b and d cannot be pruned because each is the first child of the corresponding player and before visiting that child, the player does not yet have a bound.
Node c can be pruned. After visiting node a, the second player will only choose the right branch when max(U2(b),U2(c)) ≥ 2. After visiting node b, the third player will only choose node c when U3(c) ≥ 8 and U2(c) ≥ 2. But this is not possible given the constraints, so node c will not be chosen no matter what utilities it subsumes and hence can be pruned.
Node e cannot be pruned. To see this, assume node e subsumes a different utility tuple, for example, (3, 0, 6). Now, on the right branch of the root, the third player will choose node e and subsumes (3,0,6), the second player will choose node f and subsumes (6, 1, 2), and the first player (the root) will choose the right branch and subsume (6, 1, 2). Since the utilities at node e may affect the utilities at the root, node e cannot be pruned. Node f cannot be pruned. Assume node f subsumes utility tuple (6, 3, 0). Then the first player (the root) will subsume (6, 3, 0) after the propagation.
Q10. [10 pts] Iterative Deepening Search
Pacman is performing search in a maze again! The search graph has a branching factor of b, a solution of depth d, a maximum depth of m, and edge costs that may not be integers. Although he knows breadth first search returns the solution with the smallest depth, it takes up too much space, so he decides to try using iterative deepening. As a reminder, in standard depth-first iterative deepening we start by performing a depth first search terminated at a maximum depth of one. If no solution is found, we start over and perform a depth first search to depth two and so on. This way we obtain the shallowest solution, but use only O(bd) space.
But Pacman decides to use a variant of iterative deepening called iterative deepening A*, where instead of limiting the depth-first search by depth as in standard iterative deepening search, we can limit the depth-first search by the f value as defined in A* search. As a reminder f [node] = g[node] + h[node] where g[node] is the cost of the path from the start state and h[node] is a heuristic value estimating the cost to the closest goal state.
In this question, all searches are tree searches and not graph searches.
(a) [7 pts] Complete the pseudocode outlining how to perform iterative deepening A* by choosing the option from the next page that fills in each of these blanks. Iterative deepening A* should return the solution with the lowest cost when given a consistent heuristic. Note that cutoff is a boolean and new-limit is a number.
function Iterative-Deepening-Tree-Search(problem)
  start-node ← Make-Node(Initial-State[problem])
  limit ← f[start-node]
  loop
    fringe ← Make-Stack(start-node)
    new-limit ← (i)
    cutoff ← (ii)
    while fringe is not empty do
      node ← Remove-Front(fringe)
      if Goal-Test(problem, State[node]) then
        return node
      end if
      for child-node in Expand(State[node], problem) do
        if f[child-node] ≤ limit then
          fringe ← Insert(child-node, fringe)
          new-limit ← (iii)
          cutoff ← (iv)
        else
          new-limit ← (v)
          cutoff ← (vi)
        end if
      end for
    end while
    if not cutoff then
      return failure
    end if
    limit ← (vii)
  end loop
end function
[The answer-choice tables for blanks (i)–(vii) are not reproduced here. The choices, labelled A1–A4, B1–B4 and C1–C8, included the values −∞, 0, ∞ and limit; the booleans True, False, cutoff and not cutoff; and the expressions new-limit, new-limit + 1, new-limit + f[node], new-limit + f[child-node], and the max or min of new-limit with f[node] or f[child-node].]
The cutoff variable keeps track of whether there are items that aren’t being explored because of the limit. If cutoff is false and the algorithm has exited the while, no nodes were cutoff (not added to the fringe because of the limit). This scenario suggests that there is no solution.
In order to ensure that iterative deepening A* obtains the lowest cost solution efficiently, we want to increase the limit as much as we can while guaranteeing optimality. Setting new-limit to the smallest f cost of nodes that were cutoff achieves this. When nodes aren’t cutoff (part iii), the new-limit should not change. Hence C1, C7, C8, or a combination of the three were accepted as answers.
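A runnable version of the completed pseudocode, written as a sketch rather than as the official answer key, fills the blanks the way the explanation above describes: new-limit starts at ∞ and is lowered to the smallest f value among cut-off children, cutoff starts False and is set whenever a child is cut off, and the limit for the next pass becomes new-limit. The successors, heuristic and goal_test interfaces here are hypothetical.

    import math

    def ida_star(start_state, successors, heuristic, goal_test):
        """Iterative deepening A* tree search.

        successors(state) yields (child_state, step_cost) pairs; heuristic(state)
        estimates the remaining cost (assumed consistent); goal_test(state) -> bool."""
        limit = heuristic(start_state)            # f(start) = 0 + h(start)
        while True:
            # One depth-first pass, cutting off children whose f value exceeds limit.
            new_limit, cutoff = math.inf, False
            fringe = [(start_state, 0.0)]         # stack of (state, g = path cost so far)
            while fringe:
                state, g = fringe.pop()
                if goal_test(state):
                    return state, g
                for child, cost in successors(state):
                    child_g = g + cost
                    child_f = child_g + heuristic(child)
                    if child_f <= limit:
                        fringe.append((child, child_g))
                        # new-limit and cutoff are left unchanged for expanded children.
                    else:
                        new_limit = min(new_limit, child_f)   # smallest f among cut-off nodes
                        cutoff = True
            if not cutoff:
                return None                       # nothing was cut off and no goal was found
            limit = new_limit                     # deepen to the cheapest cut-off f value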
(b) [3 pts] Assuming there are no ties in f value between nodes, which of the following statements about the number of nodes that iterative deepening A* expands is True? If the same node is expanded multiple times, count all of the times that it is expanded. If none of the options are correct, mark None of the above.
The number of times that iterative deepening A* expands a node is greater than or equal to the number of times A* will expand a node.
The number of times that iterative deepening A* expands a node is less than or equal to the number of times A* will expand a node.
We don’t know if the number of times iterative deepening A* expands a node is more or less than the number of times A* will expand a node.
None of the above
Iterative deepening A* runs depth-first search multiple times at different limit values. This causes iterative deepening A* to expand certain nodes multiple times.