
Bayesian Network Practice – Solutions

Question 1, Part 1, a

P(t) = Σ_{r,l,q,s} P(R_r, L_l, t, S_s, Q_q)
     = Σ_{r,l,q,s} P(R_r) P(L_l|R_r) P(Q_q|L_l) P(t|Q_q, R_r) P(S_s|Q_q, t)
     = Σ_r P(R_r) Σ_l P(L_l|R_r) Σ_q P(Q_q|L_l) P(t|Q_q, R_r) Σ_s P(S_s|Q_q, t)
       (Try to push the summations as far right as you can)
     = Σ_r P(R_r) Σ_l P(L_l|R_r) Σ_q P(Q_q|L_l) P(t|Q_q, R_r) × 1
       (See notes section)
     = Σ_r P(R_r) Σ_l P(L_l|R_r) Σ_q P(Q_q|L_l) P(t|Q_q, R_r)
     = Σ_r P(R_r) Σ_l P(L_l|R_r) [ P(q|L_l) P(t|q, R_r) + P(¬q|L_l) P(t|¬q, R_r) ]
     = Σ_r P(R_r) [ P(l|R_r) ( P(q|l) P(t|q, R_r) + P(¬q|l) P(t|¬q, R_r) )
                  + P(¬l|R_r) ( P(q|¬l) P(t|q, R_r) + P(¬q|¬l) P(t|¬q, R_r) ) ]
     = P(r) [ P(l|r) ( 0.9 × P(t|q,r) + 0.1 × P(t|¬q,r) ) + P(¬l|r) ( 0.7 × P(t|q,r) + 0.3 × P(t|¬q,r) ) ]
     + P(¬r) [ P(l|¬r) ( 0.9 × P(t|q,¬r) + 0.1 × P(t|¬q,¬r) ) + P(¬l|¬r) ( 0.7 × P(t|q,¬r) + 0.3 × P(t|¬q,¬r) ) ]
     = 0.2 [ 0.8 ( 0.9 × 0.7 + 0.1 × 0.2 ) + 0.2 ( 0.7 × 0.7 + 0.3 × 0.2 ) ]
     + 0.8 [ 0.3 ( 0.9 × 0.9 + 0.1 × 0.3 ) + 0.7 ( 0.7 × 0.9 + 0.3 × 0.3 ) ]
     = 0.2 [ 0.8 × 0.65 + 0.2 × 0.55 ] + 0.8 [ 0.3 × 0.84 + 0.7 × 0.72 ]
     = 0.2 × 0.63 + 0.8 × 0.756
     = 0.7308

∴ P(t) = 0.7308

Note: Σ_x P(X_x) = 1 and also Σ_x P(X_x|something) = 1 because you are summing over all possible values of X. It's like calculating the probability that quokkas are happy plus the probability that they aren't, which is always just 1!
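The enumeration above can be checked with a short script. This is a sketch and not part of the original solutions: the variable names are my own, and the CPT values are transcribed from the factor tables given in Part 1, b.

```python
# Enumeration check for Question 1, Part 1, a (a sketch).
# CPTs transcribed from the factor tables in Part 1, b.
P_r = 0.2
P_l_given_r = {True: 0.8, False: 0.3}                     # P(l | R)
P_q_given_l = {True: 0.9, False: 0.7}                     # P(q | L)
P_t_given_qr = {(True, True): 0.7, (True, False): 0.9,
                (False, True): 0.2, (False, False): 0.3}  # P(t | Q, R)

p_t = 0.0
for r in (True, False):
    pr = P_r if r else 1 - P_r
    for l in (True, False):
        pl = P_l_given_r[r] if l else 1 - P_l_given_r[r]
        for q in (True, False):
            pq = P_q_given_l[l] if q else 1 - P_q_given_l[l]
            # The sum over S is 1 (see the note above), so it is omitted.
            p_t += pr * pl * pq * P_t_given_qr[(q, r)]

print(round(p_t, 4))  # 0.7308
```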

Question 1, Part 1, b

Step 1: get equation

P(t) = Σ_{r,l,q,s} P(R_r, L_l, t, S_s, Q_q)
     = Σ_{r,l,q,s} P(R_r) P(L_l|R_r) P(Q_q|L_l) P(t|Q_q, R_r) P(S_s|Q_q, t)
     = Σ_r P(R_r) Σ_l P(L_l|R_r) Σ_q P(Q_q|L_l) P(t|Q_q, R_r) Σ_s P(S_s|Q_q, t)
       (Try to push the summations as far right as you can)

(Label the different parts)
  f1(R) = P(R_r),  f2(L,R) = P(L_l|R_r),  f3(Q,L) = P(Q_q|L_l),
  f4(Q,R) = P(t|Q_q, R_r),  f5(S,Q) = P(S_s|Q_q, t)

Step 2: work out factors

R | f1(R)      L R | f2(L,R)    Q L | f3(Q,L)
T | 0.2        T T | 0.8        T T | 0.9
F | 0.8        T F | 0.3        T F | 0.7
               F T | 0.2        F T | 0.1
               F F | 0.7        F F | 0.3

Q R | f4(Q,R)    S Q | f5(S,Q)
T T | 0.7        T T | 0.9
T F | 0.9        F T | 0.1
F T | 0.2        T F | 0.4
F F | 0.3        F F | 0.6

Step 3: sum over S

P(t) = Σ_r f1(R) Σ_l f2(L,R) Σ_q f3(Q,L) f4(Q,R) Σ_s f5(S,Q)

(We'll name the new factor f6)

S Q | f5(S,Q)        Q | f6(Q)
T T | 0.9            T | 0.9 + 0.1 = 1
F T | 0.1       →    F | 0.4 + 0.6 = 1
T F | 0.4
F F | 0.6

Notice that we got 1 in both cases. Here, just like in the enumeration section, we could have just immediately set Σ_s f5(S,Q) to 1.

Step 4: multiply factors

P(t) = Σ_r f1(R) Σ_l f2(L,R) Σ_q f3(Q,L) f4(Q,R) f6(Q)

(We'll name the new factor f7)

Q R L | f3 × f4 × f6         Q R L | f7(Q,R,L)
T T T | 0.9 × 0.7 × 1        T T T | 0.63
T T F | 0.7 × 0.7 × 1        T T F | 0.49
T F T | 0.9 × 0.9 × 1        T F T | 0.81
T F F | 0.7 × 0.9 × 1   →    T F F | 0.63
F T T | 0.1 × 0.2 × 1        F T T | 0.02
F T F | 0.3 × 0.2 × 1        F T F | 0.06
F F T | 0.1 × 0.3 × 1        F F T | 0.03
F F F | 0.3 × 0.3 × 1        F F F | 0.09

Step 5: sum over Q

P(t) = Σ_r f1(R) Σ_l f2(L,R) Σ_q f7(Q,R,L)

(We'll name the new factor f8)

Q R L | f7(Q,R,L)        R L | f8(R,L)
T T T | 0.63             T T | 0.63 + 0.02 = 0.65
T T F | 0.49             T F | 0.49 + 0.06 = 0.55
T F T | 0.81        →    F T | 0.81 + 0.03 = 0.84
T F F | 0.63             F F | 0.63 + 0.09 = 0.72
F T T | 0.02
F T F | 0.06
F F T | 0.03
F F F | 0.09

Step 6: multiply factors

P(t) = Σ_r f1(R) Σ_l f2(L,R) f8(R,L)

(We'll name the new factor f9. Be careful about the order! In the f2 table, L is on the left. In the other table, L is on the right!)

L R | f2(L,R)    R L | f8(R,L)        R L | f9(R,L)
T T | 0.8        T T | 0.65           T T | 0.8 × 0.65 = 0.52
T F | 0.3        T F | 0.55      →    T F | 0.2 × 0.55 = 0.11
F T | 0.2        F T | 0.84           F T | 0.3 × 0.84 = 0.252
F F | 0.7        F F | 0.72           F F | 0.7 × 0.72 = 0.504

Step 7: almost there! Sum over L

P(t) = Σ_r f1(R) Σ_l f9(R,L)

(We'll name the new factor f10)

R L | f9(R,L)         R | f10(R)
T T | 0.52            T | 0.52 + 0.11 = 0.63
T F | 0.11       →    F | 0.252 + 0.504 = 0.756
F T | 0.252
F F | 0.504

Step 8: multiply one more time!

P(t) = Σ_r f1(R) f10(R)

(We'll name the new factor f11)

R | f1(R)    R | f10(R)         R | f11(R) = f1 × f10
T | 0.2      T | 0.63      →    T | 0.2 × 0.63 = 0.126
F | 0.8      F | 0.756          F | 0.8 × 0.756 = 0.6048

Step 9: finally, sum over R

P(t) = Σ_r f11(R)
     = 0.126 + 0.6048
     = 0.7308
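The elimination steps above can be sketched in code. This is an illustrative sketch, not part of the original solutions: the factor representation and the `multiply`/`sum_out` helpers are my own.

```python
# Variable-elimination sketch for Part 1, b: a factor is a pair
# (list of variable names, dict mapping assignment tuples to values).
from itertools import product

def multiply(f, g):
    """Pointwise product of two factors."""
    fv, ft = f
    gv, gt = g
    vs = fv + [v for v in gv if v not in fv]
    table = {}
    for vals in product((True, False), repeat=len(vs)):
        a = dict(zip(vs, vals))
        table[vals] = ft[tuple(a[v] for v in fv)] * gt[tuple(a[v] for v in gv)]
    return (vs, table)

def sum_out(f, var):
    """Marginalise `var` out of factor f."""
    fv, ft = f
    i = fv.index(var)
    vs = fv[:i] + fv[i + 1:]
    table = {}
    for vals, p in ft.items():
        key = vals[:i] + vals[i + 1:]
        table[key] = table.get(key, 0.0) + p
    return (vs, table)

# Factors f1..f5 from Step 2 (T is fixed to true, so it does not appear).
f1 = (['R'], {(True,): 0.2, (False,): 0.8})
f2 = (['L', 'R'], {(True, True): 0.8, (True, False): 0.3,
                   (False, True): 0.2, (False, False): 0.7})
f3 = (['Q', 'L'], {(True, True): 0.9, (True, False): 0.7,
                   (False, True): 0.1, (False, False): 0.3})
f4 = (['Q', 'R'], {(True, True): 0.7, (True, False): 0.9,
                   (False, True): 0.2, (False, False): 0.3})
f5 = (['S', 'Q'], {(True, True): 0.9, (False, True): 0.1,
                   (True, False): 0.4, (False, False): 0.6})

f6 = sum_out(f5, 'S')                     # Step 3: both entries are 1
f7 = multiply(multiply(f3, f4), f6)       # Step 4
f8 = sum_out(f7, 'Q')                     # Step 5
f9 = multiply(f2, f8)                     # Step 6
f10 = sum_out(f9, 'L')                    # Step 7
f11 = multiply(f1, f10)                   # Step 8
p_t = sum(sum_out(f11, 'R')[1].values())  # Step 9
print(round(p_t, 4))  # 0.7308
```

Note that the dict-based factors track variable names explicitly, which is exactly the "be careful about the order" issue from Step 6: `multiply` aligns arguments by name rather than by position.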
Question 1, Part 2

Step 1: Use Bayes rule

P(q | s, ¬r) = P(q, s, ¬r) / P(s, ¬r)
From here, if we wanted to, we could work out the numerator and denominator individually by following the same process as in the last question. That’s a lot of work, though! It would be better if we did something a bit sneaky instead!
What is that, you ask? Well, let’s think about this for a second. We also know:
P(¬q | s, ¬r) = P(¬q, s, ¬r) / P(s, ¬r)
Now, if we knew P (q, s, ¬r) AND P (¬q, s, ¬r), we could work out the denominator by adding them together. Then, we could divide P (q, s, ¬r) by the denominator to get our answer. In other words we could get P(Q, s, ¬r) and normalise this to get our answer. And that’s what we’re going to do!
Step 2: get equation

P(Q, s, ¬r) = Σ_{t,l} P(Q, T_t, L_l, ¬r, s)
            = Σ_{t,l} P(¬r) P(L_l|¬r) P(Q|L_l) P(T_t|Q, ¬r) P(s|Q, T_t)
            = P(¬r) Σ_t P(T_t|Q, ¬r) P(s|Q, T_t) Σ_l P(L_l|¬r) P(Q|L_l)

(Note: there is more than one way to do this! E.g. t and l swapped)

(Label the different parts)
  P(¬r) = 0.8,  f1(T,Q) = P(T_t|Q, ¬r),  f2(Q,T) = P(s|Q, T_t),
  f3(L) = P(L_l|¬r),  f4(Q,L) = P(Q|L_l)

Step 3: factors

T Q | f1(T,Q)    Q T | f2(Q,T)    L | f3(L)    Q L | f4(Q,L)
T T | 0.9        T T | 0.9        T | 0.3      T T | 0.9
F T | 0.1        T F | 0.6        F | 0.7      T F | 0.7
T F | 0.3        F T | 0.4                     F T | 0.1
F F | 0.7        F F | 0.1                     F F | 0.3

Step 4: multiply

P(Q, s, ¬r) = 0.8 Σ_t f1(T,Q) f2(Q,T) Σ_l f3(L) f4(Q,L)

(We'll name the new factor f5)

Q L | f5(Q,L) = f4 × f3
T T | 0.9 × 0.3 = 0.27
T F | 0.7 × 0.7 = 0.49
F T | 0.1 × 0.3 = 0.03
F F | 0.3 × 0.7 = 0.21

Step 5: sum over L

P(Q, s, ¬r) = 0.8 Σ_t f1(T,Q) f2(Q,T) Σ_l f5(Q,L)

(We'll name the new factor f6)

Q L | f5(Q,L)        Q | f6(Q)
T T | 0.27           T | 0.27 + 0.49 = 0.76
T F | 0.49      →    F | 0.03 + 0.21 = 0.24
F T | 0.03
F F | 0.21

Step 6: multiply

P(Q, s, ¬r) = 0.8 Σ_t f1(T,Q) f2(Q,T) f6(Q)

(We'll name the new factor f7)

Q T | f7(Q,T) = f1 × f2 × f6
T T | 0.9 × 0.9 × 0.76 = 0.6156
T F | 0.1 × 0.6 × 0.76 = 0.0456
F T | 0.3 × 0.4 × 0.24 = 0.0288
F F | 0.7 × 0.1 × 0.24 = 0.0168

Step 7: sum over T

P(Q, s, ¬r) = 0.8 Σ_t f7(Q,T)

(We'll name the new factor f8)

Q T | f7(Q,T)         Q | f8(Q)
T T | 0.6156          T | 0.6156 + 0.0456 = 0.6612
T F | 0.0456     →    F | 0.0288 + 0.0168 = 0.0456
F T | 0.0288
F F | 0.0168

Step 8: normalise

All right, we've got:

P(Q, s, ¬r) = 0.8 × ⟨0.6612, 0.0456⟩
            = ⟨0.8 × 0.6612, 0.8 × 0.0456⟩
            = ⟨0.52896, 0.03648⟩

(Note: this 0.8 doesn't really matter, because we're going to normalise later anyway)

Now, we're going to normalise by dividing each term by their sum (i.e. 0.52896 + 0.03648), and then we can get our answer!

P(Q | s, ¬r) = ⟨0.52896, 0.03648⟩ / (0.52896 + 0.03648)
             = ⟨0.52896, 0.03648⟩ / 0.56544
             ≈ ⟨0.935, 0.065⟩

∴ P(q | s, ¬r) ≈ 0.935

(∴ means "therefore"; ≈ means "approximately")

So, the probability of quokkas being happy given that people are taking lots of quokka selfies and it is not raining is about 0.935!

Question 1, Part 3
Well, in the previous question we got a probability of 0.935 that the quokkas are happy given the evidence. So, since this is more than 0.5, we will predict that they are happy. Yay! 😀 (I guess you could say we BAYESically had the answer already… :D)
Question 1, Parts 4 and 5

P(r | ¬l, s) = 0.0619
P(l | q, t, s) = 0.44218
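The answers to Parts 2, 4 and 5 can all be cross-checked by brute-force enumeration over the full joint. This is a sketch, not part of the original solutions: the `joint` and `query` helpers are my own, with the CPTs transcribed from Part 1.

```python
# Brute-force check for Parts 2, 4 and 5: enumerate the joint
# P(R, L, Q, T, S) and condition on the evidence.
from itertools import product

def joint(r, l, q, t, s):
    """Joint probability, using the CPTs from Part 1, b."""
    p = 0.2 if r else 0.8                                   # P(R)
    p *= {(True, True): 0.8, (True, False): 0.3,
          (False, True): 0.2, (False, False): 0.7}[(l, r)]  # P(L | R)
    p *= {(True, True): 0.9, (True, False): 0.7,
          (False, True): 0.1, (False, False): 0.3}[(q, l)]  # P(Q | L)
    pt = {(True, True): 0.7, (True, False): 0.9,
          (False, True): 0.2, (False, False): 0.3}[(q, r)]  # P(t | Q, R)
    p *= pt if t else 1 - pt
    ps = {(True, True): 0.9, (True, False): 0.6,
          (False, True): 0.4, (False, False): 0.1}[(q, t)]  # P(s | Q, T)
    return p * (ps if s else 1 - ps)

def query(target, evidence):
    """P(target = True | evidence); variables are 'r','l','q','t','s'."""
    names = ['r', 'l', 'q', 't', 's']
    num = den = 0.0
    for vals in product((True, False), repeat=5):
        a = dict(zip(names, vals))
        if any(a[k] != v for k, v in evidence.items()):
            continue  # inconsistent with the evidence
        p = joint(*vals)
        den += p
        if a[target]:
            num += p
    return num / den

print(round(query('q', {'s': True, 'r': False}), 3))            # 0.935
print(round(query('r', {'l': False, 's': True}), 4))            # 0.0619
print(round(query('l', {'q': True, 't': True, 's': True}), 5))  # 0.44218
```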

Question 2

P(U, O) = Σ_{q,k,a} P(Q_q, U, O, K_k, A_a)
        = Σ_{q,k,a} P(U) P(O) P(Q_q|U, O) P(K_k|Q_q) P(A_a|Q_q)
        = P(U) P(O) Σ_q P(Q_q|U, O) Σ_k P(K_k|Q_q) Σ_a P(A_a|Q_q)
        = P(U) P(O) Σ_q P(Q_q|U, O) Σ_k P(K_k|Q_q) × 1
        = P(U) P(O) Σ_q P(Q_q|U, O) Σ_k P(K_k|Q_q)
        = P(U) P(O) Σ_q P(Q_q|U, O) × 1
        = P(U) P(O) Σ_q P(Q_q|U, O)
        = P(U) P(O) × 1
        = P(U) P(O)

Perfect! This is just what we wanted :). We've shown that P(U, O) = P(U)P(O), so U and O are independent!!!
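Since the derivation is purely symbolic, it holds for any CPT values. The following sketch checks it numerically with made-up CPTs (the 0.7, 0.6, 0.9, etc. below are hypothetical numbers I chose for illustration; they are not from the question).

```python
# Numeric sanity check for Question 2: with arbitrary (hypothetical) CPTs,
# summing the joint over Q, K, A leaves exactly P(U) * P(O).
from itertools import product

p_u, p_o = 0.7, 0.6                     # hypothetical priors P(u), P(o)
p_q = {(True, True): 0.9, (True, False): 0.5,
       (False, True): 0.4, (False, False): 0.1}  # hypothetical P(q | U, O)
p_k = {True: 0.8, False: 0.3}           # hypothetical P(k | Q)
p_a = {True: 0.6, False: 0.2}           # hypothetical P(a | Q)

def marginal_uo(u, o):
    """Sum the joint over Q, K, A for fixed U=u, O=o."""
    total = 0.0
    for q, k, a in product((True, False), repeat=3):
        p = (p_u if u else 1 - p_u) * (p_o if o else 1 - p_o)
        p *= p_q[(u, o)] if q else 1 - p_q[(u, o)]
        p *= p_k[q] if k else 1 - p_k[q]
        p *= p_a[q] if a else 1 - p_a[q]
        total += p
    return total

# P(U=u, O=o) factorises as P(u) * P(o) for every combination of values,
# which is exactly the independence shown above.
for u, o in product((True, False), repeat=2):
    pu = p_u if u else 1 - p_u
    po = p_o if o else 1 - p_o
    assert abs(marginal_uo(u, o) - pu * po) < 1e-12
print("U and O independent: check passed")
```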
