
CS229 – Probability Theory Review
Taide Ding, Fereshte Khani
Based on CS229 Review of Probability Theory by Arian Maleki and Tom Do. Additional material by Zahra Koochak and Jeremy Irvin.
April 9, 2021

N.B.
This review assumes basic background in probability (events, sample space, probability axioms etc.) and focuses on concepts useful to CS229 and to machine learning in general.

Conditional Probability and Bayes’ Rule
For any events A, B such that P(B) ≠ 0, we define:

    P(A | B) := P(A ∩ B) / P(B)

Let's apply conditional probability to obtain Bayes' Rule!

    P(B | A) = P(B ∩ A) / P(A)
             = P(A ∩ B) / P(A)
             = P(A | B) P(B) / P(A)

Conditioned Bayes' Rule: given events A, B, C,

    P(A | B, C) = P(B | A, C) P(A | C) / P(B | C)
See Appendix for proof 🙂

Law of Total Probability
Let B_1, …, B_n be n disjoint events whose union is the entire sample space. Then, for any event A,

    P(A) = Σ_{i=1}^{n} P(A ∩ B_i)
         = Σ_{i=1}^{n} P(A | B_i) P(B_i)

We can then write Bayes' Rule as:

    P(B_k | A) = P(B_k) P(A | B_k) / P(A)
               = P(B_k) P(A | B_k) / Σ_{i=1}^{n} P(A | B_i) P(B_i)

Example
Treasure chest A holds 100 gold coins. Treasure chest B holds 60 gold and 40 silver coins.
Choose a treasure chest uniformly at random, and pick a coin from that chest uniformly at random. If the coin is gold, then what is the probability that you chose chest A?¹
Solution:
    P(A | G) = P(A) P(G | A) / [P(A) P(G | A) + P(B) P(G | B)]
             = (0.5 × 1) / (0.5 × 1 + 0.5 × 0.6)
             = 0.625
¹ Question based on slides by Koochak & Irvin
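This posterior is easy to check numerically. Below is a minimal sketch in plain Python of the same computation; the dictionary names are just illustrative.

```python
# Prior over chests and likelihood of drawing gold from each chest.
prior = {"A": 0.5, "B": 0.5}
p_gold_given = {"A": 1.0, "B": 0.6}

# Law of Total Probability: P(G) = sum over chests of P(G | chest) P(chest)
p_gold = sum(p_gold_given[c] * prior[c] for c in prior)

# Bayes' Rule: P(A | G) = P(A) P(G | A) / P(G)
print(prior["A"] * p_gold_given["A"] / p_gold)  # 0.625
```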

Chain Rule
For any n events A1, …, An, the joint probability can be expressed as a product of conditionals:
    P(A_1 ∩ A_2 ∩ … ∩ A_n) = P(A_1) P(A_2 | A_1) P(A_3 | A_2 ∩ A_1) … P(A_n | A_{n−1} ∩ A_{n−2} ∩ … ∩ A_1)

Independence
Events A, B are independent if
    P(A ∩ B) = P(A) P(B)

We denote this as A ⊥ B. From this, we know that if A ⊥ B,

    P(A | B) = P(A ∩ B) / P(B) = P(A) P(B) / P(B) = P(A)
Implication: If two events are independent, observing one event does not change the probability that the other event occurs.
In general: events A_1, …, A_n are mutually independent if

    P(⋂_{i∈S} A_i) = ∏_{i∈S} P(A_i)

for any subset S ⊆ {1, …, n}.

Random Variables
• A random variable X maps outcomes to real values.
• X takes on values in Val(X) ⊆ R.
• X = k is the event that random variable X takes on value k.

Discrete RVs:
• Val(X) is a set
• P(X = k) can be nonzero

Continuous RVs:
• Val(X) is a range
• P(X = k) = 0 for all k. P(a ≤ X ≤ b) can be nonzero.

Probability Mass Function (PMF)
Given a discrete RV X, a PMF maps values of X to probabilities:

    p_X(x) := P(X = x)

For a valid PMF, Σ_{x∈Val(X)} p_X(x) = 1.

Cumulative Distribution Function (CDF)
A CDF maps a continuous RV to a probability (i.e. R → [0, 1]):

    F_X(x) := P(X ≤ x)

A CDF must fulfill the following:
• lim_{x→−∞} F_X(x) = 0
• lim_{x→∞} F_X(x) = 1
• If a ≤ b, then F_X(a) ≤ F_X(b) (i.e. the CDF must be nondecreasing)

Also note: P(a ≤ X ≤ b) = F_X(b) − F_X(a).

Probability Density Function (PDF)
The PDF of a continuous RV is simply the derivative of the CDF:

    f_X(x) := dF_X(x)/dx

Thus,

    P(a ≤ X ≤ b) = F_X(b) − F_X(a) = ∫_a^b f_X(x) dx

A valid PDF must be such that
• f_X(x) ≥ 0 for all real numbers x
• ∫_{−∞}^{∞} f_X(x) dx = 1
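As a quick numerical illustration of the relation P(a ≤ X ≤ b) = F_X(b) − F_X(a) = ∫_a^b f_X(x) dx, here is a small sketch assuming scipy is available; the standard Gaussian and the interval [−1, 2] are arbitrary choices.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

X = stats.norm(loc=0, scale=1)   # standard Gaussian, chosen only for illustration
a, b = -1.0, 2.0

via_cdf = X.cdf(b) - X.cdf(a)        # F_X(b) - F_X(a)
via_pdf, _ = quad(X.pdf, a, b)       # numerically integrate f_X over [a, b]
print(np.isclose(via_cdf, via_pdf))  # True
```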

Expectation
Let g be an arbitrary real-valued function.
• If X is a discrete RV with PMF p_X:

    E[g(X)] := Σ_{x∈Val(X)} g(x) p_X(x)

• If X is a continuous RV with PDF f_X:

    E[g(X)] := ∫_{−∞}^{∞} g(x) f_X(x) dx

Intuitively, expectation is a weighted average of the values of g(x), weighted by the probability of x.
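For the discrete case, the definition translates directly into a one-line sum; a minimal sketch where the PMF and g are made up for illustration:

```python
# A toy PMF over Val(X) = {0, 1, 2} (values chosen arbitrarily).
pmf = {0: 0.2, 1: 0.5, 2: 0.3}
g = lambda x: x ** 2

# E[g(X)] = sum over x of g(x) * p_X(x)
print(sum(g(x) * p for x, p in pmf.items()))  # 0.2*0 + 0.5*1 + 0.3*4 = 1.7
```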

Properties of Expectation
For any constant a ∈ R and arbitrary real function f:
• E[a] = a
• E[a f(X)] = a E[f(X)]

Linearity of Expectation
Given n real-valued functions f_1(X), …, f_n(X),

    E[Σ_{i=1}^{n} f_i(X)] = Σ_{i=1}^{n} E[f_i(X)]

Law of Total Expectation
Given two RVs X, Y:

    E[E[X | Y]] = E[X]

N.B. E[X | Y] = Σ_{x∈Val(X)} x p_{X|Y}(x|y) is a function of Y. See Appendix for details 🙂

Example of Law of Total Expectation
El Goog sources two batteries, A and B, for its phone. A phone with battery A runs on average 12 hours on a single charge, but only 8 hours on average with battery B. El Goog puts battery A in 80% of its phones and battery B in the rest. If you buy a phone from El Goog, how many hours do you expect it to run on a single charge?
Solution: Let L be the time your phone runs on a single charge, and let X ∈ {A, B} be its battery type. We know the following:
• p_X(A) = 0.8, p_X(B) = 0.2,
• E[L | A] = 12, E[L | B] = 8.

Then, by the Law of Total Expectation,

    E[L] = E[E[L | X]] = Σ_{X∈{A,B}} E[L | X] p_X(X)
         = E[L | A] p_X(A) + E[L | B] p_X(B)
         = 12 × 0.8 + 8 × 0.2 = 11.2
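A Monte Carlo sanity check of this answer, as a sketch assuming numpy. The 80/20 split and the conditional means come from the problem; the exponential lifetime model is an arbitrary assumption made only so there is something to simulate (any distribution with the same conditional means gives the same E[L]).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Battery type for each phone: A with probability 0.8, B with probability 0.2.
battery = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])
mean_life = np.where(battery == "A", 12.0, 8.0)

# Assumed (hypothetical) lifetime model: exponential with the stated conditional means.
life = rng.exponential(scale=mean_life)

print(life.mean())  # ≈ 11.2 = E[E[L | X]]
```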

Variance
The variance of a RV X measures how concentrated the distribution of X is around its mean.
    Var(X) := E[(X − E[X])²] = E[X²] − E[X]²

Interpretation: Var(X) is the expected squared deviation of X from E[X].

Properties: For any constant a ∈ R and real-valued function f(X):
• Var[a] = 0
• Var[a f(X)] = a² Var[f(X)]
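These identities are easy to check empirically; a small sketch assuming numpy, where the Gaussian X and the constant a = 5 are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=1_000_000)  # arbitrary RV X with Var(X) = 4
a = 5.0

print(np.var(x), np.mean(x**2) - np.mean(x)**2)  # Var(X) = E[X^2] - E[X]^2, both ≈ 4
print(np.var(a * x) / np.var(x))                 # Var[aX] / Var[X] ≈ a^2 = 25
print(np.var(np.full(100, a)))                   # Var[a] = 0 for a constant
```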

Example Distributions
Distribution      | PDF or PMF                                            | Mean      | Variance
------------------+-------------------------------------------------------+-----------+------------
Binomial(n, p)    | (n choose k) p^k (1 − p)^{n−k}  for k = 0, 1, …, n    | np        | np(1 − p)
Bernoulli(p)      | p if x = 1;  1 − p if x = 0                           | p         | p(1 − p)
Geometric(p)      | p(1 − p)^{k−1}  for k = 1, 2, …                       | 1/p       | (1 − p)/p²
Poisson(λ)        | e^{−λ} λ^k / k!  for k = 0, 1, …                      | λ         | λ
Uniform(a, b)     | 1/(b − a)  for all x ∈ (a, b)                         | (a + b)/2 | (b − a)²/12
Gaussian(μ, σ²)   | (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)}  for all x ∈ (−∞, ∞)   | μ         | σ²
Exponential(λ)    | λe^{−λx}  for all x ≥ 0, λ > 0                        | 1/λ       | 1/λ²
Read the review handout or Sheldon Ross for details.²

² Table reproduced from Maleki & Do's review handout by Koochak & Irvin
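The means and variances in the table can be spot-checked against scipy.stats; a sketch with arbitrary parameter values:

```python
from scipy import stats

n, p, lam, a, b, mu, sigma2 = 10, 0.3, 2.0, -1.0, 4.0, 1.5, 0.49

checks = [
    ("Binomial",    stats.binom(n, p),                 n * p,       n * p * (1 - p)),
    ("Bernoulli",   stats.bernoulli(p),                p,           p * (1 - p)),
    ("Geometric",   stats.geom(p),                     1 / p,       (1 - p) / p**2),
    ("Poisson",     stats.poisson(lam),                lam,         lam),
    ("Uniform",     stats.uniform(loc=a, scale=b - a), (a + b) / 2, (b - a)**2 / 12),
    ("Gaussian",    stats.norm(mu, sigma2**0.5),       mu,          sigma2),
    ("Exponential", stats.expon(scale=1 / lam),        1 / lam,     1 / lam**2),
]
for name, dist, mean, var in checks:
    # Compare the table's closed-form mean/variance with scipy's values.
    print(name, abs(dist.mean() - mean) < 1e-9, abs(dist.var() - var) < 1e-9)
```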

Joint and Marginal Distributions
• Joint PMF for discrete RVs X, Y:

      p_{XY}(x, y) = P(X = x, Y = y)

  Note that Σ_{x∈Val(X)} Σ_{y∈Val(Y)} p_{XY}(x, y) = 1

• Marginal PMF of X, given the joint PMF of X, Y:

      p_X(x) = Σ_y p_{XY}(x, y)

• Joint PDF for continuous X, Y:

      f_{XY}(x, y) = ∂²F_{XY}(x, y) / ∂x ∂y

  Note that ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{XY}(x, y) dx dy = 1

• Marginal PDF of X, given the joint PDF of X, Y:

      f_X(x) = ∫_{−∞}^{∞} f_{XY}(x, y) dy
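For discrete RVs, marginalization is just summing the joint PMF over the other variable; a minimal numpy sketch with a made-up joint table:

```python
import numpy as np

# A made-up joint PMF p_XY(x, y): rows index x in {0, 1}, columns index y in {0, 1, 2}.
p_xy = np.array([[0.10, 0.20, 0.10],
                 [0.25, 0.15, 0.20]])
assert np.isclose(p_xy.sum(), 1.0)  # valid joint PMF: entries sum to 1

p_x = p_xy.sum(axis=1)  # marginal PMF of X: sum over y
p_y = p_xy.sum(axis=0)  # marginal PMF of Y: sum over x
print(p_x, p_y)         # [0.4 0.6] [0.35 0.35 0.3]
```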

Joint and Marginal Distributions for Multiple RVs
• Joint PMF for discrete RVs X_1, …, X_n:

      p(x_1, …, x_n) = P(X_1 = x_1, …, X_n = x_n)

  Note that Σ_{x_1} Σ_{x_2} … Σ_{x_n} p(x_1, …, x_n) = 1

• Marginal PMF of X_1, given the joint PMF of X_1, …, X_n:

      p_{X_1}(x_1) = Σ_{x_2} … Σ_{x_n} p(x_1, …, x_n)

• Joint PDF for continuous RVs X_1, …, X_n:

      f(x_1, …, x_n) = ∂ⁿF(x_1, …, x_n) / ∂x_1 ∂x_2 … ∂x_n

  Note that ∫_{x_1} ∫_{x_2} … ∫_{x_n} f(x_1, …, x_n) dx_1 … dx_n = 1

• Marginal PDF of X_1, given the joint PDF of X_1, …, X_n:

      f_{X_1}(x_1) = ∫_{x_2} … ∫_{x_n} f(x_1, …, x_n) dx_2 … dx_n

Expectation for multiple random variables
Given two RVs X, Y and a function g : R² → R of X, Y,
• for discrete X, Y:

      E[g(X, Y)] := Σ_{x∈Val(X)} Σ_{y∈Val(Y)} g(x, y) p_{XY}(x, y)

• for continuous X, Y:

      E[g(X, Y)] := ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) f_{XY}(x, y) dx dy

These definitions can be extended to multiple random variables in the same way as on the previous slide. For example, for n continuous RVs X_1, …, X_n and function g : Rⁿ → R:

      E[g(X)] = ∫ … ∫ g(x_1, …, x_n) f_{X_1,…,X_n}(x_1, …, x_n) dx_1 … dx_n

Covariance
Intuitively: covariance measures how much one RV's value tends to move with another RV's value. For RVs X, Y:

    Cov[X, Y] := E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]E[Y]

• If Cov[X, Y] < 0, then X and Y are negatively correlated
• If Cov[X, Y] > 0, then X and Y are positively correlated
• If Cov[X, Y] = 0, then X and Y are uncorrelated

Properties Involving Covariance
• If X ⊥ Y, then E[XY] = E[X]E[Y]. Thus,

      Cov[X, Y] = E[XY] − E[X]E[Y] = 0

  This is unidirectional! Cov[X, Y] = 0 does not imply X ⊥ Y.

• Variance of a sum of two variables:

      Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y]

  i.e. if X ⊥ Y, then Var[X + Y] = Var[X] + Var[Y].

• Special case:

      Cov[X, X] = E[XX] − E[X]E[X] = Var[X]
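These properties can be verified empirically; a sketch assuming numpy, where the particular joint distribution of X and Y is an arbitrary construction:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1_000_000)
y = 0.5 * x + rng.normal(size=1_000_000)  # Y built to be positively correlated with X

cov_xy = np.mean(x * y) - np.mean(x) * np.mean(y)  # Cov[X, Y] = E[XY] - E[X]E[Y]
print(cov_xy)                                      # ≈ 0.5 > 0: positively correlated

# Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y]
print(np.var(x + y), np.var(x) + np.var(y) + 2 * cov_xy)

# Cov[X, X] = Var[X]
print(np.mean(x * x) - np.mean(x) ** 2, np.var(x))
```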

Conditional distributions for RVs
Works the same way with RVs as with events:
• For discrete X, Y:

      p_{Y|X}(y|x) = p_{XY}(x, y) / p_X(x)

• For continuous X, Y:

      f_{Y|X}(y|x) = f_{XY}(x, y) / f_X(x)

• In general, for continuous X_1, …, X_n:

      f_{X_1|X_2,…,X_n}(x_1 | x_2, …, x_n) = f_{X_1,X_2,…,X_n}(x_1, x_2, …, x_n) / f_{X_2,…,X_n}(x_2, …, x_n)

Bayes’ Rule for RVs
Also works the same way for RVs as with events:
• For discrete X, Y:

      p_{Y|X}(y|x) = p_{X|Y}(x|y) p_Y(y) / Σ_{y′∈Val(Y)} p_{X|Y}(x|y′) p_Y(y′)

• For continuous X, Y:

      f_{Y|X}(y|x) = f_{X|Y}(x|y) f_Y(y) / ∫_{−∞}^{∞} f_{X|Y}(x|y′) f_Y(y′) dy′

Chain Rule for RVs
Also works the same way as with events:
    f(x_1, x_2, …, x_n) = f(x_1) f(x_2|x_1) … f(x_n|x_1, x_2, …, x_{n−1})
                        = f(x_1) ∏_{i=2}^{n} f(x_i | x_1, …, x_{i−1})

Independence for RVs
• For X ⊥ Y to hold, it must be that F_{XY}(x, y) = F_X(x) F_Y(y) FOR ALL VALUES of x, y.
• Since f_{Y|X}(y|x) = f_Y(y) if X ⊥ Y, the chain rule for mutually independent X_1, …, X_n is:

      f(x_1, …, x_n) = f(x_1) f(x_2) … f(x_n) = ∏_{i=1}^{n} f(x_i)
(very important assumption for a Naive Bayes classifier!)

Random Vectors
Given n RVs X_1, …, X_n, we can define a random vector X s.t.

    X = [X_1, X_2, …, X_n]^T

Note: all the notions of joint PDF/CDF will apply to X.

Given g : Rⁿ → Rᵐ, we have:

    g(x) = [g_1(x), g_2(x), …, g_m(x)]^T,    E[g(X)] = [E[g_1(X)], E[g_2(X)], …, E[g_m(X)]]^T

Covariance Matrices
For a random vector X ∈ Rⁿ, we define its covariance matrix Σ as the n × n matrix whose ij-th entry contains the covariance between X_i and X_j:

    Σ = [ Cov[X_1, X_1]  …  Cov[X_1, X_n]
          ⋮               ⋱  ⋮
          Cov[X_n, X_1]  …  Cov[X_n, X_n] ]

Applying linearity of expectation and the fact that Cov[X_i, X_j] = E[(X_i − E[X_i])(X_j − E[X_j])], we obtain

    Σ = E[(X − E[X])(X − E[X])^T]

Properties:
• Σ is symmetric and PSD
• If X_i ⊥ X_j for all i ≠ j, then Σ = diag(Var[X_1], …, Var[X_n])
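A sketch (assuming numpy) of estimating Σ from samples of a random vector and checking the properties above; the mean vector and covariance used to generate the samples are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([1.0, -2.0, 0.5])
Sigma_true = np.array([[2.0, 0.3, 0.0],
                       [0.3, 1.0, 0.4],
                       [0.0, 0.4, 1.5]])
X = rng.multivariate_normal(mu, Sigma_true, size=500_000)  # one sample per row

# Sigma = E[(X - E[X])(X - E[X])^T], estimated from samples
centered = X - X.mean(axis=0)
Sigma_hat = centered.T @ centered / len(X)

print(np.allclose(Sigma_hat, Sigma_hat.T))         # symmetric
print(np.all(np.linalg.eigvalsh(Sigma_hat) >= 0))  # PSD: all eigenvalues >= 0
```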

Multivariate Gaussian
The multivariate Gaussian X ∼ N(μ, Σ), X ∈ Rⁿ:

    p(x; μ, Σ) = 1 / ((2π)^{n/2} det(Σ)^{1/2}) · exp( −(1/2) (x − μ)^T Σ^{−1} (x − μ) )

The univariate Gaussian X ∼ N(μ, σ²), X ∈ R, is just the special case of the multivariate Gaussian when n = 1:

    p(x; μ, σ²) = 1 / (σ (2π)^{1/2}) · exp( −(1/(2σ²)) (x − μ)² )

Notice that if Σ ∈ R^{1×1}, then Σ = Var[X_1] = σ², and so
• Σ^{−1} = 1/σ²
• det(Σ)^{1/2} = σ
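The density formula can be checked against scipy's implementation; a sketch where the particular μ, Σ, and test point x are arbitrary:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
x = np.array([0.3, -0.2])

# p(x; mu, Sigma) written out directly from the formula above
n = len(mu)
diff = x - mu
density = np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff) / (
    np.linalg.det(Sigma) ** 0.5 * (2 * np.pi) ** (n / 2)
)

print(np.isclose(density, multivariate_normal(mu, Sigma).pdf(x)))  # True
```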

Some Nice Properties of MV Gaussians
• Marginals and conditionals of a joint Gaussian are Gaussian.
• A d-dimensional Gaussian X ∼ N(μ, Σ = diag(σ_1², …, σ_d²)) is equivalent to a collection of d independent Gaussians X_i ∼ N(μ_i, σ_i²). This results in isocontours aligned with the coordinate axes.
• In general, the isocontours of a MV Gaussian are n-dimensional ellipsoids with principal axes in the directions of the eigenvectors of the covariance matrix Σ (remember, Σ is PSD, so all n eigenvalues are non-negative). The axes' relative lengths depend on the eigenvalues of Σ.

Visualizations of MV Gaussians
Effect of changing variance

Visualizations of MV Gaussians
If Var[X_1] ≠ Var[X_2]:

Visualizations of MV Gaussians
If X1 and X2 are positively correlated:

Visualizations of MV Gaussians
If X1 and X2 are negatively correlated:

Thank you and good luck!
For further reference, consult the following CS229 handouts:
• Probability Theory Review
• The MV Gaussian Distribution
• More on Gaussian Distribution

For a comprehensive treatment, see
• Sheldon Ross: A First Course in Probability

Appendix: More on Total Expectation
Why is E[X | Y] a function of Y? Consider the following:
• E[X | Y = y] is a scalar that only depends on y.
• Thus, E[X | Y] is a random variable that only depends on Y. Specifically, E[X | Y] is a function of Y mapping Val(Y) to the real numbers.

An example: Consider a RV X such that

    X = Y² + ε

where ε ∼ N(0, 1) is a standard Gaussian. Then,
• E[X | Y] = Y²
• E[X | Y = y] = y²

Appendix: More on Total Expectation
A derivation of the Law of Total Expectation for discrete X, Y:³

    E[E[X | Y]] = E[ Σ_x x P(X = x | Y) ]                  (1)
                = Σ_y Σ_x x P(X = x | Y = y) P(Y = y)      (2)
                = Σ_y Σ_x x P(X = x, Y = y)                (3)
                = Σ_x x Σ_y P(X = x, Y = y)                (4)
                = Σ_x x P(X = x) = E[X]                    (5)

where (1), (2), and (5) result from the definition of expectation, (3) results from the definition of conditional probability, (4) swaps the order of summation, and (5) also results from marginalizing out Y.

³ From slides by Koochak & Irvin

Appendix: A proof of Conditioned Bayes Rule
Repeatedly applying the definition of conditional probability, we have:⁴

    P(b|a,c) P(a|c) / P(b|c) = [ P(b,a,c) / P(a,c) ] · [ P(a|c) / P(b|c) ]
                             = [ P(b,a,c) / P(a,c) ] · [ P(a,c) / (P(b|c) P(c)) ]
                             = P(b,a,c) / (P(b|c) P(c))
                             = P(b,a,c) / P(b,c)
                             = P(a|b,c)

⁴ From slides by Koochak & Irvin