Tutorial on Maximum Likelihood Estimation: Parametric Density Estimation
Sudhir B Kylasa 03/13/2014
1 Motivation
Suppose one wishes to determine just how biased an unfair coin is. Call the probability of tossing a HEAD p. The goal then is to determine p.
Also suppose the coin is tossed 80 times: i.e., the sample might be something like x1 = H, x2 = T, ..., x80 = T, and the count of the number of HEADS, "H", is observed.
The probability of tossing TAILS is 1 − p. Suppose the outcome is 49 HEADS and 31 TAILS, and suppose the coin was taken from a box containing three coins: one which gives HEADS with probability p = 1/3, one which gives HEADS with probability p = 1/2, and another which gives HEADS with probability p = 2/3. The coins have lost their labels, so which one it was is unknown. Clearly, the probability mass function for this experiment is a binomial distribution with sample size equal to 80 and number of successes equal to 49, but with different values of p. We have the following probability mass functions for each of the cases mentioned above:
Pr(H = 49 | p = 1/3) = C(80, 49) (1/3)^49 (1 − 1/3)^31 ≈ 0.000,    (1)
Pr(H = 49 | p = 1/2) = C(80, 49) (1/2)^49 (1 − 1/2)^31 ≈ 0.012,    (2)
Pr(H = 49 | p = 2/3) = C(80, 49) (2/3)^49 (1 − 2/3)^31 ≈ 0.054.    (3)
Based on the above equations, we conclude that the coin with p = 2/3 is the most likely to have produced the observed data.
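These three values are easy to reproduce numerically; the short Python sketch below evaluates the same probabilities with scipy.stats.binom.pmf:

```python
# Evaluate Pr(H = 49 | p) for the three candidate coins.
from scipy.stats import binom

n, heads = 80, 49
for p in (1/3, 1/2, 2/3):
    print(f"Pr(H = 49 | p = {p:.3f}) = {binom.pmf(heads, n, p):.4f}")
# p = 1/3 -> ~0.000, p = 1/2 -> ~0.012, p = 2/3 -> ~0.054
```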
2 Definition
The generic situation is that we observe an n-dimensional random vector X with probability density (or mass) function f(x; θ). It is assumed that θ is a fixed, unknown constant belonging to the set Θ ⊂ Rn.
For x ∈ Rn, the likelihood function of θ is defined as
L(θ/x) = f(x/θ). (4)
x is regarded as fixed, and θ is regarded as the variable for L. The log-likelihood function is defined as l(θ/x) = log L(θ/x).
The maximum likelihood estimate (or MLE) is the value θ̂ = θ̂(x) ∈ Θ maximizing L(θ/x), provided it exists:

θ̂(x) = arg max_{θ ∈ Θ} L(θ/x)    (5)
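To make this concrete, the short Python sketch below (purely illustrative; the observed value x = 1.7 is made up) evaluates L(θ/x) = f(x/θ) for a single observation from a Gaussian density with known unit variance, treating the mean θ as the variable while x stays fixed:

```python
import numpy as np
from scipy.stats import norm

x = 1.7                          # a fixed, observed data point (made-up value)
thetas = np.linspace(-2, 4, 7)   # candidate values of the unknown mean theta
likelihood = norm.pdf(x, loc=thetas, scale=1.0)   # L(theta | x) = f(x | theta)
for t, L in zip(thetas, likelihood):
    print(f"theta = {t:+.1f}  L(theta|x) = {L:.4f}")
# The likelihood is largest when theta is closest to the observed x.
```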
3 What is a Likelihood Function?
If the probability of an event X dependent on model parameters p is written as

P(X|p),

then we talk about the likelihood

L(p|X),

that is, the likelihood of the parameters given the data.
For most sensible models, we will find that certain data are more probable than other data. The aim of maximum likelihood estimation is to find the parameter value(s) that make the observed data most likely. This is because the likelihood of the parameters given the data is defined to be equal to the probability of the data given the parameters.
If we were in the business of making predictions based on a set of solid assumptions, then we would be interested in probabilities – the probability of certain outcomes occurring or not occurring.
However, in the case of data analysis, we have already observed all the data: once they have been observed they are fixed; there is no 'probabilistic' part to them anymore (the word data comes from the Latin word meaning 'given'). We are much more interested in the likelihood of the model parameters that underlie the fixed data.
The following is the relation between probability and likelihood:

Probability: knowing parameters → prediction of outcomes
Likelihood: observation of data → estimation of parameters

4 Method
Maximum likelihood (ML) estimates need not exist, nor need they be unique. In this section, we show how to compute ML estimates when they exist and are unique. For computational convenience, the ML estimate is obtained by maximizing the log-likelihood function, log L(θ/x). This works because log L(θ/x) and L(θ/x) are monotonically related, so the same ML estimate is obtained by maximizing either one. Assuming the log-likelihood function is differentiable, if θ_MLE exists, it must satisfy the following partial differential equation, known as the likelihood equation:
d/dθ (log L(θ/x)) = 0    (6)
at θ = θ_MLE. This is because at a maximum or minimum of a continuously differentiable function, the first derivative vanishes.
The likelihood equation represents a necessary condition for the existence of an MLE. An additional condition must also be satisfied to ensure that log L(θ/x) is a maximum and not a minimum, since the first derivative alone cannot reveal this. To be a maximum, the log-likelihood function should be concave in a neighborhood of θ_MLE. This can be checked by calculating the second derivatives of the log-likelihood and verifying that they are negative at θ = θ_MLE:
d²/dθ² (log L(θ/x)) < 0    (7)
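In practice the likelihood equation often has no closed-form solution, and the maximization is carried out numerically. The following Python sketch (a minimal illustration, reusing the coin counts from Section 1) maximizes the binomial log-likelihood by minimizing its negative with scipy.optimize.minimize_scalar:

```python
import numpy as np
from scipy.optimize import minimize_scalar

heads, n = 49, 80   # counts from the motivating example in Section 1

def neg_log_likelihood(p):
    # -log L(p | data), dropping the constant term log C(n, heads)
    return -(heads * np.log(p) + (n - heads) * np.log(1 - p))

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x)        # ~0.6125, i.e. heads / n
```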
5 Properties
Some general properties (advantages and disadvantages) of the Maximum Likelihood Estimate are as follows:
1. For large data samples (large N) the likelihood function L approaches a Gaussian distribution.
2. Maximum Likelihood estimates are usually consistent: for large N the estimates converge to the true values of the parameters being estimated.
3. Maximum Likelihood estimates are asymptotically unbiased, although they can be biased for finite sample sizes (Section 6.5 gives an example).
4. The Maximum Likelihood estimate is efficient: asymptotically, the estimates have the smallest variance.
5. The Maximum Likelihood estimate is sufficient: it uses all the information in the observations.
6. In many standard problems the Maximum Likelihood solution is unique (although, as noted in Section 4, this need not hold in general).
On the other hand, we must know the correct probability distribution for the problem at hand.
6 Numerical examples using Maximum Likelihood Estimation
In the following subsections, we discuss applications of the MLE procedure to estimating the unknown parameters of several common density functions.
6.1 Estimating prior probability using MLE
Consider a two-class classification problem, with classes (ω1, ω2) and let Prob(ω1) = p and Prob(ω2) = 1 − p (here p is the unknown parameter). By using MLE, we can estimate p as follows:
Let the sample be D = (x1, x2, ..., xN). Let ω_{i_j} denote the class of the feature vector x_j (N is the sample size and N1 is the number of feature vectors belonging to class ω1). Also assume that the samples x1, x2, ..., xN are independent events. Then we have the following equations.
Prob(D/p) = ∏_{j=1}^{N} Prob(ω_{i_j}/p)        (by independence of the feature vectors)
          = p^{N1} (1 − p)^{N−N1}

Please note that Prob(D/p) is an infinitely differentiable function of p, so its local maxima lie where the derivative is zero:

d/dp [ p^{N1} (1 − p)^{N−N1} ] = 0
N1 p^{N1−1} (1 − p)^{N−N1} − (N − N1) (1 − p)^{N−N1−1} p^{N1} = 0

Solving the above equation for p, we get the following: either

p^{N1−1} (1 − p)^{N−N1−1} = 0    (8)

or

N1 (1 − p) = (N − N1) p    (9)
So p is either 0 or 1 by eq. 8, or p = N1/N by eq. 9. Hence the likelihood is maximized at p̂ = N1/N: taking the observed class frequencies as the class probabilities is the maximum likelihood choice.
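As a quick numerical check of this result (with made-up counts N and N1), one can evaluate p^{N1}(1 − p)^{N−N1} on a grid of p values and confirm that it peaks at N1/N:

```python
import numpy as np

N, N1 = 200, 140                       # made-up sample counts
p_grid = np.linspace(0.01, 0.99, 981)  # grid of candidate p values, step 0.001
likelihood = p_grid**N1 * (1 - p_grid)**(N - N1)
print(p_grid[np.argmax(likelihood)])   # ~0.70, i.e. N1 / N
```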
6.2 Estimating μ of a Gaussian distribution when Σ is known
In this section, we estimate the value of μ (μ_MLE) when the covariance matrix Σ is known, for a Gaussian distribution in an n-dimensional feature space.
ρ(D/μ) = ∏_{j=1}^{N} ρ(x_j/μ)

The log-likelihood function is

log ρ(D/μ) = ∑_{j=1}^{N} log ρ(x_j/μ)
           = ∑_{j=1}^{N} log [ (1 / ((2π)^{n/2} |Σ|^{1/2})) exp( −(1/2) (x_j − μ)^T Σ^{−1} (x_j − μ) ) ]
           = ∑_{j=1}^{N} [ log (1 / ((2π)^{n/2} |Σ|^{1/2})) − (1/2) (x_j − μ)^T Σ^{−1} (x_j − μ) ]    (10)

This is an infinitely differentiable function of the unknown parameter μ. To find the maximum, we set the gradient of eq. 10 to 0:

∇_μ⃗ log ρ(D/μ⃗) = [ d log ρ(D/μ)/dμ_1, d log ρ(D/μ)/dμ_2, ..., d log ρ(D/μ)/dμ_n ]^T = [0, 0, ..., 0]^T    (11)
Differentiating the log-likelihood function yields the following results:
∇_μ⃗ log ρ(D/μ⃗) = −(1/2) ∑_{j=1}^{N} ∇_μ⃗ (x_j − μ)^T Σ^{−1} (x_j − μ),

where the i-th component of ∇_μ⃗ (x_j − μ)^T Σ^{−1} (x_j − μ) is

d/dμ_i (x_j − μ)^T Σ^{−1} (x_j − μ)
    = ( d/dμ_i (x_j − μ) )^T Σ^{−1} (x_j − μ) + (x_j − μ)^T Σ^{−1} ( d/dμ_i (x_j − μ) )
    = 2 ( d/dμ_i (x_j − μ) )^T Σ^{−1} (x_j − μ)
    = −2 e_i^T Σ^{−1} (x_j − μ),

where e_i^T = [0, 0, ..., 1, ..., 0] and the 1 is located at position i. Therefore

∇_μ⃗ log ρ(D/μ⃗) = −(1/2) ∑_{j=1}^{N} [ −2 e_1^T Σ^{−1} (x_j − μ), −2 e_2^T Σ^{−1} (x_j − μ), ..., −2 e_n^T Σ^{−1} (x_j − μ) ]^T
                = ∑_{j=1}^{N} [ e_1^T, e_2^T, ..., e_n^T ]^T Σ^{−1} (x_j − μ).

Notice that the matrix whose rows are e_1^T, e_2^T, ..., e_n^T is the identity matrix.
So the above equation reduces to:
∇_μ⃗ log ρ(D/μ⃗) = ∑_{j=1}^{N} Σ^{−1} (x_j − μ)    (12)

Solving eq. 12 for μ:

∑_{j=1}^{N} Σ^{−1} (x_j − μ) = 0
∑_{j=1}^{N} (x_j − μ) = 0
∑_{j=1}^{N} x_j − N μ = 0
μ_MLE = (1/N) ∑_{j=1}^{N} x_j    (13)
Hence, we have shown that the sample mean is the maximum likelihood estimate of μ for a Gaussian sample with known covariance.
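The following Python sketch (a numerical illustration with simulated 2-D data and a made-up known Σ) confirms that the Gaussian log-likelihood is larger at the sample mean than at nearby perturbed values:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])          # known covariance (made up)
X = rng.multivariate_normal([1.0, -2.0], Sigma, size=500)

mu_mle = X.mean(axis=0)                              # candidate MLE: the sample mean

def log_lik(mu):
    # total log-likelihood of the sample at a given mean vector
    return multivariate_normal.logpdf(X, mean=mu, cov=Sigma).sum()

print(log_lik(mu_mle))                               # largest value
print(log_lik(mu_mle + np.array([0.1, 0.0])))        # slightly lower
print(log_lik(mu_mle + np.array([0.0, -0.1])))       # slightly lower
```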
6.3 Estimating μ and σ² for a 1-D Gaussian distribution using MLE
In this subsection, we estimate μ and σ² for one-dimensional Gaussian data. Here θ = (θ1, θ2) = (μ, σ²) are the unknown parameters. We estimate these parameters using the procedure discussed in Section 4.
The log-likelihood function for this case is given by the following equation:
log ρ(D/μ, σ²) = ∑_{k=1}^{N} log ρ(x_k/μ, σ²)    (14)

Since (x1, x2, ..., xN) are i.i.d., the density function can be written in product form as follows:

ρ(D/μ, σ²) = ρ((x1, x2, ..., xN)/μ, σ²) = ρ(x1/μ, σ²) ρ(x2/μ, σ²) ... ρ(xN/μ, σ²)

For a single sample,

log ρ(x_k/μ, σ²) = log [ (1/√(2πσ²)) exp( −(x_k − μ)²/(2σ²) ) ]
                 = −(1/2) log(2πσ²) − (x_k − μ)²/(2σ²),

so that

log ρ(D/μ, σ²) = −(N/2) log(2πσ²) − (1/(2σ²)) ∑_{k=1}^{N} (x_k − μ)²    (15)

Differentiating eq. 15 and setting the result to 0 yields the following equations:
Setting ∇_{μ,σ²} log ρ(D/μ, σ²) = 0 component-wise gives

d/dμ log ρ(D/μ, σ²) = (1/σ²) ∑_{k=1}^{N} (x_k − μ) = 0
d/dσ² log ρ(D/μ, σ²) = −N/(2σ²) + (1/(2σ⁴)) ∑_{k=1}^{N} (x_k − μ)² = 0    (16)
Solving eq. 16 for μ and σ² yields the following:

∑_{k=1}^{N} (x_k − μ) = 0   ⟹   μ̂ = μ_MLE = (1/N) ∑_{k=1}^{N} x_k

∑_{k=1}^{N} (x_k − μ)² = N σ²   ⟹   σ̂² = σ²_MLE = (1/N) ∑_{k=1}^{N} (x_k − μ̂)²
which are the sample mean and the sample variance of the empirical data.
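A short numerical illustration with simulated 1-D data: np.var with ddof=0 computes exactly the MLE form (1/N) ∑ (x_k − μ̂)² derived above, and scipy.stats.norm.fit returns the same maximum likelihood pair:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=1000)   # made-up 1-D Gaussian sample

mu_hat = x.mean()                    # (1/N) * sum(x_k)
sigma2_hat = x.var(ddof=0)           # (1/N) * sum((x_k - mu_hat)^2), the MLE form

fit_mu, fit_sigma = norm.fit(x)      # scipy's MLE fit returns (mu, sigma)
print(mu_hat, sigma2_hat)
print(fit_mu, fit_sigma**2)          # matches mu_hat and sigma2_hat
```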
In general, for n-dimensional Gaussian data X ∼ N(μ, Σ) with X ∈ Rn, where μ⃗ and Σ are unknown parameters, the MLEs for μ and Σ are as follows:

μ̂⃗ = (1/N) ∑_{k=1}^{N} x⃗_k
Σ̂ = (1/N) ∑_{k=1}^{N} (x⃗_k − μ̂⃗)(x⃗_k − μ̂⃗)^T
6.4 How far does μ̂ deviate from the true μ when MLE is used?
As discussed in Section 6.3, we know that

μ̂ = (1/N) ∑_{k=1}^{N} x_k,

so

E[μ̂] = (1/N) ∑_{k=1}^{N} E[x_k] = μ.

Now we compute the expected value of |μ̂ − μ|² as follows:
E[|μ̂ − μ|²] = E[(μ̂ − μ)(μ̂ − μ)]
            = E[μ̂μ̂ − μμ̂ − μ̂μ + μμ]
            = E[μ̂μ̂] − 2 E[μμ̂] + E[μμ]

Substituting E[μ̂] = μ, we have

E[|μ̂ − μ|²] = E[μ̂μ̂] − μμ
            = E[ (1/N²) ∑_{k=1}^{N} ∑_{j=1}^{N} X_k X_j ] − μμ
            = (1/N²) ∑_{j,k=1}^{N} E[X_j X_k] − μμ

Treating the X_j as independent random variables and splitting the double sum into the N(N − 1) terms with j ≠ k (where E[X_j X_k] = E[X_j] E[X_k] = μμ) and the N terms with j = k (where E[X_j X_k] = E[X²]), we have

E[|μ̂ − μ|²] = (1/N²) ( N(N − 1) μμ + N E[X²] ) − μμ
            = (1/N) E[X²] − (1/N) μμ

But we know that

E[|X − μ|²] = E[(X − μ)(X − μ)] = σ²,  i.e.,  E[X·X] − μ² = σ²,  so  E[X·X] = σ² + μ²

Therefore, we have the following result:

E[|μ̂ − μ|²] = (1/N)(σ² + μ²) − (1/N) μμ = σ²/N

So the expected squared deviation of μ̂ from the true μ is σ²/N: it is proportional to the true variance and inversely proportional to the sample size N.
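A small Monte Carlo check of this result (with made-up values of μ, σ, and N):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, N, trials = 3.0, 2.0, 50, 20000      # made-up parameters

# Sample mean of N observations, repeated over many independent trials.
mu_hats = rng.normal(mu, sigma, size=(trials, N)).mean(axis=1)
print(np.mean((mu_hats - mu) ** 2))             # ~ sigma**2 / N = 0.08
print(sigma**2 / N)
```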
6.5 The estimate for Σ is biased when MLE is used
In the following section, we show that the MLE estimate of the covariance is biased when μ is unknown, and that it equals the true covariance when μ is known.
E[Σ̂] = E[ (1/N) ∑_{k=1}^{N} (X_k − μ)(X_k − μ)^T ]
     = (1/N) ∑_{k=1}^{N} E[ (X_k − μ)(X_k − μ)^T ]
     = (1/N) ∑_{k=1}^{N} E[ X_k X_k^T − X_k μ^T − μ X_k^T + μμ^T ]
     = (1/N) ∑_{k=1}^{N} ( E[X_k X_k^T] − 2μμ^T + μμ^T )
     = E[X X^T] − μμ^T
     = E[(X − μ)(X − μ)^T] = Σ
Thus, if μ is known, the expected value of Σ̂ equals the true Σ. We now show that, when μ is not known and is replaced by μ̂, the expected value of Σ̂ is not equal to the true Σ.
E[Σ̂] = E[ (1/N) ∑_{k=1}^{N} (X_k − μ̂)(X_k − μ̂)^T ]
     = (1/N) ∑_{k=1}^{N} E[ X_k X_k^T − X_k μ̂^T − μ̂ X_k^T + μ̂ μ̂^T ]
     = (1/N) ∑_{k=1}^{N} ( E[X_k X_k^T] − E[X_k μ̂^T] − E[μ̂ X_k^T] + E[μ̂ μ̂^T] ),

where

E[X_k μ̂^T] = E[ X_k ((1/N) ∑_{l=1}^{N} X_l)^T ] = (1/N) ∑_{l=1}^{N} E[X_k X_l^T]
E[μ̂ μ̂^T] = E[ ((1/N) ∑_{l=1}^{N} X_l) ((1/N) ∑_{m=1}^{N} X_m)^T ] = (1/N²) ∑_{l,m=1}^{N} E[X_l X_m^T]

Splitting each summation into the terms with equal indices and the terms with unequal indices (using independence, E[X_l X_m^T] = E[X_l] E[X_m^T] = μμ^T for l ≠ m), we have

E[X_k μ̂^T] = E[μ̂ X_k^T] = (1/N) ( E[X X^T] + (N − 1) μμ^T )
E[μ̂ μ̂^T] = (1/N²) ( N E[X X^T] + N(N − 1) μμ^T ) = (1/N) ( E[X X^T] + (N − 1) μμ^T )

Therefore

E[Σ̂] = E[X X^T] − (2/N) ( E[X X^T] + (N − 1) μμ^T ) + (1/N) ( E[X X^T] + (N − 1) μμ^T )
     = (1 − 1/N) E[X X^T] − ((N − 1)/N) μμ^T
     = ((N − 1)/N) ( E[X X^T] − μμ^T )
     = ((N − 1)/N) E[(X − μ)(X − μ)^T] = ((N − 1)/N) Σ ≠ Σ
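The bias is easy to observe numerically. The sketch below (with a made-up 2-D Σ) averages the MLE covariance estimate over many simulated samples; np.cov with ddof=0 computes exactly the MLE form (1/N) ∑ (x_k − μ̂)(x_k − μ̂)^T, and the average comes out close to ((N − 1)/N) Σ rather than Σ itself:

```python
import numpy as np

rng = np.random.default_rng(3)
Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])      # true covariance (made up)
mu, N, trials = np.array([0.0, 0.0]), 10, 20000

est = np.zeros_like(Sigma)
for _ in range(trials):
    X = rng.multivariate_normal(mu, Sigma, size=N)
    est += np.cov(X, rowvar=False, ddof=0)      # MLE estimate (divides by N)
est /= trials

print(est)                    # close to ((N - 1) / N) * Sigma, not Sigma itself
print((N - 1) / N * Sigma)
```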
6.6 Binomial Distribution
This section discusses the estimation of p in a typical binomial distribution. Assume that the parameter p is unknown in the binomial distribution given by the equation below:
Prob(r/p) = C(n, r) p^r (1 − p)^{n−r}

log Prob(r/p) = log C(n, r) + r log(p) + (n − r) log(1 − p)

log Prob(D/p) = ∑_{i=1}^{N} log Prob(r_i/p)
              = ∑_{i=1}^{N} [ log C(n, r_i) + r_i log(p) + (n − r_i) log(1 − p) ]
To find the maximum, we set the derivative of the log-likelihood function to 0:
d/dp log Prob(D/p) = 0
∑_{i=1}^{N} [ r_i/p + ((n − r_i)/(1 − p)) (−1) ] = 0
(1/p) ∑_{i=1}^{N} r_i = (1/(1 − p)) ∑_{i=1}^{N} (n − r_i)
(1 − p) ∑_{i=1}^{N} r_i = p ∑_{i=1}^{N} (n − r_i)
p = (1/N) ∑_{i=1}^{N} r_i / n
Hence the unknown parameter p equals the average fraction of successes across the observed samples.
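As a final numerical sketch (with made-up values of n, N, and the true p), the estimate p̂ = (1/N) ∑ r_i / n can be checked against simulated binomial counts:

```python
import numpy as np

rng = np.random.default_rng(4)
n, N, p_true = 20, 500, 0.35               # made-up trial size, sample size, true p

r = rng.binomial(n, p_true, size=N)        # observed success counts r_1, ..., r_N
p_hat = r.mean() / n                       # (1/N) * sum(r_i) / n
print(p_hat)                               # close to p_true = 0.35
```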