https://xkcd.com/1338/
Mixture Models and Expectation Maximisation
(continuing: EM, part 2, after the break)
Clustering and density approximation
Gaussian mixture models
Expectation Maximisation
Assignment
Discrete latent variables
Expectation maximization: a technique for finding maximum likelihood estimators in latent variable models; or, how to solve a hard-looking problem by solving several easy-looking pieces.
Is there something more in the familiar bell curves?
https://en.wikipedia.org/wiki/Normal_distribution
A “Wallaby” distribution
Any smooth density can be approximated to arbitrary precision by a Gaussian mixture model with enough components.
Park, J. and Sandberg, I.W., 1991. Universal approximation using radial-basis-function networks. Neural Computation, 3(2), pp. 246-257.
McLachlan, G. and Peel, D. Finite Mixture Models. John Wiley & Sons, 2004.
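To make the approximation claim concrete, here is a minimal sketch (the gamma target density, the component counts, and the sample size are illustrative choices, not from the slides): fit GMMs with increasing K to samples from a skewed, clearly non-Gaussian density and watch the worst-case density error shrink.

```python
# Sketch: approximate a skewed, non-Gaussian density with a GMM.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

target = stats.gamma(a=2.0)                        # skewed target density
x = target.rvs(size=5000, random_state=0)[:, None]

grid = np.linspace(0.01, 10.0, 200)[:, None]
true_pdf = target.pdf(grid.ravel())

for k in (1, 2, 4, 8):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(x)
    approx_pdf = np.exp(gmm.score_samples(grid))   # mixture density on the grid
    print(k, np.max(np.abs(approx_pdf - true_pdf)))  # max error shrinks with k
```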
01: K-Means and Vector Quantization
K-means clustering: representation + loss
Data: points {xn}, n = 1…N; K cluster means {𝝁k}; binary indicators rnk ∈ {0, 1}, with rnk = 1 iff xn is assigned to cluster k.
Loss (distortion measure): J = Σn Σk rnk ‖xn − 𝝁k‖²
K-means clustering:
If {𝝁k} are given, we only need to determine rnk:
rnk = 1 if k = argminj ‖xn − 𝝁j‖², and 0 otherwise (assign each xn to its nearest mean).
K-means clustering:
Assume rnk is known; what would {𝝁k} be?
𝝁k = (Σn rnk xn) / (Σn rnk), the mean of the points assigned to cluster k.
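A minimal NumPy sketch of these two closed-form sub-problems (the array names X, mu, z are mine, not from the slides):

```python
import numpy as np

def assign(X, mu):
    """Given means mu (K, D), pick the nearest mean for each point: rnk as indices."""
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # (N, K) squared distances
    return d2.argmin(axis=1)                              # cluster index per point

def update_means(X, z, K):
    """Given assignments z (N,), the optimal mu_k is the mean of the points
    assigned to cluster k (assumes every cluster is non-empty)."""
    return np.stack([X[z == k].mean(axis=0) for k in range(K)])
```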
K-means clustering:
Expectation step: assign data points to clusters, determine rnk.
Maximisation step: re-compute the cluster means, update {𝝁k}.
Where to start? Randomly pick K data points as the initial means {𝝁k}.
When to stop? When the assignments rnk (equivalently, the loss J) stop changing.
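Putting the two steps together with this initialisation and stopping rule (a sketch building on the assign helper above; the empty-cluster re-initialisation anticipates the edge case discussed below):

```python
def kmeans(X, K, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=K, replace=False)]  # start: K random data points
    z = np.full(len(X), -1)
    for _ in range(max_iter):
        z_new = assign(X, mu)                          # E step: nearest-mean labels
        if np.array_equal(z_new, z):                   # stop: assignments unchanged
            break
        z = z_new
        for k in range(K):                             # M step: recompute means
            pts = X[z == k]
            # empty cluster: cannot update mu_k, so re-initialise it instead
            mu[k] = pts.mean(axis=0) if len(pts) else X[rng.integers(len(X))]
    return mu, z
```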
Why does this work? Each step can only decrease J, and there are finitely many possible assignments, so the loop must converge.
E-step detail: find rnk by assigning each xn to its nearest mean 𝝁k.
Edge case: if Nk = Σn rnk = 0 for some k (an empty cluster), we can't update 𝝁k; re-initialise it, e.g. at a randomly chosen data point.
K-means for illustrating image segmentation and data compression (vector quantization):
Each pixel is an (R, G, B) triple of 8-bit ints, each channel in [0, 255], so about 16 million possible colours (24-bit RGB). Quantize by replacing each pixel with the nearest of K cluster-centre colours, storing one small index per pixel plus the K-colour palette.
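A sketch of the compression idea (the file name and K = 16 are illustrative; uses the kmeans function sketched above and Pillow for image I/O):

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.png").convert("RGB"), dtype=float)  # (H, W, 3)
pixels = img.reshape(-1, 3)              # one row per pixel: (R, G, B) in [0, 255]

K = 16                                   # 16 palette colours -> 4 bits per pixel
palette, z = kmeans(pixels, K)           # cluster the colours
quantized = palette[z].reshape(img.shape)  # replace pixels by centre colours
Image.fromarray(quantized.astype(np.uint8)).save("quantized.png")
```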
Mixture Models and Expectation Maximisation
Clustering and density approximation
Gaussian Mixture Models (GMM)
Mixture distributions are formed by taking linear combinations of basic distributions.
p(x) = Σk πk N(x | 𝝁k, 𝜮k)   (9.7)
GMM as a latent variable model: introduce a binary 1-of-K latent variable z, with p(x) = Σz p(z) p(x | z); marginalising z recovers (9.7).
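The latent-variable view suggests ancestral sampling (a sketch with made-up parameters): draw zn from Categorical(𝛑), then xn from the active component's Gaussian.

```python
import numpy as np

pi = np.array([0.5, 0.3, 0.2])                           # mixing coefficients
mu = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])      # component means
cov = np.stack([np.eye(2)] * 3)                          # component covariances

rng = np.random.default_rng(0)
z = rng.choice(3, size=1000, p=pi)                       # latent: active component
x = np.stack([rng.multivariate_normal(mu[k], cov[k]) for k in z])
```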
Conditional and marginal distributions:
p(zk = 1) = πk   (prior probability that zk is active)
p(x | zk = 1) = N(x | 𝝁k, 𝜮k)   (x conditioned on zk active is Gaussian)
p(x) = Σk πk N(x | 𝝁k, 𝜮k)   (marginal)
Responsibility, as a function of x: 𝛾(zk) = p(zk = 1 | x) = πk N(x | 𝝁k, 𝜮k) / Σj πj N(x | 𝝁j, 𝜮j)
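A numerically careful sketch of this responsibility formula, i.e. (9.13) (the function name is mine): work in log space and normalise with log-sum-exp to avoid underflow.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def responsibilities(X, pi, mu, cov):
    """gamma[n, k] = p(znk = 1 | xn), Bishop (9.13), computed in log space."""
    log_p = np.stack(
        [np.log(pi[k]) + multivariate_normal.logpdf(X, mu[k], cov[k])
         for k in range(len(pi))], axis=1)                     # (N, K)
    return np.exp(log_p - logsumexp(log_p, axis=1, keepdims=True))
```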
Maximum likelihood estimation
Observed: {xn}; Unobserved/latent: {zn}
Need to estimate: 𝛑, 𝝁, 𝜮 by maximising ln p(X | 𝛑, 𝝁, 𝜮) = Σn ln Σk πk N(xn | 𝝁k, 𝜮k)   (9.14)
Issues in maximum likelihood
Parameter-space symmetries, or identifiability problem: permuting the component labels gives K! equivalent solutions.
Singularity due to a component collapsing onto one data point: its variance shrinks to zero and the likelihood diverges.
Challenges in maximising the likelihood function
The whole problem, solving jointly for 𝛑, 𝝁, 𝜮, is hard. Divide-and-conquer: solve for 𝝁k with everything else held fixed.
Differentiate (9.14) wrt 𝝁k: only one term of the inner sum depends on 𝝁k.
Setting the derivative to zero gives 𝝁k = (1/Nk) Σn 𝛾nk xn, where Nk = Σn 𝛾nk is the effective number of points in cluster k.
Solve for 𝜋k (with a Lagrange multiplier enforcing Σk πk = 1): πk = Nk / N.
But 𝛾nk is computed from {xn} and the current 𝛑, 𝝁, 𝜮… what happened? The "solution" is circular.
Given 𝛾nk, solve for 𝜋k, 𝝁k, 𝜮k; given 𝜋k, 𝝁k, 𝜮k, re-compute 𝛾nk. Iterate between the two.
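The "given 𝛾nk" half has closed-form updates, (9.17), (9.19) and (9.22); a sketch (array shapes assumed: X is (N, D), gamma is (N, K)):

```python
def m_step(X, gamma):
    """Given responsibilities, re-estimate pi, mu, cov in closed form."""
    N, D = X.shape
    Nk = gamma.sum(axis=0)                     # effective points per component
    pi = Nk / N                                # mixing coefficients (9.22)
    mu = (gamma.T @ X) / Nk[:, None]           # weighted means (9.17)
    cov = np.empty((len(Nk), D, D))
    for k in range(len(Nk)):
        d = X - mu[k]
        cov[k] = (gamma[:, k, None] * d).T @ d / Nk[k]  # weighted covariances (9.19)
    return pi, mu, cov
```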
In the expectation step, or E step, we use the current values for the parameters to evaluate the posterior probabilities, or responsibilities, given by (9.13). We then use these probabilities in the maximization step, or M step, to re-estimate the means, covariances, and mixing coefficients using the results (9.17), (9.19), and (9.22).
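Combining the two halves into the full EM loop (a sketch using the responsibilities and m_step helpers above; the initialisation and the log-likelihood stopping test are my choices):

```python
def log_likelihood(X, pi, mu, cov):
    """ln p(X | pi, mu, cov) = sum_n ln sum_k pi_k N(xn | mu_k, cov_k), (9.14)."""
    log_p = np.stack(
        [np.log(pi[k]) + multivariate_normal.logpdf(X, mu[k], cov[k])
         for k in range(len(pi))], axis=1)
    return logsumexp(log_p, axis=1).sum()

def em_gmm(X, K, max_iter=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K)                             # uniform mixing to start
    mu = X[rng.choice(len(X), size=K, replace=False)]    # means at random points
    cov = np.stack([np.cov(X.T) + 1e-6 * np.eye(X.shape[1])] * K)
    old_ll = -np.inf
    for _ in range(max_iter):
        gamma = responsibilities(X, pi, mu, cov)         # E step, (9.13)
        pi, mu, cov = m_step(X, gamma)                   # M step, (9.17/9.19/9.22)
        ll = log_likelihood(X, pi, mu, cov)
        if ll - old_ll < tol:                            # stop: likelihood converged
            break
        old_ll = ll
    return pi, mu, cov
```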
Responsibilities: connection to K-means.
𝛾nk = p(znk = 1 | xn) is the (soft) probability that xn belongs to cluster k, in place of the hard assignments rnk of K-means.
Mixture Models and Expectation Maximisation
Clustering and density approximation
Gaussian mixture models
Expectation Maximisation
Discrete latent variables
Expectation maximization: a technique for finding maximum likelihood estimators in latent variable models; or, how to solve a hard-looking problem by solving several easy-looking pieces.