COMP3670/6670 Assignment 3 – GMM

Enter Your Student ID:

Your Name:

Submit: You can write your answers in this file and submit a single Jupyter Notebook file (.ipynb) on Wattle. Rename the file with your student number, as ‘uXXXXXXX.ipynb’. Alternatively, you can write your programming answers in this file and submit two files: ‘uXXXXXXX.ipynb’ for the programming section and ‘uXXXXXXX.pdf’ for the theory section. Please submit them separately rather than as a zip file.

Enter Discussion Partner IDs Below:



PROGRAMMING SECTION
For all of the following, program the solution yourself. Don’t just call a library function that does the whole question for you, or you’ll get zero (no, that doesn’t mean you can’t use any library functions, but it does mean that you have to show you understand how to compute the answer yourself).

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D #This is for 3d scatter plots.
import math
import random
import functools

We’re going to implement an algorithm to model data with a mixture of gaussians.

Remember the simplified EM algorithm in assignment 2 for k-means? Well, we’re going to implement something similar.

The more complex aspect this time is that your program must also correctly estimate the covariance matrices of the gaussian components involved, as well as their means.

Not only this, but the “distance” of each of your data points (a row $x_i$ of $X$) from a gaussian $k$ is the probability of that point given that gaussian, $p(x_i \mid k)$.

Run the below cell to load the data we’ll be using.

X = np.load("./data.npy")
fig = plt.figure(figsize=(16, 10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:,0], X[:,1], X[:, 2])
plt.show()

First off, some definitions:

$X$ is a dataset containing 800 data points, each with 3 dimensions. The data are modelled by a three-dimensional Gaussian Mixture Model.

$K$ is the total number of gaussians. This is just like the $K$ means you had for k-means in assignment 2. Each gaussian is named $k$, where $k$ is the unique number associated with that gaussian. Each $k$ has a mean and a covariance matrix associated with it. Together these define a gaussian: a formula which can be used to generate data points, or to compute the probability that a given data point was generated by this gaussian (look up generative models if you’re curious to learn more). For generating points, you can use np.random.multivariate_normal().

$\Sigma$ is a list of covariance matrices $\Sigma_k \in \mathbb{R}^{m \times m}$, which are symmetric positive semidefinite matrices.

$\mu$ is a list of means, each one $\mu_k \in \mathbb{R}^{m}$ associated with gaussian $k$.

$N$ is the total number of datapoints.

$responsibilities \in [0, 1]^{N \times K}$ is a matrix. Every column $k$ of $responsibilities$ is associated with the $k^{th}$ gaussian. Each element $r_{ik}$ of the $k^{th}$ column is the posterior probability that the $i^{th}$ datapoint $x_i$ (the $i^{th}$ row of $X$) was generated by gaussian $k$.

$N_k$ is the sum of the $k^{th}$ column of $responsibilities$. In other words, there is one $N_k$ for each gaussian. $N_k = \sum_i r_{ik}$.

$\pi$ is a list of probabilities, one associated with each gaussian. $\pi_k$ is the prior probability of gaussian $k$: $\pi_k = \frac{N_k}{N}$.
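To make the generative view above concrete, here is a tiny illustration of drawing one point from a single 3-dimensional gaussian with np.random.multivariate_normal(), as mentioned in the definition of $K$. The toy_ names are purely illustrative and not part of the assignment API.

# Illustration only: sample one point from a single 3-d gaussian component.
toy_mu = np.zeros(3)                    # a mean in R^3
toy_sigma = np.eye(3)                   # identity covariance (symmetric PSD)
toy_point = np.random.multivariate_normal(toy_mu, toy_sigma)
print(toy_point)                        # one 3-d data point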

TASK 1: Complete the below function to initialise your parameters. You will need to generate:

A list $\mu$ of $K$ means, each of which is in $\mathbb{R}^{m}$.
A list $\Sigma$ of $K$ covariance matrices in $\mathbb{R}^{m \times m}$. Remember, covariance matrices must be symmetric positive semidefinite.
A list $\pi$ of $K$ probabilities $\pi_k$. They should be initialised at $\frac{1}{K}$ (uniformly distributed at first).

Do not hard-code parameters. You should generate them with some randomness, and your code must work for any $m$, $K$ and $N$.

You need to intelligently select $\mu_k$ just like you did with k-means. Poorly initialised parameters may result in an entirely broken EM algorithm.

def initialise_parameters(X, K):
    # YOUR CODE HERE
    return sigma, mu, pi

K = 3  # K was not defined above; 3 matches the three plotting colours used below
sigma, mu, pi = initialise_parameters(X[:, :3], K)
print('\nSigma: \n', sigma)
print('\nMu: \n', mu)
print('\nPi: \n', pi)
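If you are unsure where to start, here is a minimal sketch of one reasonable initialisation: seed the means from randomly chosen data points (as with k-means), use scaled-identity covariances (symmetric positive semidefinite by construction), and uniform priors. The _sketch name is hypothetical, and this is one valid choice among many, not the required solution.

def initialise_parameters_sketch(X, K):
    N, m = X.shape
    # Seed the means from K distinct random data points, as with k-means.
    mu = X[np.random.choice(N, K, replace=False)]
    # Scaled-identity covariances: symmetric positive semidefinite by construction.
    scale = np.var(X, axis=0).mean()
    sigma = np.array([scale * np.eye(m) for _ in range(K)])
    # Uniform priors, as the task requires.
    pi = np.full(K, 1.0 / K)
    return sigma, mu, pi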

$E\_step$ computes the matrix $responsibilities \in [0, 1]^{N \times K}$, where $N$ is the number of data points, and $K$ is the number of gaussians you’re attempting to cluster the data with. Each gaussian is associated with a column of $responsibilities$. As your algorithm runs, each row represents a data point $x_i$, and each column of that row will contain the probability that $x_i$ came from that gaussian, $p(k \mid x_i)$, signifying the extent to which this datapoint $x_i$ has been assigned to the gaussian associated with that column.

TASK 2: Implement $E\_step(\pi, \mu, \Sigma, X) = responsibilities$, which updates the $responsibilities$ matrix.

Remember, which gaussian a datapoint is assigned to depends on the probability of that datapoint given that gaussian.

Each element of responsibilities is $r_{ik} = \frac{\pi_k \mathcal{N}(x_i \mid \mu_k, \Sigma_k)}{\sum_j \pi_j \mathcal{N}(x_i \mid \mu_j, \Sigma_j)}$, where $x_i$ is the $i^{th}$ row of $X$.
https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.multivariate_normal.html
Use $multivariate\_normal.pdf(x, \mu_k, \Sigma_k)$

def E_step(pi, mu, sigma, X):
    # YOUR CODE HERE
    return responsibilities

responsibilities = E_step(pi, mu, sigma, X[:, :3])
print(responsibilities)
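For reference, the formula above transcribes almost directly into code. The following is a sketch (under a hypothetical _sketch name), using scipy.stats.multivariate_normal as hinted; it is not the only valid implementation.

from scipy.stats import multivariate_normal

def E_step_sketch(pi, mu, sigma, X):
    N, K = X.shape[0], len(pi)
    r = np.zeros((N, K))
    for k in range(K):
        # Numerator: pi_k * N(x_i | mu_k, Sigma_k), evaluated for all rows of X at once.
        r[:, k] = pi[k] * multivariate_normal.pdf(X, mean=mu[k], cov=sigma[k])
    # Denominator: normalise each row so the responsibilities sum to 1 over the K gaussians.
    r /= r.sum(axis=1, keepdims=True)
    return r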

TASK 3: Implement $M\_step(responsibilities, X) = \mu, \Sigma, \pi$ which returns the updated means and covariances for all of the $k$ gaussians, along with the priors $\pi$.

https://docs.scipy.org/doc/numpy/reference/generated/numpy.outer.html

def M_step(r, X):
    # YOUR CODE HERE
    return mu, sigma, pi

mu, sigma, pi = M_step(responsibilities, X[:, :3])
print('\nSigma: \n', sigma)
print('\nMu: \n', mu)
print('\nPi: \n', pi)
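The standard GMM M-step updates (not stated explicitly above, but standard) are $\mu_k = \frac{1}{N_k}\sum_i r_{ik} x_i$, $\Sigma_k = \frac{1}{N_k}\sum_i r_{ik} (x_i - \mu_k)(x_i - \mu_k)^T$ and $\pi_k = \frac{N_k}{N}$. A vectorised sketch (hypothetical _sketch name, one of several valid implementations):

def M_step_sketch(r, X):
    N, m = X.shape
    K = r.shape[1]
    Nk = r.sum(axis=0)                     # N_k: effective number of points per gaussian
    mu = (r.T @ X) / Nk[:, None]           # weighted means, shape (K, m)
    sigma = np.zeros((K, m, m))
    for k in range(K):
        diff = X - mu[k]                   # (N, m) deviations from the k-th mean
        # Sum of r_ik * outer(diff_i, diff_i), computed as one matrix product.
        sigma[k] = (r[:, k, None] * diff).T @ diff / Nk[k]
    pi = Nk / N
    return mu, sigma, pi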

TASK 4: Implement $classify(\pi, \mu, \Sigma, x) = k$ which takes an unknown example $x \in \mathbb{R}^{m}$, the means $\mu$ and covariance matrices $\Sigma$ and the priors $\pi$, and returns $k$, the number of the gaussian which maximises the probability of $x$.

In other words, $\arg\max_k \left( p(x \mid \mu_k, \Sigma_k)\, \pi_k \right)$.

def classify(pi, mu, sigma, x):
    # YOUR CODE HERE
    return k

print(classify(pi, mu, sigma, X[270, :3]))
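A direct sketch of the argmax above (hypothetical _sketch name; again not the only valid implementation):

from scipy.stats import multivariate_normal

def classify_sketch(pi, mu, sigma, x):
    # Score each gaussian by pi_k * N(x | mu_k, Sigma_k) and return the best k.
    scores = [pi[k] * multivariate_normal.pdf(x, mean=mu[k], cov=sigma[k])
              for k in range(len(pi))]
    return int(np.argmax(scores))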

TASK 5: Implement $EM(X, K, iterations) = \mu, \Sigma, \pi$ which:

takes a dataset $X \in \mathbb{R}^{N \times m}$ and $K$, an integer indicating how many gaussians will be used to cluster the data, and $iterations$ the number of iterations to be performed.
uses all of the functions you completed above to initialise parameters and find the optimal means $\mu$, covariances $\Sigma$ and priors $\pi$ to cluster the data points (a gaussian mixture model).

def EM(X, K, iterations):
    # YOUR CODE HERE
    return mu, sigma, pi

#Test code. Leave it aloooooone!
iterations = 30
mu_1, sigma_1, pi_1 = EM(X[:, :3], K, iterations)
print('\nSigma: \n', sigma_1)
print('\nMu: \n', mu_1)
print('\nPi: \n', pi_1)
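Structurally, EM just alternates the two steps you implemented above. A minimal sketch, assuming Tasks 1–3 are complete:

def EM_sketch(X, K, iterations):
    sigma, mu, pi = initialise_parameters(X, K)
    for _ in range(iterations):
        r = E_step(pi, mu, sigma, X)      # E-step: soft assignments given current parameters
        mu, sigma, pi = M_step(r, X)      # M-step: re-estimate parameters given assignments
    return mu, sigma, pi

In practice you may also want to monitor the log-likelihood across iterations to check that the algorithm is converging.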

def allocator(pi, mu, sigma, X, k):
    N = X.shape[0]
    cluster = []
    for ix in range(N):
        prospective_k = classify(pi, mu, sigma, X[ix, :])
        if prospective_k == k:
            cluster.append(X[ix, :])
    return np.asarray(cluster)

colours = ['r', 'g', 'b']
fig = plt.figure(figsize=(16, 10))
ax = fig.add_subplot(111, projection='3d')
for k in range(K):
    cluster = allocator(pi_1, mu_1, sigma_1, X[:, :3], k)
    ax.scatter(cluster[:,0], cluster[:,1], cluster[:, 2], c=colours[k])
plt.show()

TASK 6: Image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics. In this task, we are going to implement a simple image segmentation algorithm using a GMM.

The image_segmentation function should satisfy the following specification:

Inputs:
image: the image to be segmented. Type: np.ndarray.
K: the number of gaussians.
iterations: the number of iterations of the EM algorithm.

Return: a matrix in which each element is the class label of the corresponding pixel of the input image. Type: np.ndarray, dtype: np.int32.
For an image with shape (78, 78, 3), the shape of the returned matrix should be (78, 78).

Make sure your code runs within 3 minutes.

Read more: https://en.wikipedia.org/wiki/Image_segmentation

Hints: Remember to use the functions you defined above.

Let’s load the image to be segmented first.

image = plt.imread('mandm.png')
plt.imshow(image)
plt.show()

def image_segmentation(image, K, iterations):
    # YOUR CODE HERE
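One possible approach (a sketch with a hypothetical _sketch name, assuming the pixel RGB values are used directly as the dataset): flatten the image into an $(h \cdot w) \times 3$ matrix, fit a GMM with your EM function, then label each pixel with its most responsible gaussian. Taking the argmax over a final E_step is equivalent to calling classify on every pixel, but vectorised, which helps with the 3-minute limit.

def image_segmentation_sketch(image, K, iterations):
    h, w = image.shape[0], image.shape[1]
    pixels = image.reshape(h * w, -1)[:, :3]     # rows of RGB; drops any alpha channel
    mu, sigma, pi = EM(pixels, K, iterations)    # Task 5: fit a GMM to the pixel colours
    r = E_step(pi, mu, sigma, pixels)            # Task 2: soft assignments for every pixel
    labels = np.argmax(r, axis=1)                # hard label: most responsible gaussian
    return labels.reshape(h, w).astype(np.int32)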

# test code, leave it alone!
import time
start = time.time()
gmm_labels = image_segmentation(image, 5, 10)
end = time.time()
print(f'It takes {end-start} seconds to segment the image.')
colors = [[255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 255], [0, 0, 0]]
segmented_image = np.zeros_like(image, dtype=np.int32)
m, n, _ = segmented_image.shape
for i in range(m):
    for j in range(n):
        segmented_image[i, j] = np.array(colors[gmm_labels[i, j]])
plt.imshow(segmented_image)
plt.show()

Your answer should look like this, maybe with different colors:
