Feature Extraction
1. Briefly define the following terms:
a. Feature Engineering
b. Feature Selection
c. Feature Extraction
d. Dimensionality Reduction
e. Deep Learning
2. List 5 methods that can be used to perform feature extraction.
3. Write pseudo-code for the Karhunen-Loève Transform method for performing Principal Component Analysis (PCA).
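As a point of comparison for your pseudo-code, here is a minimal Python/NumPy sketch of the Karhunen-Loève Transform; the function name, the 1/n covariance normalisation and the use of eigh for the symmetric covariance matrix are choices made here, not requirements of the question.

import numpy as np

def klt_pca(X, k):
    """Project the columns of X (a d x n data matrix) onto the first k principal components."""
    # 1. Compute the mean of the data and subtract it from every sample.
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # 2. Compute the covariance matrix of the centred data (1/n convention assumed here).
    C = Xc @ Xc.T / X.shape[1]
    # 3. Eigen-decompose the covariance matrix (eigh is used because C is symmetric;
    #    MATLAB's eig plays the same role).
    eigvals, eigvecs = np.linalg.eigh(C)
    # 4. Sort eigenvectors by decreasing eigenvalue and keep the first k.
    order = np.argsort(eigvals)[::-1]
    Vk = eigvecs[:, order[:k]]
    # 5. Project the centred data onto the retained eigenvectors.
    return Vk.T @ Xc, eigvals[order]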
4. Use the Karhunen-Loève Transform to project the following 3-dimensional data onto the first two principal components (the MATLAB command eig can be used to find eigenvectors and eigenvalues).
x1 = [1, 2, _]^T, x2 = [2, 3, _]^T, x3 = [3, 5, _]^T, x4 = [2, 2, _]^T.
5. What is the proportion of the variance explained by the first two principal components in the preceding question?
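For question 5, the proportion of variance explained is normally taken to be the sum of the retained eigenvalues divided by the sum of all eigenvalues; a small self-contained helper (the definition used here is the usual one, but confirm it against your notes):

import numpy as np

def explained_variance(eigvals, k):
    """Fraction of the total variance captured by the k largest eigenvalues."""
    eigvals = np.sort(np.asarray(eigvals, dtype=float))[::-1]  # sort in decreasing order
    return eigvals[:k].sum() / eigvals.sum()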
6. Use the Karhunen-Loève Transform to project the following 2-dimensional dataset onto the first principal component
(the MATLAB command eig can be used to find eigenvectors and eigenvalues).
x1 = [0, 1]^T, x2 = [3, 5]^T, x3 = [5, 4]^T, x4 = [5, 6]^T, x5 = [8, 7]^T, x6 = [9, 7]^T.
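If you want to check your working numerically, here is a minimal NumPy sketch for question 6; it assumes the six data points above are read column-wise and mirrors the eig hint with np.linalg.eigh.

import numpy as np

# Data points as columns (as reconstructed from question 6).
X = np.array([[0, 3, 5, 5, 8, 9],
              [1, 5, 4, 6, 7, 7]], dtype=float)

Xc = X - X.mean(axis=1, keepdims=True)   # centre the data
C = Xc @ Xc.T / X.shape[1]               # covariance matrix (1/n convention)
eigvals, eigvecs = np.linalg.eigh(C)     # eigen-decomposition (cf. MATLAB's eig)
w = eigvecs[:, np.argmax(eigvals)]       # first principal component
projection = w @ Xc                      # one-dimensional projection of each sample
print(w, projection)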
7. Apply two epochs of a batch version of Oja’s learning rule to the same data used in the previous question. Use a learning rate of 0.01 and an initial weight vector of [-1,0].
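For question 7, a sketch of one possible batch formulation of Oja's rule, using the same (reconstructed) data as question 6; the exact batch update and whether the data should be centred first are assumptions, so check them against your notes.

import numpy as np

# Same six data points as question 6, as columns (reconstructed).
X = np.array([[0, 3, 5, 5, 8, 9],
              [1, 5, 4, 6, 7, 7]], dtype=float)

eta = 0.01                   # learning rate given in the question
w = np.array([-1.0, 0.0])    # initial weight vector given in the question

for epoch in range(2):       # two epochs, one batch update per epoch
    dw = np.zeros_like(w)
    for x in X.T:
        y = w @ x                          # neuron output for this sample
        dw += eta * y * (x - y * w)        # Oja's rule contribution for this sample
    w = w + dw                             # apply the accumulated (batch) update
    print(f"epoch {epoch + 1}: w = {w}")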
8. The graph below shows a two-dimensional dataset in which exemplars come from two classes. Exemplars from one class are plotted using triangular markers, and exemplars from the other class are plotted using square markers.
• Draw the approximate direction of the first principal component of this data.
• Draw the approximate direction of the axis onto which the data would be projected using LDA.
9. Briefly describe the optimisation performed by Fisher’s Linear Discriminant Analysis to find a projection of the original data onto a subspace.
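As an aid for question 9 (stated here as the standard formulation rather than as the model answer): Fisher's Linear Discriminant Analysis chooses the projection w that maximises the criterion

J(w) = (w^T S_B w) / (w^T S_W w),

where S_B is the between-class scatter matrix and S_W is the within-class scatter matrix; equivalently, it maximises the separation of the projected class means relative to the within-class scatter of the projected data.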
10. For the data in the table below, use Fisher's method to determine which of the following projection weights is more effective at performing Linear Discriminant Analysis (LDA); a numerical check is sketched after the table.
• w^T = [−1, 5]
• w^T = [2, −3]

Class   Feature vector x^T
1       [1, 2]
2       [6, 5]
2       [7, 8]
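A minimal NumPy sketch for the check referred to above; it assumes the three exemplars in the table are the complete dataset and uses the one-dimensional form of the criterion, (m1 − m2)^2 / (s1^2 + s2^2), with s^2 the scatter of the projected points.

import numpy as np

# Exemplars from the table: one class-1 row and two class-2 rows (assumed complete).
class1 = np.array([[1.0, 2.0]])
class2 = np.array([[6.0, 5.0], [7.0, 8.0]])

def fisher_criterion(w, c1, c2):
    """(Difference of projected class means)^2 divided by the sum of projected scatters."""
    p1, p2 = c1 @ w, c2 @ w                              # project each class onto w
    between = (p1.mean() - p2.mean()) ** 2               # between-class separation
    within = ((p1 - p1.mean()) ** 2).sum() + ((p2 - p2.mean()) ** 2).sum()
    return between / within

for w in (np.array([-1.0, 5.0]), np.array([2.0, -3.0])):
    print(w, fisher_criterion(w, class1, class2))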
11. An Extreme Learning Machine consists of a hidden layer with six neurons, and an output layer with one neuron. The weights to the hidden neurons have been assigned the following random values:
V = [ −0.62   0.44  −0.91
      −0.81  −0.09   0.02
       0.74  −0.91  −0.60
      −0.82  −0.92   0.71
      −0.26   0.68   0.15
       0.80  −0.94  −0.83 ]
The weights to the output neuron are: w = (0, 0, 0, −1, 0, 0, 2). All weights are defined using augmented vector notation.
Hidden neurons are Linear Threshold units, while the output neuron is linear. Calculate the response of the output neuron
to each of the following input vectors: [0, 0]^T, [0, 1]^T, [1, 0]^T, [1, 1]^T.
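A sketch that can be used to check question 11. Assumptions made here: the bias weight is the first element of each augmented weight vector, inputs are therefore augmented with a leading 1, and a Linear Threshold unit outputs 1 when its net input is greater than or equal to 0, otherwise 0; adjust if your course uses a different convention.

import numpy as np

# Hidden-layer weights (6 neurons x 3 augmented inputs) and output weights (bias + 6 hidden outputs).
V = np.array([[-0.62,  0.44, -0.91],
              [-0.81, -0.09,  0.02],
              [ 0.74, -0.91, -0.60],
              [-0.82, -0.92,  0.71],
              [-0.26,  0.68,  0.15],
              [ 0.80, -0.94, -0.83]])
w = np.array([0.0, 0.0, 0.0, -1.0, 0.0, 0.0, 2.0])

def elm_output(x):
    x_aug = np.concatenate(([1.0], x))     # augment input with a leading 1 (assumed convention)
    h = (V @ x_aug >= 0).astype(float)     # Linear Threshold hidden units (threshold at 0 assumed)
    h_aug = np.concatenate(([1.0], h))     # augment hidden outputs for the output neuron
    return w @ h_aug                       # linear output neuron

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, elm_output(np.array(x, dtype=float)))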
12. Given a dictionary, V^T, what is the best sparse code for the signal x out of the following two alternatives:
i) y1^T = (1, 0, 0, 0, 1, 0, 0, 0)
ii) y2^T = (0, 0, 1, 0, 0, 0, −1, 0)
where
V^T = [  0.4   0.55   0.5  −0.1  −0.5   0.9   0.5   0.45
        −0.6  −0.45  −0.5   0.9  −0.5   0.1   0.5   0.55 ],
and x = [−0.05, −0.95]^T. Assume that sparsity is measured as the count of elements that are non-zero.
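For question 12, candidate codes can be compared on reconstruction error together with the stated non-zero count; a minimal NumPy sketch follows (the same check can be reused for question 13 by swapping in its candidate codes; how ties between error and sparsity are broken is left to the definition in your notes).

import numpy as np

# Dictionary V^T (2 x 8) and signal x, as given in question 12.
Vt = np.array([[ 0.4,  0.55,  0.5, -0.1, -0.5, 0.9, 0.5, 0.45],
               [-0.6, -0.45, -0.5,  0.9, -0.5, 0.1, 0.5, 0.55]])
x = np.array([-0.05, -0.95])

candidates = {
    "y1": np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=float),
    "y2": np.array([0, 0, 1, 0, 0, 0, -1, 0], dtype=float),
}

for name, y in candidates.items():
    error = np.linalg.norm(x - Vt @ y) ** 2   # squared reconstruction error
    sparsity = np.count_nonzero(y)            # sparsity as the count of non-zero elements
    print(name, error, sparsity)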
13. Repeat the previous question when the two alternatives are:
i) y1^T = (1, 0, 0, 0, 1, 0, 0, 0)
ii) y2^T = (0, 0, 0, −1, 0, 0, 0, 0)