
Discriminant Functions
1. Consider a dichotomizer defined using the following linear discriminant function: g(x) = w^t x + w_0, where w = (2, 1)^t and w_0 = −5. Plot the decision boundary, and determine the class of the following feature vectors: (1, 1)^t, (2, 2)^t, and (3, 3)^t.
2. In augmented feature space, a dichotomizer is defined using the following linear discriminant function: g(x) = a^t y, where a^t = (−5, 2, 1) and y = (1, x^t)^t. Determine the class of the following feature vectors: (1, 1)^t, (2, 2)^t, and (3, 3)^t.
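For checking questions 1 and 2 by hand, a minimal numpy sketch (the function and variable names are my own) that evaluates g(x) both as w^t x + w_0 and in the equivalent augmented form a^t y:

```python
import numpy as np

# Question 1: g(x) = w^t x + w_0 with w = (2, 1)^t and w_0 = -5.
w, w0 = np.array([2.0, 1.0]), -5.0
# Question 2: the same dichotomizer in augmented form, a = (w_0, w^t)^t.
a = np.array([-5.0, 2.0, 1.0])

for x in [np.array([1.0, 1.0]), np.array([2.0, 2.0]), np.array([3.0, 3.0])]:
    g_plain = w @ x + w0                  # w^t x + w_0
    y = np.concatenate(([1.0], x))        # augmented vector y = (1, x^t)^t
    g_aug = a @ y                         # a^t y gives an identical value
    print(x, g_plain, g_aug, "-> class", 1 if g_plain > 0 else 2)
```

Under this reading of the parameters, the decision boundary in question 1 is the line 2x_1 + x_2 − 5 = 0.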
3. Consider a 3-dimensional feature space and a quadratic discriminant function, g(x), where:
g(x) = x_1^2 − x_3^2 + 2x_2x_3 + 4x_1x_2 + 3x_1 − 2x_2 + 2
This discriminant function defines two classes, such that g(x) > 0 if x ∈ ω_1 and g(x) ≤ 0 if x ∈ ω_2. Determine the class of each of the following pattern vectors: (1, 1, 1)^t, (−1, 0, 3)^t, and (−1, 0, 0)^t.
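A quick numeric check of question 3 (a sketch with my own naming; it simply transcribes the polynomial above):

```python
def g(x):
    # g(x) = x1^2 - x3^2 + 2 x2 x3 + 4 x1 x2 + 3 x1 - 2 x2 + 2
    x1, x2, x3 = x
    return x1**2 - x3**2 + 2*x2*x3 + 4*x1*x2 + 3*x1 - 2*x2 + 2

# g(x) > 0 -> class omega_1, g(x) <= 0 -> class omega_2
for x in [(1, 1, 1), (-1, 0, 3), (-1, 0, 0)]:
    print(x, g(x), "-> class 1" if g(x) > 0 else "-> class 2")
```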
4. Consider a dichotomizer defined in a 2-dimensional feature space using a quadratic discriminant function, g(x), where:
g(x) = x^t A x + x^t b + c
Classify the following feature vectors: (0, −1)^t and (1, 1)^t, when:
i) A = [2 1; 1 4], b = (1, 2)^t, and c = −3;
ii) A = [−2 5; 5 −8], b = (1, 2)^t, and c = −3.
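The same style of check for question 4, treating A, b, and c as given above (a sketch; the helper name g is mine):

```python
import numpy as np

def g(x, A, b, c):
    # quadratic discriminant g(x) = x^t A x + x^t b + c
    return x @ A @ x + x @ b + c

cases = {"i":  (np.array([[2, 1], [1, 4]]),   np.array([1, 2]), -3),
         "ii": (np.array([[-2, 5], [5, -8]]), np.array([1, 2]), -3)}
for name, (A, b, c) in cases.items():
    for x in [np.array([0, -1]), np.array([1, 1])]:
        val = g(x, A, b, c)
        print(name, x, val, "-> class", 1 if val > 0 else 2)
```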
5. In augmented feature space, a dichotomizer is defined using the following linear discriminant function: g(x) = a^t y, where a^t = (−3, 1, 2, 2, 2, 4) and y^t = (1, x^t). Determine the class of the following feature vectors, x: (0, 0, 1, 1, 1)^t and (−1, 0, 1, 1, 1)^t.
6. A Linear Discriminant Function is used to define a Dichotomizer, such that x is assigned to class 1 if g(x) > 0, and to class 2 otherwise. Use the Batch Perceptron Learning Algorithm (with augmented notation and sample normalisation) to find appropriate parameters for the linear discriminant function, given the data set shown below. Assume initial values of a = (w_0, w^t)^t = (−25, 6, 3)^t, and use a learning rate of 1.

feature vector   class
(1, 5)^t         1
(2, 5)^t         1
(4, 1)^t         2
(5, 1)^t         2
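One way to carry out the batch updates without arithmetic slips is sketched below (numpy; the epoch cap and variable names are my own choices, not part of the question):

```python
import numpy as np

X = np.array([[1, 5], [2, 5], [4, 1], [5, 1]], dtype=float)
labels = np.array([1, 1, 2, 2])
Y = np.hstack([np.ones((4, 1)), X])   # augmented notation: y = (1, x^t)^t
Y[labels == 2] *= -1                  # sample normalisation: negate class-2 samples

a = np.array([-25.0, 6.0, 3.0])       # initial a = (w_0, w^t)^t
eta = 1.0                             # learning rate

for epoch in range(100):              # cap on epochs is an arbitrary safeguard
    mis = Y[Y @ a <= 0]               # after normalisation, a^t y <= 0 means misclassified
    if len(mis) == 0:
        break                         # a pass with no errors: converged
    a += eta * mis.sum(axis=0)        # batch update: a <- a + eta * sum of misclassified y
print(a)
```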
7. Repeat the previous question using the Sequential Perceptron Learning Algorithm (with augmented notation and sample normalisation).
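The sequential variant differs only in updating immediately after each misclassified sample. A sketch, reusing Y, a, and eta as set up in the previous snippet:

```python
def sequential_perceptron(Y, a, eta=1.0, max_epochs=100):
    # Cycle through the (augmented, sample-normalised) samples in order,
    # updating a as soon as a sample is misclassified (a^t y <= 0).
    for epoch in range(max_epochs):
        errors = 0
        for y in Y:
            if a @ y <= 0:
                a = a + eta * y       # update using this sample alone
                errors += 1
        if errors == 0:               # full error-free pass: converged
            break
    return a
```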
8. Write pseudo-code for the Sequential Perceptron Learning Algorithm.
9. Consider the following linearly separable data set.
feature vector   class
(0, 2)^t         1
(1, 2)^t         1
(2, 1)^t         1
(−3, 1)^t        −1
(−2, −1)^t       −1
(−3, −2)^t       −1
Apply the Sequential Perceptron Learning Algorithm to determine the parameters of a linear discriminant function that will correctly classify this data. Assume initial values of a = (w_0, w^t)^t = (1, 0, 0)^t, and use a learning rate of 1.
10. Repeat the previous question using the sample normalisation method of implementing the Sequential Perceptron Learning Algorithm.
11. A data set consists of exemplars from three classes.
Class 1: (1, 1)^t, (2, 0)^t. Class 2: (0, 2)^t, (−1, 1)^t. Class 3: (−1, −1)^t.
Use the Sequential Multiclass Perceptron Learning Algorithm to find the parameters for three linear discriminant functions that will correctly classify this data. Assume initial values for all parameters are zero, and use a learning rate of 1. If more than one discriminant function produces the maximum output, choose the function with the highest index (i.e., the one that represents the largest class label).
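A sketch of the update rule for question 11, with the classes indexed 0-2 internally and the stated tie-break (highest class index wins):

```python
import numpy as np

X = np.array([[1, 1], [2, 0], [0, 2], [-1, 1], [-1, -1]], dtype=float)
t = np.array([0, 0, 1, 1, 2])                # true classes, zero-indexed
Y = np.hstack([np.ones((len(X), 1)), X])     # augmented vectors y = (1, x^t)^t

A = np.zeros((3, 3))                         # one parameter vector a_j per class
eta = 1.0

for epoch in range(100):
    errors = 0
    for y, k in zip(Y, t):
        scores = A @ y
        # argmax over a_j^t y, breaking ties in favour of the highest index:
        j = max(range(3), key=lambda c: (scores[c], c))
        if j != k:
            A[k] += eta * y                  # reinforce the true class
            A[j] -= eta * y                  # penalise the wrongly winning class
            errors += 1
    if errors == 0:
        break
print(A)
```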
12. Consider the following linearly separable data set.

feature vector   class
(0, 2)^t         1
(1, 2)^t         1
(2, 1)^t         1
(−3, 1)^t        −1
(−2, −1)^t       −1
(−3, −2)^t       −1

Use the pseudoinverse (pinv in MATLAB) to calculate the parameters of a linear discriminant function that can be used to classify this data. Use an arbitrary margin vector b = (1, 1, 1, 1, 1, 1)^t.
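In Python, numpy's np.linalg.pinv plays the role of MATLAB's pinv. A sketch of the least-squares solution of Ya = b on the sample-normalised data:

```python
import numpy as np

X = np.array([[0, 2], [1, 2], [2, 1], [-3, 1], [-2, -1], [-3, -2]], dtype=float)
labels = np.array([1, 1, 1, -1, -1, -1])
Y = np.hstack([np.ones((6, 1)), X])   # augment with a leading 1
Y[labels == -1] *= -1                 # sample normalisation

b = np.ones(6)                        # margin vector b = (1, 1, 1, 1, 1, 1)^t
a = np.linalg.pinv(Y) @ b             # minimum-norm least-squares solution of Ya = b
print(a, Y @ a)                       # inspect the margins actually achieved
```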
13. Repeat the previous question using (a) b = (2, 2, 2, 1, 1, 1)^t, and (b) b = (1, 1, 1, 2, 2, 2)^t.
14. For the same data set used in the preceding question, apply 12 iterations of the Sequential Widrow-Hoff Learning Algorithm. Assume initial values of a = (w_0, w^t)^t = (1, 0, 0)^t, use a margin vector b = (1, 1, 1, 1, 1, 1)^t, and a learning rate of 0.1.
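A sketch of the sequential Widrow-Hoff (LMS) rule, cycling through the samples in the order listed; Y is as in the pseudoinverse snippet above:

```python
def widrow_hoff(Y, b, a, eta=0.1, iterations=12):
    # One sample per iteration: a <- a + eta * (b_k - a^t y_k) * y_k,
    # which nudges a^t y_k towards its target margin b_k.
    n = len(Y)
    for i in range(iterations):
        y, bk = Y[i % n], b[i % n]
        a = a + eta * (bk - a @ y) * y
    return a
```

For example, widrow_hoff(Y, np.ones(6), np.array([1.0, 0.0, 0.0])) runs the 12 iterations the question asks for.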
15. The table shows the training data set for a simple classification problem.

class   feature vector
1       (0.15, 0.35)^t
2       (0.15, 0.28)^t
2       (0.12, 0.2)^t
3       (0.1, 0.32)^t
3       (0.06, 0.25)^t

Use the k-nearest-neighbour classifier, with Euclidean distance, to determine the class of a new feature vector x = (0.1, 0.25)^t. How would these results be affected if the first dimension of the feature space was scaled by a factor of two?
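A kNN sketch for question 15; since the value(s) of k are not pinned down above, trying k = 1 and k = 3 is my own choice, and the scale argument implements the doubling of the first dimension:

```python
import numpy as np

train = np.array([[0.15, 0.35], [0.15, 0.28], [0.12, 0.2],
                  [0.1, 0.32], [0.06, 0.25]])
classes = np.array([1, 2, 2, 3, 3])

def knn(x, k, scale=(1.0, 1.0)):
    d = np.linalg.norm((train - x) * scale, axis=1)  # (scaled) Euclidean distances
    nearest = classes[np.argsort(d)[:k]]             # labels of the k nearest samples
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]                   # majority vote

x = np.array([0.1, 0.25])
for k in (1, 3):
    print(k, knn(x, k), knn(x, k, scale=(2.0, 1.0)))  # unscaled vs first dimension doubled
```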
16. a) Plot the decision boundaries that result from applying a nearest neighbour classifier (i.e., a kNN classifier with k = 1) to the data shown in the table. Assume Euclidean distance is used.
Class   x1   x2
1        0    5
1        5    8
1       10    0
2        5    0
2       10    5
b) For the same data, plot the decision boundaries that result from applying a nearest mean classifier (i.e., one in which a new feature vector, x, is classified by assigning it to the same category as the nearest sample mean).
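Rather than plotting, a crude text-grid rendering of both decision rules (assuming the table values as reconstructed above; all names are mine):

```python
import numpy as np

pts = np.array([[0, 5], [5, 8], [10, 0], [5, 0], [10, 5]], dtype=float)
cls = np.array([1, 1, 1, 2, 2])

def nearest_neighbour(x):
    return cls[np.argmin(np.linalg.norm(pts - x, axis=1))]

def nearest_mean(x):
    means = [pts[cls == c].mean(axis=0) for c in (1, 2)]  # per-class sample means
    return 1 + int(np.argmin([np.linalg.norm(x - m) for m in means]))

# Label a coarse grid with each classifier to see where the boundaries fall.
for rule in (nearest_neighbour, nearest_mean):
    print(rule.__name__)
    for x2 in range(10, -1, -2):                          # top row = large x2
        print("".join(str(rule(np.array([x1, x2], dtype=float)))
                      for x1 in range(0, 11)))
```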
