
Consider the above data.


1.1 [5 pts.] Trace the behavior of Hierarchical Clustering down to 3 clusters using single-linkage.
1.2 [5 pts.] Trace the behavior of Hierarchical Clustering down to 3 clusters using complete-linkage.
1.3 [5 pts.] Trace the behavior of k-Means, using 2 decimal places for new centroids, with k=3 and initial centroids (500,10), (200,700), (800,200).
1.4 [5 pts.] Trace the behavior of k-Means with k=3 and initial centroids of A, B, and C.
Extra Credit [5 pts.]:
EC.1 Is k-means in the general case guaranteed to terminate? Why or why not?
EC.2 True or False: linear functions are good candidates for neural network perceptrons? Explain your answer.
EC.3 True or False: finding a linear separator for a training set is NP-hard? Explain your answer.
EC.4 How is a support vector machine ("SVM") different than a linear separator? Explain.
EC.5 What is the salient feature of Deep Learning as opposed to traditional Supervised Learning?
Spring '22 AI
Homework 11: solutions
For all questions, use for points p and q the distance d(p, q) = sqrt((p_x - q_x)^2 + (p_y - q_y)^2).
Note: I wrote "Euclidean distance squared" but then wrote the formula for Euclidean distance, so I will accept both (they actually give the same results in both cases). The traces below use sqrt().
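For concreteness, the distance as a minimal Python sketch:

    import math

    def dist(p, q):
        # Euclidean distance between 2-D points p and q. Squaring it would
        # not change which pair of clusters is closest, so the merge order
        # in the traces below is the same either way.
        return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)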
1.1
{A} {B} {C} {D} {E} {F} {G} {H} {I} {J}
closest clusters are C3 = {C} and C4 = {D} = 154.56
{A} {B} {C D} {E} {F} {G} {H} {I} {J}
closest clusters are C1 = {A} and C4 = {E} = 183.58
{A E} {B} {C D} {F} {G} {H} {I} {J}
closest clusters are C2 = {B} and C7 = {I} = 212.94

{A E} {B I} {C D} {F} {G} {H} {J}
closest clusters are C1 = {A,E} and C3 = {C,D} = 279.36
{{A E} {C D}} {B I} {F} {G} {H} {J}
closest clusters are C1 = {A,E,C,D} and C6 = {J} = 289.28
{{{A E} {C D}} J} {B I} {F} {G} {H}
closest clusters are C1 = {A,E,C,D,J} and C2 = {B,I} = 425.42
{{{{A E} {C D}} J} {B I}} {F} {G} {H}
closest clusters are C1 = {A,E,C,D,J,B,I} and C2 = {F} = 426.62
{{{{{A E} {C D}} J} {B I}} F} {G} {H}
C1 = {A,E,C,D,J,B,I,F}
C2 = {G}
C3 = {H}
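For reference, this trace could be reproduced with SciPy roughly as follows. This is a sketch only: the homework's data table for points A-J is not included in this document, so the coordinates below are hypothetical placeholders (only A, B, and C are known, from the "initial centroids of A, B, and C" in part 1.4).

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Stand-ins for the homework's data table. A, B, C come from the
    # part-1.4 initial centroids; D-J are made-up placeholders.
    points = np.array([
        [362, 768],    # A
        [933, 1093],   # B
        [192, 539],    # C
        [250, 600],    # D  (placeholder)
        [400, 850],    # E  (placeholder)
        [300, 50],     # F  (placeholder)
        [150, 20],     # G  (placeholder)
        [1100, 700],   # H  (placeholder)
        [950, 1000],   # I  (placeholder)
        [600, 800],    # J  (placeholder)
    ])

    # Single-linkage merges the two clusters with the smallest minimum
    # pairwise distance between their members.
    Z = linkage(points, method='single', metric='euclidean')
    labels = fcluster(Z, t=3, criterion='maxclust')  # stop at 3 clusters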
1.2
{A} {B} {C} {D} {E} {F} {G} {H} {I} {J}
closest clusters are C3 = {C} and C4 = {D} = 154.56
{A} {B} {C D} {E} {F} {G} {H} {I} {J}
closest clusters are C1 = {A} and C4 = {E} = 183.58
{A E} {B} {C D} {F} {G} {H} {I} {J}
closest clusters are C2 = {B} and C7 = {I} = 212.94
{A E} {B I} {C D} {F} {G} {H} {J}
closest clusters are C1 = {A,E} and C7 = {J} = 372.08
{{A E} J} {B I} {C D} {F} {G} {H}
closest clusters are C4 = {F} and C5 = {G} = 440.22
{{A E} J} {B I} {C D} {F G} {H}
closest clusters are C1 = {A,E,J} and C3 = {C,D} = 518.73
{{{A E} J} C D} {B I} {F G} {H}
closest clusters are C2 = {B,I} and C4 = {H} = 727.40
{{{A E} J} C D} {{B I} H} {F G}
C1 = {A,E,J,C,D}
C2 = {B,I,H}
C3 = {F,G}
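The same SciPy sketch from 1.1 reproduces this trace with one change: complete-linkage merges the two clusters with the smallest maximum pairwise distance between their members.

    Z = linkage(points, method='complete', metric='euclidean')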
1.3
Initial Centroids = <500,10> <200,700> <800,200>
C1 = {A,B,C,D,E}
C2 = {F,G}
C3 = {H,I,J}
recenter = <456.4, 736> <224, 16> <937, 641.33>

C1 = {F,G}
C2 = {A,C,D,E}
C3 = {B,H,I,J}
recenter = <224, 16> <337.25, 646.75> <936, 754.25>
C1 = {F,G}
C2 = {A,C,D,E}
C3 = {B,H,I,J}
Unchanged: converged.
1.4
Initial Centroids = <362,768> <933,1093> <192,539> (i.e., points A, B, C)
C1 = {C,D,F,G}
C2 = {A,E,J}
C3 = {B,H,I}
recenter = <233.25, 248.25> <521.67, 755> <1014.33, 792.67>
C1 = {C,D,F,G}
C2 = {A,E,J}
C3 = {B,H,I}
Unchanged: converged.
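Both traces follow Lloyd's algorithm: assign every point to its nearest centroid, recenter each centroid to the mean of its assigned points, and stop when nothing changes. A minimal NumPy sketch (it reuses the hypothetical points array from the 1.1 sketch, and assumes no cluster ever goes empty, as in the traces above):

    import numpy as np

    def kmeans(points, centroids, max_iters=100):
        centroids = np.asarray(centroids, dtype=float)
        for _ in range(max_iters):
            # assignment step: nearest centroid for each point
            d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
            assign = d.argmin(axis=1)
            # recenter step: mean of each cluster, rounded to 2 decimal
            # places as the problem asks (assumes no cluster is empty)
            new = np.array([points[assign == k].mean(axis=0).round(2)
                            for k in range(len(centroids))])
            if np.array_equal(new, centroids):  # unchanged: converged
                break
            centroids = new
        return assign, centroids

    # e.g. kmeans(points, [(500, 10), (200, 700), (800, 200)])  # part 1.3
    #      kmeans(points, points[[0, 1, 2]])                    # part 1.4: A, B, C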
EC.1 Yes, k-means is guaranteed to terminate: it is in the same family as hill-climbing, and since each iteration can only lower the total within-cluster distance and there are only finitely many ways to assign points to clusters, it cannot cycle forever. Note, however, that it may not terminate with the optimal solution.
EC.2 False. Since the composition of linear functions is itself linear, nothing is gained by multi-layer networks whose perceptrons use linear activation functions.
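A quick NumPy check of the collapse argument (a sketch with made-up weights): two stacked linear layers are equivalent to one linear layer.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((4, 3))   # hypothetical first-layer weights
    W2 = rng.standard_normal((2, 4))   # hypothetical second-layer weights
    x = rng.standard_normal(3)

    two_layers = W2 @ (W1 @ x)   # "deep" network with linear activations
    one_layer = (W2 @ W1) @ x    # an equivalent single linear layer
    assert np.allclose(two_layers, one_layer)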
EC.3 False. Finding a linear separator, if one exists, is solvable via linear programming. What is NP-hard is finding the maximum-accuracy linear classifier when no perfect linear separator exists.
EC.4 An SVM is the "best" linear separator in the sense that it maximizes the margin to the closest points in the training set on both sides of the separator.
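As a sketch of the linear-programming claim in EC.3 (X and y below are hypothetical toy data, not the homework's): we look for any (w, b) satisfying y_i (w . x_i + b) >= 1 for every training point, a pure feasibility LP with a zero objective.

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical toy training set for illustration.
    X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 3.0], [4.0, 2.0]])
    y = np.array([-1.0, -1.0, 1.0, 1.0])

    # Variables z = (w1, w2, b); constraints -y_i * (x_i . w + b) <= -1.
    A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])
    b_ub = -np.ones(len(X))
    res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 3)
    print(res.status)  # 0: a separator exists; 2: infeasible, no separator

An SVM (EC.4) keeps exactly these constraints but replaces the zero objective with minimizing ||w||^2, which selects the maximum-margin separator among all feasible ones (a quadratic program rather than an LP).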
EC.5 Deep Learning eschews traditional domain-specific feature engineering and instead feeds the raw input data through multiple hidden layers, letting the network learn its own feature representations as it builds the model.
