
Biologically Inspired Methods

From the figure, the two classes can be separated by a straight line. The dataset is thus linearly separable, which suggests that the best kernel is the linear kernel function.


Define labels for the classes
Design the form of the classifier (hyperplane)
Obtain the conditions from the SVM principle
Determine the parameter values of the SVM classifier

(Figure: data points in a 2-dimensional space; the class labels are defined by the designer.)

Hard-margin linear SVM:

\noindent Consider the term: $\lambda_i (y_i(\mathbf{w}^T \mathbf{x}_i + w_0) - 1)$\\

\noindent Taking $i = 1$ as an example, we have $\mathbf{x}_1 = \left[ \begin{array}{c} 1 \\ 1 \end{array} \right]$, $y_1 = 1$, and
$\lambda_1 (y_1(\mathbf{w}^T \mathbf{x}_1 + w_0) - 1)$.\\

\noindent After expanding, with $\mathbf{w}^T = \left[ \begin{array}{cc} w_1 & w_2 \end{array} \right]$, we have\\
$\lambda_1 (1 \times (\left[ \begin{array}{cc} w_1 & w_2 \end{array} \right] \left[ \begin{array}{c} 1 \\ 1 \end{array} \right] + w_0) - 1) = \lambda_1 (w_1 + w_2 + w_0 - 1)$.
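\noindent Proceeding in the same way for $i = 2, 3, 4$ (using $\mathbf{x}_2 = \left[ \begin{array}{c} 1 \\ -1 \end{array} \right]$, $y_2 = 1$; $\mathbf{x}_3 = \left[ \begin{array}{c} -1 \\ 1 \end{array} \right]$, $y_3 = -1$; $\mathbf{x}_4 = \left[ \begin{array}{c} -1 \\ -1 \end{array} \right]$, $y_4 = -1$) gives the remaining terms that appear in the primal Lagrangian later in the solution:
\begin{align*}
  \lambda_2 \big(y_2(\mathbf{w}^T \mathbf{x}_2 + w_0) - 1\big) &= \lambda_2 (w_1 - w_2 + w_0 - 1)\\
  \lambda_3 \big(y_3(\mathbf{w}^T \mathbf{x}_3 + w_0) - 1\big) &= \lambda_3 (w_1 - w_2 - w_0 - 1)\\
  \lambda_4 \big(y_4(\mathbf{w}^T \mathbf{x}_4 + w_0) - 1\big) &= \lambda_4 (w_1 + w_2 - w_0 - 1)
\end{align*}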

From p. 16 of lecture notes:

\begin{align*}
  \mathbf{w} = \left[ \begin{array}{c} w_1 \\ w_2 \end{array} \right] &= \displaystyle \sum_{i=1}^4 \lambda_i y_i \mathbf{x}_i\\
  &= \lambda_1 y_1 \mathbf{x}_1 + \lambda_2 y_2 \mathbf{x}_2 + \lambda_3 y_3 \mathbf{x}_3 + \lambda_4 y_4 \mathbf{x}_4\\
  &= \lambda_1 \times 1 \times \left[ \begin{array}{c} 1 \\ 1 \end{array} \right] + \lambda_2 \times 1 \times \left[ \begin{array}{c} 1 \\ -1 \end{array} \right] + \lambda_3 \times (-1) \times \left[ \begin{array}{c} -1 \\ 1 \end{array} \right] + \lambda_4 \times (-1) \times \left[ \begin{array}{c} -1 \\ -1 \end{array} \right]\\
  &= \left[ \begin{array}{c} \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 \\ \lambda_1 - \lambda_2 - \lambda_3 + \lambda_4 \end{array} \right]
\end{align*}

\begin{align*}
  \displaystyle \sum_{i=1}^4 \lambda_i y_i &= \lambda_1 y_1 + \lambda_2 y_2 + \lambda_3 y_3 + \lambda_4 y_4 \\
 &= \lambda_1 \times 1 + \lambda_2 \times 1 + \lambda_3 \times (-1) + \lambda_4 \times (-1) \\
 &= \lambda_1 + \lambda_2 - \lambda_3 - \lambda_4 = 0
\end{align*}

From p. 16 of lecture notes:

How many variables do we need to find?


From p. 16 of lecture notes:

\begin{align*}
   w_2^2 &= -w_1(w_1 - 1)\\
  w_1 &\ge 1
\end{align*}

From p. 16 of lecture notes:

\begin{align*}
  w_1(w_1 - 1) + w_2^2 &= 0\\
  w_1 &\ge 1
\end{align*}
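\noindent One way to see where these come from (a brief sketch using the conditions already derived): summing (1) + (2) + (3) + (4) and substituting $w_1 = \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4$, $w_2 = \lambda_1 - \lambda_2 - \lambda_3 + \lambda_4$ and $\lambda_1 + \lambda_2 - \lambda_3 - \lambda_4 = 0$ gives $w_1^2 + w_2^2 - w_1 = 0$, i.e., $w_1(w_1 - 1) + w_2^2 = 0$. The margin constraints $y_i(\mathbf{w}^T \mathbf{x}_i + w_0) \ge 1$ give $w_1 + w_2 \ge 1$ (from $\mathbf{x}_1$, $\mathbf{x}_4$) and $w_1 - w_2 \ge 1$ (from $\mathbf{x}_2$, $\mathbf{x}_3$); adding the two yields $w_1 \ge 1$.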

\noindent Looking at $w_1(w_1 - 1) + w_2^2 = 0$: if $w_1 > 1$, then $w_2^2 = -w_1(w_1 - 1) < 0$, which is not possible.  \\~\\The only solution is $w_1 = 1$; then $w_2^2 = -w_1(w_1 - 1) = 0 \Rightarrow w_2 = 0$.  \\

\noindent Substituting $w_1 = 1$ and $w_2 = 0$ into (1) to (4) and considering (1) + (2) $-$ (3) $-$ (4), we have:\\~\\ $\lambda_1 (1 + w_0 - 1) + \lambda_2 ( 1 + w_0 - 1) - \lambda_3 (1 - w_0 - 1) - \lambda_4 (1 - w_0 - 1) = w_0 = 0$ (using $\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = w_1 = 1$). \\
\noindent The linear SVM classifier is: $\text{sgn}(w_1 x_1 + w_2 x_2 + w_0) = \text{sgn}(x_1)$ (hard classifier)


\noindent $\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = 1$\\~\\
\noindent $\lambda_1 - \lambda_2 - \lambda_3 + \lambda_4 = 0$\\~\\
\noindent $\lambda_1 + \lambda_2 - \lambda_3 - \lambda_4 = 0$
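\noindent As a quick numerical check of these conditions: the second and third equations force $\lambda_1 = \lambda_3$ and $\lambda_2 = \lambda_4$, so any choice with $\lambda_3 + \lambda_4 = 0.5$ (for example $\lambda_3 = 0.1$, $\lambda_4 = 0.4$, as used in the MATLAB snippet below) satisfies all three, and the weighted sum reproduces $\mathbf{w} = \left[ \begin{array}{c} 1 \\ 0 \end{array} \right]$.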

x = [1 1 -1 -1; 1 -1 1 -1];   % data points x1..x4 as columns
y = [1 1 -1 -1];              % class labels
lambda3 = 0.1;                % one admissible choice with lambda3 + lambda4 = 0.5
lambda4 = 0.4;
lambda1 = lambda3;            % from the second condition above
lambda2 = lambda4;            % from the third condition above
lambda1*y(1)*x(:,1) + lambda2*y(2)*x(:,2) + lambda3*y(3)*x(:,3) + lambda4*y(4)*x(:,4)   % = w = [1; 0]
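A further sanity check (a minimal sketch reusing the x and y defined above): with $\mathbf{w} = [1, 0]^T$ and $w_0 = 0$, the hard classifier $\text{sgn}(x_1)$ should reproduce every training label.

w = [1; 0]; w0 = 0;            % parameters found above
predicted = sign(w'*x + w0)    % gives [1 1 -1 -1], identical to y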

From the figure, the two classes can be separated by a straight line. The dataset is thus linearly separable.

(Figure: the data points plotted in the 2-D feature space; Class 1 labelled $+1$, Class 2 labelled $-1$.)

Remark: only $\lambda_1$, $\lambda_2$ and $\lambda_5$ are non-zero (their points are the support vectors); all the other $\lambda_i$ are zero.

\noindent Define the label for Class 1 as $+1$ and Class 2 as $-1$:\\
$y_1 = y_2 = y_3 = y_4 = 1$,\\ $y_5 = y_6 =  y_7 = y_8 = -1$.

\noindent Hyperplane: $\mathbf{w}^T \mathbf{x} + w_0 = 0$\\
\begin{align*}
  \mathbf{w} &= \lambda_1 y_1 \mathbf{x}_1 + \lambda_2 y_2 \mathbf{x}_2 + \lambda_5 y_5 \mathbf{x}_5\\
 &= \lambda_1 \left[ \begin{array}{c} 3 \\ 1 \end{array} \right] + \lambda_2 \left[ \begin{array}{c} 3 \\ -1 \end{array} \right] - \lambda_5 \left[ \begin{array}{c} 1 \\ 0 \end{array} \right].
\end{align*}


Recall that $y_i (\mathbf{w}^T \mathbf{x} + w_0) = 1$ when $\mathbf{x}$ is a support vector.

\noindent $\mathbf{x} = \mathbf{x}_1$, $y_1 = 1$:\\
$y_1 (\mathbf{w}^T \mathbf{x}_1 + w_0) = 1 \times \Bigg( \bigg(\lambda_1 \left[ \begin{array}{c} 3 \\ 1 \end{array} \right] + \lambda_2 \left[ \begin{array}{c} 3 \\ -1 \end{array} \right] - \lambda_5 \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \bigg)^T \left[ \begin{array}{c} 3 \\ 1 \end{array} \right] + w_0 \Bigg) = 1$ \\ $\Rightarrow 10 \lambda_1 + 8 \lambda_2 - 3 \lambda_5 + w_0 = 1$. \\

\noindent $\mathbf{x} = \mathbf{x}_2$, $y_2 = 1$:\\
$y_2 (\mathbf{w}^T \mathbf{x}_2 + w_0) = 1 \times \Bigg( \bigg(\lambda_1 \left[ \begin{array}{c} 3 \\ 1 \end{array} \right] + \lambda_2 \left[ \begin{array}{c} 3 \\ -1 \end{array} \right] - \lambda_5 \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \bigg)^T \left[ \begin{array}{c} 3 \\ -1 \end{array} \right] + w_0 \Bigg) = 1$ \\ $\Rightarrow 8 \lambda_1 + 10 \lambda_2 - 3 \lambda_5 + w_0 = 1$.\\

\noindent $\mathbf{x} = \mathbf{x}_5$, $y_5 = -1$:\\
$y_5 (\mathbf{w}^T \mathbf{x}_5 + w_0) = -1 \times \Bigg( \bigg(\lambda_1 \left[ \begin{array}{c} 3 \\ 1 \end{array} \right] + \lambda_2 \left[ \begin{array}{c} 3 \\ -1 \end{array} \right] - \lambda_5 \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \bigg)^T \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] + w_0 \Bigg) = 1$ \\ $\Rightarrow 3 \lambda_1 + 3 \lambda_2 - \lambda_5 + w_0 = -1$.

\noindent $\sum_{i = 1}^8 \lambda_i y_i = 0 \Rightarrow \lambda_1 + \lambda_2 - \lambda_5 = 0$.\\

\noindent $\left[ \begin{array}{cccc} 10 & 8 & -3 & 1 \\ 8 & 10 & -3 & 1 \\ 3 & 3 & -1 & 1 \\ 1 & 1 & -1 & 0 \end{array} \right] \left[ \begin{array}{c} \lambda_1 \\ \lambda_2 \\ \lambda_5 \\ w_0 \end{array} \right] = \left[ \begin{array}{c} 1 \\ 1 \\ -1 \\ 0 \end{array} \right]$.\\

\noindent$\lambda_1 = \lambda_2 = 0.25$, $\lambda_5 = 0.5$ and $w_0 = -2$.\\

\noindent $\mathbf{w} = 0.25 \left[ \begin{array}{c} 3 \\ 1 \end{array} \right] + 0.25 \left[ \begin{array}{c} 3 \\ -1 \end{array} \right] - 0.5 \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] = \left[ \begin{array}{c} 1 \\ 0 \end{array} \right]$
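The linear system above can also be solved numerically; a minimal MATLAB sketch (the matrix and right-hand side are exactly those shown above):

x1 = [3; 1]; x2 = [3; -1]; x5 = [1; 0];          % support vectors
A = [10 8 -3 1; 8 10 -3 1; 3 3 -1 1; 1 1 -1 0];
b = [1; 1; -1; 0];
sol = A\b;                                        % sol = [lambda1; lambda2; lambda5; w0] = [0.25; 0.25; 0.5; -2]
w = sol(1)*x1 + sol(2)*x2 - sol(3)*x5             % w = [1; 0]
margin = 2/norm(w)                                % margin = 2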

On p. 2 of the lecture notes:

Using the hard classifier $f(\mathbf{x}) = \text{sgn}(x_1 - 2)$, the new point has $x_1 > 2$, so the predicted class is $1$ (Class 1).
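The query point from Q2(d) is not reproduced in this extract, so the lines below use a hypothetical point x_new purely to illustrate applying the hard classifier; substitute the actual point from the question sheet.

x_new = [3; 0];                 % hypothetical point (not the one from the question sheet)
sign(x_new(1) - 2)              % = 1, i.e., Class 1, since x1 > 2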


\begin{align*}
  \frac{2}{\Vert \mathbf{w} \Vert} &= \frac{2}{\sqrt{w_1^2 + w_2^2}}\\ &= \frac{2}{\sqrt{1^2 + 0^2}}\\ &= 2
\end{align*}

Assign class labels, plot the data and identify the support vectors by observation
Represent $\mathbf{w}$ using the identified support vectors as:

Construct linear equations in terms of $\lambda$ using the support vectors:

Solve the linear equations (in matrix form) to find $\lambda$ and $w_0$
Construct $\mathbf{w}$ using the found $\lambda$ as:

Construct the hyperplane as:

$\mathbf{w} = \displaystyle \sum_{i=1}^N \lambda_i y_i \mathbf{x}_i$

$\displaystyle \sum_{i=1}^N \lambda_i y_i = 0$

$y_i (\mathbf{w}^T \mathbf{x} + w_0) = 1$ 

$\mathbf{w}^T \mathbf{x} + w_0 = 0$

The two classes cannot be separated by a straight line; the dataset is thus not linearly separable.
What can be done to the data?

\noindent Take $\mathbf{x}_2$ as an example: $x_1 = 2$ and $x_2 = -2$\\~\\

\noindent $\Vert \mathbf{x} \Vert = \sqrt{x_1^2 + x_2^2} = \sqrt{2^2 + (-2)^2} = \sqrt{8} > 2$\\~\\

\noindent $\mathbf{z} = \left[ \begin{array}{c} z_1 \\ z_2 \end{array} \right] = \left[ \begin{array}{c} 4 - \frac{x_2}{2} + \vert x_1 - x_2 \vert \\ 4 - \frac{x_1}{2} + \vert x_1 - x_2 \vert \end{array} \right] = \left[ \begin{array}{c} 4 - \frac{-2}{2} + \vert 2 - (-2) \vert \\ 4 - \frac{2}{2} + \vert 2 - (-2) \vert \end{array} \right] = \left[ \begin{array}{c} 9 \\ 7 \end{array} \right]$
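A minimal MATLAB sketch of this piecewise mapping (as given in Q3(b)) that reproduces the value just computed; the same function can be used to check the remaining points (the next example applies the mapping to $\mathbf{x}_6$ by hand).

% Piecewise feature mapping: one branch for norm(x) > 2, the other otherwise
Phi = @(x) (norm(x) > 2)  * [4 - x(2)/2 + abs(x(1)-x(2)); 4 - x(1)/2 + abs(x(1)-x(2))] ...
         + (norm(x) <= 2) * [x(1) - 2; x(2) - 3];
Phi([2; -2])                    % = [9; 7]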

\noindent Take $\mathbf{x}_6$ as an example: $x_1 = 1$ and $x_2 = -1$\\~\\

\noindent $\Vert \mathbf{x} \Vert = \sqrt{x_1^2 + x_2^2} = \sqrt{1^2 + (-1)^2} = \sqrt{2} < 2$\\~\\
\noindent $\mathbf{z} = \left[ \begin{array}{c} z_1 \\ z_2 \end{array} \right] = \left[ \begin{array}{c} x_1 - 2 \\ x_2 - 3 \end{array} \right] = \left[ \begin{array}{c} 1 - 2 \\ -1 - 3 \end{array} \right] = \left[ \begin{array}{c} -1 \\ -4 \end{array} \right]$

The dataset in feature space $\mathbf{z}$ is linearly separable, as a straight line (linear classifier) can separate the samples correctly into two classes. How to find the mapping function?

\noindent Applying the feature mapping function to $\mathbf{x}$, we obtain the dataset in a new feature space, i.e., $\mathbf{z} = \Phi(\mathbf{x})$:

\noindent Class 1: $\mathbf{z}_1 = \left[ \begin{array}{c} 3 \\ 3 \end{array} \right]$, $\mathbf{z}_2 = \left[ \begin{array}{c} 9 \\ 7 \end{array} \right]$, $\mathbf{z}_3 = \left[ \begin{array}{c} 5 \\ 5 \end{array} \right]$, $\mathbf{z}_4 = \left[ \begin{array}{c} 7 \\ 9 \end{array} \right]$;\\~\\
\noindent Class 2: $\mathbf{z}_5 = \left[ \begin{array}{c} -1 \\ -2 \end{array} \right]$, $\mathbf{z}_6 = \left[ \begin{array}{c} -1 \\ -4 \end{array} \right]$, $\mathbf{z}_7 = \left[ \begin{array}{c} -3 \\ -4 \end{array} \right]$, $\mathbf{z}_8 = \left[ \begin{array}{c} -3 \\ -2 \end{array} \right]$.

\noindent It can be seen from the figure that the dataset is linearly separable in the new feature space after applying the feature mapping function.

(Figure: mapped points plotted in the $\mathbf{z}$ feature space; Class 1 labelled $+1$, Class 2 labelled $-1$.)

\noindent Combining with the condition $\sum_{i = 1}^8 \lambda_i y_i = 0 \Rightarrow \lambda_1 - \lambda_5 = 0$ (the first two rows below are the support vector conditions $y_1(\mathbf{w}^T \mathbf{z}_1 + w_0) = 1$ and $y_5(\mathbf{w}^T \mathbf{z}_5 + w_0) = 1$ written out with $\mathbf{w} = \lambda_1 \mathbf{z}_1 - \lambda_5 \mathbf{z}_5$), we have\\
$\left[ \begin{array}{ccc} 18 & 9 & 1 \\ -9 & -5 & 1 \\ 1 & -1 & 0 \end{array} \right] \left[ \begin{array}{c} \lambda_1 \\ \lambda_5 \\ w_0 \end{array} \right] = \left[ \begin{array}{c} 1 \\ -1 \\ 0 \end{array} \right]$.\\~\\
\noindent It gives $\lambda_1 = \lambda_5 = 0.0488$ and $w_0 = -0.3171$.\\~\\
\noindent As a result, $\mathbf{w} = 0.0488 \left[ \begin{array}{c} 3 \\ 3 \end{array} \right] - 0.0488 \left[ \begin{array}{c} -1 \\ -2 \end{array} \right] = \left[ \begin{array}{c} 0.1952 \\ 0.2440 \end{array} \right]$

\noindent Hyperplane (in the new feature space $\mathbf{z}$):\\
$\mathbf{w}^T \mathbf{z} + w_0 = 0.1952 z_1 + 0.2440 z_2 - 0.3171 = 0$.\\~\\
\noindent The hyperplane (in the original feature space $\mathbf{x}$):\\
$\mathbf{w}^T \Phi(\mathbf{x}) + w_0 = (\lambda_1 y_1 \Phi(\mathbf{x}_1) + \lambda_5 y_5 \Phi(\mathbf{x}_5))^T\Phi(\mathbf{x}) + w_0 = \left[ \begin{array}{c} 0.1952 \\ 0.2440 \end{array} \right]^T \Phi(\mathbf{x})~-~0.3171~=~0$. \\~\\
\noindent The hyperplane in the original feature space in kernel form:\\
$0.0488 K(\mathbf{x}_1,\mathbf{x}) - 0.0488 K(\mathbf{x}_5,\mathbf{x}) - 0.3171 = 0$, where $K(\mathbf{p},\mathbf{x}) = \Phi(\mathbf{p})^T\Phi(\mathbf{x})$ (the two terms correspond to $\lambda_1 y_1 \Phi(\mathbf{x}_1)^T\Phi(\mathbf{x})$ and $\lambda_5 y_5 \Phi(\mathbf{x}_5)^T\Phi(\mathbf{x})$ respectively).

$\mathbf{w} = \displaystyle \sum_{i=1}^N \lambda_i y_i \mathbf{z}_i = \lambda_1 y_1 \Phi(\mathbf{x}_1) + \lambda_5 y_5 \Phi(\mathbf{x}_5)$
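As with Q2, the $3 \times 3$ linear system above can be checked numerically; a minimal MATLAB sketch using the mapped support vectors $\mathbf{z}_1$ and $\mathbf{z}_5$:

z1 = [3; 3]; z5 = [-1; -2];     % mapped support vectors
A = [18 9 1; -9 -5 1; 1 -1 0];
b = [1; -1; 0];
sol = A\b;                      % sol = [lambda1; lambda5; w0] ~ [0.0488; 0.0488; -0.3171]
w = sol(1)*z1 - sol(2)*z5       % w ~ [0.1952; 0.2440]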


Department of Informatics, King’s College London

Pattern Recognition (6CCS3PRE/7CCSMPNN)

Tutorial 8

Q1. An SVM classifier is employed to classify the following points:

Class 1: x1 =

Class 2: x3 =

a. Determine the best kernel function for this classification problem.

b. Design an SVM classifier to classify all given points.

c. Identify the support vectors.

Q2. An SVM classifier is employed to classify the following points:

Class 1: x1 =

Class 2: x5 =

a. Determine if the dataset is linearly separable.

b. Identify the support vectors by inspection.

c. Design an SVM classifier to classify all given points. What is the margin?

d. Classify the point x =

using the designed SVM classifier.

Q3. An SVM classifier is employed to classify the following points:

Class 1: x1 =

Class 2: x5 =

a. Determine if the dataset is linearly separable.

b. Apply the feature mapping function
$\Phi(\mathbf{x}) = \begin{cases} \left[ \begin{array}{c} 4 - \frac{x_2}{2} + \vert x_1 - x_2 \vert \\ 4 - \frac{x_1}{2} + \vert x_1 - x_2 \vert \end{array} \right] & \text{if } \Vert \mathbf{x} \Vert > 2\\[2ex] \left[ \begin{array}{c} x_1 - 2 \\ x_2 - 3 \end{array} \right] & \text{otherwise} \end{cases}$
to the dataset, where $\mathbf{x} = \left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right]$. Determine if the dataset is linearly separable
in the new feature space.

c. Identify the support vectors by inspection.



Department of Informatics, King’s College London
Pattern Recognition (6CCS3PRE/7CCSMPNN)

Tutorial 8 (Suggested Solution)

Q1. a. It can be seen from the figure that the two classes are linearly separable, so the
best kernel function is the linear kernel function.

b. Define the label for Class 1 as $+1$ and Class 2 as $-1$, so we have $y_1 = y_2 = 1$
and $y_3 = y_4 = -1$.

The hyperplane is: $\mathbf{w}^T\mathbf{x} + w_0 = w_1 x_1 + w_2 x_2 + w_0 = 0$.

Primal problem:

\begin{align*}
  L(\mathbf{w}, w_0, \boldsymbol{\lambda}) &= \frac{1}{2}\Vert\mathbf{w}\Vert^2 - \sum_{i=1}^{4}\lambda_i\big(y_i(\mathbf{w}^T\mathbf{x}_i + w_0) - 1\big)\\
  &= \frac{1}{2}(w_1^2 + w_2^2) - \lambda_1(w_1 + w_2 + w_0 - 1) - \lambda_2(w_1 - w_2 + w_0 - 1)\\
  &\quad - \lambda_3(w_1 - w_2 - w_0 - 1) - \lambda_4(w_1 + w_2 - w_0 - 1)
\end{align*}

\begin{align*}
  \frac{\partial L(\mathbf{w}, w_0, \boldsymbol{\lambda})}{\partial w_1} = 0 &\Rightarrow w_1 = \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4\\
  \frac{\partial L(\mathbf{w}, w_0, \boldsymbol{\lambda})}{\partial w_2} = 0 &\Rightarrow w_2 = \lambda_1 - \lambda_2 - \lambda_3 + \lambda_4\\
  \frac{\partial L(\mathbf{w}, w_0, \boldsymbol{\lambda})}{\partial w_0} = 0 &\Rightarrow \lambda_1 + \lambda_2 - \lambda_3 - \lambda_4 = 0
\end{align*}

Step 3: $\lambda_i\big(y_i(\mathbf{w}^T\mathbf{x}_i + w_0) - 1\big) = 0$, leading to:

\begin{align}
  \lambda_1(w_1 + w_2 + w_0 - 1) &= 0 \tag{1}\\
  \lambda_2(w_1 - w_2 + w_0 - 1) &= 0 \tag{2}\\
  \lambda_3(w_1 - w_2 - w_0 - 1) &= 0 \tag{3}\\
  \lambda_4(w_1 + w_2 - w_0 - 1) &= 0 \tag{4}
\end{align}

Step 4: $\lambda_1, \lambda_2, \lambda_3, \lambda_4 \ge 0$

