Tutorial 4
Q1. Draw the diagram of a 2-input-3-output multi-layer feedforward fully-connected neural network including biases. It has 2 hidden layers, where the first hidden layer has 4 hidden units and the second hidden layer has 5 hidden units. How many connection weights does it have in total?
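A minimal Python sketch for checking the total in Q1, under the assumption that bias terms are counted together with the connection weights (the network is stated to include biases):

```python
# Rough check for Q1: count the connections of a 2-4-5-3 fully-connected
# feedforward network, counting bias terms together with the weights.
layer_sizes = [2, 4, 5, 3]          # inputs, hidden layer 1, hidden layer 2, outputs

total = sum((fan_in + 1) * fan_out  # +1 accounts for the bias of each unit in the next layer
            for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]))
print(total)                        # (2+1)*4 + (4+1)*5 + (5+1)*3 = 12 + 25 + 18 = 55
```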
Q2. Derive the equation for a multi-layer feedforward fully-connected neural network including biases in terms of d (number of inputs), n_h (number of hidden
units in the h-th hidden layer, h = 1, 2, . . . , L), L (number of hidden layers) and c
(number of outputs).
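If Q2 is read as generalising the weight count of Q1 (an assumption; bias terms are counted together with the connection weights, and we write $n_0 = d$, $n_{L+1} = c$), one possible form of the answer is
$$N = \sum_{h=1}^{L+1} (n_{h-1} + 1)\, n_h = (d+1)\,n_1 + \sum_{h=2}^{L} (n_{h-1}+1)\,n_h + (n_L+1)\,c .$$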
Q3. Fig. 1 shows 3 patterns of a classification problem. A feedforward fully-connected neural network is employed to classify the patterns. How many inputs and outputs should be used for the neural network? List the input patterns and target output in a table.
Figure 1: 3 patterns. (a) Pattern 1; (b) Pattern 2; (c) Pattern 3.
Q4. Fig. 2 shows a 4-3-2 neural network, which is used to classify the patterns shown in Fig. 1. The assignment of input $x = [x_1\ x_2\ x_3\ x_4]^T$ is shown in Fig. 3, where 0 represents an unfilled grid and 1 represents a filled grid. The activation functions in the input layer, hidden layer and output layer are the linear function, symmetric tangent sigmoid function and logarithmic sigmoid function, respectively. Given that the weight and bias matrices are
$$W_{ji} = \begin{bmatrix} -0.7057 & 1.9061 & 2.6605 & -1.1359 \\ 0.4900 & 1.9324 & -0.4269 & -5.1570 \\ 0.9438 & -5.4160 & -0.3431 & -0.2931 \end{bmatrix}, \quad W_{j0} = \begin{bmatrix} 4.8432 \\ 0.3973 \\ 2.1761 \end{bmatrix},$$
$$W_{kj} = \begin{bmatrix} -1.1444 & 0.3115 & -9.9812 \\ 0.0106 & 11.5477 & 2.6479 \end{bmatrix}, \quad W_{k0} = \begin{bmatrix} 2.5230 \\ 2.6463 \end{bmatrix},$$
determine the output patterns representing the 3 classes.
Figure 2: A diagram of the 4-3-2 feedforward fully-connected neural network (input layer; hidden layer with outputs y1, y2, y3; output layer with outputs z1(x), z2(x); activation f(·) at each hidden and output unit).
Figure 3: Assignment of input x (the grid cells are labelled x1, x2, x4, x3).
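A minimal NumPy sketch of the Q4 forward pass, assuming the usual reading of the stated activations: hidden outputs $y = \tanh(W_{ji}x + W_{j0})$ (symmetric tangent sigmoid) and network outputs $z = \mathrm{logsig}(W_{kj}y + W_{k0})$ with $\mathrm{logsig}(a) = 1/(1+e^{-a})$. The example input at the end is hypothetical; the actual input vectors must be built from the patterns in Fig. 1 using the grid assignment in Fig. 3.

```python
import numpy as np

# Weight and bias matrices from Q4.
W_ji = np.array([[-0.7057,  1.9061,  2.6605, -1.1359],
                 [ 0.4900,  1.9324, -0.4269, -5.1570],
                 [ 0.9438, -5.4160, -0.3431, -0.2931]])
W_j0 = np.array([4.8432, 0.3973, 2.1761])
W_kj = np.array([[-1.1444,  0.3115, -9.9812],
                 [ 0.0106, 11.5477,  2.6479]])
W_k0 = np.array([2.5230, 2.6463])

def logsig(a):
    """Logarithmic sigmoid, 1 / (1 + exp(-a))."""
    return 1.0 / (1.0 + np.exp(-a))

def forward(x):
    """Forward pass: linear inputs, tansig hidden layer, logsig output layer."""
    y = np.tanh(W_ji @ x + W_j0)   # hidden-layer outputs y1, y2, y3
    z = logsig(W_kj @ y + W_k0)    # network outputs z1, z2
    return z

# Hypothetical example input; replace with the actual pattern vectors from Figs. 1 and 3.
print(forward(np.array([0, 1, 1, 0])))
```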
Q5. Fig. 4 shows a 3-layer partially connected neural network. The linear function is used as the activation function in the input units, and the symmetric tangent sigmoid function is used as the activation function in all hidden and output units. Stochastic backpropagation is employed to train the neural network using the cost $J = \frac{1}{2}\|t - z_1\|^2$, where $z_1$ is the network output corresponding to an input pattern x selected from the training set and t is its corresponding target output.
a. Considering $x = [0.9\;\;0.1]^T$, determine y1, y2 and z1.
b. Derive the backpropagation update rules for m10 and w21.
c. Given that the learning rate η = 0.25, $x = [0.9\;\;0.1]^T$, t = 0.5 and the weights in Fig. 4, determine the updated values of m10 and w21 for the next iteration
based on the update rules obtained in Q5.b.
d. Assuming that the backpropagation algorithm only updates the weights m10
and w21, show that the updated weights are valid.
Figure 4: A diagram of the 3-layer feedforward neural network. (Labels visible in the figure: input layer, hidden layer and output layer with output z1(x); weights shown include m10 = −0.4 and the bias weight w22 = −0.7.)
Q6. Consider an XOR problem given in Table 1. Design a radial basis function (RBF)
neural network with bias to solve the problem. Choose input pattern 1 and pattern
4 as the centres; Gaussian function in the hidden units (with σ = ρ_max) and linear
function in the output units.
p   x1   x2   t
1   0    0    0
2   0    1    1
3   1    0    1
4   1    1    0

Table 1: XOR problem.
a. Draw a diagram showing the structure of an RBF neural network solving the XOR problem.
b. Compute the outputs at the hidden units for all input patterns and list them in a table. Show that the patterns at the hidden units are linearly separable.
c. Determine the output weights using the least squares method. Compute the outputs at the output unit(s) and list them in a table. Show that the XOR problem is solved.
d. Determine the classes for the input patterns given in Table 2.
x1     x2
0.5   −0.1
−0.2   1.2
0.8    0.3
1.8    0.6

Table 2: Input patterns for the XOR classifier.
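A minimal NumPy sketch for Q6.b–d under stated assumptions: Gaussian hidden units of the form φ(x) = exp(−‖x − c‖²/(2σ²)), an output-unit bias fitted together with the output weights by least squares, ρ_max taken as the distance between the two centres (√2), and the Q6.d classification obtained by thresholding the output at 0.5. Adjust any of these conventions to match the course notes.

```python
import numpy as np

# XOR training data from Table 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 0], dtype=float)

# Centres: input pattern 1 and input pattern 4, as specified in Q6.
centres = np.array([[0, 0], [1, 1]], dtype=float)

rho_max = np.linalg.norm(centres[0] - centres[1])   # distance between centres = sqrt(2)
sigma = rho_max                                     # sigma as stated in Q6

def hidden(X):
    """Gaussian hidden-unit outputs, phi = exp(-||x - c||^2 / (2 sigma^2)) (assumed form)."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Design matrix with a bias column, then least-squares output weights.
Phi = np.hstack([hidden(X), np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Phi, t, rcond=None)

print(hidden(X))          # hidden-unit patterns for Q6.b
print(Phi @ w)            # network outputs for Q6.c (should reproduce the XOR targets)

# Q6.d: classify the Table 2 inputs, e.g. by thresholding the output at 0.5.
X_test = np.array([[0.5, -0.1], [-0.2, 1.2], [0.8, 0.3], [1.8, 0.6]])
Phi_test = np.hstack([hidden(X_test), np.ones((len(X_test), 1))])
print((Phi_test @ w >= 0.5).astype(int))
```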
Q7. Consider a classifier which is able to separate two classes, i.e.,
$$\begin{cases} \text{class 1}, & g(x) \ge x^2, \\ \text{class 2}, & g(x) < x^2, \end{cases}$$
where g(x) is an unknown function. We have limited information about the function g(x). Table 3 lists the input-output mappings of the function g(x).
p    x_p      g(x_p)
1    0.0500   0.0863
2    0.2000   0.2662
3    0.2500   0.2362
4    0.3000   0.1687
5    0.4000   0.1260
6    0.4300   0.1756
7    0.4800   0.3290
8    0.6000   0.6694
9    0.7000   0.4573
10   0.8000   0.3320
11   0.9000   0.4063
12   0.9500   0.3535

Table 3: Input-output mappings for function g(x).
a. Sketch the function g(x) and determine the possible class that each of the points
x = 0.1, 0.35, 0.55, 0.75, 0.9 belongs to.
b. An RBF neural network is employed to implement the classifier. Determine its
number of inputs and outputs.
c. An RBF neural network with three hidden units is employed to implement the classifier. Gaussian function with σ = 0.1 is used in all hidden units and linear function is used in all output units. Given that c1 = 0.2, c2 = 0.6 and c3 = 0.9, determine the output weights of the RBF neural network using the least squares method. Give the equation of the designed RBF neural network.
d. Given clusters S1 = {x1, x2, x3}, S2 = {x4, x5}, S3 = {x6, x7, x8, x9} and S4 = {x10, x11, x12}, find the centres for an RBF neural network implementing the classifier. Gaussian function with σ = 2ρ_avg is used in all hidden units and linear function is used in all output units. Based on the found centres, determine the output weights of the RBF neural network using the least squares method. Draw the diagram of the designed RBF neural network and show the weights.
e. Which of the two RBF neural networks performs better? Justify your answer.
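A minimal NumPy sketch for Q7.c–d under stated assumptions: Gaussian hidden units of the form φ(x) = exp(−(x − c)²/(2σ²)), an output bias included in the least-squares fit, each cluster centre taken as the mean of its cluster, and ρ_avg read as the average distance between the centres. Swap in the course's own conventions where they differ.

```python
import numpy as np
from itertools import combinations

# Data from Table 3.
x = np.array([0.05, 0.20, 0.25, 0.30, 0.40, 0.43, 0.48,
              0.60, 0.70, 0.80, 0.90, 0.95])
g = np.array([0.0863, 0.2662, 0.2362, 0.1687, 0.1260, 0.1756, 0.3290,
              0.6694, 0.4573, 0.3320, 0.4063, 0.3535])

def fit_rbf(centres, sigma, with_bias=True):
    """Least-squares output weights for Gaussian units phi = exp(-(x-c)^2 / (2 sigma^2))."""
    Phi = np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2.0 * sigma ** 2))
    if with_bias:
        Phi = np.hstack([Phi, np.ones((len(x), 1))])
    w, *_ = np.linalg.lstsq(Phi, g, rcond=None)
    return w

# Q7.c: given centres c1, c2, c3 and sigma = 0.1.
print(fit_rbf(np.array([0.2, 0.6, 0.9]), sigma=0.1))

# Q7.d: centres taken as the means of the given clusters S1..S4.
clusters = [x[0:3], x[3:5], x[5:9], x[9:12]]
centres_d = np.array([c.mean() for c in clusters])
rho_avg = np.mean([abs(a - b) for a, b in combinations(centres_d, 2)])  # assumed reading of rho_avg
print(fit_rbf(centres_d, sigma=2.0 * rho_avg))
```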
Q8. A nonlinear classifier with multiple inputs and a single output is implemented by a feedforward neural network. All inputs are in the range of −1 to 1. Propose a method to replace the feedforward neural network with an RBF neural network.
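One possible reading of Q8, sketched below under stated assumptions: sample the input range, evaluate the trained feedforward network to obtain target outputs, pick RBF centres from the samples (here a random subset; k-means clustering is another option), and fit the RBF output weights by least squares. The function feedforward_net is a hypothetical stand-in for the trained network being replaced.

```python
import numpy as np

def feedforward_net(X):
    """Placeholder for the trained feedforward classifier being replaced (d inputs, 1 output)."""
    return np.tanh(X.sum(axis=1))          # hypothetical stand-in

d = 2                                      # number of inputs (example value)
rng = np.random.default_rng(0)

# 1. Sample the input space [-1, 1]^d and record the feedforward network's outputs.
X = rng.uniform(-1.0, 1.0, size=(2000, d))
t = feedforward_net(X)

# 2. Choose RBF centres from the samples (random subset here; k-means is an alternative).
centres = X[rng.choice(len(X), size=20, replace=False)]
sigma = 0.5                                # spread chosen relative to the input range

# 3. Fit the RBF output weights (with bias) by least squares.
d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
Phi = np.hstack([np.exp(-d2 / (2 * sigma**2)), np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Phi, t, rcond=None)

# The RBF network (centres, sigma, w) now approximates the original classifier on [-1, 1]^d.
```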