
Neural Networks
1. Give brief definitions of the following terms:
• action potential
• firing rate


• an artificial neural network
2. A neuron has a transfer function that is a linear weighted sum of its inputs and an activation function that is the Heaviside function. If the weights are w = [0.1, −0.5, 0.4] and the threshold is zero, what is the output of this neuron when the input is x1 = [0.1, −0.5, 0.4]^t and when it is x2 = [0.1, 0.5, 0.4]^t?
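As a numerical check of question 2, here is a minimal sketch, assuming the convention that the Heaviside output is 1 when the weighted sum meets or exceeds the threshold:

```python
# Linear weighted sum followed by a Heaviside activation.
# Convention assumed here: output 1 when w.x >= theta, else 0.
def heaviside_unit(w, x, theta=0.0):
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= theta else 0

w = [0.1, -0.5, 0.4]
x1 = [0.1, -0.5, 0.4]
x2 = [0.1, 0.5, 0.4]
print(heaviside_unit(w, x1))   # w.x1 = 0.42  -> 1
print(heaviside_unit(w, x2))   # w.x2 = -0.08 -> 0
```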
3. A Linear Threshold Unit has one input, x1, that can take binary values. Apply the sequential Delta learning rule so that the output of this neuron, y, is equal to x̄1 (i.e., NOT(x1)), such that:
x1  y
0   1
1   0
Assume initial values of θ = 1.5 and w1 = 2, and use a learning rate of 1.
4. Repeat the above question using the batch Delta learning rule.
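One possible walk-through of question 3 in code, using the augmented notation a = (−θ, w1) with input (1, x1), and assuming the convention H(s) = 1 for s ≥ 0 (the batch variant of question 4 would instead accumulate all updates over an epoch before applying them):

```python
# Sequential Delta rule for y = NOT(x1), augmented notation a = (-theta, w1).
def heaviside(s):
    return 1 if s >= 0 else 0

data = [((0,), 1), ((1,), 0)]          # truth table for NOT
a = [-1.5, 2.0]                        # (-theta, w1) with theta = 1.5, w1 = 2
eta = 1.0

for epoch in range(20):
    changed = False
    for x, t in data:
        xa = (1.0,) + x                # augment input with the bias term 1
        y = heaviside(sum(ai * xi for ai, xi in zip(a, xa)))
        if y != t:
            a = [ai + eta * (t - y) * xi for ai, xi in zip(a, xa)]
            changed = True
    if not changed:                    # a full pass with no errors: converged
        break

print(a)                               # -> [0.5, -1.0], i.e. theta = -0.5, w1 = -1
```

The exact weights you converge to depend on the update order and the tie-breaking convention for H(0), so a hand-worked answer may legitimately differ.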
5. A Linear Threshold Unit has two inputs, x1 and x2, that can take binary values. Apply the sequential Delta learning rule so that the output of this neuron, y, is equal to x1 AND x2, i.e., such that:
x1  x2  y
0   0   0
0   1   0
1   0   0
1   1   1
Assume initial values of θ = −0.5, w1 = 1 and w2 = 1, and use a learning rate of 1.
6. Consider the following linearly separable data set.
x         class
(0, 2)    1
(1, 2)    1
(2, 1)    1
(−3, 1)   0
(−2, −1)  0
(−3, −2)  0

Apply the Sequential Delta Learning Algorithm to find the parameters of a linear threshold neuron that will correctly classify this data. Assume initial values of θ = −1, w1 = 0 and w2 = 0, and a learning rate of 1.
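A generic sketch of the sequential Delta learning algorithm, run here on the data of question 6 (the same routine also handles question 5 if you swap in the AND truth table and its initial weights):

```python
def heaviside(s):
    return 1 if s >= 0 else 0

def sequential_delta(data, a, eta=1.0, max_epochs=100):
    """Online Delta rule in augmented notation a = (-theta, w1, w2, ...)."""
    for _ in range(max_epochs):
        changed = False
        for x, t in data:
            xa = [1.0] + list(x)                 # bias input fixed at 1
            y = heaviside(sum(ai * xi for ai, xi in zip(a, xa)))
            if y != t:
                a = [ai + eta * (t - y) * xi for ai, xi in zip(a, xa)]
                changed = True
        if not changed:                          # full error-free pass
            break
    return a

data = [((0, 2), 1), ((1, 2), 1), ((2, 1), 1),
        ((-3, 1), 0), ((-2, -1), 0), ((-3, -2), 0)]
a = sequential_delta(data, a=[1.0, 0.0, 0.0])    # theta = -1, w1 = w2 = 0
print(a)                                         # learned (-theta, w1, w2)
```

Because the data is linearly separable, the loop is guaranteed to terminate with parameters that classify every point correctly; the particular values depend on the presentation order.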
7. A negative feedback network has three inputs and two output neurons, connected with weights

W = [ 1  1  0 ]
    [ 1  1  1 ]

Determine the activation of the output neurons after 5 iterations when the input is x = (1, 1, 0)^T, assuming that the output neurons are updated using parameter α = 0.25, and that the activations of the output neurons are initialised to zero.
8. Repeat the previous question using a value of α = 0.5.
9. A more stable method of calculating the activations in a negative feedback network is to use the following update rules:
e = x ⊘ [W^T y]_{ε2}
y ← [y]_{ε1} ⊙ (W̃ e)

where [v]_ε = max(ε, v), applied element-wise; ε1 and ε2 are parameters; W̃ is equal to W but with each row normalised to sum to one; and ⊘ and ⊙ indicate element-wise division and multiplication respectively. This is called Regulatory Feedback or Divisive Input Modulation.
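The update rules above translate directly into code; this sketch uses the ε values from the question below:

```python
def clip(v, eps):
    return [max(eps, vi) for vi in v]          # [v]_eps = max(eps, v)

def dim(W, x, eps1=0.01, eps2=0.01, iters=5):
    m = len(W)
    # W~ : W with each row normalised to sum to one
    Wn = [[wij / sum(row) for wij in row] for row in W]
    y = [0.0] * m                              # activations initialised to zero
    for _ in range(iters):
        wty = [sum(W[i][j] * y[i] for i in range(m)) for j in range(len(x))]
        # e = x (/) [W^T y]_eps2   (element-wise division)
        e = [xj / dj for xj, dj in zip(x, clip(wty, eps2))]
        # y <- [y]_eps1 (*) (W~ e)  (element-wise multiplication)
        y = [yi * sum(wij * ej for wij, ej in zip(Wi, e))
             for yi, Wi in zip(clip(y, eps1), Wn)]
    return y

W = [[1, 1, 0],
     [1, 1, 1]]
x = [1, 1, 0]
y5 = dim(W, x)
print(y5)
```

Note how the clipping at ε1 lets the multiplicative update escape the all-zero initialisation, which would otherwise be a fixed point.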
Determine the activation of the output neurons after 5 iterations when the input is x = (1, 1, 0)^T and

W = [ 1  1  0 ]
    [ 1  1  1 ]

assuming that ε1 = ε2 = 0.01, and the activations of the output neurons are initialised to zero.
10. The figure below shows an autoencoder neural network.
Draw a diagram of a de-noising autoencoder and briefly explain how a de-noising autoencoder is trained.
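Although question 10 asks for a diagram and a written explanation, the training procedure itself can be sketched in code. This is a minimal sketch with hypothetical layer sizes, noise level, and learning rate: each input is corrupted (here by randomly zeroing components), the network reconstructs it, and the loss is measured against the clean input:

```python
import numpy as np

# Minimal de-noising autoencoder sketch (hypothetical sizes, noise level,
# and learning rate). Training pairs are (corrupted input, clean target).
rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy clean data: two 8-dimensional prototype patterns, repeated.
X = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1, 1, 1]] * 25, dtype=float)

n_in, n_hidden, lr = 8, 4, 0.5
W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))    # encoder weights
W2 = rng.normal(0.0, 0.1, (n_in, n_hidden))    # decoder weights

losses = []
for epoch in range(200):
    total = 0.0
    for x in X:
        # Corrupt the input: zero out each component with probability 0.3.
        x_noisy = x * (rng.random(n_in) > 0.3)
        h = sigmoid(W1 @ x_noisy)              # encode the NOISY input
        x_hat = sigmoid(W2 @ h)                # reconstruction
        err = x_hat - x                        # error against the CLEAN input
        total += float(err @ err)
        # Backpropagation through the two sigmoid layers.
        delta_out = err * x_hat * (1.0 - x_hat)
        delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
        W2 -= lr * np.outer(delta_out, h)
        W1 -= lr * np.outer(delta_hid, x_noisy)
    losses.append(total / len(X))

print(losses[0], losses[-1])   # reconstruction error should fall
```

The key point for the written answer is the asymmetry: noise is applied only to the encoder's input, while the loss always targets the uncorrupted data, forcing the hidden code to capture structure that survives the corruption.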
