
Tutorial Questions | Week 3
COSC2779 – Deep Learning

This tutorial is aimed at reviewing feed-forward neural networks. Please try the questions before
you join the session.

1. For a fully-connected deep network with one hidden layer, what effect should increasing the
number of hidden units have on bias and variance?

2. Consider the following one-hidden-layer network.

[Figure: inputs x1 and x2 feed a single hidden unit a^[1] through weights w_1^(1) and w_2^(1); the hidden unit feeds the output ŷ through weight w_1^(2).]

a^{[1]} = \sigma_1\left( w^{(1)}_1 x_1 + w^{(1)}_2 x_2 + b^{(1)} \right)

\hat{y} = \sigma_2\left( w^{(2)}_1 a^{[1]} + b^{(2)} \right)

Show that if σ1 is linear, the network can be represented by a one-layer perceptron.
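One way to start (a sketch, not the full answer; the constants c and d are introduced here for illustration, writing the linear activation as σ1(z) = cz + d):

```latex
% Sketch for question 2: substitute a linear sigma_1(z) = c z + d
% into the output expression and collect terms.
\hat{y} = \sigma_2\!\left( w^{(2)}_1 \, \sigma_1\!\left( w^{(1)}_1 x_1 + w^{(1)}_2 x_2 + b^{(1)} \right) + b^{(2)} \right)
        = \sigma_2\!\left( \tilde{w}_1 x_1 + \tilde{w}_2 x_2 + \tilde{b} \right),
\quad \text{where } \tilde{w}_i = c\, w^{(2)}_1 w^{(1)}_i,
\quad \tilde{b} = w^{(2)}_1 \left( c\, b^{(1)} + d \right) + b^{(2)}.
```

The last expression has exactly the form of a one-layer perceptron in x1 and x2 with output activation σ2.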

3. You want to map every possible image of size 64 × 64 to a binary category (cat or non-cat). Each image
has 3 channels, and each pixel in each channel can take an integer value between 0 and 255 (inclusive).
How many bits do you need to represent this mapping?
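A quick sanity check for the counting (a sketch; it only tabulates the sizes, the reasoning is yours to fill in):

```python
# Sketch for question 3: count the bits needed to tabulate a binary
# label for every possible 64 x 64 x 3 image with 8-bit channels.
pixels = 64 * 64 * 3            # 12288 pixel values per image
bits_per_pixel = 8              # integers 0..255

# Number of distinct images: 256 ** 12288 = 2 ** (8 * 12288).
log2_num_images = bits_per_pixel * pixels

# One bit (cat / non-cat) per possible image.
print(f"distinct images: 2**{log2_num_images}")     # 2**98304
print(f"bits for the mapping: 2**{log2_num_images}")
```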

4. The mapping from question (3) clearly cannot be stored in memory. Instead, you will build a classifier to
do this mapping. You will use a single hidden layer of size 100 for this task. Each weight in the two weight
matrices can be represented in memory using a 64-bit float. How many bits do you need to store
your two-layer neural network?
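A minimal sketch of the count, assuming (as the question suggests) that only the two weight matrices are stored, with a single output unit and no bias terms:

```python
# Sketch for question 4: bits to store the two weight matrices of a
# 12288 -> 100 -> 1 network, at 64 bits per float. Bias terms are
# left out because the question counts only the weight matrices.
input_dim = 64 * 64 * 3         # flattened 64 x 64 x 3 image
hidden_dim = 100
output_dim = 1                  # single cat / non-cat output

w1 = input_dim * hidden_dim     # first weight matrix: 1,228,800 floats
w2 = hidden_dim * output_dim    # second weight matrix: 100 floats

print((w1 + w2) * 64)           # 78,649,600 bits
```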

5. One of the difficulties with the logistic activation function is saturated units. Briefly explain the
problem, and state whether switching to tanh fixes it. Recall:

\sigma(z) = \frac{1}{1 + \exp(-z)}

\tanh(z) = \frac{\exp(z) - \exp(-z)}{\exp(z) + \exp(-z)}
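To see saturation numerically, a minimal sketch (using the standard identities σ′(z) = σ(z)(1 − σ(z)) and tanh′(z) = 1 − tanh²(z)):

```python
import numpy as np

# Sketch for question 5: the gradient of both activations vanishes
# for large |z| (saturation). tanh is zero-centred, but its gradient
# dies off in the same way for large |z|.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for z in [0.0, 2.0, 5.0, 10.0]:
    d_sigmoid = sigmoid(z) * (1.0 - sigmoid(z))   # sigma'(z)
    d_tanh = 1.0 - np.tanh(z) ** 2                # tanh'(z)
    print(f"z={z:5.1f}  sigma'={d_sigmoid:.2e}  tanh'={d_tanh:.2e}")
```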

6. You are asked to develop a neural network to identify whether a given image contains a cat and/or a dog. Note that some
images may contain both a cat and a dog. What would be a possible output activation and loss function?
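One common setup (a sketch, one of several valid choices; the logits and labels below are made up for illustration): because the two labels are not mutually exclusive, treat this as multi-label classification with independent sigmoid outputs and a per-label binary cross-entropy loss.

```python
import numpy as np

# Sketch for question 6: two independent sigmoid outputs (cat, dog)
# with binary cross-entropy summed over the labels.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_cross_entropy(y_true, y_prob, eps=1e-12):
    y_prob = np.clip(y_prob, eps, 1.0 - eps)
    return -np.sum(y_true * np.log(y_prob) + (1.0 - y_true) * np.log(1.0 - y_prob))

logits = np.array([2.0, -1.0])   # hypothetical network outputs for (cat, dog)
y_true = np.array([1.0, 1.0])    # the image contains both a cat and a dog
probs = sigmoid(logits)          # independent probabilities, not a softmax
print(probs, binary_cross_entropy(y_true, probs))
```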