
COMP9444 Neural Networks and Deep Learning Quiz 4 (Image Processing)
This is an optional quiz to test your understanding of the Image Processing topic from Week 4.
1. List five different Image Processing tasks.
image classification, object detection, object segmentation, style transfer, generating images, generating art, image captioning
2. Explain the concept of Data Augmentation, and how it was used in AlexNet.
Data Augmentation is the generation of additional training items from those originally provided, using domain knowledge. In AlexNet, each original image was randomly cropped in different ways to create images of size 224 × 224. Images can also be reflected left-to-right, and changes can be made to the RGB channels of the images.
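As an illustrative sketch (not part of the quiz answer), this style of augmentation could be written with torchvision; note that ColorJitter only approximates AlexNet's PCA-based RGB perturbation, and the parameter values here are assumptions.

```python
# Sketch of AlexNet-style data augmentation using torchvision.
# ColorJitter only approximates AlexNet's PCA-based RGB perturbation,
# and the jitter strengths below are illustrative assumptions.
from torchvision import transforms

augment = transforms.Compose([
    transforms.Resize(256),             # rescale the shorter side to 256
    transforms.RandomCrop(224),         # random 224 x 224 crop, as in AlexNet
    transforms.RandomHorizontalFlip(),  # reflect left-to-right with probability 0.5
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.ToTensor(),
])
# Applying `augment` to the same image repeatedly yields different training
# items, multiplying the effective size of the training set.
```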
3. Explain the problem of vanishing and exploding gradients, and how Weight Initialization can help to prevent it.
The differentials in a deep neural network tend to grow according to this equation:

Var[∂/∂x_k] ≈ ( Π_{i=1}^{D} G1 n_out^(i) Var[w^(i)] ) Var[∂/∂z]

where w^(i) are the weights at layer i, n_out^(i) is the number of weights fanning out from each node in layer i, and G1 estimates the average value of the derivative of the transfer function. If the weights are initialized so that the factor in parentheses corresponding to each layer is approximately 1, then the differentials will remain in a healthy range. Otherwise, they may either grow or vanish exponentially.
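As a minimal sketch of this idea (assuming a transfer function with G1 ≈ 1, e.g. tanh operating near zero), choosing Var[w^(i)] = 1/n_out^(i) makes each layer's factor in the product approximately 1:

```python
# Minimal sketch: initialize weights so that G1 * n_out * Var[w] ~ 1
# (assuming G1 ~ 1, as for tanh units operating near zero).
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # Var[w] = 1 / n_out keeps each backward factor near 1; Xavier/Glorot
    # initialization instead uses Var[w] = 2 / (n_in + n_out) to balance
    # the forward and backward passes.
    return rng.normal(0.0, np.sqrt(1.0 / n_out), size=(n_out, n_in))

weights = [init_layer(784, 256), init_layer(256, 256), init_layer(256, 10)]
```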
4. Describe the Batch Normalization algorithm.
The mean and variance of the activations x_k^(i) at layer i are estimated or pre-computed over a batch of training items, and the activations are normalized for each node k:

x̂_k^(i) = ( x_k^(i) − Mean[x_k^(i)] ) / sqrt(Var[x_k^(i)])

These activations are then shifted and rescaled by

y_k^(i) = β_k^(i) + γ_k^(i) x̂_k^(i)
where β_k^(i), γ_k^(i) are additional parameters to be learned by backpropagation.
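A minimal NumPy sketch of the forward pass in training mode (eps is the usual small constant added for numerical stability):

```python
import numpy as np

def batch_norm_forward(x, beta, gamma, eps=1e-5):
    """x: activations, shape (batch, nodes); beta, gamma: shape (nodes,)."""
    mean = x.mean(axis=0)                    # Mean[x_k] for each node k, over the batch
    var = x.var(axis=0)                      # Var[x_k] for each node k, over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalize the activations per node
    return beta + gamma * x_hat              # shift and rescale: y = beta + gamma * x_hat

# usage: y = batch_norm_forward(x, beta=np.zeros(64), gamma=np.ones(64))
```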
5. Explain the difference between a Residual Network and a Dense Network.
A Residual Network includes “skip” connections which bypass each pair of consecutive layers. These intermediate layers therefore compute a residual component, which is added to the output from previous layers and corrects their errors, or provides additional details which they were not powerful enough to compute.
A Dense Network is built from densely connected blocks, separated by convolution and pooling layers. Within a dense block, each layer is connected by shortcut connections to all preceding layers.
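For illustration, a minimal PyTorch sketch contrasting the two kinds of shortcut; the channel counts and kernel sizes are assumptions, not the exact ResNet/DenseNet architectures:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Two conv layers whose output is ADDED to the skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        r = self.conv2(F.relu(self.conv1(x)))  # the residual component
        return F.relu(x + r)                   # skip connection: addition

class DenseLayer(nn.Module):
    """Each layer's output is CONCATENATED with all preceding feature maps."""
    def __init__(self, in_channels, growth):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, growth, 3, padding=1)

    def forward(self, x):
        return torch.cat([x, F.relu(self.conv(x))], dim=1)  # shortcut: concatenation
```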
6. What is the formula for the (i,j)th entry in the Gram matrix at level l of a convolutional neural network? (remember to define any terms that you use)
G_ij^l = Σ_k F_ik^l F_jk^l, where F_ik^l is the ith filter at depth l in spatial location k.
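In code, flattening the spatial locations turns the Gram matrix into a single matrix product (the feature-map shape here is an assumed example):

```python
import numpy as np

def gram_matrix(F):
    """F: feature maps at layer l, shape (num_filters, height, width)."""
    F = F.reshape(F.shape[0], -1)  # F[i, k]: filter i at spatial location k
    return F @ F.T                 # G[i, j] = sum_k F[i, k] * F[j, k]

G = gram_matrix(np.random.rand(64, 28, 28))  # 64 x 64 Gram matrix
```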
7. Explain the difference between Texture Synthesis and Style Transfer (both in their purpose, and their cost function).
Texture Synthesis aims to produce an image which matches the texture of a given (perhaps smaller) image. Its cost function is

Estyle = (1/4) Σ_{l=0}^{L} ( w_l / (N_l² M_l²) ) Σ_{i,j} ( G_ij^l − A_ij^l )²

where:
w_l is a weighting factor for each layer,
N_l, M_l are the number of features, and the size of the feature maps, in layer l,
G^l, A^l are the Gram matrices of the original and synthetic image.
Neural Style Transfer aims to combine the content of one image (c) with the style of another, to produce a new image x. Its cost function is
E = αEcontent + βEstyle,
where Econtent = (1/2) Σ_{i,k} || F_ik^l(x) − F_ik^l(c) ||²
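A NumPy sketch of both cost terms, assuming each layer's feature maps have been flattened to shape (N_l, M_l):

```python
import numpy as np

def gram(F):
    """F: feature maps of one layer, flattened to shape (N_filters, M_locations)."""
    return F @ F.T

def content_cost(F_x, F_c):
    # Econtent = (1/2) * sum_{i,k} (F_ik(x) - F_ik(c))^2, at one chosen layer
    return 0.5 * np.sum((F_x - F_c) ** 2)

def style_cost(Fs_x, Fs_a, w):
    """Fs_x, Fs_a: per-layer feature maps of the synthetic and original image;
    w: per-layer weighting factors w_l."""
    E = 0.0
    for F_x, F_a, w_l in zip(Fs_x, Fs_a, w):
        N, M = F_x.shape  # N_l features, feature maps of size M_l
        E += w_l * np.sum((gram(F_x) - gram(F_a)) ** 2) / (N ** 2 * M ** 2)
    return E / 4.0

# Style Transfer combines both: E = alpha * content_cost(...) + beta * style_cost(...)
```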