Deep Learning

Basic concepts of machine learning

representation learning
finds a good feature space automatically instead of relying on hand-crafted features

to achieve high performance even with a small dataset
some input configurations never occur in practice (the data does not fill the whole input space)

manifold assumption: the data lies on a lower-dimensional manifold and changes smoothly according to certain rules

underfitting
the model capacity is too small to fit the data

remedy: use a model with higher capacity (e.g., higher-order terms)

over-fitting

the training process also fits the noise in the data

resulting in low generalization ability

Types of supervision

supervised learning: both the features and the outputs are given

unsupervised learning: the features are given, but the outputs are not

semi-supervised learning
some data have both features and outputs, but the rest have only features

in some cases it is easy to obtain the features, but obtaining the outputs requires manual labeling

reinforcement learning
the agent performs actions and receives a reward when an action is correct

e.g. AlphaGo

Deep Learning

end-to-end training

learning features/representations

learning multi-level features and the output mapping jointly, which goes beyond representation learning

neural network based model
a series of linear classifiers, non-linear activations, and a loss function

cascading the neurons forms a neural network
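
A minimal NumPy sketch of this cascade, assuming illustrative layer sizes and ReLU as the non-linearity:

```python
import numpy as np

def relu(x):
    # non-linear activation applied element-wise
    return np.maximum(0, x)

# illustrative sizes: 4 input features, 8 hidden units, 3 classes
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # first linear layer
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)   # second linear layer

x = rng.normal(size=4)          # one input sample
h = relu(W1 @ x + b1)           # linear map + non-linearity
scores = W2 @ h + b2            # cascaded second linear map -> class scores
print(scores.shape)             # (3,)
```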

Loss function

Hinge loss: for classification
Li = sum over j≠yi of max(0, sj − syi + 1), where sj is the score of a false label and syi is the score of the true label

cross-entropy loss: for classification

log likelihood loss

regression loss: for continuous output
yi is the label and si is the prediction

L = sum(Li)/N (the loss is averaged over the N samples)
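
A minimal NumPy sketch of the multiclass hinge loss above, with made-up scores and labels:

```python
import numpy as np

def hinge_loss(scores, y):
    # scores: (N, C) class scores, y: (N,) true-label indices
    N = scores.shape[0]
    s_yi = scores[np.arange(N), y][:, None]        # score of the true label
    margins = np.maximum(0, scores - s_yi + 1)     # max(0, sj - syi + 1)
    margins[np.arange(N), y] = 0                   # skip j == yi
    return margins.sum() / N                       # L = sum(Li) / N

scores = np.array([[3.2, 5.1, -1.7],               # illustrative values
                   [1.3, 4.9, 2.0]])
y = np.array([0, 1])
print(hinge_loss(scores, y))                       # 1.45
```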

Activation Function

softmax: compute exp(output) for each output, then normalize so the outputs sum to 1
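
A minimal NumPy sketch of softmax (the max is subtracted only for numerical stability):

```python
import numpy as np

def softmax(scores):
    # subtract the max for numerical stability (does not change the result)
    e = np.exp(scores - np.max(scores))
    return e / e.sum()              # normalize so the outputs sum to 1

print(softmax(np.array([2.0, 1.0, 0.1])))  # ~[0.659 0.242 0.099]
```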

Regularization: penalizes model complexity to avoid overfitting
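
One common choice is an L2 penalty on the weights; a minimal sketch, with an illustrative regularization strength:

```python
import numpy as np

lam = 1e-4                          # regularization strength (illustrative)
W = np.random.randn(10, 10)
data_loss = 0.5                     # placeholder for the data term
reg_loss = lam * np.sum(W * W)      # L2 penalty on the weights
total_loss = data_loss + reg_loss   # L = L_data + lambda * ||W||^2
```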

gradient descent

backpropagation
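
A minimal sketch of both ideas on a single linear layer with squared-error loss; the data, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=4), 1.0      # one sample, scalar target
w, lr = rng.normal(size=4), 0.1     # weights, learning rate

for _ in range(100):
    pred = w @ x                    # forward pass
    loss = (pred - y) ** 2          # squared-error loss
    grad = 2 * (pred - y) * x       # backpropagation: dloss/dw
    w -= lr * grad                  # gradient descent update
print(loss)                         # close to 0 after training
```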

Receptive field: the region of the input space that affects a particular unit of the network

convolutional filter size

three 3*3 Conv layers and a single 7*7 Conv layer have the same receptive field

but stacking small filters (with non-linearities in between) produces more expressive activation maps, which is better

and the large filter size has more parameters
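
A quick parameter count (biases omitted, C input/output channels assumed) shows the gap:

```python
C = 64                              # channels in and out (illustrative)
three_3x3 = 3 * (3 * 3 * C * C)     # three stacked 3*3 conv layers: 27*C^2
one_7x7 = 7 * 7 * C * C             # one 7*7 conv layer: 49*C^2
print(three_3x3, one_7x7)           # 110592 vs 200704
```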

spatial dimension: (size of input − size of kernel + 2*padding)/stride + 1
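
A minimal helper implementing this formula, checked on an illustrative 100*100 input:

```python
def conv_output_size(w, k, p, s):
    # (input - kernel + 2*padding) / stride + 1
    return (w - k + 2 * p) // s + 1

# e.g. 100x100 input, 3x3 kernel, padding 1, stride 1 -> size is preserved
print(conv_output_size(100, 3, 1, 1))  # 100
```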

1*1 Conv can reduce the depth (channel) dimension
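
A minimal PyTorch sketch, assuming illustrative channel sizes (256 in, 64 out):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 256, 56, 56)                 # N x C x H x W feature map
bottleneck = nn.Conv2d(256, 64, kernel_size=1)  # 1x1 conv acts across channels
y = bottleneck(x)
print(y.shape)                                  # torch.Size([1, 64, 56, 56])
```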

case studies

small filters, deeper network

Inception module
Naive Inception module: the total depth after concatenation can only grow at every layer

Improved Inception module: bottleneck layers use 1*1 convolutions to reduce the feature depth

Residual block
plain very deep networks are difficult to optimize; the skip connection (output = F(x) + x) eases optimization
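
A minimal PyTorch sketch of a residual block (batch norm omitted, channel count illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    # a minimal residual block: output = F(x) + x
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return F.relu(out + x)      # skip connection: add the input back

block = ResidualBlock(64)
y = block(torch.randn(1, 64, 32, 32))
print(y.shape)                      # torch.Size([1, 64, 32, 32])
```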

AlexNet: the first CNN-based ImageNet winner; uses large Conv filters (its first layer is 11*11)

VGG: uses three 3*3 Conv layers instead of one large filter

batch size: the number of input images/data samples processed in one training iteration

the derivative of a sigmoid output s with respect to its input is (1-s)*s
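
A quick finite-difference check of this derivative in NumPy (the evaluation point is arbitrary):

```python
import numpy as np

sigmoid = lambda x: 1 / (1 + np.exp(-x))
x, eps = 0.5, 1e-6
s = sigmoid(x)
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)  # finite difference
analytic = s * (1 - s)                                       # (1-s)*s formula
print(numeric, analytic)                                     # both ~0.2350
```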

Convolution layer as matrix multiplication (im2col): for a 100*100*3 input and 32 kernels of size 3*3*3, reshape the input patches to 27*10000 and the kernels to 32*27
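
A shape-level NumPy sketch of this im2col view, assuming stride 1 and padding 1 so the spatial size stays 100*100:

```python
import numpy as np

x = np.random.randn(100, 100, 3)              # H x W x C input
xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))      # padding 1 keeps output 100x100
kernels = np.random.randn(32, 27)             # 32 kernels, each 3*3*3 = 27

# im2col: every 3x3x3 patch becomes one 27-element column
cols = np.empty((27, 100 * 100))
for i in range(100):
    for j in range(100):
        cols[:, i * 100 + j] = xp[i:i+3, j:j+3, :].ravel()

out = kernels @ cols                          # (32, 27) @ (27, 10000)
print(out.shape)                              # (32, 10000) -> 32 feature maps
```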

ways to reduce overfitting: regularization / training on a larger dataset / Dropout
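
A minimal sketch of inverted dropout at training time (the keep probability is an illustrative assumption):

```python
import numpy as np

def dropout(h, p_keep=0.5):
    # randomly zero units and rescale, so no change is needed at test time
    mask = (np.random.rand(*h.shape) < p_keep) / p_keep
    return h * mask

h = np.random.randn(4, 8)    # hidden activations
print(dropout(h))
```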
