$\newcommand{\vct}[1]{\boldsymbol{#1}} \newcommand{\mtx}[1]{\mathbf{#1}} \newcommand{\tr}{^\mathrm{T}} \newcommand{\reals}{\mathbb{R}} \newcommand{\lpa}{\left(} \newcommand{\rpa}{\right)} \newcommand{\lsb}{\left[} \newcommand{\rsb}{\right]} \newcommand{\lbr}{\left\lbrace} \newcommand{\rbr}{\right\rbrace} \newcommand{\fset}[1]{\lbr #1 \rbr} \newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}}$
Multiple layer models¶
In this notebook we will explore network models with multiple layers of transformations. This will build upon the single-layer affine model we looked at in the previous notebook and use material covered in the second and third lectures.
You will need to use these models for the experiments you will be running in the first coursework, so part of the aim of this lab is to get you familiar with how to construct multiple-layer models in our framework and how to train them.
What is a layer?¶
Often when discussing (neural) network models, a network layer is taken to mean an input to output transformation of the form
\begin{equation} \vct{y} = \vct{f}(\mtx{W} \vct{x} + \vct{b}) \qquad \Leftrightarrow \qquad y_k = f\lpa\sum_{d=1}^D \lpa W_{kd} x_d \rpa + b_k \rpa \end{equation}
where $\mtx{W}$ and $\vct{b}$ parameterise an affine transformation as discussed in the previous notebook, and $f$ is a function applied elementwise to the result of the affine transformation. For example a common choice for $f$ is the logistic sigmoid function \begin{equation} f(u) = \frac{1}{1 + \exp(-u)}. \end{equation}
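As a quick concrete illustration of this transformation (a standalone NumPy sketch with arbitrarily chosen weights, biases and inputs, not part of the mlp framework), the following computes the layer outputs for a batch of input vectors:

import numpy as np

def affine_sigmoid(inputs, weights, biases):
    # inputs: (batch_size, D), weights: (K, D), biases: (K,)
    affine_outputs = inputs.dot(weights.T) + biases  # W x + b for each input in the batch
    return 1. / (1. + np.exp(-affine_outputs))       # elementwise logistic sigmoid f

rng = np.random.RandomState(0)
x = rng.normal(size=(3, 4))   # batch of 3 input vectors, D = 4
W = rng.normal(size=(2, 4))   # K = 2 outputs
b = rng.normal(size=2)
print(affine_sigmoid(x, W, b))  # shape (3, 2), all values in (0, 1)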
In the second lecture slides you were shown how to train a model consisting of an affine transformation followed by the elementwise logistic sigmoid using gradient descent. This was referred to as a ‘sigmoid single-layer network’.
In the previous notebook we also referred to single-layer models; there the layer was an affine transformation, and you implemented the various necessary methods for the AffineLayer class before using an instance of that class within a SingleLayerModel on a regression problem. We could in that case consider the function $f$ to simply be the identity function $f(u) = u$. In the code for the labs we will however use a slightly different convention: we will consider the affine transformation and the subsequent elementwise function $f$ to each be a separate transformation layer.
This means we can combine our already implemented AffineLayer class with any non-linear function applied to its outputs just by implementing a layer object for the relevant non-linearity and stacking the two layers together. An alternative would be to have our new layer objects inherit from AffineLayer and call the relevant parent class methods in each child class; however this would mean including a lot of the same boilerplate code in every new class.
To give a concrete example, in the mlp.layers module there is a definition for a SigmoidLayer equivalent to the following (documentation strings have been removed here for brevity)
class SigmoidLayer(Layer):

    def fprop(self, inputs):
        return 1. / (1. + np.exp(-inputs))

    def bprop(self, inputs, outputs, grads_wrt_outputs):
        return grads_wrt_outputs * outputs * (1. - outputs)
As you can see this SigmoidLayer class has a very lightweight definition, defining just two key methods:
• fprop which takes a batch of activations at the input to the layer and forward propagates them to produce activations at the outputs (directly equivalent to the fprop method you implemented for the AffineLayer in the previous notebook),
• bprop which takes a batch of gradients with respect to the outputs of the layer and backward propagates them to calculate gradients with respect to the inputs of the layer (explained in more detail below).
This SigmoidLayer class only implements the logistic sigmoid non-linearity transformation and so does not have any parameters. Therefore unlike AffineLayer it is derived directly from the base Layer class rather than LayerWithParameters and does not need to implement grads_wrt_params or params methods.
To create a model consisting of an affine transformation followed by applying an elementwise logistic sigmoid transformation we first create a list of the two layer objects (in the order they are applied from inputs to outputs) and then use this to instantiate a new MultipleLayerModel object:
from mlp.layers import AffineLayer, SigmoidLayer
from mlp.models import MultipleLayerModel
layers = [AffineLayer(input_dim, output_dim), SigmoidLayer()]
model = MultipleLayerModel(layers)
Because of the modular way in which the layers are defined we can also stack an arbitrarily long sequence of layers together to produce deeper models. For instance the following would define a model consisting of three pairs of affine and logistic sigmoid transformations.
model = MultipleLayerModel([
    AffineLayer(input_dim, hidden_dim), SigmoidLayer(),
    AffineLayer(hidden_dim, hidden_dim), SigmoidLayer(),
    AffineLayer(hidden_dim, output_dim), SigmoidLayer(),
])
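To make the stacking concrete, a forward pass through such a model just applies each layer's fprop in turn, feeding the outputs of one layer in as the inputs to the next. The sketch below is purely illustrative (it is not the actual mlp.models.MultipleLayerModel implementation) and only assumes each layer object has a fprop method as described above:

def forward_pass(layers, inputs):
    # Apply each layer's fprop in sequence, keeping every intermediate set of
    # activations; these stored activations are needed for the backward pass.
    activations = [inputs]
    for layer in layers:
        activations.append(layer.fprop(activations[-1]))
    return activations  # [inputs, layer 1 outputs, ..., final layer outputs]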
Back-propagation of gradients¶
To allow training models consisting of a stack of multiple layers, all layers need to implement a bprop method in addition to the fprop we encountered in the previous week.
The bprop method takes gradients of an error function with respect to the outputs of a layer and uses these gradients to calculate gradients of the error function with respect to the inputs of a layer. As the inputs to a non-input layer in a multiple-layer model consist of the outputs of the previous layer, this means we can calculate the gradients of the error function with respect to the outputs of every layer in the model by iteratively propagating the gradients backwards through the layers of the model (i.e. from the last to first layer), hence the term ‘back-propagation’ or ‘bprop’ for short. A block diagram illustrating this is shown for a three layer model below.
[Block diagram: back-propagation of gradients through a three layer model.]
For a layer with parameters, the gradients with respect to the layer outputs are required to calculate gradients with respect to the layer parameters. Therefore by combining backward propagation of gradients through the model with computing the gradients with respect to parameters in the relevant layers we can calculate gradients of the error function with respect to all of the parameters of a multiple-layer model in a very efficient manner (in fact the computational cost of computing gradients with respect to all of the parameters of the model using this method will only be a constant factor times the cost of calculating the model outputs in the forwards pass).
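The following sketch (again illustrative only, not the framework's actual implementation) shows how such a backward pass might be organised. Starting from the gradients with respect to the model outputs, each layer's bprop converts gradients with respect to its outputs into gradients with respect to its inputs, and layers with parameters additionally compute grads_wrt_params along the way. It assumes the forward_pass sketch above has stored all intermediate activations; the exact signature assumed for grads_wrt_params (taking the layer inputs and the gradients with respect to its outputs) follows exercise 4 of the previous notebook.

def backward_pass(layers, activations, grads_wrt_model_outputs):
    # activations: list returned by forward_pass above.
    # grads_wrt_model_outputs: gradients of the error w.r.t. the final layer outputs.
    grads_wrt_outputs = grads_wrt_model_outputs
    all_grads_wrt_params = [None] * len(layers)
    for i, layer in reversed(list(enumerate(layers))):
        inputs, outputs = activations[i], activations[i + 1]
        # Gradients w.r.t. this layer's parameters, if it has any (layers with
        # parameters provide a grads_wrt_params method) ...
        if hasattr(layer, 'grads_wrt_params'):
            all_grads_wrt_params[i] = layer.grads_wrt_params(inputs, grads_wrt_outputs)
        else:
            all_grads_wrt_params[i] = []
        # ... and gradients w.r.t. its inputs, which are the gradients w.r.t.
        # the outputs of the previous layer in the stack.
        grads_wrt_outputs = layer.bprop(inputs, outputs, grads_wrt_outputs)
    return all_grads_wrt_params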
We so far have abstractly talked about calculating gradients with respect to the inputs of a layer using gradients with respect to the layer outputs. More concretely we will be using the chain rule for derivatives to do this, similarly to how we used the chain rule in exercise 4 of the previous notebook to calculate gradients with respect to the parameters of an affine layer given gradients with respect to the outputs of the layer.
In particular if our layer has a batch of $B$ vector inputs each of dimension $D$, $\fset{\vct{x}^{(b)}}_{b=1}^B$, and produces a batch of $B$ vector outputs each of dimension $K$, $\fset{\vct{y}^{(b)}}_{b=1}^B$, then we can calculate the gradient with respect to the $d^\textrm{th}$ dimension of the $b^{\textrm{th}}$ input given the gradients with respect to the $b^{\textrm{th}}$ output using
\begin{equation} \pd{\bar{E}}{x^{(b)}_d} = \sum_{k=1}^K \lpa \pd{\bar{E}}{y^{(b)}_k} \pd{y^{(b)}_k}{x^{(b)}_d} \rpa. \end{equation}
Mathematically therefore the bprop method takes an array of gradients with respect to the outputs $\pd{\bar{E}}{y^{(b)}_k}$ and applies a sum-product operation with the partial derivatives of each output with respect to each input $\pd{y^{(b)}_k}{x^{(b)}_d}$ to produce gradients with respect to the inputs of the layer $\pd{\bar{E}}{x^{(b)}_d}$.
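A practical way to check that a bprop implementation really does apply this sum-product correctly is to compare it against finite difference estimates. The helper below is a minimal standalone sketch (not part of the provided framework): it uses the simple scalar error $\bar{E} = \sum_{b,k} y^{(b)}_k$, so that the gradients with respect to the outputs are all ones, and compares the result of bprop with central finite difference estimates of the gradients with respect to the inputs:

import numpy as np

def check_layer_bprop(layer, inputs, eps=1e-6, tol=1e-5):
    # Analytic gradients w.r.t. inputs from the layer's bprop method.
    outputs = layer.fprop(inputs)
    grads_wrt_outputs = np.ones_like(outputs)
    analytic_grads = layer.bprop(inputs, outputs, grads_wrt_outputs)
    # Central finite difference estimates of d(sum of outputs)/d(input).
    numeric_grads = np.empty_like(inputs)
    it = np.nditer(inputs, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        orig_val = inputs[idx]
        inputs[idx] = orig_val + eps
        error_plus = layer.fprop(inputs).sum()
        inputs[idx] = orig_val - eps
        error_minus = layer.fprop(inputs).sum()
        inputs[idx] = orig_val
        numeric_grads[idx] = (error_plus - error_minus) / (2. * eps)
        it.iternext()
    return np.allclose(analytic_grads, numeric_grads, atol=tol)

# e.g. check_layer_bprop(SigmoidLayer(), rng.normal(size=(5, 10))) should return True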
For the affine transformation used in the AffineLayer implemented last week, i.e. a forward propagation corresponding to
\begin{equation} y^{(b)}_k = \sum_{d=1}^D \lpa W_{kd} x^{(b)}_d \rpa + b_k \end{equation}
then the corresponding partial derivatives of layer outputs with respect to inputs are
\begin{equation} \pd{y^{(b)}_k}{x^{(b)}_d} = W_{kd} \end{equation}
and so the backwards-propagation method for the AffineLayer takes the following form
\begin{equation} \pd{\bar{E}}{x^{(b)}_d} = \sum_{k=1}^K \lpa \pd{\bar{E}}{y^{(b)}_k} W_{kd} \rpa. \end{equation}
This can be efficiently implemented in NumPy using the dot function
class AffineLayer(LayerWithParameters):

    # ... [implementation of remaining methods from previous week] ...

    def bprop(self, inputs, outputs, grads_wrt_outputs):
        return grads_wrt_outputs.dot(self.weights)
An important special case applies when the outputs of a layer are an elementwise function of the inputs such that $y^{(b)}_k$ only depends on $x^{(b)}_d$ when $d = k$. In this case the partial derivatives $\pd{y^{(b)}_k}{x^{(b)}_d}$ will be zero for $k \neq d$ and so the above summation collapses to a single term, giving
\begin{equation} \pd{\bar{E}}{x^{(b)}_d} = \pd{\bar{E}}{y^{(b)}_d} \pd{y^{(b)}_d}{x^{(b)}_d} \end{equation}
i.e. to calculate the gradient with respect to the $b^{\textrm{th}}$ input vector we just perform an elementwise multiplication of the gradient with respect to the $b^{\textrm{th}}$ output vector with the vector of derivatives of the outputs with respect to the inputs. This case applies to the SigmoidLayer and to all other layers applying an elementwise function to their inputs.
For the logistic sigmoid layer we have that
\begin{equation} y^{(b)}_d = \frac{1}{1 + \exp(-x^{(b)}_d)} \qquad \Rightarrow \qquad \pd{y^{(b)}_d}{x^{(b)}_d} = \frac{\exp(-x^{(b)}_d)}{\lsb 1 + \exp(-x^{(b)}_d) \rsb^2} = y^{(b)}_d \lsb 1 - y^{(b)}_d \rsb \end{equation}
which you should now be able to relate to the implementation of SigmoidLayer.bprop given earlier:
class SigmoidLayer(Layer):

    def fprop(self, inputs):
        return 1. / (1. + np.exp(-inputs))

    def bprop(self, inputs, outputs, grads_wrt_outputs):
        return grads_wrt_outputs * outputs * (1. - outputs)
Exercise 1: training a softmax model on MNIST¶
For this first exercise we will train a model consisting of an affine transformation plus softmax on a multiclass classification task: classifying the digit labels for handwritten digit images from the MNIST data set introduced in the first notebook.
First run the cell below to import the necessary modules and classes and to load the MNIST data provider objects. As it takes a little while to load the MNIST data from disk into memory it is worth loading the data providers just once in a separate cell like this rather than recreating the objects for every training run.
We are loading two data provider objects here – one corresponding to the training data set and a second to use as a validation data set. This is data we do not train the model on but measure the performance of the trained model on to assess its ability to generalise to unseen data.
If you are in the Monday or Tuesday lab sessions you will not yet have had the lecture introducing the concepts of generalisation and validation data sets (though those doing MLPR alongside this course should already be familiar with these ideas). As you will need to report both training and validation set performances in your experiments for the first coursework assignment we are providing code here to give an example of how to do this.
In [1]:
import numpy as np
import matplotlib.pyplot as plt
import logging
from mlp.layers import AffineLayer, SoftmaxLayer, SigmoidLayer
from mlp.errors import CrossEntropyError, CrossEntropySoftmaxError
from mlp.models import SingleLayerModel, MultipleLayerModel
from mlp.initialisers import UniformInit
from mlp.learning_rules import GradientDescentLearningRule
from mlp.data_providers import MNISTDataProvider
from mlp.optimisers import Optimiser
%matplotlib inline
plt.style.use('ggplot')
# Seed a random number generator
seed = 6102016
rng = np.random.RandomState(seed)
# Set up a logger object to print info about the training run to stdout
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.handlers = [logging.StreamHandler()]
# Create data provider objects for the MNIST data set
train_data = MNISTDataProvider('train', rng=rng)
valid_data = MNISTDataProvider('valid', rng=rng)
input_dim, output_dim = 784, 10
To minimise replication of code and allow you to run experiments more quickly, a helper function is provided below which trains a model and plots the evolution of the error and classification accuracy of the model (on both the training and validation sets) over training.
In [6]:
[The definition of the train_model_and_plot_stats helper function is not shown in this export.]
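Since the cell contents are not reproduced here, the sketch below only indicates roughly what such a helper might look like, using the objects imported in the cell above. The way the Optimiser is constructed (in particular the data_monitors dictionary used to record classification accuracy, and the exact train signature) is an assumption for this sketch rather than the course code; the returned (stats, keys, run_time, fig_1, ax_1, fig_2, ax_2) values do, however, match how train_model_and_plot_stats is used in the rest of this notebook.

def train_model_and_plot_stats(
        model, error, learning_rule, train_data, valid_data,
        num_epochs, stats_interval):
    # Monitor classification accuracy as well as the error on both data sets.
    # NOTE: the data_monitors argument and Optimiser.train signature are
    # assumptions in this sketch, not necessarily the provided framework API.
    data_monitors = {'acc': lambda y, t: (y.argmax(-1) == t.argmax(-1)).mean()}
    optimiser = Optimiser(
        model, error, learning_rule, train_data, valid_data, data_monitors)
    stats, keys, run_time = optimiser.train(
        num_epochs=num_epochs, stats_interval=stats_interval)
    # Plot the evolution of the error and accuracy over training for both
    # the training and validation data sets.
    epochs = np.arange(1, stats.shape[0]) * stats_interval
    fig_1 = plt.figure(figsize=(8, 4))
    ax_1 = fig_1.add_subplot(111)
    for key in ['error(train)', 'error(valid)']:
        ax_1.plot(epochs, stats[1:, keys[key]], label=key)
    ax_1.legend(loc=0)
    ax_1.set_xlabel('Epoch number')
    ax_1.set_ylabel('Error')
    fig_2 = plt.figure(figsize=(8, 4))
    ax_2 = fig_2.add_subplot(111)
    for key in ['acc(train)', 'acc(valid)']:
        ax_2.plot(epochs, stats[1:, keys[key]], label=key)
    ax_2.legend(loc=0)
    ax_2.set_xlabel('Epoch number')
    ax_2.set_ylabel('Accuracy')
    return stats, keys, run_time, fig_1, ax_1, fig_2, ax_2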
Running the cell below will create a model consisting of an affine layer followed by a softmax transformation and train it on the MNIST data set by minimising the multi-class cross entropy error function using a basic gradient descent learning rule. The helper function defined above will also plot curves of the evolution of the error function and classification accuracy of the model over the training epochs at the end of training.
You should try running the code for various settings of the training hyperparameters defined at the beginning of the cell to get a feel for how these affect how training proceeds. You may wish to create multiple copies of the cell below to allow you to keep track of and compare the results across different hyperparameter settings.
Varying initialisation scale¶
First try a few different parameter initialisation scales.
init_scale = 0.01¶
In [8]:
# Set training run hyperparameters
batch_size = 100 # number of data points in a batch
init_scale = 0.01 # scale for random parameter initialisation
learning_rate = 0.1 # learning rate for gradient descent
num_epochs = 1 # number of training epochs to perform
stats_interval = 5 # epoch interval between recording and printing stats
# Reset random number generator and data provider states on each run
# to ensure reproducibility of results
rng.seed(seed)
train_data.reset()
valid_data.reset()
# Alter data-provider batch size
train_data.batch_size = batch_size
valid_data.batch_size = batch_size
# Create a parameter initialiser which will sample random uniform values
# from [-init_scale, init_scale]
param_init = UniformInit(-init_scale, init_scale, rng=rng)
# Create affine + softmax model
model = MultipleLayerModel([
AffineLayer(input_dim, output_dim, param_init, param_init),
SoftmaxLayer()
])
# Initialise a cross entropy error object
error = CrossEntropyError()
# Use a basic gradient descent learning rule
learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)
res = train_model_and_plot_stats(
model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)
print(res[0])
print(res[1])
Epoch 0:
error(train)=2.31e+00, acc(train)=1.04e-01, error(valid)=2.31e+00, acc(valid)=9.90e-02, params_penalty=0.00e+00
[[ 2.31074313 0.10362 2.31085411 0.099 0. ]]
{‘acc(train)’: 1, ‘error(valid)’: 2, ‘params_penalty’: 4, ‘acc(valid)’: 3, ‘error(train)’: 0}
[Plots: evolution of the error and classification accuracy over training epochs for the training and validation sets.]
init_scale = 0.1¶
In [4]:
# Set training run hyperparameters
batch_size = 100 # number of data points in a batch
init_scale = 0.1 # scale for random parameter initialisation
learning_rate = 0.1 # learning rate for gradient descent
num_epochs = 100 # number of training epochs to perform
stats_interval = 5 # epoch interval between recording and printing stats
# Reset random number generator and data provider states on each run
# to ensure reproducibility of results
rng.seed(seed)
train_data.reset()
valid_data.reset()
# Alter data-provider batch size
train_data.batch_size = batch_size
valid_data.batch_size = batch_size
# Create a parameter initialiser which will sample random uniform values
# from [-init_scale, init_scale]
param_init = UniformInit(-init_scale, init_scale, rng=rng)
# Create affine + softmax model
model = MultipleLayerModel([
AffineLayer(input_dim, output_dim, param_init, param_init),
SoftmaxLayer()
])
# Initialise a cross entropy error object
error = CrossEntropyError()
# Use a basic gradient descent learning rule
learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)
_ = train_model_and_plot_stats(
model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)
Epoch 5: 0.21s to complete
error(train)=3.11e-01, acc(train)=9.13e-01, error(valid)=2.92e-01, acc(valid)=9.18e-01
Epoch 10: 0.22s to complete
error(train)=2.89e-01, acc(train)=9.20e-01, error(valid)=2.77e-01, acc(valid)=9.23e-01
Epoch 15: 0.22s to complete
error(train)=2.79e-01, acc(train)=9.22e-01, error(valid)=2.70e-01, acc(valid)=9.24e-01
Epoch 20: 0.22s to complete
error(train)=2.72e-01, acc(train)=9.24e-01, error(valid)=2.66e-01, acc(valid)=9.26e-01
Epoch 25: 0.22s to complete
error(train)=2.68e-01, acc(train)=9.25e-01, error(valid)=2.66e-01, acc(valid)=9.26e-01
Epoch 30: 0.22s to complete
error(train)=2.63e-01, acc(train)=9.27e-01, error(valid)=2.62e-01, acc(valid)=9.26e-01
Epoch 35: 0.22s to complete
error(train)=2.60e-01, acc(train)=9.28e-01, error(valid)=2.61e-01, acc(valid)=9.28e-01
Epoch 40: 0.22s to complete
error(train)=2.59e-01, acc(train)=9.28e-01, error(valid)=2.61e-01, acc(valid)=9.28e-01
Epoch 45: 0.22s to complete
error(train)=2.55e-01, acc(train)=9.29e-01, error(valid)=2.59e-01, acc(valid)=9.29e-01
Epoch 50: 0.21s to complete
error(train)=2.54e-01, acc(train)=9.30e-01, error(valid)=2.59e-01, acc(valid)=9.30e-01
Epoch 55: 0.22s to complete
error(train)=2.52e-01, acc(train)=9.29e-01, error(valid)=2.59e-01, acc(valid)=9.30e-01
Epoch 60: 0.22s to complete
error(train)=2.52e-01, acc(train)=9.29e-01, error(valid)=2.60e-01, acc(valid)=9.29e-01
Epoch 65: 0.22s to complete
error(train)=2.50e-01, acc(train)=9.31e-01, error(valid)=2.58e-01, acc(valid)=9.30e-01
Epoch 70: 0.22s to complete
error(train)=2.49e-01, acc(train)=9.31e-01, error(valid)=2.59e-01, acc(valid)=9.31e-01
Epoch 75: 0.22s to complete
error(train)=2.47e-01, acc(train)=9.32e-01, error(valid)=2.58e-01, acc(valid)=9.30e-01
Epoch 80: 0.21s to complete
error(train)=2.46e-01, acc(train)=9.31e-01, error(valid)=2.58e-01, acc(valid)=9.31e-01
Epoch 85: 0.22s to complete
error(train)=2.45e-01, acc(train)=9.32e-01, error(valid)=2.58e-01, acc(valid)=9.31e-01
Epoch 90: 0.23s to complete
error(train)=2.44e-01, acc(train)=9.32e-01, error(valid)=2.58e-01, acc(valid)=9.30e-01
Epoch 95: 0.21s to complete
error(train)=2.44e-01, acc(train)=9.32e-01, error(valid)=2.58e-01, acc(valid)=9.30e-01
Epoch 100: 0.22s to complete
error(train)=2.43e-01, acc(train)=9.33e-01, error(valid)=2.59e-01, acc(valid)=9.29e-01
[Plots: evolution of the error and classification accuracy over training epochs for the training and validation sets.]
init_scale = 0.5¶
In [5]:
# Set training run hyperparameters
batch_size = 100 # number of data points in a batch
init_scale = 0.5 # scale for random parameter initialisation
learning_rate = 0.1 # learning rate for gradient descent
num_epochs = 100 # number of training epochs to perform
stats_interval = 5 # epoch interval between recording and printing stats
# Reset random number generator and data provider states on each run
# to ensure reproducibility of results
rng.seed(seed)
train_data.reset()
valid_data.reset()
# Alter data-provider batch size
train_data.batch_size = batch_size
valid_data.batch_size = batch_size
# Create a parameter initialiser which will sample random uniform values
# from [-init_scale, init_scale]
param_init = UniformInit(-init_scale, init_scale, rng=rng)
# Create affine + softmax model
model = MultipleLayerModel([
AffineLayer(input_dim, output_dim, param_init, param_init),
SoftmaxLayer()
])
# Initialise a cross entropy error object
error = CrossEntropyError()
# Use a basic gradient descent learning rule
learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)
_ = train_model_and_plot_stats(
model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)
Epoch 5: 0.22s to complete
error(train)=3.38e-01, acc(train)=9.03e-01, error(valid)=3.17e-01, acc(valid)=9.11e-01
Epoch 10: 0.21s to complete
error(train)=3.06e-01, acc(train)=9.13e-01, error(valid)=2.94e-01, acc(valid)=9.17e-01
Epoch 15: 0.22s to complete
error(train)=2.92e-01, acc(train)=9.17e-01, error(valid)=2.83e-01, acc(valid)=9.20e-01
Epoch 20: 0.22s to complete
error(train)=2.82e-01, acc(train)=9.20e-01, error(valid)=2.77e-01, acc(valid)=9.22e-01
Epoch 25: 0.21s to complete
error(train)=2.77e-01, acc(train)=9.22e-01, error(valid)=2.75e-01, acc(valid)=9.22e-01
Epoch 30: 0.22s to complete
error(train)=2.71e-01, acc(train)=9.24e-01, error(valid)=2.71e-01, acc(valid)=9.25e-01
Epoch 35: 0.22s to complete
error(train)=2.67e-01, acc(train)=9.25e-01, error(valid)=2.69e-01, acc(valid)=9.26e-01
Epoch 40: 0.21s to complete
error(train)=2.65e-01, acc(train)=9.27e-01, error(valid)=2.68e-01, acc(valid)=9.26e-01
Epoch 45: 0.22s to complete
error(train)=2.61e-01, acc(train)=9.27e-01, error(valid)=2.66e-01, acc(valid)=9.27e-01
Epoch 50: 0.22s to complete
error(train)=2.59e-01, acc(train)=9.28e-01, error(valid)=2.65e-01, acc(valid)=9.27e-01
Epoch 55: 0.21s to complete
error(train)=2.57e-01, acc(train)=9.29e-01, error(valid)=2.64e-01, acc(valid)=9.29e-01
Epoch 60: 0.22s to complete
error(train)=2.56e-01, acc(train)=9.28e-01, error(valid)=2.65e-01, acc(valid)=9.28e-01
Epoch 65: 0.21s to complete
error(train)=2.54e-01, acc(train)=9.30e-01, error(valid)=2.63e-01, acc(valid)=9.28e-01
Epoch 70: 0.21s to complete
error(train)=2.52e-01, acc(train)=9.30e-01, error(valid)=2.64e-01, acc(valid)=9.28e-01
Epoch 75: 0.22s to complete
error(train)=2.50e-01, acc(train)=9.31e-01, error(valid)=2.62e-01, acc(valid)=9.29e-01
Epoch 80: 0.21s to complete
error(train)=2.49e-01, acc(train)=9.31e-01, error(valid)=2.63e-01, acc(valid)=9.28e-01
Epoch 85: 0.21s to complete
error(train)=2.48e-01, acc(train)=9.31e-01, error(valid)=2.62e-01, acc(valid)=9.30e-01
Epoch 90: 0.22s to complete
error(train)=2.47e-01, acc(train)=9.31e-01, error(valid)=2.62e-01, acc(valid)=9.28e-01
Epoch 95: 0.21s to complete
error(train)=2.47e-01, acc(train)=9.31e-01, error(valid)=2.62e-01, acc(valid)=9.29e-01
Epoch 100: 0.39s to complete
error(train)=2.45e-01, acc(train)=9.32e-01, error(valid)=2.62e-01, acc(valid)=9.28e-01
[Plots: evolution of the error and classification accuracy over training epochs for the training and validation sets.]
| init_scale | Final error(train) | Final error(valid) |
|------------|--------------------|--------------------|
| 0.01 | 2.43e-01 | 2.58e-01 |
| 0.1 | 2.43e-01 | 2.59e-01 |
| 0.5 | 2.45e-01 | 2.62e-01 |
The larger initialisation scale of 0.5 seems to give slightly slower initial learning than the smaller scales of 0.1 and 0.01, however the difference is only slight, suggesting that for this shallow architecture training performance is not particularly sensitive to the initialisation scale.
Varying learning rate¶
Now let’s try some different values for learning rate.
learning_rate = 0.05¶
In [6]:
# Set training run hyperparameters
batch_size = 100 # number of data points in a batch
init_scale = 0.1 # scale for random parameter initialisation
learning_rate = 0.05 # learning rate for gradient descent
num_epochs = 100 # number of training epochs to perform
stats_interval = 5 # epoch interval between recording and printing stats
# Reset random number generator and data provider states on each run
# to ensure reproducibility of results
rng.seed(seed)
train_data.reset()
valid_data.reset()
# Alter data-provider batch size
train_data.batch_size = batch_size
valid_data.batch_size = batch_size
# Create a parameter initialiser which will sample random uniform values
# from [-init_scale, init_scale]
param_init = UniformInit(-init_scale, init_scale, rng=rng)
# Create affine + softmax model
model = MultipleLayerModel([
AffineLayer(input_dim, output_dim, param_init, param_init),
SoftmaxLayer()
])
# Initialise a cross entropy error object
error = CrossEntropyError()
# Use a basic gradient descent learning rule
learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)
_ = train_model_and_plot_stats(
model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)
Epoch 5: 0.22s to complete
error(train)=3.41e-01, acc(train)=9.05e-01, error(valid)=3.16e-01, acc(valid)=9.12e-01
Epoch 10: 0.21s to complete
error(train)=3.10e-01, acc(train)=9.14e-01, error(valid)=2.92e-01, acc(valid)=9.18e-01
Epoch 15: 0.22s to complete
error(train)=2.97e-01, acc(train)=9.18e-01, error(valid)=2.82e-01, acc(valid)=9.21e-01
Epoch 20: 0.22s to complete
error(train)=2.88e-01, acc(train)=9.20e-01, error(valid)=2.76e-01, acc(valid)=9.23e-01
Epoch 25: 0.21s to complete
error(train)=2.83e-01, acc(train)=9.21e-01, error(valid)=2.73e-01, acc(valid)=9.24e-01
Epoch 30: 0.22s to complete
error(train)=2.77e-01, acc(train)=9.22e-01, error(valid)=2.69e-01, acc(valid)=9.24e-01
Epoch 35: 0.22s to complete
error(train)=2.74e-01, acc(train)=9.24e-01, error(valid)=2.67e-01, acc(valid)=9.25e-01
Epoch 40: 0.21s to complete
error(train)=2.72e-01, acc(train)=9.24e-01, error(valid)=2.66e-01, acc(valid)=9.26e-01
Epoch 45: 0.22s to complete
error(train)=2.68e-01, acc(train)=9.26e-01, error(valid)=2.64e-01, acc(valid)=9.27e-01
Epoch 50: 0.22s to complete
error(train)=2.66e-01, acc(train)=9.26e-01, error(valid)=2.63e-01, acc(valid)=9.28e-01
Epoch 55: 0.21s to complete
error(train)=2.64e-01, acc(train)=9.26e-01, error(valid)=2.62e-01, acc(valid)=9.29e-01
Epoch 60: 0.21s to complete
error(train)=2.63e-01, acc(train)=9.26e-01, error(valid)=2.62e-01, acc(valid)=9.28e-01
Epoch 65: 0.23s to complete
error(train)=2.61e-01, acc(train)=9.28e-01, error(valid)=2.61e-01, acc(valid)=9.27e-01
Epoch 70: 0.23s to complete
error(train)=2.60e-01, acc(train)=9.28e-01, error(valid)=2.61e-01, acc(valid)=9.28e-01
Epoch 75: 0.21s to complete
error(train)=2.58e-01, acc(train)=9.29e-01, error(valid)=2.60e-01, acc(valid)=9.29e-01
Epoch 80: 0.23s to complete
error(train)=2.57e-01, acc(train)=9.29e-01, error(valid)=2.60e-01, acc(valid)=9.29e-01
Epoch 85: 0.21s to complete
error(train)=2.56e-01, acc(train)=9.29e-01, error(valid)=2.59e-01, acc(valid)=9.30e-01
Epoch 90: 0.21s to complete
error(train)=2.55e-01, acc(train)=9.29e-01, error(valid)=2.59e-01, acc(valid)=9.29e-01
Epoch 95: 0.22s to complete
error(train)=2.54e-01, acc(train)=9.29e-01, error(valid)=2.59e-01, acc(valid)=9.29e-01
Epoch 100: 0.21s to complete
error(train)=2.53e-01, acc(train)=9.30e-01, error(valid)=2.59e-01, acc(valid)=9.29e-01
[Plots: evolution of the error and classification accuracy over training epochs for the training and validation sets.]
learning_rate = 0.1¶
In [7]:
# Set training run hyperparameters
batch_size = 100 # number of data points in a batch
init_scale = 0.1 # scale for random parameter initialisation
learning_rate = 0.1 # learning rate for gradient descent
num_epochs = 100 # number of training epochs to perform
stats_interval = 5 # epoch interval between recording and printing stats
# Reset random number generator and data provider states on each run
# to ensure reproducibility of results
rng.seed(seed)
train_data.reset()
valid_data.reset()
# Alter data-provider batch size
train_data.batch_size = batch_size
valid_data.batch_size = batch_size
# Create a parameter initialiser which will sample random uniform values
# from [-init_scale, init_scale]
param_init = UniformInit(-init_scale, init_scale, rng=rng)
# Create affine + softmax model
model = MultipleLayerModel([
AffineLayer(input_dim, output_dim, param_init, param_init),
SoftmaxLayer()
])
# Initialise a cross entropy error object
error = CrossEntropyError()
# Use a basic gradient descent learning rule
learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)
_ = train_model_and_plot_stats(
model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)
Epoch 5: 0.21s to complete
error(train)=3.11e-01, acc(train)=9.13e-01, error(valid)=2.92e-01, acc(valid)=9.18e-01
Epoch 10: 0.21s to complete
error(train)=2.89e-01, acc(train)=9.20e-01, error(valid)=2.77e-01, acc(valid)=9.23e-01
Epoch 15: 0.22s to complete
error(train)=2.79e-01, acc(train)=9.22e-01, error(valid)=2.70e-01, acc(valid)=9.24e-01
Epoch 20: 0.21s to complete
error(train)=2.72e-01, acc(train)=9.24e-01, error(valid)=2.66e-01, acc(valid)=9.26e-01
Epoch 25: 0.21s to complete
error(train)=2.68e-01, acc(train)=9.25e-01, error(valid)=2.66e-01, acc(valid)=9.26e-01
Epoch 30: 0.22s to complete
error(train)=2.63e-01, acc(train)=9.27e-01, error(valid)=2.62e-01, acc(valid)=9.26e-01
Epoch 35: 0.22s to complete
error(train)=2.60e-01, acc(train)=9.28e-01, error(valid)=2.61e-01, acc(valid)=9.28e-01
Epoch 40: 0.21s to complete
error(train)=2.59e-01, acc(train)=9.28e-01, error(valid)=2.61e-01, acc(valid)=9.28e-01
Epoch 45: 0.22s to complete
error(train)=2.55e-01, acc(train)=9.29e-01, error(valid)=2.59e-01, acc(valid)=9.29e-01
Epoch 50: 0.21s to complete
error(train)=2.54e-01, acc(train)=9.30e-01, error(valid)=2.59e-01, acc(valid)=9.30e-01
Epoch 55: 0.25s to complete
error(train)=2.52e-01, acc(train)=9.29e-01, error(valid)=2.59e-01, acc(valid)=9.30e-01
Epoch 60: 0.22s to complete
error(train)=2.52e-01, acc(train)=9.29e-01, error(valid)=2.60e-01, acc(valid)=9.29e-01
Epoch 65: 0.22s to complete
error(train)=2.50e-01, acc(train)=9.31e-01, error(valid)=2.58e-01, acc(valid)=9.30e-01
Epoch 70: 0.21s to complete
error(train)=2.49e-01, acc(train)=9.31e-01, error(valid)=2.59e-01, acc(valid)=9.31e-01
Epoch 75: 0.23s to complete
error(train)=2.47e-01, acc(train)=9.32e-01, error(valid)=2.58e-01, acc(valid)=9.30e-01
Epoch 80: 0.48s to complete
error(train)=2.46e-01, acc(train)=9.31e-01, error(valid)=2.58e-01, acc(valid)=9.31e-01
Epoch 85: 0.22s to complete
error(train)=2.45e-01, acc(train)=9.32e-01, error(valid)=2.58e-01, acc(valid)=9.31e-01
Epoch 90: 0.21s to complete
error(train)=2.44e-01, acc(train)=9.32e-01, error(valid)=2.58e-01, acc(valid)=9.30e-01
Epoch 95: 0.21s to complete
error(train)=2.44e-01, acc(train)=9.32e-01, error(valid)=2.58e-01, acc(valid)=9.30e-01
Epoch 100: 0.22s to complete
error(train)=2.43e-01, acc(train)=9.33e-01, error(valid)=2.59e-01, acc(valid)=9.29e-01
[Plots: evolution of the error and classification accuracy over training epochs for the training and validation sets.]
learning_rate = 0.2¶
In [8]:
# Set training run hyperparameters
batch_size = 100 # number of data points in a batch
init_scale = 0.1 # scale for random parameter initialisation
learning_rate = 0.2 # learning rate for gradient descent
num_epochs = 100 # number of training epochs to perform
stats_interval = 5 # epoch interval between recording and printing stats
# Reset random number generator and data provider states on each run
# to ensure reproducibility of results
rng.seed(seed)
train_data.reset()
valid_data.reset()
# Alter data-provider batch size
train_data.batch_size = batch_size
valid_data.batch_size = batch_size
# Create a parameter initialiser which will sample random uniform values
# from [-init_scale, init_scale]
param_init = UniformInit(-init_scale, init_scale, rng=rng)
# Create affine + softmax model
model = MultipleLayerModel([
AffineLayer(input_dim, output_dim, param_init, param_init),
SoftmaxLayer()
])
# Initialise a cross entropy error object
error = CrossEntropyError()
# Use a basic gradient descent learning rule
learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)
_ = train_model_and_plot_stats(
model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)
Epoch 5: 0.21s to complete
error(train)=2.90e-01, acc(train)=9.19e-01, error(valid)=2.77e-01, acc(valid)=9.22e-01
Epoch 10: 0.22s to complete
error(train)=2.75e-01, acc(train)=9.23e-01, error(valid)=2.69e-01, acc(valid)=9.25e-01
Epoch 15: 0.21s to complete
error(train)=2.66e-01, acc(train)=9.26e-01, error(valid)=2.64e-01, acc(valid)=9.26e-01
Epoch 20: 0.21s to complete
error(train)=2.60e-01, acc(train)=9.28e-01, error(valid)=2.61e-01, acc(valid)=9.28e-01
Epoch 25: 0.22s to complete
error(train)=2.57e-01, acc(train)=9.28e-01, error(valid)=2.64e-01, acc(valid)=9.27e-01
Epoch 30: 0.22s to complete
error(train)=2.53e-01, acc(train)=9.29e-01, error(valid)=2.61e-01, acc(valid)=9.30e-01
Epoch 35: 0.22s to complete
error(train)=2.50e-01, acc(train)=9.30e-01, error(valid)=2.60e-01, acc(valid)=9.30e-01
Epoch 40: 0.21s to complete
error(train)=2.49e-01, acc(train)=9.31e-01, error(valid)=2.61e-01, acc(valid)=9.28e-01
Epoch 45: 0.21s to complete
error(train)=2.45e-01, acc(train)=9.32e-01, error(valid)=2.58e-01, acc(valid)=9.30e-01
Epoch 50: 0.22s to complete
error(train)=2.45e-01, acc(train)=9.32e-01, error(valid)=2.60e-01, acc(valid)=9.31e-01
Epoch 55: 0.22s to complete
error(train)=2.43e-01, acc(train)=9.32e-01, error(valid)=2.59e-01, acc(valid)=9.29e-01
Epoch 60: 0.22s to complete
error(train)=2.44e-01, acc(train)=9.31e-01, error(valid)=2.63e-01, acc(valid)=9.29e-01
Epoch 65: 0.22s to complete
error(train)=2.41e-01, acc(train)=9.34e-01, error(valid)=2.60e-01, acc(valid)=9.30e-01
Epoch 70: 0.22s to complete
error(train)=2.40e-01, acc(train)=9.34e-01, error(valid)=2.62e-01, acc(valid)=9.29e-01
Epoch 75: 0.22s to complete
error(train)=2.38e-01, acc(train)=9.34e-01, error(valid)=2.60e-01, acc(valid)=9.30e-01
Epoch 80: 0.22s to complete
error(train)=2.38e-01, acc(train)=9.33e-01, error(valid)=2.62e-01, acc(valid)=9.29e-01
Epoch 85: 0.22s to complete
error(train)=2.36e-01, acc(train)=9.35e-01, error(valid)=2.61e-01, acc(valid)=9.30e-01
Epoch 90: 0.22s to complete
error(train)=2.36e-01, acc(train)=9.34e-01, error(valid)=2.61e-01, acc(valid)=9.28e-01
Epoch 95: 0.22s to complete
error(train)=2.37e-01, acc(train)=9.34e-01, error(valid)=2.63e-01, acc(valid)=9.29e-01
Epoch 100: 0.22s to complete
error(train)=2.35e-01, acc(train)=9.35e-01, error(valid)=2.63e-01, acc(valid)=9.29e-01
[Plots: evolution of the error and classification accuracy over training epochs for the training and validation sets.]
learning_rate = 0.5¶
In [9]:
# Set training run hyperparameters
batch_size = 100 # number of data points in a batch
init_scale = 0.1 # scale for random parameter initialisation
learning_rate = 0.5 # learning rate for gradient descent
num_epochs = 100 # number of training epochs to perform
stats_interval = 5 # epoch interval between recording and printing stats
# Reset random number generator and data provider states on each run
# to ensure reproducibility of results
rng.seed(seed)
train_data.reset()
valid_data.reset()
# Alter data-provider batch size
train_data.batch_size = batch_size
valid_data.batch_size = batch_size
# Create a parameter initialiser which will sample random uniform values
# from [-init_scale, init_scale]
param_init = UniformInit(-init_scale, init_scale, rng=rng)
# Create affine + softmax model
model = MultipleLayerModel([
AffineLayer(input_dim, output_dim, param_init, param_init),
SoftmaxLayer()
])
# Initialise a cross entropy error object
error = CrossEntropyError()
# Use a basic gradient descent learning rule
learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)
_ = train_model_and_plot_stats(
model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)
Epoch 5: 0.21s to complete
error(train)=2.79e-01, acc(train)=9.20e-01, error(valid)=2.74e-01, acc(valid)=9.23e-01
Epoch 10: 0.22s to complete
error(train)=2.68e-01, acc(train)=9.24e-01, error(valid)=2.72e-01, acc(valid)=9.26e-01
Epoch 15: 0.22s to complete
error(train)=2.55e-01, acc(train)=9.28e-01, error(valid)=2.66e-01, acc(valid)=9.26e-01
Epoch 20: 0.22s to complete
error(train)=2.49e-01, acc(train)=9.31e-01, error(valid)=2.61e-01, acc(valid)=9.29e-01
Epoch 25: 0.22s to complete
error(train)=2.52e-01, acc(train)=9.29e-01, error(valid)=2.73e-01, acc(valid)=9.25e-01
Epoch 30: 0.22s to complete
error(train)=2.47e-01, acc(train)=9.31e-01, error(valid)=2.70e-01, acc(valid)=9.27e-01
Epoch 35: 0.21s to complete
error(train)=2.44e-01, acc(train)=9.32e-01, error(valid)=2.69e-01, acc(valid)=9.27e-01
Epoch 40: 0.22s to complete
error(train)=2.44e-01, acc(train)=9.32e-01, error(valid)=2.72e-01, acc(valid)=9.26e-01
Epoch 45: 0.22s to complete
error(train)=2.36e-01, acc(train)=9.35e-01, error(valid)=2.66e-01, acc(valid)=9.29e-01
Epoch 50: 0.22s to complete
error(train)=2.38e-01, acc(train)=9.33e-01, error(valid)=2.69e-01, acc(valid)=9.28e-01
Epoch 55: 0.22s to complete
error(train)=2.36e-01, acc(train)=9.34e-01, error(valid)=2.68e-01, acc(valid)=9.26e-01
Epoch 60: 0.22s to complete
error(train)=2.46e-01, acc(train)=9.29e-01, error(valid)=2.81e-01, acc(valid)=9.22e-01
Epoch 65: 0.21s to complete
error(train)=2.33e-01, acc(train)=9.35e-01, error(valid)=2.70e-01, acc(valid)=9.28e-01
Epoch 70: 0.21s to complete
error(train)=2.35e-01, acc(train)=9.36e-01, error(valid)=2.75e-01, acc(valid)=9.27e-01
Epoch 75: 0.22s to complete
error(train)=2.31e-01, acc(train)=9.36e-01, error(valid)=2.70e-01, acc(valid)=9.26e-01
Epoch 80: 0.21s to complete
error(train)=2.35e-01, acc(train)=9.34e-01, error(valid)=2.76e-01, acc(valid)=9.25e-01
Epoch 85: 0.21s to complete
error(train)=2.32e-01, acc(train)=9.35e-01, error(valid)=2.75e-01, acc(valid)=9.26e-01
Epoch 90: 0.22s to complete
error(train)=2.29e-01, acc(train)=9.37e-01, error(valid)=2.74e-01, acc(valid)=9.26e-01
Epoch 95: 0.21s to complete
error(train)=2.31e-01, acc(train)=9.35e-01, error(valid)=2.76e-01, acc(valid)=9.27e-01
Epoch 100: 0.25s to complete
error(train)=2.31e-01, acc(train)=9.36e-01, error(valid)=2.77e-01, acc(valid)=9.27e-01
[Plots: evolution of the error and classification accuracy over training epochs for the training and validation sets.]
| learning_rate | Final error(train) | Final error(valid) |
|---------------|--------------------|--------------------|
| 0.05 | $2.53\times 10^{-1}$ | $2.59\times 10^{-1}$ |
| 0.1 | $2.43\times 10^{-1}$ | $2.59\times 10^{-1}$ |
| 0.2 | $2.35\times 10^{-1}$ | $2.63\times 10^{-1}$ |
| 0.5 | $2.31\times 10^{-1}$ | $2.77\times 10^{-1}$ |
Increasing the learning rate, as would be expected, increases the speed of learning: the final training error reached decreases monotonically over the learning rates tested. Note however that the validation set error increases for the larger learning rates. This suggests the model is overfitting to the training data, with the larger learning rates causing overfitting to begin sooner; we could have halted learning earlier in these cases, once the validation set error stopped improving. Notice also that the error curves for the largest learning rate are much noisier, suggesting learning is becoming unstable at this step size, with many of the gradient descent steps overshooting and causing the error function value to increase.
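As an aside, the stats array and keys dictionary returned by the helper function make it easy to check where training could have been halted, provided we capture the returned values rather than discarding them. This is a minimal sketch which assumes, as in the logged output above, that rows of stats are recorded every stats_interval epochs starting from epoch 0:

# Capture the returned statistics rather than discarding them with `_ = ...`
stats, keys, run_time, fig_1, ax_1, fig_2, ax_2 = train_model_and_plot_stats(
    model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)
# Find the recorded epoch at which the validation set error was lowest.
valid_errors = stats[:, keys['error(valid)']]
best_idx = valid_errors.argmin()
print('Lowest validation error {0:.2e} at epoch {1}'
      .format(valid_errors[best_idx], best_idx * stats_interval))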
Optional extra: more efficient softmax gradient evaluation¶
In the lectures you were shown that for certain combinations of error function and final output layer, the expressions for the gradients take particularly simple forms.
In particular it can be shown that the combinations of
• logistic sigmoid output layer and binary cross entropy error function
• softmax output layer and cross entropy error function
lead to particularly simple forms for the gradients of the error function with respect to the inputs to the final layer. In particular for the latter softmax and cross entropy error function case we have that
\begin{equation} y^{(b)}_k = \textrm{Softmax}_k\lpa\vct{x}^{(b)}\rpa = \frac{\exp(x^{(b)}_k)}{\sum_{d=1}^D \lbr \exp(x^{(b)}_d) \rbr} \qquad E^{(b)} = \textrm{CrossEntropy}\lpa\vct{y}^{(b)},\,\vct{t}^{(b)}\rpa = -\sum_{d=1}^D \lbr t^{(b)}_d \log(y^{(b)}_d) \rbr \end{equation}
and it can be shown (this is an instructive mathematical exercise if you want a challenge!) that
\begin{equation} \pd{E^{(b)}}{x^{(b)}_d} = y^{(b)}_d - t^{(b)}_d. \end{equation}
The combination of CrossEntropyError and SoftmaxLayer used to train the model above calculates this gradient less directly, by first calculating the gradient of the error with respect to the model outputs in CrossEntropyError.grad and then back-propagating this gradient to the inputs of the softmax layer using SoftmaxLayer.bprop.
Rather than computing the gradient in two steps like this, we can instead wrap the softmax transformation into the definition of the error function and make use of the simpler gradient expression above. More explicitly, we define an error function as follows
\begin{equation} E^{(b)} = \textrm{CrossEntropySoftmax}\lpa\vct{y}^{(b)},\,\vct{t}^{(b)}\rpa = -\sum_{d=1}^D \lbr t^{(b)}_d \log\lsb\textrm{Softmax}_d\lpa \vct{y}^{(b)}\rpa\rsb\rbr \end{equation}
with corresponding gradient
\begin{equation} \pd{E^{(b)}}{y^{(b)}_d} = \textrm{Softmax}_d\lpa \vct{y}^{(b)}\rpa - t^{(b)}_d. \end{equation}
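If you want to convince yourself of this identity without working through the full derivation, the following standalone NumPy check (independent of the mlp framework) compares the analytic expression with central finite difference estimates of the gradient of the cross entropy of a softmax with respect to the softmax inputs, for a single randomly chosen input vector and one-hot target:

import numpy as np

def softmax(x):
    exp_x = np.exp(x - x.max())  # subtract the maximum for numerical stability
    return exp_x / exp_x.sum()

def cross_entropy_of_softmax(x, t):
    return -(t * np.log(softmax(x))).sum()

check_rng = np.random.RandomState(0)
x = check_rng.normal(size=10)
t = np.eye(10)[3]  # one-hot target vector
analytic_grad = softmax(x) - t
numeric_grad = np.empty_like(x)
eps = 1e-6
for d in range(x.shape[0]):
    x_plus, x_minus = x.copy(), x.copy()
    x_plus[d] += eps
    x_minus[d] -= eps
    numeric_grad[d] = (cross_entropy_of_softmax(x_plus, t) -
                       cross_entropy_of_softmax(x_minus, t)) / (2. * eps)
print(np.allclose(analytic_grad, numeric_grad, atol=1e-5))  # True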
The final layer of the model will then be an affine transformation which produces unbounded output values corresponding to the logarithms of the unnormalised predicted class probabilities. An implementation of this error function is provided in CrossEntropySoftmaxError. The cell below sets up a model with a single affine transformation layer and trains it on MNIST using this new error function. If you run it with hyperparameters equivalent to one of your runs with the alternative formulation above, you should get identical error and classification accuracy curves (up to floating point error) but with a minor improvement in training speed.
In [10]:
# Set training run hyperparameters
batch_size = 100 # number of data points in a batch
init_scale = 0.1 # scale for random parameter initialisation
learning_rate = 0.1 # learning rate for gradient descent
num_epochs = 100 # number of training epochs to perform
stats_interval = 5 # epoch interval between recording and printing stats
# Reset random number generator and data provider states on each run
# to ensure reproducibility of results
rng.seed(seed)
train_data.reset()
valid_data.reset()
# Alter data-provider batch size
train_data.batch_size = batch_size
valid_data.batch_size = batch_size
# Create a parameter initialiser which will sample random uniform values
# from [-init_scale, init_scale]
param_init = UniformInit(-init_scale, init_scale, rng=rng)
# Create affine model (outputs are logs of unnormalised class probabilities)
model = SingleLayerModel(
AffineLayer(input_dim, output_dim, param_init, param_init)
)
# Initialise the error object
error = CrossEntropySoftmaxError()
# Use a basic gradient descent learning rule
learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)
_ = train_model_and_plot_stats(
model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)
Epoch 5: 0.19s to complete
error(train)=3.11e-01, acc(train)=9.13e-01, error(valid)=2.92e-01, acc(valid)=9.18e-01
Epoch 10: 0.20s to complete
error(train)=2.89e-01, acc(train)=9.20e-01, error(valid)=2.77e-01, acc(valid)=9.23e-01
Epoch 15: 0.20s to complete
error(train)=2.79e-01, acc(train)=9.22e-01, error(valid)=2.70e-01, acc(valid)=9.24e-01
Epoch 20: 0.19s to complete
error(train)=2.72e-01, acc(train)=9.24e-01, error(valid)=2.66e-01, acc(valid)=9.26e-01
Epoch 25: 0.19s to complete
error(train)=2.68e-01, acc(train)=9.25e-01, error(valid)=2.66e-01, acc(valid)=9.26e-01
Epoch 30: 0.20s to complete
error(train)=2.63e-01, acc(train)=9.27e-01, error(valid)=2.62e-01, acc(valid)=9.26e-01
Epoch 35: 0.20s to complete
error(train)=2.60e-01, acc(train)=9.28e-01, error(valid)=2.61e-01, acc(valid)=9.28e-01
Epoch 40: 0.19s to complete
error(train)=2.59e-01, acc(train)=9.28e-01, error(valid)=2.61e-01, acc(valid)=9.28e-01
Epoch 45: 0.20s to complete
error(train)=2.55e-01, acc(train)=9.29e-01, error(valid)=2.59e-01, acc(valid)=9.29e-01
Epoch 50: 0.20s to complete
error(train)=2.54e-01, acc(train)=9.30e-01, error(valid)=2.59e-01, acc(valid)=9.30e-01
Epoch 55: 0.19s to complete
error(train)=2.52e-01, acc(train)=9.29e-01, error(valid)=2.59e-01, acc(valid)=9.30e-01
Epoch 60: 0.19s to complete
error(train)=2.52e-01, acc(train)=9.29e-01, error(valid)=2.60e-01, acc(valid)=9.29e-01
Epoch 65: 0.20s to complete
error(train)=2.50e-01, acc(train)=9.31e-01, error(valid)=2.58e-01, acc(valid)=9.30e-01
Epoch 70: 0.19s to complete
error(train)=2.49e-01, acc(train)=9.31e-01, error(valid)=2.59e-01, acc(valid)=9.31e-01
Epoch 75: 0.19s to complete
error(train)=2.47e-01, acc(train)=9.32e-01, error(valid)=2.58e-01, acc(valid)=9.30e-01
Epoch 80: 0.20s to complete
error(train)=2.46e-01, acc(train)=9.31e-01, error(valid)=2.58e-01, acc(valid)=9.31e-01
Epoch 85: 0.19s to complete
error(train)=2.45e-01, acc(train)=9.32e-01, error(valid)=2.58e-01, acc(valid)=9.31e-01
Epoch 90: 0.19s to complete
error(train)=2.44e-01, acc(train)=9.32e-01, error(valid)=2.58e-01, acc(valid)=9.30e-01
Epoch 95: 0.20s to complete
error(train)=2.44e-01, acc(train)=9.32e-01, error(valid)=2.58e-01, acc(valid)=9.30e-01
Epoch 100: 0.20s to complete
error(train)=2.43e-01, acc(train)=9.33e-01, error(valid)=2.59e-01, acc(valid)=9.29e-01
[Plots: evolution of the error and classification accuracy over training epochs for the training and validation sets.]
This gives exactly the same training curves (and error / accuracy values over training) as the two runs with equivalent parameters above (second `init_scale` experiment and second `learning_rate` experiment).
The time per epoch seems to be slightly lower on average (0.20s compared to 0.22s), suggesting the reformulation gives a small efficiency gain (though this will become less apparent in deeper architectures, as the benefit only applies to the final layer).
Exercise 2: training deeper models on MNIST¶
We are now going to investigate using deeper multiple-layer model architectures for the MNIST classification task. You should experiment with training models with two to five AffineLayer transformations interleaved with SigmoidLayer nonlinear transformations. Intermediate hidden layers between the input and output should have a dimension of 100. For example the layers definition of a model with two AffineLayer transformations would be
layers = [
AffineLayer(input_dim, 100),
SigmoidLayer(),
AffineLayer(100, output_dim),
SoftmaxLayer()
]
If you read through the extension to the first exercise you may wish to use the CrossEntropySoftmaxError without the final SoftmaxLayer.
Use the code from the first exercise as a starting point and start with training hyperparameters which gave reasonable performance for the shallow architecture trained previously.
Some questions to investigate:
• How does increasing the number of layers affect the model’s performance on the training data set? And on the validation data set?
• Do deeper models seem to be harder or easier to train (e.g. in terms of ease of choosing training hyperparameters to give good final performance and/or quick convergence)?
• Do the models seem to be sensitive to the choice of the parameter initialisation range? Can you think of any reasons for why setting individual parameter initialisation scales for each AffineLayer in a model might be useful? Can you come up with (or find) any heuristics for setting the parameter initialisation scales?
You do not need to come up with explanations for all of these (though if you can that's great!); they are meant as prompts to get you thinking about the various issues involved in training multiple-layer models.
You may wish to start with shorter pilot training runs (by decreasing the number of training epochs) for each of the model architectures to get an initial idea of appropriate hyperparameter settings before doing one or two longer training runs to assess the final performance of the architectures.
In [11]:
# disable logging by setting handler to dummy object
logger.handlers = [logging.NullHandler()]
Models with two affine layers¶
In [12]:
# Set training run hyperparameters
batch_size = 100 # number of data points in a batch
num_epochs = 100 # number of training epochs to perform
stats_interval = 5 # epoch interval between recording and printing stats
learning_rate = 0.2 # learning rate for gradient descent
init_scales = [0.1, 0.2, 0.5, 1.] # scale for random parameter initialisation
final_errors_train = []
final_errors_valid = []
final_accs_train = []
final_accs_valid = []
for init_scale in init_scales:
    print('-' * 80)
    print('learning_rate={0:.2f} init_scale={1:.2f}'
          .format(learning_rate, init_scale))
    print('-' * 80)
    # Reset random number generator and data provider states on each run
    # to ensure reproducibility of results
    rng.seed(seed)
    train_data.reset()
    valid_data.reset()
    # Alter data-provider batch size
    train_data.batch_size = batch_size
    valid_data.batch_size = batch_size
    # Create a parameter initialiser which will sample random uniform values
    # from [-init_scale, init_scale]
    param_init = UniformInit(-init_scale, init_scale, rng=rng)
    # Create a model with two affine layers
    hidden_dim = 100
    model = MultipleLayerModel([
        AffineLayer(input_dim, hidden_dim, param_init, param_init),
        SigmoidLayer(),
        AffineLayer(hidden_dim, output_dim, param_init, param_init)
    ])
    # Initialise a cross entropy error object
    error = CrossEntropySoftmaxError()
    # Use a basic gradient descent learning rule
    learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)
    stats, keys, run_time, fig_1, ax_1, fig_2, ax_2 = train_model_and_plot_stats(
        model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)
    plt.show()
    print('  final error(train) = {0:.2e}'.format(stats[-1, keys['error(train)']]))
    print('  final error(valid) = {0:.2e}'.format(stats[-1, keys['error(valid)']]))
    print('  final acc(train)   = {0:.2e}'.format(stats[-1, keys['acc(train)']]))
    print('  final acc(valid)   = {0:.2e}'.format(stats[-1, keys['acc(valid)']]))
    print('  run time per epoch = {0:.2f}s'.format(run_time * 1. / num_epochs))
    final_errors_train.append(stats[-1, keys['error(train)']])
    final_errors_valid.append(stats[-1, keys['error(valid)']])
    final_accs_train.append(stats[-1, keys['acc(train)']])
    final_accs_valid.append(stats[-1, keys['acc(valid)']])
——————————————————————————–
learning_rate=0.20 init_scale=0.10
——————————————————————————–
[Plots: training and validation error and accuracy over training epochs for this run.]
final error(train) = 1.87e-02
final error(valid) = 7.73e-02
final acc(train) = 9.98e-01
final acc(valid) = 9.76e-01
run time per epoch = 0.80s
——————————————————————————–
learning_rate=0.20 init_scale=0.20
——————————————————————————–
[Plots: training and validation error and accuracy over training epochs for this run.]
final error(train) = 1.85e-02
final error(valid) = 7.47e-02
final acc(train) = 9.98e-01
final acc(valid) = 9.78e-01
run time per epoch = 0.81s
——————————————————————————–
learning_rate=0.20 init_scale=0.50
——————————————————————————–
[Plots: training and validation error and accuracy over training epochs for this run.]
final error(train) = 2.05e-02
final error(valid) = 9.07e-02
final acc(train) = 9.97e-01
final acc(valid) = 9.75e-01
run time per epoch = 0.80s
——————————————————————————–
learning_rate=0.20 init_scale=1.00
——————————————————————————–
[Plots: training and validation error and accuracy over training epochs for this run.]
final error(train) = 3.41e-02
final error(valid) = 1.26e-01
final acc(train) = 9.93e-01
final acc(valid) = 9.64e-01
run time per epoch = 0.81s
In [13]:
j = 0
print('| init_scale | final error(train) | final error(valid) | final acc(train) | final acc(valid) |')
print('|------------|--------------------|--------------------|------------------|------------------|')
for init_scale in init_scales:
    print('| {0:.1f} | {1:.2e} | {2:.2e} | {3:.2f} | {4:.2f} |'
          .format(init_scale,
                  final_errors_train[j], final_errors_valid[j],
                  final_accs_train[j], final_accs_valid[j]))
    j += 1
| init_scale | final error(train) | final error(valid) | final acc(train) | final acc(valid) |
|------------|--------------------|--------------------|------------------|------------------|
| 0.1 | 1.87e-02 | 7.73e-02 | 1.00 | 0.98 |
| 0.2 | 1.85e-02 | 7.47e-02 | 1.00 | 0.98 |
| 0.5 | 2.05e-02 | 9.07e-02 | 1.00 | 0.98 |
| 1.0 | 3.41e-02 | 1.26e-01 | 0.99 | 0.96 |
Models with three affine layers¶
In [14]:
# Set training run hyperparameters
batch_size = 100 # number of data points in a batch
num_epochs = 100 # number of training epochs to perform
stats_interval = 5 # epoch interval between recording and printing stats
learning_rate = 0.2 # learning rate for gradient descent
init_scales = [0.1, 0.2, 0.5, 1.] # scale for random parameter initialisation
final_errors_train = []
final_errors_valid = []
final_accs_train = []
final_accs_valid = []
for init_scale in init_scales:
    print('-' * 80)
    print('learning_rate={0:.2f} init_scale={1:.2f}'
          .format(learning_rate, init_scale))
    print('-' * 80)
    # Reset random number generator and data provider states on each run
    # to ensure reproducibility of results
    rng.seed(seed)
    train_data.reset()
    valid_data.reset()
    # Alter data-provider batch size
    train_data.batch_size = batch_size
    valid_data.batch_size = batch_size
    # Create a parameter initialiser which will sample random uniform values
    # from [-init_scale, init_scale]
    param_init = UniformInit(-init_scale, init_scale, rng=rng)
    # Create a model with three affine layers
    hidden_dim = 100
    model = MultipleLayerModel([
        AffineLayer(input_dim, hidden_dim, param_init, param_init),
        SigmoidLayer(),
        AffineLayer(hidden_dim, hidden_dim, param_init, param_init),
        SigmoidLayer(),
        AffineLayer(hidden_dim, output_dim, param_init, param_init)
    ])
    # Initialise a cross entropy error object
    error = CrossEntropySoftmaxError()
    # Use a basic gradient descent learning rule
    learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)
    stats, keys, run_time, fig_1, ax_1, fig_2, ax_2 = train_model_and_plot_stats(
        model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)
    plt.show()
    print('  final error(train) = {0:.2e}'.format(stats[-1, keys['error(train)']]))
    print('  final error(valid) = {0:.2e}'.format(stats[-1, keys['error(valid)']]))
    print('  final acc(train)   = {0:.2e}'.format(stats[-1, keys['acc(train)']]))
    print('  final acc(valid)   = {0:.2e}'.format(stats[-1, keys['acc(valid)']]))
    print('  run time per epoch = {0:.2f}s'.format(run_time * 1. / num_epochs))
    final_errors_train.append(stats[-1, keys['error(train)']])
    final_errors_valid.append(stats[-1, keys['error(valid)']])
    final_accs_train.append(stats[-1, keys['acc(train)']])
    final_accs_valid.append(stats[-1, keys['acc(valid)']])
——————————————————————————–
learning_rate=0.20 init_scale=0.10
——————————————————————————–
[Plots: training and validation error and accuracy over training epochs for this run.]
final error(train) = 5.52e-03
final error(valid) = 8.77e-02
final acc(train) = 1.00e+00
final acc(valid) = 9.77e-01
run time per epoch = 1.15s
——————————————————————————–
learning_rate=0.20 init_scale=0.20
——————————————————————————–
[Plots: training and validation error and accuracy over training epochs for this run.]
final error(train) = 5.21e-03
final error(valid) = 8.95e-02
final acc(train) = 1.00e+00
final acc(valid) = 9.76e-01
run time per epoch = 1.19s
——————————————————————————–
learning_rate=0.20 init_scale=0.50
——————————————————————————–
[Plots: training and validation error and accuracy over training epochs for this run.]
final error(train) = 7.20e-03
final error(valid) = 1.02e-01
final acc(train) = 1.00e+00
final acc(valid) = 9.74e-01
run time per epoch = 1.16s
——————————————————————————–
learning_rate=0.20 init_scale=1.00
——————————————————————————–
[Plots: training and validation error and accuracy over training epochs for this run.]
final error(train) = 1.73e-02
final error(valid) = 1.31e-01
final acc(train) = 9.98e-01
final acc(valid) = 9.66e-01
run time per epoch = 1.29s
In [15]:
j = 0
print('| init_scale | final error(train) | final error(valid) | final acc(train) | final acc(valid) |')
print('|------------|--------------------|--------------------|------------------|------------------|')
for init_scale in init_scales:
    print('| {0:.1f} | {1:.2e} | {2:.2e} | {3:.2f} | {4:.2f} |'
          .format(init_scale,
                  final_errors_train[j], final_errors_valid[j],
                  final_accs_train[j], final_accs_valid[j]))
    j += 1
| init_scale | final error(train) | final error(valid) | final acc(train) | final acc(valid) |
|------------|--------------------|--------------------|------------------|------------------|
| 0.1 | 5.52e-03 | 8.77e-02 | 1.00 | 0.98 |
| 0.2 | 5.21e-03 | 8.95e-02 | 1.00 | 0.98 |
| 0.5 | 7.20e-03 | 1.02e-01 | 1.00 | 0.97 |
| 1.0 | 1.73e-02 | 1.31e-01 | 1.00 | 0.97 |
Models with four affine layers¶
In [16]:
# Set training run hyperparameters
batch_size = 100 # number of data points in a batch
num_epochs = 100 # number of training epochs to perform
stats_interval = 5 # epoch interval between recording and printing stats
learning_rate = 0.2 # learning rate for gradient descent
init_scales = [0.1, 0.2, 0.5, 1.] # scale for random parameter initialisation
final_errors_train = []
final_errors_valid = []
final_accs_train = []
final_accs_valid = []
for init_scale in init_scales:
    print('-' * 80)
    print('learning_rate={0:.2f} init_scale={1:.2f}'
          .format(learning_rate, init_scale))
    print('-' * 80)
    # Reset random number generator and data provider states on each run
    # to ensure reproducibility of results
    rng.seed(seed)
    train_data.reset()
    valid_data.reset()
    # Alter data-provider batch size
    train_data.batch_size = batch_size
    valid_data.batch_size = batch_size
    # Create a parameter initialiser which will sample random uniform values
    # from [-init_scale, init_scale]
    param_init = UniformInit(-init_scale, init_scale, rng=rng)
    # Create a model with four affine layers
    hidden_dim = 100
    model = MultipleLayerModel([
        AffineLayer(input_dim, hidden_dim, param_init, param_init),
        SigmoidLayer(),
        AffineLayer(hidden_dim, hidden_dim, param_init, param_init),
        SigmoidLayer(),
        AffineLayer(hidden_dim, hidden_dim, param_init, param_init),
        SigmoidLayer(),
        AffineLayer(hidden_dim, output_dim, param_init, param_init)
    ])
    # Initialise a cross entropy error object
    error = CrossEntropySoftmaxError()
    # Use a basic gradient descent learning rule
    learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)
    stats, keys, run_time, fig_1, ax_1, fig_2, ax_2 = train_model_and_plot_stats(
        model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)
    plt.show()
    print('  final error(train) = {0:.2e}'.format(stats[-1, keys['error(train)']]))
    print('  final error(valid) = {0:.2e}'.format(stats[-1, keys['error(valid)']]))
    print('  final acc(train)   = {0:.2e}'.format(stats[-1, keys['acc(train)']]))
    print('  final acc(valid)   = {0:.2e}'.format(stats[-1, keys['acc(valid)']]))
    print('  run time per epoch = {0:.2f}s'.format(run_time * 1. / num_epochs))
    final_errors_train.append(stats[-1, keys['error(train)']])
    final_errors_valid.append(stats[-1, keys['error(valid)']])
    final_accs_train.append(stats[-1, keys['acc(train)']])
    final_accs_valid.append(stats[-1, keys['acc(valid)']])
——————————————————————————–
learning_rate=0.20 init_scale=0.10
——————————————————————————–


final error(train) = 2.03e-03
final error(valid) = 1.35e-01
final acc(train) = 1.00e+00
final acc(valid) = 9.73e-01
run time per epoch = 1.68s
--------------------------------------------------------------------------------
learning_rate=0.20 init_scale=0.20
--------------------------------------------------------------------------------


final error(train) = 1.99e-03
final error(valid) = 1.17e-01
final acc(train) = 1.00e+00
final acc(valid) = 9.75e-01
run time per epoch = 1.56s
--------------------------------------------------------------------------------
learning_rate=0.20 init_scale=0.50
--------------------------------------------------------------------------------


final error(train) = 3.07e-03
final error(valid) = 1.34e-01
final acc(train) = 1.00e+00
final acc(valid) = 9.71e-01
run time per epoch = 1.62s
--------------------------------------------------------------------------------
learning_rate=0.20 init_scale=1.00
--------------------------------------------------------------------------------


final error(train) = 7.72e-03
final error(valid) = 1.62e-01
final acc(train) = 1.00e+00
final acc(valid) = 9.62e-01
run time per epoch = 1.57s
In [17]:
j = 0
print('| init_scale | final error(train) | final error(valid) | final acc(train) | final acc(valid) |')
print('|------------|--------------------|--------------------|------------------|------------------|')
for init_scale in init_scales:
    print('| {0:.1f} | {1:.2e} | {2:.2e} | {3:.2f} | {4:.2f} |'
          .format(init_scale,
                  final_errors_train[j], final_errors_valid[j],
                  final_accs_train[j], final_accs_valid[j]))
    j += 1
| init_scale | final error(train) | final error(valid) | final acc(train) | final acc(valid) |
|------------|--------------------|--------------------|------------------|------------------|
| 0.1 | 2.03e-03 | 1.35e-01 | 1.00 | 0.97 |
| 0.2 | 1.99e-03 | 1.17e-01 | 1.00 | 0.97 |
| 0.5 | 3.07e-03 | 1.34e-01 | 1.00 | 0.97 |
| 1.0 | 7.72e-03 | 1.62e-01 | 1.00 | 0.96 |
Models with five affine layers¶
In [4]:
# Set training run hyperparameters
batch_size = 100  # number of data points in a batch
num_epochs = 100  # number of training epochs to perform
stats_interval = 5  # epoch interval between recording and printing stats
learning_rate = 0.2  # learning rate for gradient descent
init_scales = [0.1, 0.2, 0.5, 1.]  # scale for random parameter initialisation
final_errors_train = []
final_errors_valid = []
final_accs_train = []
final_accs_valid = []
for init_scale in init_scales:
    print('-' * 80)
    print('learning_rate={0:.2f} init_scale={1:.2f}'
          .format(learning_rate, init_scale))
    print('-' * 80)
    # Reset random number generator and data provider states on each run
    # to ensure reproducibility of results
    rng.seed(seed)
    train_data.reset()
    valid_data.reset()
    # Alter data-provider batch size
    train_data.batch_size = batch_size
    valid_data.batch_size = batch_size
    # Create a parameter initialiser which will sample random uniform values
    # from [-init_scale, init_scale]
    param_init = UniformInit(-init_scale, init_scale, rng=rng)
    # Create a model with five affine layers
    hidden_dim = 100
    model = MultipleLayerModel([
        AffineLayer(input_dim, hidden_dim, param_init, param_init),
        SigmoidLayer(),
        AffineLayer(hidden_dim, hidden_dim, param_init, param_init),
        SigmoidLayer(),
        AffineLayer(hidden_dim, hidden_dim, param_init, param_init),
        SigmoidLayer(),
        AffineLayer(hidden_dim, hidden_dim, param_init, param_init),
        SigmoidLayer(),
        AffineLayer(hidden_dim, output_dim, param_init, param_init)
    ])
    # Initialise a cross entropy error object
    error = CrossEntropySoftmaxError()
    # Use a basic gradient descent learning rule
    learning_rule = GradientDescentLearningRule(learning_rate=learning_rate)
    stats, keys, run_time, fig_1, ax_1, fig_2, ax_2 = train_model_and_plot_stats(
        model, error, learning_rule, train_data, valid_data, num_epochs, stats_interval)
    plt.show()
    print(' final error(train) = {0:.2e}'.format(stats[-1, keys['error(train)']]))
    print(' final error(valid) = {0:.2e}'.format(stats[-1, keys['error(valid)']]))
    print(' final acc(train) = {0:.2e}'.format(stats[-1, keys['acc(train)']]))
    print(' final acc(valid) = {0:.2e}'.format(stats[-1, keys['acc(valid)']]))
    print(' run time per epoch = {0:.2f}s'.format(run_time * 1. / num_epochs))
    final_errors_train.append(stats[-1, keys['error(train)']])
    final_errors_valid.append(stats[-1, keys['error(valid)']])
    final_accs_train.append(stats[-1, keys['acc(train)']])
    final_accs_valid.append(stats[-1, keys['acc(valid)']])
--------------------------------------------------------------------------------
learning_rate=0.20 init_scale=0.10
--------------------------------------------------------------------------------
Epoch 0:
error(train)=2.34e+00, acc(train)=1.04e-01, error(valid)=2.33e+00, acc(valid)=1.09e-01, params_penalty=0.00e+00
Epoch 5: 0.92s to complete
error(train)=2.30e+00, acc(train)=1.14e-01, error(valid)=2.30e+00, acc(valid)=1.06e-01, params_penalty=0.00e+00
Epoch 10: 0.91s to complete
error(train)=2.30e+00, acc(train)=1.14e-01, error(valid)=2.30e+00, acc(valid)=1.06e-01, params_penalty=0.00e+00
Epoch 15: 1.22s to complete
error(train)=2.27e+00, acc(train)=2.44e-01, error(valid)=2.27e+00, acc(valid)=2.44e-01, params_penalty=0.00e+00
Epoch 20: 1.00s to complete
error(train)=1.00e+00, acc(train)=6.55e-01, error(valid)=9.83e-01, acc(valid)=6.59e-01, params_penalty=0.00e+00
Epoch 25: 0.93s to complete
error(train)=4.00e-01, acc(train)=8.96e-01, error(valid)=4.02e-01, acc(valid)=8.97e-01, params_penalty=0.00e+00
Epoch 30: 0.99s to complete
error(train)=2.54e-01, acc(train)=9.35e-01, error(valid)=2.86e-01, acc(valid)=9.32e-01, params_penalty=0.00e+00
Epoch 35: 0.96s to complete
error(train)=1.89e-01, acc(train)=9.47e-01, error(valid)=2.48e-01, acc(valid)=9.38e-01, params_penalty=0.00e+00
Epoch 40: 1.14s to complete
error(train)=1.35e-01, acc(train)=9.62e-01, error(valid)=2.09e-01, acc(valid)=9.50e-01, params_penalty=0.00e+00
Epoch 45: 1.21s to complete
error(train)=9.77e-02, acc(train)=9.74e-01, error(valid)=1.87e-01, acc(valid)=9.57e-01, params_penalty=0.00e+00
Epoch 50: 1.17s to complete
error(train)=7.26e-02, acc(train)=9.81e-01, error(valid)=1.76e-01, acc(valid)=9.59e-01, params_penalty=0.00e+00
Epoch 55: 1.24s to complete
error(train)=5.60e-02, acc(train)=9.85e-01, error(valid)=1.78e-01, acc(valid)=9.61e-01, params_penalty=0.00e+00
Epoch 60: 1.17s to complete
error(train)=4.73e-02, acc(train)=9.87e-01, error(valid)=1.79e-01, acc(valid)=9.58e-01, params_penalty=0.00e+00
Epoch 65: 1.05s to complete
error(train)=3.47e-02, acc(train)=9.91e-01, error(valid)=1.84e-01, acc(valid)=9.62e-01, params_penalty=0.00e+00
Epoch 70: 1.46s to complete
error(train)=2.41e-02, acc(train)=9.95e-01, error(valid)=1.80e-01, acc(valid)=9.63e-01, params_penalty=0.00e+00
Epoch 75: 0.97s to complete
error(train)=1.75e-02, acc(train)=9.97e-01, error(valid)=1.87e-01, acc(valid)=9.62e-01, params_penalty=0.00e+00
Epoch 80: 0.99s to complete
error(train)=1.54e-02, acc(train)=9.97e-01, error(valid)=2.03e-01, acc(valid)=9.62e-01, params_penalty=0.00e+00
Epoch 85: 1.15s to complete
error(train)=1.06e-02, acc(train)=9.99e-01, error(valid)=2.00e-01, acc(valid)=9.64e-01, params_penalty=0.00e+00
Epoch 90: 1.07s to complete
error(train)=7.86e-03, acc(train)=9.99e-01, error(valid)=2.08e-01, acc(valid)=9.64e-01, params_penalty=0.00e+00
Epoch 95: 1.00s to complete
error(train)=6.40e-03, acc(train)=9.99e-01, error(valid)=2.15e-01, acc(valid)=9.64e-01, params_penalty=0.00e+00
Epoch 100: 0.97s to complete
error(train)=4.75e-03, acc(train)=1.00e+00, error(valid)=2.19e-01, acc(valid)=9.64e-01, params_penalty=0.00e+00


final error(train) = 4.75e-03
final error(valid) = 2.19e-01
final acc(train) = 1.00e+00
final acc(valid) = 9.64e-01
run time per epoch = 1.25s
--------------------------------------------------------------------------------
learning_rate=0.20 init_scale=0.20
--------------------------------------------------------------------------------
Epoch 0:
error(train)=2.46e+00, acc(train)=1.04e-01, error(valid)=2.45e+00, acc(valid)=1.09e-01, params_penalty=0.00e+00
Epoch 5: 0.94s to complete
error(train)=5.53e-01, acc(train)=8.34e-01, error(valid)=5.10e-01, acc(valid)=8.50e-01, params_penalty=0.00e+00
Epoch 10: 1.14s to complete
error(train)=1.98e-01, acc(train)=9.44e-01, error(valid)=1.99e-01, acc(valid)=9.45e-01, params_penalty=0.00e+00
Epoch 15: 0.95s to complete
error(train)=1.23e-01, acc(train)=9.64e-01, error(valid)=1.44e-01, acc(valid)=9.59e-01, params_penalty=0.00e+00
Epoch 20: 0.94s to complete
error(train)=8.03e-02, acc(train)=9.78e-01, error(valid)=1.21e-01, acc(valid)=9.66e-01, params_penalty=0.00e+00
Epoch 25: 0.93s to complete
error(train)=8.42e-02, acc(train)=9.74e-01, error(valid)=1.51e-01, acc(valid)=9.62e-01, params_penalty=0.00e+00
Epoch 30: 0.92s to complete
error(train)=7.81e-02, acc(train)=9.74e-01, error(valid)=1.59e-01, acc(valid)=9.59e-01, params_penalty=0.00e+00
Epoch 35: 1.12s to complete
error(train)=3.37e-02, acc(train)=9.91e-01, error(valid)=1.13e-01, acc(valid)=9.70e-01, params_penalty=0.00e+00
Epoch 40: 1.79s to complete
error(train)=2.52e-02, acc(train)=9.94e-01, error(valid)=1.20e-01, acc(valid)=9.71e-01, params_penalty=0.00e+00
Epoch 45: 1.09s to complete
error(train)=2.00e-02, acc(train)=9.95e-01, error(valid)=1.20e-01, acc(valid)=9.70e-01, params_penalty=0.00e+00
Epoch 50: 0.98s to complete
error(train)=1.26e-02, acc(train)=9.98e-01, error(valid)=1.20e-01, acc(valid)=9.70e-01, params_penalty=0.00e+00
Epoch 55: 1.13s to complete
error(train)=8.13e-03, acc(train)=9.99e-01, error(valid)=1.23e-01, acc(valid)=9.70e-01, params_penalty=0.00e+00
Epoch 60: 1.08s to complete
error(train)=6.10e-03, acc(train)=9.99e-01, error(valid)=1.28e-01, acc(valid)=9.71e-01, params_penalty=0.00e+00
Epoch 65: 0.95s to complete
error(train)=4.59e-03, acc(train)=1.00e+00, error(valid)=1.36e-01, acc(valid)=9.71e-01, params_penalty=0.00e+00
Epoch 70: 0.97s to complete
error(train)=3.20e-03, acc(train)=1.00e+00, error(valid)=1.35e-01, acc(valid)=9.71e-01, params_penalty=0.00e+00
Epoch 75: 0.94s to complete
error(train)=2.65e-03, acc(train)=1.00e+00, error(valid)=1.41e-01, acc(valid)=9.70e-01, params_penalty=0.00e+00
Epoch 80: 1.02s to complete
error(train)=2.02e-03, acc(train)=1.00e+00, error(valid)=1.42e-01, acc(valid)=9.71e-01, params_penalty=0.00e+00
Epoch 85: 1.00s to complete
error(train)=1.71e-03, acc(train)=1.00e+00, error(valid)=1.45e-01, acc(valid)=9.71e-01, params_penalty=0.00e+00
Epoch 90: 1.15s to complete
error(train)=1.71e-03, acc(train)=1.00e+00, error(valid)=1.50e-01, acc(valid)=9.72e-01, params_penalty=0.00e+00
Epoch 95: 1.00s to complete
error(train)=1.29e-03, acc(train)=1.00e+00, error(valid)=1.48e-01, acc(valid)=9.71e-01, params_penalty=0.00e+00
Epoch 100: 0.95s to complete
error(train)=1.14e-03, acc(train)=1.00e+00, error(valid)=1.49e-01, acc(valid)=9.71e-01, params_penalty=0.00e+00


final error(train) = 1.14e-03
final error(valid) = 1.49e-01
final acc(train) = 1.00e+00
final acc(valid) = 9.71e-01
run time per epoch = 1.16s
--------------------------------------------------------------------------------
learning_rate=0.20 init_scale=0.50
--------------------------------------------------------------------------------
Epoch 0:
error(train)=3.36e+00, acc(train)=1.04e-01, error(valid)=3.33e+00, acc(valid)=1.09e-01, params_penalty=0.00e+00
Epoch 5: 0.96s to complete
error(train)=2.16e-01, acc(train)=9.36e-01, error(valid)=2.05e-01, acc(valid)=9.40e-01, params_penalty=0.00e+00
Epoch 10: 1.00s to complete
error(train)=1.41e-01, acc(train)=9.57e-01, error(valid)=1.52e-01, acc(valid)=9.57e-01, params_penalty=0.00e+00
Epoch 15: 1.09s to complete
error(train)=1.02e-01, acc(train)=9.69e-01, error(valid)=1.29e-01, acc(valid)=9.62e-01, params_penalty=0.00e+00
Epoch 20: 0.97s to complete
error(train)=7.68e-02, acc(train)=9.78e-01, error(valid)=1.21e-01, acc(valid)=9.66e-01, params_penalty=0.00e+00
Epoch 25: 1.03s to complete
error(train)=5.93e-02, acc(train)=9.83e-01, error(valid)=1.20e-01, acc(valid)=9.65e-01, params_penalty=0.00e+00
Epoch 30: 0.96s to complete
error(train)=4.85e-02, acc(train)=9.86e-01, error(valid)=1.23e-01, acc(valid)=9.66e-01, params_penalty=0.00e+00
Epoch 35: 0.92s to complete
error(train)=3.23e-02, acc(train)=9.92e-01, error(valid)=1.15e-01, acc(valid)=9.69e-01, params_penalty=0.00e+00
Epoch 40: 0.95s to complete
error(train)=2.37e-02, acc(train)=9.95e-01, error(valid)=1.13e-01, acc(valid)=9.69e-01, params_penalty=0.00e+00
Epoch 45: 0.91s to complete
error(train)=1.78e-02, acc(train)=9.97e-01, error(valid)=1.20e-01, acc(valid)=9.68e-01, params_penalty=0.00e+00
Epoch 50: 1.00s to complete
error(train)=1.40e-02, acc(train)=9.98e-01, error(valid)=1.23e-01, acc(valid)=9.69e-01, params_penalty=0.00e+00
Epoch 55: 1.08s to complete
error(train)=1.00e-02, acc(train)=9.99e-01, error(valid)=1.25e-01, acc(valid)=9.70e-01, params_penalty=0.00e+00
Epoch 60: 0.93s to complete
error(train)=8.09e-03, acc(train)=9.99e-01, error(valid)=1.29e-01, acc(valid)=9.70e-01, params_penalty=0.00e+00
Epoch 65: 0.93s to complete
error(train)=6.33e-03, acc(train)=1.00e+00, error(valid)=1.33e-01, acc(valid)=9.69e-01, params_penalty=0.00e+00
Epoch 70: 0.93s to complete
error(train)=4.86e-03, acc(train)=1.00e+00, error(valid)=1.35e-01, acc(valid)=9.70e-01, params_penalty=0.00e+00
Epoch 75: 0.96s to complete
error(train)=5.00e-03, acc(train)=1.00e+00, error(valid)=1.41e-01, acc(valid)=9.70e-01, params_penalty=0.00e+00
Epoch 80: 1.13s to complete
error(train)=3.28e-03, acc(train)=1.00e+00, error(valid)=1.39e-01, acc(valid)=9.70e-01, params_penalty=0.00e+00
Epoch 85: 1.24s to complete
error(train)=2.70e-03, acc(train)=1.00e+00, error(valid)=1.42e-01, acc(valid)=9.70e-01, params_penalty=0.00e+00
Epoch 90: 1.40s to complete
error(train)=2.37e-03, acc(train)=1.00e+00, error(valid)=1.44e-01, acc(valid)=9.71e-01, params_penalty=0.00e+00
Epoch 95: 1.00s to complete
error(train)=2.15e-03, acc(train)=1.00e+00, error(valid)=1.46e-01, acc(valid)=9.70e-01, params_penalty=0.00e+00
Epoch 100: 0.94s to complete
error(train)=1.81e-03, acc(train)=1.00e+00, error(valid)=1.47e-01, acc(valid)=9.71e-01, params_penalty=0.00e+00


final error(train) = 1.81e-03
final error(valid) = 1.47e-01
final acc(train) = 1.00e+00
final acc(valid) = 9.71e-01
run time per epoch = 1.16s
--------------------------------------------------------------------------------
learning_rate=0.20 init_scale=1.00
--------------------------------------------------------------------------------
Epoch 0:
error(train)=5.77e+00, acc(train)=7.52e-02, error(valid)=5.69e+00, acc(valid)=7.71e-02, params_penalty=0.00e+00
Epoch 5: 0.94s to complete
error(train)=2.39e-01, acc(train)=9.28e-01, error(valid)=2.34e-01, acc(valid)=9.31e-01, params_penalty=0.00e+00
Epoch 10: 1.00s to complete
error(train)=1.65e-01, acc(train)=9.51e-01, error(valid)=1.84e-01, acc(valid)=9.45e-01, params_penalty=0.00e+00
Epoch 15: 1.00s to complete
error(train)=1.26e-01, acc(train)=9.63e-01, error(valid)=1.66e-01, acc(valid)=9.50e-01, params_penalty=0.00e+00
Epoch 20: 0.94s to complete
error(train)=1.06e-01, acc(train)=9.68e-01, error(valid)=1.62e-01, acc(valid)=9.55e-01, params_penalty=0.00e+00
Epoch 25: 1.18s to complete
error(train)=8.20e-02, acc(train)=9.77e-01, error(valid)=1.56e-01, acc(valid)=9.57e-01, params_penalty=0.00e+00
Epoch 30: 0.98s to complete
error(train)=6.80e-02, acc(train)=9.80e-01, error(valid)=1.54e-01, acc(valid)=9.57e-01, params_penalty=0.00e+00
Epoch 35: 1.01s to complete
error(train)=5.64e-02, acc(train)=9.85e-01, error(valid)=1.57e-01, acc(valid)=9.58e-01, params_penalty=0.00e+00
Epoch 40: 1.03s to complete
error(train)=4.22e-02, acc(train)=9.90e-01, error(valid)=1.51e-01, acc(valid)=9.58e-01, params_penalty=0.00e+00
Epoch 45: 0.94s to complete
error(train)=3.54e-02, acc(train)=9.92e-01, error(valid)=1.54e-01, acc(valid)=9.60e-01, params_penalty=0.00e+00
Epoch 50: 1.07s to complete
error(train)=2.80e-02, acc(train)=9.94e-01, error(valid)=1.56e-01, acc(valid)=9.59e-01, params_penalty=0.00e+00
Epoch 55: 1.17s to complete
error(train)=2.24e-02, acc(train)=9.96e-01, error(valid)=1.58e-01, acc(valid)=9.61e-01, params_penalty=0.00e+00
Epoch 60: 0.96s to complete
error(train)=1.88e-02, acc(train)=9.97e-01, error(valid)=1.61e-01, acc(valid)=9.61e-01, params_penalty=0.00e+00
Epoch 65: 0.96s to complete
error(train)=1.60e-02, acc(train)=9.98e-01, error(valid)=1.68e-01, acc(valid)=9.60e-01, params_penalty=0.00e+00
Epoch 70: 1.06s to complete
error(train)=1.27e-02, acc(train)=9.99e-01, error(valid)=1.68e-01, acc(valid)=9.61e-01, params_penalty=0.00e+00
Epoch 75: 0.99s to complete
error(train)=1.15e-02, acc(train)=9.99e-01, error(valid)=1.74e-01, acc(valid)=9.60e-01, params_penalty=0.00e+00
Epoch 80: 1.09s to complete
error(train)=9.71e-03, acc(train)=9.99e-01, error(valid)=1.77e-01, acc(valid)=9.60e-01, params_penalty=0.00e+00
Epoch 85: 1.56s to complete
error(train)=7.61e-03, acc(train)=9.99e-01, error(valid)=1.78e-01, acc(valid)=9.61e-01, params_penalty=0.00e+00
Epoch 90: 1.16s to complete
error(train)=6.44e-03, acc(train)=1.00e+00, error(valid)=1.82e-01, acc(valid)=9.61e-01, params_penalty=0.00e+00
Epoch 95: 1.64s to complete
error(train)=5.54e-03, acc(train)=1.00e+00, error(valid)=1.85e-01, acc(valid)=9.61e-01, params_penalty=0.00e+00
Epoch 100: 1.27s to complete
error(train)=4.95e-03, acc(train)=1.00e+00, error(valid)=1.88e-01, acc(valid)=9.60e-01, params_penalty=0.00e+00


final error(train) = 4.95e-03
final error(valid) = 1.88e-01
final acc(train) = 1.00e+00
final acc(valid) = 9.60e-01
run time per epoch = 1.21s
In [19]:
j = 0
print('| init_scale | final error(train) | final error(valid) | final acc(train) | final acc(valid) |')
print('|------------|--------------------|--------------------|------------------|------------------|')
for init_scale in init_scales:
    print('| {0:.1f} | {1:.2e} | {2:.2e} | {3:.2f} | {4:.2f} |'
          .format(init_scale,
                  final_errors_train[j], final_errors_valid[j],
                  final_accs_train[j], final_accs_valid[j]))
    j += 1
| init_scale | final error(train) | final error(valid) | final acc(train) | final acc(valid) |
|------------|--------------------|--------------------|------------------|------------------|
| 0.1 | 4.75e-03 | 2.19e-01 | 1.00 | 0.96 |
| 0.2 | 1.14e-03 | 1.49e-01 | 1.00 | 0.97 |
| 0.5 | 1.81e-03 | 1.47e-01 | 1.00 | 0.97 |
| 1.0 | 4.95e-03 | 1.88e-01 | 1.00 | 0.96 |
How does increasing the number of layers affect the model’s performance on the training data set? And on the validation data set?
For each model architecture, the best final training set error across the four initialisation scales used above consistently decreases as we increase the number of layers.
| Number of affine layers | Best final training set error |
|-------------------------|-------------------------------|
| 2 | $1.85 \times 10^{-2}$ |
| 3 | $5.21 \times 10^{-3}$ |
| 4 | $1.99 \times 10^{-3}$ |
| 5 | $1.14 \times 10^{-3}$ |
This makes sense: as the number of layers increases, for a fixed hidden layer width, the total number of free parameters in the model increases, and so we would expect the model to be able to fit the training data better.
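As a rough illustration, the sketch below counts the free parameters of each stacked architecture, assuming MNIST-sized data (so input_dim = 784 and output_dim = 10) and the hidden_dim = 100 used above; the exact figures depend on these assumed dimensions.
def affine_param_count(n_in, n_out):
    # weight matrix entries plus one bias per output unit
    return n_in * n_out + n_out

input_dim, hidden_dim, output_dim = 784, 100, 10
for n_affine in [2, 3, 4, 5]:
    # layer dimensions for a model with n_affine affine layers
    dims = [input_dim] + [hidden_dim] * (n_affine - 1) + [output_dim]
    n_params = sum(affine_param_count(n_in, n_out)
                   for n_in, n_out in zip(dims[:-1], dims[1:]))
    print('{0} affine layers: {1} free parameters'.format(n_affine, n_params))
Under these assumptions each extra hidden-to-hidden affine layer adds $100 \times 100 + 100 = 10100$ free parameters.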
If we look at the validation set, however, we see the opposite trend: as the number of layers increases, the best final validation set error increases.
| Number of affine layers | Best final validation set error |
|-------------------------|---------------------------------|
| 2 | $7.47 \times 10^{-2}$ |
| 3 | $8.77 \times 10^{-2}$ |
| 4 | $1.17 \times 10^{-1}$ |
| 5 | $1.47 \times 10^{-1}$ |
If we look more closely at the training curves for the models with more layers we can see what is happening here. For the models with three or more layers, after a certain number of epochs the validation set error begins to *increase* even as the training set error continues to decrease. This indicates that these models have begun *overfitting* to the training data. We could get a better validation set error in these cases by stopping the training early. *Early stopping* like this is one way of trying to overcome overfitting; in later labs we will consider other methods for improving generalisation by reducing overfitting.
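As a quick post-hoc check of how much early stopping could help, we can read off the minimum validation set error from the stats array returned by train_model_and_plot_stats. This is a minimal sketch, assuming stats, keys and stats_interval from the last run above are still in scope and that stats holds one recorded row every stats_interval epochs, as in the printed logs:
import numpy as np

valid_errors = stats[:, keys['error(valid)']]
best_idx = int(np.argmin(valid_errors))
print(('best error(valid) = {0:.2e} at around epoch {1}, versus '
       'final error(valid) = {2:.2e}').format(
    valid_errors[best_idx], best_idx * stats_interval, valid_errors[-1]))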
Do deeper models seem to be harder or easier to train (e.g. in terms of ease of choosing training hyperparameters to give good final performance and/or quick convergence)?
Do the models seem to be sensitive to the choice of the parameter initialisation range? Can you think of any reasons for why setting individual parameter initialisation scales for each AffineLayer in a model might be useful? Can you come up with (or find) any heuristics for setting the parameter initialisation scales?
The final performance of the deeper models becomes increasingly sensitive to the choice of parameter initialisation. For the models with two affine layers, the final training errors for initialisation scales 0.1, 0.2 and 0.5 are all within approximately 10% of each other, whereas for the models with five affine layers the final training error grows by roughly a factor of four when moving from an initialisation scale of 0.2 to 0.1, and by more than 50% when moving from 0.2 to 0.5. The smaller initialisation scales in particular give the deeper models poorer initial performance (their error curves start from higher values), and for the five-affine-layer model the run with the smallest initialisation scale shows a pronounced plateau at the start of training, with around 15 epochs passing before the error starts to decrease significantly.
In general the models with more layers also take longer to train per epoch, so on top of the potential for overfitting and the difficulty of choosing parameter initialisations, we also need to factor in the slower training of deeper models if computation time is a key constraint.
We might expect the appropriate initialisation scale for a given affine layer to depend on its input and output dimensionalities. Each output is calculated as a weighted sum of all the inputs, so for a larger number of inputs the typical magnitude of the output activations will become larger, as each is calculated from a sum over more values. Similarly, the backpropagated gradient at each input is calculated as a weighted sum over the gradients at the outputs, so for a larger number of outputs the typical magnitude of the backpropagated gradients will become larger.
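To make this concrete: if we assume, purely for illustration, that the weights and inputs are independent and zero-mean with the same variance for every unit, then the variance of the weighted sum computed for each output of an affine layer with $D$ inputs grows linearly with $D$,
\begin{equation} \mathrm{Var}\lpa \sum_{d=1}^D W_{kd} x_d \rpa = \sum_{d=1}^D \mathrm{Var}\lpa W_{kd} x_d \rpa = D \, \mathrm{Var}\lpa W \rpa \mathrm{Var}\lpa x \rpa, \end{equation}
so the typical magnitude (standard deviation) of the output activations grows like $\sqrt{D}$; an analogous argument for the backpropagated gradients replaces $D$ with the number of outputs.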
If we wish to keep some measure of the typical magnitude of the activations and backpropagated gradients at a given layer roughly constant through the network then we may therefore wish to set the parameter initialisation in a layer-dimensionality-dependent way. One heuristic based on trying to achieve a roughly constant variance in activations and backpropagated gradients through the network is to initialise the weights for a layer from a distribution with variance inversely proportional to the sum of the input and output dimensions of the layer. This is sometimes known as the Glorot or Xavier initialisation, after the first author of [the paper](http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf) in which this scheme was proposed. You will use this parameter initialisation scheme in the first coursework.
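As a rough sketch of what such an initialiser might look like (this is not the implementation provided in the mlp framework, and it assumes, like the UniformInit initialiser used above, that an initialiser is a callable that is passed the shape of the parameter array to create):
import numpy as np

class GlorotUniformInit(object):
    """Hypothetical Glorot/Xavier-style uniform weight initialiser (sketch)."""

    def __init__(self, gain=1., rng=None):
        self.gain = gain
        self.rng = rng if rng is not None else np.random.RandomState()

    def __call__(self, shape):
        # Intended for two-dimensional weight matrices: the two entries of
        # shape are the layer input and output dimensions (their order does
        # not matter as only their sum is used). Sampling uniformly from
        # [-limit, limit] with limit = sqrt(6 / (d_in + d_out)) gives a
        # variance of 2 / (d_in + d_out), i.e. inversely proportional to the
        # sum of the layer dimensions as described above.
        limit = self.gain * np.sqrt(6. / sum(shape))
        return self.rng.uniform(low=-limit, high=limit, size=shape)
An initialiser like this would be passed to each AffineLayer in place of the fixed-scale param_init used in the runs above, with the biases typically initialised separately (for example to zero).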