notes

  • topics: convolutions, pooling, GPUs (to allow more training), algorithmic expansion of the training data (to reduce overfitting), dropout (also reduces overfitting), ensembles of networks
  • upon reflection, it is strange to use networks with fc (fully-connected) layers to classify images.

    • such a network does not take into account the spatial information of the images:

      • it treats input pixels that are far apart exactly the same as pixels that are close together.
  • cnns use three basic ideas: local receptive fields, shared weights and pooling.

    1. the creation of maps that each learn to detect one feature, i.e. feature maps
    2. a 5x5 patch of the 28x28 input image (the local receptive field) is weighted and added to a bias to produce a single neuron of the feature map. the same weights and bias are reused for every 5x5 patch taken from the input image

      • hence "shared weights". this is why a feature map detects its feature regardless of where it occurs in the input image
    3. with pooling we discard the exact positional information, because a feature's rough position relative to other features matters more than its precise absolute location
  • the "local receptive field" slides over by a "stride-length"

    • BTW, we can use validation data to choose the stride length that gives the best performance!
  • for the \(j\), \(k\)-th hidden neuron the output is:

    \begin{equation} \label{eq:a} \sigma \left ( b + \sum^4_{l=0}\sum^4_{m=0} w_{l,m} a_{j+l, k+m} \right ) \end{equation}
  • the convolution operation can be used to rewrite \ref{eq:a} (a minimal NumPy sketch of this computation appears at the end of this list):

    \begin{equation} \label{eq:a-conv} a^1 = \sigma(b + w * a^0) \end{equation}
  • pooling layers are usually used immediately after convolutional layers.

    • they take each feature map and prepare a condensed feature map
    • max-pooling just takes the highest activation in a given 2x2 region
    • L2 pooling takes the square root of the sum of the squares of the activations in the 2x2 region.
    • we can use the validation data to see which pooling strategy works best! (a sketch of both pooling operations also appears at the end of this list)
  • we need to modify backprop (from network.py / network2.py) for CNNs.
  • softmax plus log-likelihood cost is more common in modern image classification networks (sketched at the end of this list).
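As promised above, a minimal NumPy sketch of the local-receptive-field / shared-weights computation in \ref{eq:a} and \ref{eq:a-conv}. This is not the Theano implementation in network3.py; the function names are illustrative, and the 5x5 field, 28x28 input and sigmoid are taken from the notes above.

  import numpy as np

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  def feature_map(a0, w, b):
      """One feature map: slide a 5x5 local receptive field over the 28x28
      input a0 with stride 1, reusing the same weights w (5x5) and bias b
      at every position -- i.e. a^1 = sigma(b + w * a^0)."""
      H = a0.shape[0] - w.shape[0] + 1   # 24 output rows for a 28x28 input
      W = a0.shape[1] - w.shape[1] + 1   # 24 output columns
      out = np.empty((H, W))
      for j in range(H):
          for k in range(W):
              out[j, k] = b + np.sum(w * a0[j:j + 5, k:k + 5])
      return sigmoid(out)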
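And a sketch of the two pooling operations mentioned above, condensing a 24x24 feature map to 12x12 over non-overlapping 2x2 regions. Again plain NumPy rather than the network3.py code.

  import numpy as np

  def max_pool_2x2(fmap):
      """Max-pooling: keep only the largest activation in each 2x2 region."""
      H, W = fmap.shape                         # e.g. 24x24
      blocks = fmap.reshape(H // 2, 2, W // 2, 2)
      return blocks.max(axis=(1, 3))            # e.g. 12x12

  def l2_pool_2x2(fmap):
      """L2 pooling: square root of the sum of squares in each 2x2 region."""
      H, W = fmap.shape
      blocks = fmap.reshape(H // 2, 2, W // 2, 2)
      return np.sqrt((blocks ** 2).sum(axis=(1, 3)))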
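Finally, a quick sketch of the softmax output plus log-likelihood cost combination used by SoftmaxLayer in network3.py. The formulas are the standard ones; the helper names here are just for illustration.

  import numpy as np

  def softmax(z):
      """Turn the output layer's weighted inputs z into a probability distribution."""
      e = np.exp(z - z.max())      # subtract the max for numerical stability
      return e / e.sum()

  def log_likelihood_cost(z, y):
      """C = -ln a_y: small when the network assigns high probability to the true digit y."""
      return -np.log(softmax(z)[y])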

experiments

  • in the code, the convolutional and pooling layers are treated as a single layer.

network3.py


DONE single hidden layer, baseline

CLOSED: [2025-04-14 Mon 13:06]

  • State "DONE" from [2025-04-14 Mon 13:06]

60 epochs, \(\eta = 0.1\), mini-batch 10, 100 hidden neurons

(theo310) z5362216@k105:~/neural-networks-and-deep-learning/src $ python
Python 3.10.8 (main, Dec  5 2022, 10:38:26) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import network3
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
Trying to run under a GPU.  If this is not desired, then modify network3.py
to set the GPU flag to False.
>>> from network3 import Network
>>> from network3 import ConvPoolLayer, FullyConnectedLayer, SoftmaxLayer
>>> training_data, validation_data, test_data = network3.load_data_shared()
>>> mini_batch_size = 10
>>> net = Network([FullyConnectedLayer(n_in=784, n_out=100),SoftmaxLayer(n_in=100,n_out=10)], mini_batch_size)
>>> net.SGD(training_data, 60, mini_batch_size, 0.1, validation_data, test_data)

#+RESULTS:
Epoch 59: validation accuracy 97.74%
Finished training network.
Best validation accuracy of 97.82% obtained at iteration 114999
Corresponding test accuracy of 97.67%

DONE adding 1 convolutional-pooling layer:

CLOSED: [2025-04-14 Mon 13:06]

  • State "DONE" from [2025-04-14 Mon 13:06]
>>> net = Network([
... ConvPoolLayer(
... image_shape=(mini_batch_size, 1, 28, 28),
... filter_shape=(20,1,5,5),
... poolsize=(2,2)),
... FullyConnectedLayer(n_in=20*12*12, n_out=100),
... SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
>>> net.SGD(training_data, 60, mini_batch_size, 0.1, validation_data, test_data)

#+RESULTS:
Epoch 59: validation accuracy 98.90%
This is the best validation accuracy to date. The corresponding test accuracy is 98.81%
Finished training network.
Best validation accuracy of 98.90% obtained at iteration 299999
Corresponding test accuracy of 98.81%

DONE adding a second conv-pool layer:

CLOSED: [2025-04-14 Mon 13:06]

  • State "DONE" from [2025-04-14 Mon 13:06]
net = Network([
    ConvPoolLayer(
	image_shape=(mini_batch_size, 1, 28, 28),
	filter_shape=(20,1,5,5),poolsize=(2,2)),
    ConvPoolLayer(
	image_shape=(mini_batch_size, 20, 12, 12),
	filter_shape=(40,20,5,5),poolsize=(2,2)),
    FullyConnectedLayer(n_in=40*4*4, n_out=100),
    SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
>>> net.SGD(training_data, 60, mini_batch_size, 0.1, validation_data, test_data)

Epoch 59: validation accuracy 98.94%
Finished training network.
Best validation accuracy of 98.94% obtained at iteration 259999
Corresponding test accuracy of 98.98%

DONE changing to relu activation function:

CLOSED: [2025-04-14 Mon 14:34]

  • State "DONE" from "WAIT" [2025-04-14 Mon 14:34]
  • State "WAIT" from [2025-04-14 Mon 13:06]
  from network3 import ReLU
  net = Network([
      ConvPoolLayer(
	  image_shape=(mini_batch_size, 1, 28, 28),
	  filter_shape=(20,1,5,5),poolsize=(2,2), activation_fn=ReLU),
      ConvPoolLayer(
	  image_shape=(mini_batch_size, 20, 12, 12),
	  filter_shape=(40,20,5,5),poolsize=(2,2), activation_fn=ReLU),
      FullyConnectedLayer(n_in=40*4*4, n_out=100, activation_fn=ReLU),
      SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
  net.SGD(training_data, 60, mini_batch_size, 0.03, validation_data, test_data, lmbda=0.1)

: Epoch 59: validation accuracy 99.12%
: Finished training network.
: Best validation accuracy of 99.13% obtained at iteration 199999
: Corresponding test accuracy of 99.19%

DONE augment training data:

CLOSED: [2025-04-14 Mon 21:43]

  • State "DONE" from "WAIT" [2025-04-14 Mon 21:43]
  • State "WAIT" from "TODO" [2025-04-14 Mon 15:11]

a minor change, but significant: shift each image by 1 pixel up/down/left/right

  python expand_mnist.py
  expanded_training_data, _, _ = network3.load_data_shared("../data/mnist_expanded.pkl.gz")
  net = Network([
      ConvPoolLayer(
	  image_shape=(mini_batch_size, 1, 28, 28),
	  filter_shape=(20,1,5,5),poolsize=(2,2), activation_fn=ReLU),
      ConvPoolLayer(
	  image_shape=(mini_batch_size, 20, 12, 12),
	  filter_shape=(40,20,5,5),poolsize=(2,2), activation_fn=ReLU),
      FullyConnectedLayer(n_in=40*4*4, n_out=100, activation_fn=ReLU),
      SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
  net.SGD(expanded_training_data, 60, mini_batch_size, 0.03, validation_data, test_data, lmbda=0.1)

Epoch 59: validation accuracy 99.39%
Finished training network.
Best validation accuracy of 99.40% obtained at iteration 1449999
Corresponding test accuracy of 99.36%

expand_mnist code

expand_mnist.py

DONE dropout (regularisation)

CLOSED: [2025-04-16 Wed 21:01]

  • State "DONE" from "TODO" [2025-04-16 Wed 21:01]
     net = Network([
	 ConvPoolLayer(
	     image_shape=(mini_batch_size, 1, 28, 28),
	     filter_shape=(20,1,5,5),poolsize=(2,2), activation_fn=ReLU),
	 ConvPoolLayer(
	     image_shape=(mini_batch_size, 20, 12, 12),
	     filter_shape=(40,20,5,5),poolsize=(2,2), activation_fn=ReLU),
	 FullyConnectedLayer(n_in=40*4*4, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
	 FullyConnectedLayer(n_in=1000, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
	 SoftmaxLayer(n_in=1000, n_out=10, p_dropout=0.5)
     ], mini_batch_size)
     >>> net.SGD(expanded_training_data, 40, mini_batch_size, 0.03, validation_data, test_data, lmbda=0.1)

Epoch 39: validation accuracy 99.54%
Finished training network.
Best validation accuracy of 99.62% obtained at iteration 874999
Corresponding test accuracy of 99.60%

discussion

from fc to c-p

  • a big advantage of shared weights (and biases) is that it greatly reduces the number of parameters of the network.

    • in this case, replacing a fully-connected first layer with a c-p (convolutional-pooling) layer gives roughly a 40x saving in parameters (worked out below)
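A rough check of that factor, assuming the book's comparison of a first conv-pool layer with 20 feature maps against a modest fully-connected first layer of 30 hidden neurons:

  • conv-pool layer: \(20 \times (5 \times 5 + 1) = 520\) shared weights and biases
  • fully-connected layer: \(784 \times 30 + 30 = 23550\) weights and biases

\(23550 / 520\) is about 45, i.e. roughly a 40x saving.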

second conv-pool layer

  • what does it conceptually mean to add such a layer?

    • just think of the new inputs as slightly more condensed versions of the original image, with lots of patterns still to discover.
    • interestingly, there is no longer a single input image: there are as many input channels as there are feature maps (20 here). how should the second layer handle this?

      • the answer is the same as if the input image were RGB: let the convolution operation sum across all the input channels, as sketched below.
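A NumPy sketch of that idea: one output feature map of the second conv-pool layer sums its 5x5 receptive field over all 20 incoming feature maps, exactly as it would over the 3 channels of an RGB image. The shapes follow the filter_shape=(40, 20, 5, 5) used in the experiment above; this is not the Theano code.

  import numpy as np

  def second_layer_feature_map(a, w, b):
      """a: (20, 12, 12) pooled feature maps from the first layer;
      w: (20, 5, 5) weights for ONE of the 40 output maps; b: its scalar bias.
      The 5x5 window is summed over all 20 input channels at once."""
      C, H, W = a.shape
      out = np.empty((H - 4, W - 4))      # 8x8 here, pooled to 4x4 afterwards
      for j in range(H - 4):
          for k in range(W - 4):
              out[j, k] = b + np.sum(w * a[:, j:j + 5, k:k + 5])
      return np.maximum(out, 0.0)         # ReLU, as in the experiments above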

relu

  • empirically this performs better than the sigmoid.

    • \(\max(0,z)\) doesn't saturate in the limit of large \(z\), unlike sigmoid neurons (illustrated below)
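A tiny numerical illustration of that saturation, using the standard sigmoid derivative \(\sigma'(z) = \sigma(z)(1-\sigma(z))\); plain NumPy, just to print the gradients.

  import numpy as np

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  z = np.array([0.0, 2.0, 10.0, 30.0])
  print(sigmoid(z) * (1 - sigmoid(z)))   # sigmoid gradient: 0.25, 0.105, ~4.5e-05, ~9e-14 -- it saturates
  print(np.where(z > 0, 1.0, 0.0))       # ReLU gradient: stays at 1 for every positive z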

expanded mnist

  • reasonable gains to be had here. we explode the training data from 50,000 images to 250,000.

    • each image generates four extra copies, shifted one pixel up/down/left/right (see the sketch after this list)
  • in 2003, Simard, Steinkraus and Platt improved their MNIST performance to 99.6% by augmenting the training data with "elastic distortions" that mimic natural variations in handwriting

    • they did not have ReLU back then.
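A minimal sketch of the one-pixel shift, not the actual expand_mnist.py code (which also handles the pickled data format); the function name and the zero padding are illustrative assumptions.

  import numpy as np

  def shift(img, dx, dy):
      """Shift a 28x28 image dx pixels right and dy pixels down,
      padding the vacated edge with zeros (background)."""
      out = np.zeros_like(img)
      src = img[max(0, -dy):28 - max(0, dy), max(0, -dx):28 - max(0, dx)]
      out[max(0, dy):28 - max(0, -dy), max(0, dx):28 - max(0, -dx)] = src
      return out

  img = np.zeros((28, 28))   # stand-in for one MNIST image reshaped to 28x28
  variants = [shift(img, dx, dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]]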

dropout

  • our best result.
  • we applied dropout to the FC layers only. convolutional layers get a regularising effect for free from their shared weights, which force features to be useful across the whole image. (dropout itself is sketched below)
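A sketch of what dropout does to one fully-connected layer's activations during training. This is the "inverted" variant that rescales at training time; network3.py handles the rescaling on the test-time side instead, but the idea is the same.

  import numpy as np

  rng = np.random.default_rng(0)

  def dropout(a, p_dropout=0.5):
      """Zero a random fraction p_dropout of the activations and rescale the
      survivors, so each forward pass trains a different thinned sub-network."""
      mask = rng.binomial(1, 1.0 - p_dropout, size=a.shape)
      return a * mask / (1.0 - p_dropout)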

ensembles

  • Nielsen implemented this himself, letting several trained networks vote, and achieved 99.67 percent accuracy.

    • realise that this implies 9,967 / 10,000 images were classified correctly!

conclusion

we managed to train despite the difficulties (exploding / vanishing gradients).

these difficulties did not disappear, but rather we avoided them by:

  • using convolutional layers, which greatly reduce the number of parameters that would suffer from these problems
  • using dropout and more training data to reduce overfitting
  • using ReLU instead of sigmoids
  • using GPUs, which make it feasible to train for longer
  • good weight initialisations

    • note that these are different for the different activation functions.

deep belief networks are worth looking into. they can do both unsupervised and semi-supervised learning, and they are generative models. a key component of these is the restricted Boltzmann machine.

To recognise shapes, first learn to generate images.—Geoffrey Hinton

The ability to learn hierarchies of concepts, building up multiple layers of abstraction, seems to be fundamental to making sense of the world.

Conway's Law:

Any organization that designs a system… will inevitably produce a design whose structure is a copy of the organization's communication structure.

The mark of a mature field is the necessity for specialisation; cf. Hippocrates / Galen in medicine:

"the fields start out monolithic, with just a few deep ideas. early experts can master all those ideas. but as time passes that monolithic character changes. we discover many deep new ideas, too many for any one person to really master."