Feedforward Deep Neural Networks

This page contains my chapter notes for the book Neural Networks and Deep Learning by Michael Nielsen.

chapter 3: improving the way neural networks learn

chapter 4: a visual proof that neural nets can compute any function

chapter 5: why are deep neural networks hard to train?

chapter 6: deep learning

chapter 1: using neural networks to recognise handwritten digits

notes

  • insight is forever
  • his code is written in python 2.7
  • emotional commitment is a key to achieving mastery
[figure: primary-visual.png — the visual cortex is located in the occipital lobe]
  • primary visual cortex has 140 million neurons
  • two types of artificial neuron: perceptron, sigmoid neuron
  • a perceptron takes several binary inputs and produces a single binary output
  • perceptrons can be thought of as making decisions by weighing up evidence (the inputs)
  • perceptrons can implement NAND, and since NAND is universal for computation, networks of these gates can build any computation (see the sketch after the figure below)
[figure: nand.svg]
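
a minimal sketch of that claim (Python 3 here; the weights \(-2, -2\) and bias \(3\) are the NAND example from the book, the `perceptron` helper itself is mine):

```python
# A perceptron fires (outputs 1) exactly when w.x + b > 0.
# With w = (-2, -2) and b = 3 -- Nielsen's example -- it computes NAND.
def perceptron(x, w=(-2, -2), b=3):
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, '->', perceptron(x))   # (1, 1) -> 0, everything else -> 1
```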

sigmoid neurons

  • we want to tweak the weights and biases such that small changes in either produce only small changes in the output
  • for that we must break free from the discontinuous sgn step function and introduce the smooth sigmoid function
[figure: sgn.svg — binary, discontinuous sign] \(\leadsto\) [figure: sig.svg — continuous, differentiable sigmoid]

thus the mathematics of \(\sigma\) becomes: \[\begin{align*} \sigma(z) &\equiv \cfrac{1}{1+e^{-z}}\\ \sigma\Bigl(\textstyle\sum_j w_jx_j + b\Bigr) &= \cfrac{1}{1+\exp\bigl(-\sum_j w_jx_j - b\bigr)} \end{align*}\]
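
a quick sketch of \(\sigma\) in code (Python 3 with numpy assumed; the book's own code is Python 2.7, and the `neuron_output` helper is mine):

```python
import numpy as np

def sigmoid(z):
    """Squash any real z into (0, 1); small changes in z give small changes in output."""
    return 1.0 / (1.0 + np.exp(-z))

# Output of a single sigmoid neuron with weights w, inputs x, and bias b.
def neuron_output(w, x, b):
    return sigmoid(np.dot(w, x) + b)
```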

Read more >

chapter 2: how the backpropagation algorithm works

  • the algorithm was introduced in the 1970s, but its importance wasn't fully appreciated until the famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams.
  • "workhorse of learning in neural networks"
  • at the heart of it is an expression that tells us how quickly the cost function changes when we change the weights and biases.
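
that expression can be sanity-checked with a finite-difference estimate; the helper below is a hypothetical sketch of that idea, not the book's code (`C` maps a weight array to a scalar cost):

```python
import numpy as np

def numerical_grad(C, w, eps=1e-6):
    """Finite-difference estimate of dC/dw for each weight -- the quantities
    backprop computes exactly and far more cheaply."""
    grad = np.zeros_like(w)
    for i in range(w.size):
        bump = np.zeros_like(w)
        bump.flat[i] = eps
        grad.flat[i] = (C(w + bump) - C(w - bump)) / (2 * eps)
    return grad
```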
[figure: activations.svg — activation diagram of a single neuron in matrix notation]
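
a sketch of that diagram in code, assuming `weights` and `biases` are lists of per-layer numpy arrays shaped as in the book's network.py:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(a, weights, biases):
    """Propagate activation a through the network: each layer computes
    a' = sigmoid(w @ a + b), with w of shape (neurons_out, neurons_in)."""
    for w, b in zip(weights, biases):
        a = sigmoid(w @ a + b)
    return a
```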

notation

  • \(w_{jk}^l\) denotes the weight for the connection from the \(k^{th}\) neuron in the \((l-1)^{th}\) layer to the \(j^{th}\) neuron in layer \(l\)
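
the index order looks backwards, but it is what lets each layer's activations be written without transposing the weight matrix: \[a_j^l = \sigma\Bigl(\sum_k w_{jk}^l\, a_k^{l-1} + b_j^l\Bigr) \quad\Longleftrightarrow\quad a^l = \sigma\bigl(w^l a^{l-1} + b^l\bigr)\]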

    Read more >