In the last two lectures, we discuss a general framework for learning: neural networks.
Elements of Statistical Learning (ESL) Chapter 11: https://web.stanford.edu/~hastie/ElemStatLearn/.
Stanford CS231n: http://cs231n.github.io.
On the origin of deep learning by Wang and Raj (2017): https://arxiv.org/pdf/1702.07800.pdf
Aka single layer perceptron (SLP) or single hidden layer back-propagation network.
Sum of nonlinear functions of linear combinations of the inputs, typically represented by a network diagram.
Output layer: \(Y=(Y_1, \ldots, Y_K)\) is the \(K\)-dimensional output. For a univariate response, \(K=1\); for \(K\)-class classification, the \(k\)-th unit models the probability of class \(k\).
Input layer: \(X=(X_1, \ldots, X_p)\) are \(p\)-dimensional input features.
Hidden layer: \(Z=(Z_1, \ldots, Z_M)\) are derived features created from linear combinations of inputs \(X\).
\(T=(T_1, \ldots, T_K)\) are the output features that are directly associated with the outputs \(Y\) through output functions \(g_k(\cdot)\).
\(g_k(T) = T_k\) for regression. \(g_k(T) = e^{T_k} / \sum_{\ell=1}^K e^{T_\ell}\) for \(K\)-class classification (softmax regression).
Number of weights (parameters) is \(M(p+1) + K(M+1)\).
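As a concrete sketch (illustrative only; the array names below are made up for this example), the forward pass of a single-hidden-layer network with softmax output takes only a few lines of NumPy, and the number of parameters agrees with \(M(p+1) + K(M+1)\):

```python
# Illustrative sketch: forward pass of a single-hidden-layer network
# with softmax output; names (alpha, beta, ...) are made up for this example.
import numpy as np

rng = np.random.default_rng(0)
p, M, K = 4, 3, 2                      # input dim, hidden units, output classes

alpha = rng.normal(size=(M, p + 1))    # hidden-layer weights (first column = bias)
beta  = rng.normal(size=(K, M + 1))    # output-layer weights (first column = bias)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def forward(x):
    z = sigmoid(alpha @ np.append(1.0, x))   # Z_m = sigma(alpha_m0 + alpha_m^T x)
    t = beta @ np.append(1.0, z)             # T_k = beta_k0 + beta_k^T z
    return np.exp(t) / np.exp(t).sum()       # g_k(T): softmax probabilities

x = rng.normal(size=p)
print(forward(x))                                            # K class probabilities
print(alpha.size + beta.size == M * (p + 1) + K * (M + 1))   # parameter count: True
```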
Activation function \(\sigma\):
\(\sigma(v)=\) a step function: in human brain models, each unit represents a neuron and the connections represent synapses; a neuron fires when the total signal passed to it exceeds a certain threshold.
Rectifier. \(\sigma(v) = v_+ = \max(0, v)\). A unit employing the rectifier is called a rectified linear unit (ReLU). According to Wikipedia, the rectifier is, as of 2018, the most popular activation function for deep neural networks.
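For reference, a small illustrative NumPy sketch of the activation functions above:

```python
# Illustrative sketch of common activation functions.
import numpy as np

def step(v):                  # hard threshold used in early neuron models
    return (v > 0).astype(float)

def sigmoid(v):               # smooth, S-shaped activation
    return 1.0 / (1.0 + np.exp(-v))

def relu(v):                  # rectifier: sigma(v) = max(0, v)
    return np.maximum(0.0, v)

v = np.linspace(-3, 3, 7)
print(step(v))
print(sigmoid(v).round(3))
print(relu(v))
```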
Given training data \((X_1, Y_1), \ldots, (X_n, Y_n)\), the loss function \(L\) can be:
Sum of squares error (SSE): \[ L = \sum_{k=1}^K \sum_{i=1}^n [y_{ik} - f_k(x_i)]^2. \]
Cross-entropy (deviance): \[ L = - \sum_{k=1}^K \sum_{i=1}^n y_{ik} \log f_k(x_i). \]
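A minimal sketch of the two losses, assuming `Y` is an \(n \times K\) indicator (one-hot) matrix and `F` holds the fitted values \(f_k(x_i)\); these names are illustrative only:

```python
# Illustrative sketch of the two loss functions; Y is an n x K indicator
# (one-hot) matrix and F holds the fitted values f_k(x_i).
import numpy as np

def sse(Y, F):
    """L = sum_k sum_i (y_ik - f_k(x_i))^2."""
    return np.sum((Y - F) ** 2)

def cross_entropy(Y, F, eps=1e-12):
    """L = - sum_k sum_i y_ik log f_k(x_i)."""
    return -np.sum(Y * np.log(F + eps))

Y = np.array([[1, 0], [0, 1], [1, 0]], dtype=float)   # one-hot labels
F = np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])    # fitted class probabilities
print(sse(Y, F), cross_entropy(Y, F))
```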
Model fitting: back-propagation (gradient descent). At the \(r\)-th iteration, the weights are updated by \[ \beta_{km}^{(r+1)} = \beta_{km}^{(r)} - \gamma_r \frac{\partial L}{\partial \beta_{km}^{(r)}}, \qquad \alpha_{m\ell}^{(r+1)} = \alpha_{m\ell}^{(r)} - \gamma_r \frac{\partial L}{\partial \alpha_{m\ell}^{(r)}}, \] where \(\gamma_r\) is the learning rate.
Writing \(L = \sum_{i=1}^n L_i\), the gradients take the form \(\partial L_i / \partial \beta_{km} = \delta_{ki} z_{mi}\) and \(\partial L_i / \partial \alpha_{m\ell} = s_{mi} x_{i\ell}\), where \(\delta_{ki}\) are the output-layer errors. The hidden-layer errors \(s_{mi}\) satisfy the back-propagation equations \[ s_{mi} = \sigma'(\alpha_m^T x_i) \sum_{k=1}^K \beta_{km} \delta_{ki}. \]
Advantages: each hidden unit passes and receives information only to and from units that share a connection, so the computations can be implemented efficiently on a parallel architecture.
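Putting the pieces together, here is an illustrative NumPy sketch of one gradient-descent step computed via the back-propagation equations, for a single-hidden-layer network with sigmoid hidden units, identity output, and squared-error loss (all array names and sizes below are made up for this example):

```python
# Illustrative sketch: one gradient-descent step via back-propagation for a
# single-hidden-layer network (sigmoid hidden units, identity output,
# squared-error loss). All names and sizes are made up for this example.
import numpy as np

rng = np.random.default_rng(0)
n, p, M, K = 50, 4, 3, 2
X = rng.normal(size=(n, p))
Y = rng.normal(size=(n, K))

alpha = rng.uniform(-0.7, 0.7, size=(M, p + 1))    # hidden weights (first column = bias)
beta  = rng.uniform(-0.7, 0.7, size=(K, M + 1))    # output weights (first column = bias)
gamma = 0.01                                       # learning rate gamma_r

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Forward pass
X1 = np.column_stack([np.ones(n), X])              # prepend intercept
A  = X1 @ alpha.T                                  # n x M pre-activations alpha_m^T x_i
Z  = sigmoid(A)                                    # hidden features z_mi
Z1 = np.column_stack([np.ones(n), Z])
F  = Z1 @ beta.T                                   # outputs f_k(x_i) (identity g_k)

# Back-propagation
delta = -2.0 * (Y - F)                             # output-layer errors delta_ki
S = Z * (1 - Z) * (delta @ beta[:, 1:])            # s_mi = sigma'(a) * sum_k beta_km delta_ki

grad_beta  = delta.T @ Z1                          # dL/dbeta_km  = sum_i delta_ki z_mi
grad_alpha = S.T @ X1                              # dL/dalpha_ml = sum_i s_mi x_il

# Gradient-descent update
beta  -= gamma * grad_beta
alpha -= gamma * grad_alpha
```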
Stochastic gradient descent (SGD). In real machine learning applications, the training set can be large, so back-propagation over all training cases can be expensive. Learning can also be carried out online: process the training cases in batches, update the weights after each training batch, and cycle through the training cases many times. A training epoch refers to one sweep through the entire training set.
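A generic mini-batch SGD loop might look as follows (sketch only; it assumes a user-supplied `grad(w, X_batch, Y_batch)` function returning the batch gradient):

```python
# Illustrative sketch of a mini-batch SGD loop; `grad` is assumed to return
# the gradient of the loss evaluated on one batch of training cases.
import numpy as np

def sgd(w, X, Y, grad, gamma=0.01, batch_size=32, n_epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    for epoch in range(n_epochs):                 # one epoch = one sweep through the data
        perm = rng.permutation(n)                 # shuffle training cases each epoch
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            w = w - gamma * grad(w, X[idx], Y[idx])   # update after each batch
    return w
```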
AdaGrad and RMSProp improve the stability of SGD by trying to incorporate Hessian information in a computationally cheap way.
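The per-coordinate updates can be sketched as follows (illustrative only; `cache` holds the running squared-gradient estimate, and the default step sizes are arbitrary):

```python
# Illustrative sketch of AdaGrad / RMSProp updates: rescale each coordinate of
# the gradient g by a running estimate of its squared magnitude (`cache`),
# a computationally cheap stand-in for curvature information.
import numpy as np

def adagrad_step(w, g, cache, gamma=0.01, eps=1e-8):
    cache = cache + g ** 2                        # accumulate all past squared gradients
    return w - gamma * g / (np.sqrt(cache) + eps), cache

def rmsprop_step(w, g, cache, gamma=0.001, rho=0.9, eps=1e-8):
    cache = rho * cache + (1 - rho) * g ** 2      # exponentially weighted average
    return w - gamma * g / (np.sqrt(cache) + eps), cache
```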
The neural network model is a projection pursuit type additive model: \[ f(X) = \beta_0 + \sum_{m=1}^M \beta_m \sigma(\alpha_{m0} + \alpha_m^T X). \]
Aka multi-layer perceptron (MLP).
Boolean Approximation: an MLP of one hidden layer can represent any Boolean function exactly.
Continuous Approximation: an MLP of one hidden layer can approximate any bounded continuous function with arbitrary accuracy.
Arbitrary Approximation: an MLP of two hidden layers can approximate any function with arbitrary accuracy.
Neural networks are not a fully automatic tool, as they are sometimes advertised; as with all statistical models, subject matter knowledge should be, and often is, used to improve their performance.
Starting values: usually starting values for weights are chosen to be random values near zero; hence the model starts out nearly linear (for sigmoid), and becomes nonlinear as the weights increase.
Figure from Srivastava, Hinton, Krizhevsky, Sutskever, and Salakhutdinov (2014).
Scaling of inputs: standardize to mean 0 and standard deviation 1. With standardized inputs, it is typical to take random uniform starting weights over the range \([-0.7, +0.7]\) (see the sketch after this list).
How many hidden units and how many hidden layers: guided by domain knowledge and experimentation.
Multiple minima: try with different starting values.
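A short sketch of the standardization and random starting values mentioned above (illustrative names and sizes):

```python
# Illustrative sketch: standardize inputs, then draw random starting weights
# near zero (uniform on [-0.7, 0.7]); names and sizes are made up.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 4))      # raw inputs

X_std = (X - X.mean(axis=0)) / X.std(axis=0)           # each column: mean 0, sd 1

p, M = X_std.shape[1], 3
alpha_start = rng.uniform(-0.7, 0.7, size=(M, p + 1))  # small weights: sigmoid units
                                                       # start out roughly linear
```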
Fully connected networks don’t scale well with dimension of input images. E.g. \(96 \times 96\) images have about \(10^4\) input units, and assuming you want to learn 100 features, you have about \(10^6\) parameters to learn.
In locally connected networks, each hidden unit only connects to a small contiguous region of pixels in the input, e.g., a patch of image or a time span of the input audio.
Consider \(96 \times 96\) images. For each feature, first learn an \(8 \times 8\) feature detector (also called a filter or kernel) from (possibly randomly sampled) \(8 \times 8\) patches of the larger image. Then apply the learned detector to all \(8 \times 8\) regions of the \(96 \times 96\) image to obtain one \(89 \times 89\) convolved feature map for that feature.
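An illustrative NumPy sketch of this "valid" convolution; the image and kernel are random placeholders, and the resulting feature map is \(89 \times 89\) since \(96 - 8 + 1 = 89\):

```python
# Illustrative sketch: "valid" 2-D convolution of one 8x8 feature detector
# over a 96x96 image, giving an 89x89 feature map (96 - 8 + 1 = 89).
import numpy as np

rng = np.random.default_rng(0)
image  = rng.normal(size=(96, 96))     # placeholder image
kernel = rng.normal(size=(8, 8))       # placeholder learned 8x8 detector (filter)

H = image.shape[0] - kernel.shape[0] + 1
W = image.shape[1] - kernel.shape[1] + 1
feature_map = np.empty((H, W))
for i in range(H):
    for j in range(W):                 # slide the detector over all 8x8 regions
        feature_map[i, j] = np.sum(image[i:i + 8, j:j + 8] * kernel)

print(feature_map.shape)               # (89, 89)
```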
From Wang and Raj (2017):
Input: 256 pixel values from \(16 \times 16\) grayscale images. Output: digits 0, 1, …, 9 (10-class classification).
A modest experiment subset: 320 training digits and 160 testing digits.
network | links | weights | accuracy |
---|---|---|---|
net 1 | 2570 | 2570 | 80.0% |
net 2 | 3124 | 3214 | 87.0% |
net 3 | 1226 | 1226 | 88.5% |
net 4 | 2266 | 1131 | 94.0% |
net 5 | 5194 | 1060 | 98.4% |
Net-5 and similar networks were state-of-the-art in the early 1990s.
On the larger benchmark dataset MNIST (60,000 training images, 10,000 testing images), the following error rates were reported:
Method | Error rate |
---|---|
tangent distance with 1-nearest neighbor classifier | 1.1% |
degree-9 polynomial SVM | 0.8% |
LeNet-5 | 0.8% |
boosted LeNet-4 | 0.7% |
Source: http://cs231n.github.io/convolutional-networks/
ImageNet dataset. Classify 1.2 million high-resolution images into 1000 classes.
A combination of techniques: GPU computing, ReLU activations, and dropout regularization.
96 learnt filters:
Sources: http://web.stanford.edu/class/cs224n/
https://colah.github.io/posts/2015-08-Understanding-LSTMs/
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
MLP (multi-layer perceptron) and CNN (convolutional neural network) are examples of feed-forward neural networks, where connections between the units do not form a cycle.
MLP and CNN accept a fixed-size vector as input (e.g. an image) and produce a fixed-size vector as output (e.g. probabilities of different classes).
RNNs allow us to operate over sequences of vectors: sequences in the input, the output, or in the most general case both.
Applications of RNN:
Above: generated (fake) LaTeX on algebraic geometry; see http://karpathy.github.io/2015/05/21/rnn-effectiveness/.
RNNs accept an input vector \(x\) and give you an output vector \(y\). Crucially, however, this output vector's contents are influenced not only by the input you just fed in, but also by the entire history of inputs you've fed in the past.
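A minimal sketch of this statefulness (the tanh recurrence below is one common choice of vanilla RNN; weight names and sizes are made up):

```python
# Illustrative sketch of a vanilla RNN step: the output depends on the current
# input x and on the hidden state h, which summarizes the history of inputs.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 3, 5, 2
W_xh = rng.normal(scale=0.1, size=(d_hid, d_in))
W_hh = rng.normal(scale=0.1, size=(d_hid, d_hid))
W_hy = rng.normal(scale=0.1, size=(d_out, d_hid))

def rnn_step(x, h):
    h = np.tanh(W_xh @ x + W_hh @ h)   # new state mixes current input and past state
    y = W_hy @ h                       # output is read off the hidden state
    return y, h

h = np.zeros(d_hid)
for t in range(4):                     # feed a sequence of input vectors
    y, h = rnn_step(rng.normal(size=d_in), h)
print(y)
```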
In an LSTM (long short-term memory) network, the cell state allows information to flow along it unchanged.
The gates give the ability to remove or add information to the cell state.
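A sketch of one LSTM cell update using the standard gate equations (weight names are made up and biases are omitted for brevity):

```python
# Illustrative sketch of one LSTM cell update: the forget/input/output gates
# decide what to remove from and add to the cell state c, which otherwise
# flows through largely unchanged.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 3, 4
W_f, W_i, W_o, W_c = [rng.normal(scale=0.1, size=(d_hid, d_in + d_hid)) for _ in range(4)]

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h, c):
    u = np.concatenate([h, x])
    f = sigmoid(W_f @ u)               # forget gate: what to erase from c
    i = sigmoid(W_i @ u)               # input gate: what new information to write
    o = sigmoid(W_o @ u)               # output gate: what part of c to expose
    c = f * c + i * np.tanh(W_c @ u)   # update the cell state
    h = o * np.tanh(c)                 # new hidden state / output
    return h, c

h, c = np.zeros(d_hid), np.zeros(d_hid)
h, c = lstm_step(rng.normal(size=d_in), h, c)
print(h)
```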
Sources: https://sites.google.com/view/cvpr2018tutorialongans/
https://medium.com/ai-society/gans-from-scratch-1-a-deep-introduction-with-code-in-pytorch-and-tensorflow-cb03cdcdba0f
https://skymind.ai/wiki/generative-adversarial-network-gan
The coolest idea in deep learning in the last 20 years.
- Yann LeCun on GANs.
Applications:
AI-generated celebrity photos: https://www.youtube.com/watch?v=G06dEcZ-QTg
Self play: the generator and the discriminator improve by competing against each other.
Value function of GAN \[ \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)} [\log D(x)] + \mathbb{E}_{z \sim p_z(z)} [\log (1 - D(G(z)))]. \]
Training a GAN: alternately update the discriminator \(D\) (holding \(G\) fixed) and the generator \(G\) (holding \(D\) fixed).
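An illustrative PyTorch sketch of this alternating scheme on a toy 1-D data distribution (networks, sizes, hyperparameters, and the data distribution are all made up; the generator update uses the common non-saturating variant rather than literally minimizing \(\log(1 - D(G(z)))\)):

```python
# Illustrative PyTorch sketch of alternating GAN training on a toy 1-D data
# distribution; networks, sizes, and hyperparameters are made up.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator G(z)
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator D(x)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(1000):
    x_real = 0.5 * torch.randn(64, 1) + 2.0    # samples from p_data (toy Gaussian)
    z = torch.randn(64, 8)                     # samples from p_z
    x_fake = G(z)

    # Discriminator step: ascend log D(x) + log(1 - D(G(z))) with G fixed
    opt_D.zero_grad()
    loss_D = bce(D(x_real), ones) + bce(D(x_fake.detach()), zeros)
    loss_D.backward()
    opt_D.step()

    # Generator step with D fixed (non-saturating variant: ascend log D(G(z)))
    opt_G.zero_grad()
    loss_G = bce(D(x_fake), ones)
    loss_G.backward()
    opt_G.step()
```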