Data science

In the last two lectures, we discussed a general framework for learning: neural networks.

History and recent surge

From Wang and Raj (2017):

Learning sources

Single layer neural network (SLP)
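
A minimal sketch of a single-layer network, here a single logistic unit trained by gradient descent; the toy data, learning rate, and NumPy implementation are illustrative choices, not prescribed by the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w = np.zeros(2)   # weights
b = 0.0           # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(200):
    p = sigmoid(X @ w + b)           # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)  # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```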

Multi-layer neural network (MLP)
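
A minimal MLP sketch, assuming PyTorch as the framework (the lecture does not prescribe one); a single hidden ReLU layer trained on toy labels:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

X = torch.randn(100, 2)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)  # toy binary labels

model = nn.Sequential(
    nn.Linear(2, 16),  # input -> hidden
    nn.ReLU(),         # nonlinearity between the layers
    nn.Linear(16, 1),  # hidden -> output logit
)

opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")
```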

Universal approximation properties
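
The classical one-hidden-layer statement (Cybenko 1989; Hornik 1991): for any continuous f on a compact set K ⊂ R^d, any ε > 0, and a sigmoidal (more generally, non-polynomial) activation σ, there exist N, v_j, w_j, b_j such that

```latex
\[
\sup_{x \in K} \Big| f(x) - \sum_{j=1}^{N} v_j \, \sigma(w_j^\top x + b_j) \Big| < \varepsilon .
\]
```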

Practical issues

Neural networks are not the fully automatic tool they are sometimes advertised to be; as with all statistical models, subject-matter knowledge should be, and often is, used to improve their performance.
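
One concrete illustration of a practical issue (an assumed example, not a step from the lecture): networks are sensitive to the scale of their inputs, so features are usually standardized before training.

```python
import numpy as np

X = np.array([[1.0, 2000.0],
              [2.0, 1500.0],
              [3.0, 3000.0]])  # features on very different scales

# Standardize each column to mean 0, standard deviation 1 before training.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_std.round(2))
```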

Convolutional neural networks (CNN)

Sources: https://colah.github.io/posts/2014-07-Conv-Nets-Modular/

Example: handwritten digit recognition
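
A minimal CNN sketch for digit recognition, assuming PyTorch; the layer sizes are illustrative, and dummy 28x28 inputs stand in for MNIST-style digit images so the sketch runs without a download:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 1x28x28 -> 8x28x28
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 8x28x28 -> 8x14x14
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # -> 16x14x14
    nn.ReLU(),
    nn.MaxPool2d(2),                             # -> 16x7x7
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # logits for digits 0-9
)

x = torch.randn(4, 1, 28, 28)   # batch of 4 fake digit images
logits = model(x)
print(logits.shape)             # torch.Size([4, 10])
```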

Example: image classification

Source: http://cs231n.github.io/convolutional-networks/

Recurrent neural networks (RNN)

Sources: http://web.stanford.edu/class/cs224n/
https://colah.github.io/posts/2015-08-Understanding-LSTMs/
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
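
A minimal recurrent-network sketch, assuming PyTorch, in the spirit of the character-level models in the sources above: a hidden state is carried across time steps, with one input per step (random vectors stand in for one-hot encoded characters):

```python
import torch
import torch.nn as nn

vocab_size, hidden_size = 10, 32
rnn = nn.RNN(input_size=vocab_size, hidden_size=hidden_size, batch_first=True)
head = nn.Linear(hidden_size, vocab_size)  # predict the next token at each step

x = torch.randn(2, 5, vocab_size)  # batch of 2 sequences, length 5
out, h_n = rnn(x)                  # out: hidden state at every time step
logits = head(out)                 # (2, 5, vocab_size) next-token scores
print(logits.shape)
```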

Generative Adversarial Networks (GANs)

Sources: https://sites.google.com/view/cvpr2018tutorialongans/
https://medium.com/ai-society/gans-from-scratch-1-a-deep-introduction-with-code-in-pytorch-and-tensorflow-cb03cdcdba0f
https://skymind.ai/wiki/generative-adversarial-network-gan
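
A minimal GAN sketch, assuming PyTorch: a generator maps noise to fake samples, a discriminator scores real vs. fake, and the two are trained adversarially; the 1-D toy data and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    real = torch.randn(32, 1) * 0.5 + 2.0  # "real" data: N(2, 0.5^2)
    fake = G(torch.randn(32, 4))

    # Discriminator step: push real toward 1, fake toward 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(32, 1))
              + bce(D(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator (fake -> 1).
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()

print(f"generated mean: {G(torch.randn(1000, 4)).mean().item():.2f}")
```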

"The coolest idea in deep learning in the last 20 years."
- Yann LeCun on GANs