http://www.andreykurenkov.com/writing/a-brief-history-of-neural-nets-and-deep-learning/
The stuff promised in this video is still not really around.
http://psycnet.apa.org/index.cfm?fa=buy.optionToBuy&id=1959-09865-001
http://www-isl.stanford.edu/~widrow/papers/t1960anadaptive.pdf
Perceptron
by Frank Rosenblatt
1957
Adaline
by Bernard Widrow and Ted Hoff
1960
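A minimal sketch of the two learning rules in NumPy (illustrative code, not from the original papers): the perceptron updates its weights only when the thresholded output is wrong, while Adaline's Widrow-Hoff (least-mean-squares) rule takes a gradient step on the raw linear output before thresholding.

import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    # Rosenblatt's rule: y in {-1, +1}; update only on misclassification
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # thresholded output is wrong
                w += lr * yi * xi
                b += lr * yi
    return w, b

def train_adaline(X, y, epochs=20, lr=0.01):
    # Widrow-Hoff LMS rule: minimize squared error of the linear output
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - (xi @ w + b)      # no threshold inside the update
            w += lr * err * xi
            b += lr * err
    return w, b

# Both can learn a linearly separable function such as logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1], dtype=float)
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # [-1. -1. -1.  1.]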
http://www.nytimes.com/1958/07/08/archives/new-navy-device-learns-by-doing-psychologist-shows-embryo-of.html
“The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence … Dr. Frank Rosenblatt, a research psychologist at the Cornell Aeronautical Laboratory, Buffalo, said Perceptrons might be fired to the planets as mechanical space explorers”
New York Times
July 08, 1958
Linearly separable?
Perceptrons
by Marvin Minsky (co-founder of the MIT AI Lab) and Seymour Papert
1969
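Minsky and Papert's central limitation, made concrete: XOR is not linearly separable, so a single-layer perceptron can never learn it. Summing the two "positive" constraints (w2 + b > 0 and w1 + b > 0) gives w1 + w2 + 2b > 0, while summing the two "negative" ones (b < 0 and w1 + w2 + b < 0) gives w1 + w2 + 2b < 0, a contradiction. A quick check with illustrative NumPy code (not from the book):

import numpy as np

# XOR labels: the perceptron rule cycles forever on non-separable data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1], dtype=float)

w, b = np.zeros(2), 0.0
for epoch in range(100):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:   # misclassified
            w += 0.1 * yi * xi
            b += 0.1 * yi
            errors += 1
print(errors)  # at least 1 in every epoch, no matter how long we train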
Backpropagation
(1974, 1982 by Paul Werbos; 1986 by Rumelhart, Hinton, and Williams)
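Backpropagation is the chain rule applied layer by layer to compute weight gradients. A minimal sketch in NumPy (assumed notation: one hidden layer, sigmoid units, squared error), trained on XOR, the very function a single perceptron cannot learn:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: chain rule through each layer in reverse
    d_out = (out - y) * out * (1 - out)   # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated to hidden layer
    # gradient descent step
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]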
https://devblogs.nvidia.com/parallelforall/inference-next-step-gpu-accelerated-deep-learning/
(by Hubel & Wiesel, 1959)
motivated by biological insights
(LeNet, Yann LeCun 1989; refined into LeNet-5 by 1998)
http://yann.lecun.com/exdb/publis/pdf/lecun-89e.pdf
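The core operation behind LeNet-style networks is sliding a small learned filter over the image so the same few weights are reused at every location, echoing Hubel and Wiesel's locally receptive fields. A minimal sketch of a valid 2-D convolution in NumPy (illustrative, not LeCun's implementation):

import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image, taking a dot product at each
    # position; weight sharing lets a few parameters cover the image.
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector on a toy 6x6 image: half dark, half bright
image = np.zeros((6, 6)); image[:, 3:] = 1.0
edge = np.array([[1.0, -1.0]])
print(conv2d(image, edge))  # nonzero only where the intensity changes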
"At some point in the late 1990s, one of these systems was reading 10 to 20% of all the checks in the US.”
"NavLab 1984 ~ 1994 : Alvinn”
http://pages.cs.wisc.edu/~jerryzhu/cs540/handouts/neural.pdf
http://neuralnetworksanddeeplearning.com/chap6.html
1995 Paper
"Comparison of Learning Algorithm For Handwritten Digit Recognition"
"New Machine Learning approach worked better"
http://yann.lecun.com/exdb/publis/pdf/lecun-95b.pdf
The Canadian Institute for Advanced Research (CIFAR), which encourages basic research without direct application, motivated Hinton to move to Canada in 1987 and funded his work afterward.
http://www.andreykurenkov.com/writing/a-brief-history-of-neural-nets-and-deep-learning-part-4/
by Hinton and Bengio
https://www.cs.toronto.edu/~hinton/absps/fastnc.pdf
http://papers.nips.cc/paper/3048-greedy-layer-wise-training-of-deep-networks.pdf
Neural networks with many layers really can be trained well if the weights are initialized in a clever way rather than randomly. (Hinton)
Deep machine learning methods (that is, methods with many processing steps, or equivalently with hierarchical feature representations of the data) are more efficient for difficult problems than shallow methods such as two-layer ANNs or support vector machines. (Bengio)
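A hedged sketch of the greedy layer-wise idea behind both 2006 papers, here with simple tied-weight autoencoders standing in for the restricted Boltzmann machines of Hinton's deep belief nets: each layer is trained unsupervised to reconstruct the output of the layer below, and the learned weights initialize the deep network before ordinary supervised fine-tuning.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_layer(X, n_hidden, steps=2000, lr=0.1, seed=0):
    # One-hidden-layer autoencoder with tied weights; stands in for
    # the RBM training used in Hinton's deep belief nets.
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))
    b, c = np.zeros(n_hidden), np.zeros(n_in)
    for _ in range(steps):
        h = sigmoid(X @ W + b)        # encode
        r = sigmoid(h @ W.T + c)      # decode with the same weights
        d_r = (r - X) * r * (1 - r)
        d_h = (d_r @ W) * h * (1 - h)
        W -= lr * (X.T @ d_h + (h.T @ d_r).T) / len(X)
        b -= lr * d_h.mean(axis=0)
        c -= lr * d_r.mean(axis=0)
    return W, b

# Greedy stacking: layer 1 learns from the data, layer 2 from layer 1's
# codes, and so on; the stacked weights then initialize a deep network
# in place of random values, after which normal fine-tuning proceeds.
X = np.random.default_rng(1).random((100, 20))
W1, b1 = pretrain_layer(X, 10)
H1 = sigmoid(X @ W1 + b1)
W2, b2 = pretrain_layer(H1, 5)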