The Timeline of Deep Learning

The History of Deep Learning

1943 – Warren S. McCulloch and Walter Pitts published a highly simplified computational model of a neuron in their paper “A Logical Calculus of the Ideas Immanent in Nervous Activity”

1950 – The Turing test was proposed by Alan Turing as a way of determining whether a machine can ‘think’ like a human

1958 – The perceptron, a single-neuron binary classification algorithm, was invented by Frank Rosenblatt at the Cornell Aeronautical Laboratory

1960 – ADALINE (Adaptive Linear Neuron, later Adaptive Linear Element), an early single-layer artificial neural network, was developed by Bernard Widrow and Ted Hoff

1969 – Marvin Minsky and Seymour Papert published Perceptrons, showing that a single-layer perceptron cannot solve the XOR problem

1974 – Paul Werbos described in his PhD thesis how the backpropagation algorithm could be used to train multilayer artificial neural networks, addressing the limitations of single-layer perceptrons

1980 – Kunihiko Fukushima proposed the neocognitron, a hierarchical multilayered neural network consisting of many layers of cells with variable connections between cells in adjoining layers

1982 – John Hopfield popularized the Hopfield network, a form of fully interconnected recurrent neural network

1985 – The Boltzmann machine was heavily popularized and promoted by Geoffrey Hinton and Terry Sejnowski in the cognitive science community

1986 – Rumelhart, Hinton, and Williams showed that multilayer perceptrons, a class of feedforward artificial neural networks, could be trained effectively with backpropagation

1986 – The restricted Boltzmann machine was invented, under the name Harmonium, by Paul Smolensky

1995 – Geoffrey Hinton, Peter Dayan, and Brendan Frey developed the wake-sleep algorithm and used it to successfully train a six-layer network

1997 – Bidirectional recurrent neural networks (BRNNs) were introduced by Schuster and Paliwal

1997 – Long short-term memory (LSTM) was proposed by Sepp Hochreiter and Jürgen Schmidhuber

1998 – Yann LeCun and colleagues published LeNet-5, a convolutional neural network architecture applied to document recognition

1999 – Nvidia marketed the GeForce 256 as “the world’s first GPU”

2009 – Deep Boltzmann machines were introduced by Ruslan Salakhutdinov and Geoffrey Hinton

2014 – The generative adversarial network (GAN) was designed by Ian Goodfellow and his colleagues

2016 – The Tensor Processing Unit (TPU), an AI-accelerator application-specific integrated circuit (ASIC), was announced by Google

2017 – The capsule neural network was introduced by Sara Sabour, Nicholas Frosst, and Geoffrey Hinton