Chapter 2

A Brief History of Deep Learning

Deep learning is currently at the heart of some of the most powerful AI systems. We are going to give you a sense of how deep learning has come to where it is today. Keep scrolling!


To understand how deep learning has progressed, we should first look at its inspiration, the neuron. That will give us a glimpse of how AI distills valuable sensory experience from this chaotic world into algorithms that behave in "smart" ways.

A neuron is a building block of the human brain. Neurons are nerve cells that receive stimuli from other neurons. When enough stimuli arrive, a neuron fires a signal (more stimuli) to the neurons connected to it.

Inspired by neurons, computer scientists created perceptrons, which take in many stimuli, or inputs, and output one value. We will examine perceptrons in more detail in the next chapter.

It turns out that a single perceptron is far from enough to learn complicated concepts, but it can learn simple ones, such as the logical AND: the model computes the logical AND of its inputs. The animation visualizes this computation as drawing a line that divides the possible outcomes of the logical AND into two groups, each consisting of only one type of circle.
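To make this concrete, here is a minimal perceptron sketch in Python. The weights and bias below are hand-picked for illustration (they are not mentioned in the text, and in practice they would be learned from data):

```python
def perceptron(inputs, weights, bias):
    """Fire (return 1) if the weighted sum of inputs crosses the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total + bias > 0 else 0

def logical_and(x1, x2):
    # Weights of 1 each and a bias of -1.5 place the dividing line
    # between the point (1, 1) and the other three input points.
    return perceptron([x1, x2], [1, 1], -1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", logical_and(a, b))
```

The weighted sum plus bias is exactly the "line" in the animation: inputs on one side of it fire, inputs on the other side do not.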

More complex concepts, such as XOR, cannot be learned by a single perceptron. This is because XOR is not linearly separable: no straight line can divide the filled circles from the open ones.
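Stacking perceptrons, however, does solve XOR. The sketch below composes three perceptrons by hand: one computes OR, one computes NAND, and a final one combines them with AND. The specific weights are illustrative assumptions, not values from the text:

```python
def perceptron(inputs, weights, bias):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total + bias > 0 else 0

def xor(x1, x2):
    # Layer 1: one unit computes OR, another computes NAND.
    h_or = perceptron([x1, x2], [1, 1], -0.5)
    h_nand = perceptron([x1, x2], [-1, -1], 1.5)
    # Layer 2: AND of the two hidden outputs yields XOR.
    return perceptron([h_or, h_nand], [1, 1], -1.5)
```

No single line separates the XOR classes, but the two hidden units each draw their own line, and the output unit combines the regions they carve out.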

To solve this problem, computer scientists turned to more complicated "simulations" of the brain. A feedforward neural network is a network of perceptron-like units organized into layers. Given enough units, even these simple networks can approximate a remarkably wide range of functions, from classifying images to predicting income.
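Structurally, a feedforward network is just each layer's outputs feeding the next layer's inputs. Here is a bare-bones two-layer forward pass; the sizes, random weights, and ReLU nonlinearity are illustrative choices, not details from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # A common nonlinearity standing in for the perceptron's firing rule.
    return np.maximum(0, z)

def forward(x, w1, b1, w2, b2):
    hidden = relu(x @ w1 + b1)   # first layer of units
    return hidden @ w2 + b2      # output layer

x = rng.normal(size=(1, 4))               # one input with 4 features
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
y = forward(x, w1, b1, w2, b2)            # shape (1, 1): a single output
```

Making the network "deeper" just means inserting more hidden layers between the input and the output.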

Simple feedforward neural networks are appealing, but they have problems of their own. Smaller networks lack the capacity to take on complex problems, while deeper networks demand far more computation power. This time, computer scientists looked to higher-level human learning mechanisms for inspiration.

One such inspiration is that we humans conceptualize fundamental building blocks of things and combine them to create more complex concepts. We recognize a dog as having a nose, two ears, and four legs. This behavior inspired computer scientists to create Convolutional Neural Networks (CNNs), which are built from reusable groups of units that learn to recognize visual patterns such as basic shapes.
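The reuse idea can be sketched with a plain 2D convolution: one small filter slides across the whole image, so a single learned pattern detector is applied at every location. The edge-detecting filter below is a hand-picked illustration, not something from the text:

```python
def conv2d(image, kernel):
    """Slide `kernel` over `image`, summing elementwise products at each spot."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge filter responds where pixel values change left to right.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, -1],
          [1, -1]]
result = conv2d(image, kernel)  # nonzero only at the 0-to-1 boundary
```

A CNN learns the values inside such filters, and stacks layers of them so early filters detect edges while later ones detect noses, ears, and legs.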

The power of deep learning has pushed some traditional machine-learning techniques out of fashion. With enough data and computation power, deep learning can achieve a level of performance traditional algorithms could never hope to match. With recent advances in computer hardware and the abundance of data, we are finally able to apply deep learning to real-world problems.


Now that you have a general idea of the inspirations behind some of the most important concepts in deep learning, you are prepared to get into the details of how things work. In the next chapter, we will discuss perceptrons, an important building block of deep learning.

Further Reading

The XOR Problem in Neural Networks
Deep Learning 101 - History and Background
A Short History Of Deep Learning


Glossary

Perceptron: An algorithm inspired by neurons. A single-layer neural network that performs binary classification on input data.

Feedforward Neural Networks: The first and simplest type of artificial neural network. It is composed of many perceptrons arranged into layers and can perform more complex tasks than a single perceptron.

Convolutional Neural Networks: A type of artificial neural network that is mainly used to analyze and classify images.