In that case, we have what’s called a Deep Neural Network.

Our linear models combine to create non-linear models, and in turn, those combine to create even more non-linear models.

With enough hidden nodes, a network can learn highly complex models without being explicitly programmed to do so.

This is precisely where the magic of Neural Networks happens.

Image Credits: Udacity — Deep Learning Nanodegree

Feedforward

The process of taking inputs, combining them with weights to obtain non-linear models, then combining those to produce a non-linear output is called Feedforward.

Starting from the input layer, we take a series of inputs, do matrix multiplication with the weights, apply a sigmoid function, then move on to the next layer and do the same.

The final output is our prediction.
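A minimal sketch of this feedforward pass in plain Python. The layer sizes, weight values, and inputs here are illustrative assumptions, not values from the figures; the network has two inputs, two hidden sigmoid nodes, and one sigmoid output:

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def feedforward(inputs, hidden_weights, output_weights):
    # Each hidden node: weighted sum of the inputs, passed through the sigmoid.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # Output node: weighted sum of the hidden activations, passed through the sigmoid.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Made-up inputs and weights, just to run the pass end to end.
prediction = feedforward(
    inputs=[1.0, 0.5],
    hidden_weights=[[0.4, -0.2], [0.3, 0.8]],
    output_weights=[0.5, -0.1],
)
```

Because the sigmoid squashes each weighted sum, the prediction always lands between 0 and 1, which is why it can be read as a probability.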

Image Credits: Udacity — Deep Learning Nanodegree

Backpropagation

Once we’ve done a Feedforward operation, we first compare the output of the model with the desired output.

We then calculate the error.

Once we have that, we propagate the error backwards through the network (Backpropagation), attributing a share of it to each of the weights.

Then we use this to update the weights and get a better model.

We repeat this process until we are happy with the model.

Why does this work? Because while the error at the output tells us the direction and amount by which a perceptron’s output should change next time, the Backpropagation step is saying, “if you want that perceptron to be x amount higher, then I am going to have to change these previous perceptrons to be y amount higher or lower, because their weights were amplifying the final prediction by n times”.

In general, the Feedforward operation is just composing a bunch of functions, and Backpropagation is applying the chain rule, taking the derivative of each piece, to update our weights.
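That chain-rule update can be sketched for a single sigmoid perceptron trained with squared error. The inputs, target, and learning rate below are made up for illustration; the point is the shape of the update, error times derivative times input:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(weights, inputs, target, lr=0.5):
    # Feedforward: weighted sum of the inputs through the sigmoid.
    z = sum(w * x for w, x in zip(weights, inputs))
    prediction = sigmoid(z)
    # Chain rule for squared error E = (prediction - target)^2 / 2:
    #   dE/dw_i = (prediction - target) * sigmoid'(z) * x_i,
    # where sigmoid'(z) = prediction * (1 - prediction).
    delta = (prediction - target) * prediction * (1.0 - prediction)
    # Gradient descent: nudge each weight against its gradient.
    return [w - lr * delta * x for w, x in zip(weights, inputs)]

weights = [0.1, -0.3]
inputs, target = [1.0, 0.5], 1.0
for _ in range(100):
    weights = train_step(weights, inputs, target)
prediction = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
```

Repeating the step moves the prediction steadily toward the target, which is exactly the “repeat until we are happy with the model” loop described above.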

In conclusion

These are the very basics of Deep Learning and Artificial Neural Networks.

The forward and backward flow of calculations that repeatedly adjust themselves have tremendous potential to discover patterns in data.

Many techniques used in Deep Learning have been around for decades, such as the algorithms that recognized hand-written postal codes in the ‘90s.

The use of Deep Learning has surged over the past five years due to three factors:

Deep Learning methods have achieved higher accuracy than humans in classifying images.

Modern GPUs allow us to train complex networks in less time than ever before.

The massive amounts of data required for Deep Learning have become increasingly accessible.

The basic ANN described above is just the start.

There are many more complex models that have been developed in recent years, including:

Convolutional Neural Networks

A class of Deep Neural Networks, most commonly applied to analyzing visual imagery.

Recurrent Neural Networks

A class of Deep Neural Networks that can exhibit temporal dynamic behavior.

Unlike standard Feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs.

This technique is especially good for audio (e.g. Apple’s Siri or Amazon’s Alexa).

Generative Adversarial Networks

A class of Deep Neural Networks in which multiple neural networks contest with each other in a zero-sum game framework.

This technique can generate text, photographs, or other outputs that look at least superficially authentic to human observers.

None of these faces are those of real people.

Are Neural Networks the best possible structures for finding patterns in a given dataset? Who knows — it might very well be that in a decade, we will make use of new algorithms that stray significantly from ANNs.

That said, I think we’re onto something here.

The baffling aspect of Neural Networks is how well they actually perform in practice.

As Andrew Trask, a PhD student at Oxford University and research scientist at DeepMind, puts it, the extraordinary thing about Deep Learning is that, unlike other revolutions in human history, “this field is more of a mental innovation than a mechanical one. […] Deep Learning seeks to automate intelligence bit by bit.”