Three Must-Own Books for Deep Learning Practitioners

Developing neural networks is often referred to as a dark art.

The reason for this is that being skilled at developing neural network models comes from experience.

There are no reliable methods to analytically calculate how to design a “good” or “best” model for your specific dataset.

You must draw on experience and experiment in order to discover what works on your problem.

A lot of this experience can come from actually developing neural networks on test problems.

Nevertheless, many people have come before and recorded their discoveries, best practices, and preferred techniques.

You can learn a lot about how to design and configure neural networks from some of the best books on the topic.

In this post, you will discover the three books that I recommend reading and having next to you when developing neural networks for your datasets.

Let’s get started.

There are three books that I think you must own physical copies of if you are a neural network practitioner.

They are:

- Neural Networks for Pattern Recognition by Christopher Bishop (1995)
- Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks by Russell Reed and Robert Marks (1999)
- Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville (2016)

These books are references, not tutorials.

You dip into them again and again before and during projects to ensure that you are getting everything you can out of your data and models.

These are the books that I read and reference all the time.

If you have books that you recommend when developing neural network models, please let me know in the comments below.

Now, let’s take a closer look at each book in turn.

Neural Networks for Pattern Recognition by Christopher Bishop was released in 1995.

This great book was followed about a decade later by the still-classic textbook Pattern Recognition and Machine Learning (fondly referred to as PRML).

Christopher Bishop is both a professor at the University of Edinburgh and a director at Microsoft’s Cambridge research lab.

This book is a classic in the field of neural networks.

It is a handbook that handily captures both the state of theory at the time and techniques that remain just as relevant today, nearly 25 years later.

Although reading the book cover to cover will provide you a robust foundation, I’d instead encourage you to use it as a reference for getting the most out of your neural network models.

I’d recommend dipping into specific chapters as needed. Chapter 9 alone is worth the sticker price of the book, giving a laundry list of regularization methods and ensemble methods you should be testing.
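As a concrete example of one such regularization method, here is a minimal Python sketch (mine, not the book’s; the learning rate and decay strength are illustrative assumptions, not recommendations) of L2 weight decay folded into a plain gradient-descent update:

    import numpy as np

    def sgd_step_with_weight_decay(w, grad, lr=0.01, decay=1e-4):
        # L2 weight decay adds decay * w to the gradient of the data loss,
        # shrinking the weights toward zero on every update.
        return w - lr * (grad + decay * w)

    w = np.array([0.5, -1.2, 3.0])      # current weights
    grad = np.array([0.1, -0.2, 0.4])   # gradient of the data loss
    print(sgd_step_with_weight_decay(w, grad))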

I recommend this book because, with new methods being described almost daily, practitioners often forget the tried-and-true basics.

I don’t think this book is in print anymore, but you can find secondhand and international versions everywhere online.


Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks by Russell Reed and Robert Marks was released in 1999.

I have a large soft spot for this book.

I purchased it soon after it was released and used it as a reference for many of my own implementations of neural network algorithms through the 2000s.

There are two things I like most about this book. First, the book uses mathematics and descriptions to explain concepts, but importantly, it also uses snippets of pseudocode or ANSI C to show how things work.

This is invaluable the first time you’re coding backpropagation of error or an activation function.
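For example, here is a minimal Python sketch (mine, not the book’s, which uses pseudocode and ANSI C) of the sigmoid activation and the derivative you need when backpropagating error through it:

    import numpy as np

    def sigmoid(z):
        # Logistic activation: squashes any real value into (0, 1)
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_derivative(z):
        # d/dz sigmoid(z) = sigmoid(z) * (1 - sigmoid(z)),
        # the term used when backpropagating error through the unit
        s = sigmoid(z)
        return s * (1.0 - s)

    print(sigmoid_derivative(0.5))  # gradient at a pre-activation of 0.5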

Second, the book uses plots of the decision surfaces of models.

This is invaluable for understanding what the models are doing/seeing during training under different learning algorithms, and how things like regularization affect the model.
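As a rough illustration of what such a plot shows, here is a minimal Python sketch (again mine, not the book’s) that trains a single sigmoid unit on a made-up two-blob dataset and shades its decision surface; the data and hyperparameters are illustrative assumptions:

    import numpy as np
    import matplotlib.pyplot as plt

    # Toy two-class dataset: two Gaussian blobs in 2D (illustrative only)
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-1, 0.7, (50, 2)), rng.normal(1, 0.7, (50, 2))])
    y = np.hstack([np.zeros(50), np.ones(50)])

    # Train a single sigmoid unit (logistic regression) by gradient descent
    w, b = np.zeros(2), 0.0
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        w -= 0.1 * (X.T @ (p - y)) / len(y)      # gradient step on weights
        b -= 0.1 * np.mean(p - y)                # gradient step on bias

    # Evaluate the model over a grid and shade its decision surface
    xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
    grid = np.c_[xx.ravel(), yy.ravel()]
    zz = (1.0 / (1.0 + np.exp(-(grid @ w + b)))).reshape(xx.shape)
    plt.contourf(xx, yy, zz, levels=20, cmap="RdBu", alpha=0.6)
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap="RdBu", edgecolor="k")
    plt.title("Decision surface of a single sigmoid unit")
    plt.show()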

There is perhaps an overemphasis on pruning methods, given the authors’ interest in the area; nevertheless, I’d recommend dipping into specific chapters as needed when developing your own models.

Although I recommend buying this book and having it next to you (always), Robert Marks has a reprint of the book on his website in HTML format.

Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville was released in 2016.

This is the missing bridge between the classic books of the 1990s and modern deep learning.

Importantly, neural networks are introduced with careful mention of the innovations and milestones that have made the field into what it is today.

Specifically, see Chapter 6: Deep Feedforward Networks and Section 6.6: Historical Notes.

There are three chapters that are must-reads for neural network practitioners; they are:

- Chapter 7: Regularization for Deep Learning
- Chapter 8: Optimization for Training Deep Models
- Chapter 11: Practical Methodology

Chapter 11 especially is important, as it ties together specific methods with how and when to use them in practice.

It alone is worth the purchase price of the book.

This is a must-have.

You need a physical copy of this book.

Nevertheless, the entire text is also available for free on the book’s website, deeplearningbook.org.

In this post, you discovered the three reference books that I think every neural network practitioner must own.

Do you use one or more of these books yourself? What chapters do you reference heavily? Are there other books that you reference a lot? Let me know below.

