How to Implement VGG, Inception and ResNet Modules for Convolutional Neural Networks from Scratch

There are discrete architectural elements from milestone models that you can use in the design of your own convolutional neural networks.

Specifically, models that have achieved state-of-the-art results for tasks like image classification use discrete architecture elements repeated multiple times, such as the VGG block in the VGG models, the inception module in the GoogLeNet, and the residual module in the ResNet.

Once you are able to implement parameterized versions of these architecture elements, you can use them in the design of your own models for computer vision and other applications.

In this tutorial, you will discover how to implement the key architecture elements from milestone convolutional neural network models, from scratch.

After completing this tutorial, you will know:

- How to implement a VGG module used in the VGG-16 and VGG-19 convolutional neural network models.
- How to implement the naive and optimized inception module used in the GoogLeNet model.
- How to implement the identity residual module used in the ResNet model.

Let's get started.

How to Implement Major Architecture Innovations for Convolutional Neural Networks. Photo by daveynin, some rights reserved.

This tutorial is divided into three parts; they are:

- How to Implement VGG Blocks
- How to Implement the Inception Module
- How to Implement the Residual Module


The VGG convolutional neural network architecture, named for the Visual Geometry Group at Oxford, was an important milestone in the use of deep learning methods for computer vision.

The architecture was described in the 2014 paper titled “Very Deep Convolutional Networks for Large-Scale Image Recognition” by Karen Simonyan and Andrew Zisserman and achieved top results in the ILSVRC-2014 computer vision competition.

The key innovation in this architecture was the definition and repetition of what we will refer to as VGG-blocks.

These are groups of convolutional layers that use small filters (e.g. 3×3 pixels) followed by a max pooling layer.

The image is passed through a stack of convolutional (conv.) layers, where we use filters with a very small receptive field: 3×3 (which is the smallest size to capture the notion of left/right, up/down, center). […] Max-pooling is performed over a 2×2 pixel window, with stride 2.

— Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014.

A convolutional neural network with VGG-blocks is a sensible starting point when developing a new model from scratch as it is easy to understand, easy to implement, and very effective at extracting features from images.

We can generalize the specification of a VGG-block as one or more convolutional layers with the same number of filters, a filter size of 3×3, a stride of 1×1, same padding so that the output feature map has the same width and height as the input, and a rectified linear activation function.

These layers are then followed by a max pooling layer with a size of 2×2 and a stride of the same dimensions.

We can define a function to create a VGG-block using the Keras functional API with a given number of convolutional layers and with a given number of filters per layer.
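A minimal sketch of such a function, assuming the standard Keras functional API (the same code works with tensorflow.keras):

```python
# function for creating a VGG block: n_conv convolutional layers
# with n_filters 3x3 filters each, followed by 2x2 max pooling
from keras.layers import Conv2D, MaxPooling2D

def vgg_block(layer_in, n_filters, n_conv):
    # add convolutional layers
    for _ in range(n_conv):
        layer_in = Conv2D(n_filters, (3,3), padding='same', activation='relu')(layer_in)
    # add the max pooling layer
    layer_in = MaxPooling2D((2,2), strides=(2,2))(layer_in)
    return layer_in
```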

To use the function, pass in the layer preceding the block; the function returns the layer that ends the block, which can then be integrated into the model.

For example, the first layer might be an input layer which could be passed into the function as an argument.

The function then returns a reference to the final layer in the block, the pooling layer, that could be connected to a flatten layer and subsequent dense layers for making a classification prediction.

We can demonstrate how to use this function by defining a small model that expects square color images as input and adds a single VGG block to the model with two convolutional layers, each with 64 filters.
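A sketch of the complete example is below; the 256×256 input size is illustrative, not prescribed by the block itself:

```python
# example of creating a CNN model with a single VGG block
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D
from keras.utils import plot_model

def vgg_block(layer_in, n_filters, n_conv):
    # convolutional layers followed by max pooling
    for _ in range(n_conv):
        layer_in = Conv2D(n_filters, (3,3), padding='same', activation='relu')(layer_in)
    layer_in = MaxPooling2D((2,2), strides=(2,2))(layer_in)
    return layer_in

# define model input: square color images
visible = Input(shape=(256, 256, 3))
# add a single VGG block with two convolutional layers of 64 filters
layer = vgg_block(visible, 64, 2)
# create and summarize the model
model = Model(inputs=visible, outputs=layer)
model.summary()
# plot the model architecture (comment out if pydot/graphviz are unavailable)
plot_model(model, show_shapes=True, to_file='vgg_block.png')
```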

Running the example creates the model and summarizes the structure.

We can see that, as intended, the model added a single VGG block with two convolutional layers each with 64 filters, followed by a max pooling layer.

A plot is also created of the model architecture that may help to make the model layout more concrete.

Note, creating the plot assumes that you have pydot and graphviz installed.

If this is not the case, you can comment out the import statement and call to the plot_model() function in the example.

Plot of Convolutional Neural Network Architecture With a VGG Block

Using VGG blocks in your own models should be common because they are so simple and effective.

We can expand the example and demonstrate a single model that has three VGG blocks; the first two blocks each have two convolutional layers, with 64 and 128 filters respectively, and the third block has four convolutional layers with 256 filters.

This is a common usage of VGG blocks where the number of filters is increased with the depth of the model.

The complete code listing is provided below.
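A sketch of this expanded example, again assuming a 256×256×3 input:

```python
# example of creating a CNN model with three VGG blocks
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D
from keras.utils import plot_model

def vgg_block(layer_in, n_filters, n_conv):
    for _ in range(n_conv):
        layer_in = Conv2D(n_filters, (3,3), padding='same', activation='relu')(layer_in)
    layer_in = MaxPooling2D((2,2), strides=(2,2))(layer_in)
    return layer_in

# define model input
visible = Input(shape=(256, 256, 3))
# stack three VGG blocks, increasing the number of filters with depth
layer = vgg_block(visible, 64, 2)
layer = vgg_block(layer, 128, 2)
layer = vgg_block(layer, 256, 4)
# create and summarize the model
model = Model(inputs=visible, outputs=layer)
model.summary()
plot_model(model, show_shapes=True, to_file='multiple_vgg_blocks.png')
```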

Again, running the example summarizes the model architecture and we can clearly see the pattern of VGG blocks.

A plot of the model architecture is created providing a different perspective on the same linear progression of layers.

Plot of Convolutional Neural Network Architecture With Multiple VGG Blocks

The inception module was described and used in the GoogLeNet model in the 2015 paper by Christian Szegedy, et al. titled “Going Deeper with Convolutions.”

Like the VGG model, the GoogLeNet model achieved top results in the 2014 version of the ILSVRC challenge.

The key innovation in the GoogLeNet model is called the inception module.

This is a block of parallel convolutional layers with different sized filters (e.g. 1×1, 3×3, 5×5) and a 3×3 max pooling layer, the results of which are then concatenated.

In order to avoid patch-alignment issues, current incarnations of the Inception architecture are restricted to filter sizes 1×1, 3×3 and 5×5; this decision was based more on convenience rather than necessity.

[…] Additionally, since pooling operations have been essential for the success of current convolutional networks, it suggests that adding an alternative parallel pooling path in each such stage should have additional beneficial effect, too.

— Going Deeper with Convolutions, 2015.

This is a very simple and powerful architectural unit that allows the model to learn not only parallel filters of the same size, but parallel filters of differing sizes, allowing learning at multiple scales.

We can implement an inception module directly using the Keras functional API.

The function below will create a single inception module with a fixed number of filters for each of the parallel convolutional layers.

The GoogLeNet architecture described in the paper does not appear to use a systematic sizing of filters for the parallel convolutional layers; the model is highly optimized.

As such, we can parameterize the module definition so that we can specify the number of filters to use for each of the 1×1, 3×3, and 5×5 convolutions.
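A minimal sketch of such a naive inception module, assuming Keras; f1, f2, and f3 are the filter counts for the 1×1, 3×3, and 5×5 convolutions:

```python
# function for creating a naive inception module
from keras.layers import Conv2D, MaxPooling2D, concatenate

def naive_inception_module(layer_in, f1, f2, f3):
    # 1x1 convolution
    conv1 = Conv2D(f1, (1,1), padding='same', activation='relu')(layer_in)
    # 3x3 convolution
    conv3 = Conv2D(f2, (3,3), padding='same', activation='relu')(layer_in)
    # 5x5 convolution
    conv5 = Conv2D(f3, (5,5), padding='same', activation='relu')(layer_in)
    # 3x3 max pooling with stride 1 so the output size matches
    pool = MaxPooling2D((3,3), strides=(1,1), padding='same')(layer_in)
    # concatenate the parallel branches along the channel axis
    layer_out = concatenate([conv1, conv3, conv5, pool], axis=-1)
    return layer_out
```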

To use the function, provide the reference to the prior layer and the number of filters for each branch; it will return a reference to the concatenated-filters layer that you can then connect to more inception modules or to a submodel for making a prediction.

We can demonstrate how to use this function by creating a model with a single inception module.

In this case, the number of filters is based on “inception (3a)” from Table 1 in the paper.

The complete example is listed below.
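A sketch of the complete example; the filter counts 64, 128, and 32 follow the 1×1, 3×3, and 5×5 columns of “inception (3a)” in Table 1 of the paper, and the 256×256×3 input is illustrative:

```python
# example of creating a CNN model with a naive inception module
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, concatenate
from keras.utils import plot_model

def naive_inception_module(layer_in, f1, f2, f3):
    conv1 = Conv2D(f1, (1,1), padding='same', activation='relu')(layer_in)
    conv3 = Conv2D(f2, (3,3), padding='same', activation='relu')(layer_in)
    conv5 = Conv2D(f3, (5,5), padding='same', activation='relu')(layer_in)
    pool = MaxPooling2D((3,3), strides=(1,1), padding='same')(layer_in)
    return concatenate([conv1, conv3, conv5, pool], axis=-1)

# define model input and a single naive inception module
visible = Input(shape=(256, 256, 3))
layer = naive_inception_module(visible, 64, 128, 32)
# create and summarize the model
model = Model(inputs=visible, outputs=layer)
model.summary()
plot_model(model, show_shapes=True, to_file='naive_inception_module.png')
```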

Running the example creates the model and summarizes the layers.

We know the convolutional and pooling layers are parallel, but this summary does not capture the structure easily.

A plot of the model architecture is also created that helps to clearly show the parallel structure of the module, as well as the matching output shapes of each element of the module, which allow their direct concatenation along the third dimension (filters or channels).

Plot of Convolutional Neural Network Architecture With a Naive Inception Module

The version of the inception module that we have implemented is called the naive inception module.

A modification to the module was made in order to reduce the amount of computation required.

Specifically, 1×1 convolutional layers were added to reduce the number of filters before the 3×3 and 5×5 convolutional layers, and to increase the number of filters after the pooling layer.

This leads to the second idea of the Inception architecture: judiciously reducing dimension wherever the computational requirements would increase too much otherwise.

[…] That is, 1×1 convolutions are used to compute reductions before the expensive 3×3 and 5×5 convolutions.

Besides being used as reductions, they also include the use of rectified linear activation making them dual-purpose.

— Going Deeper with Convolutions, 2015.

If you intend to use many inception modules in your model, you may require this modification to keep the computational cost manageable.

The function below implements this optimization with parameterization, so that you can control the amount of reduction in the number of filters prior to the 3×3 and 5×5 convolutional layers, and the number of filters after max pooling.
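A sketch of the optimized module; the parameter names are illustrative, with f2_in and f3_in being the 1×1 reductions before the 3×3 and 5×5 convolutions, and f4_out the 1×1 projection after pooling:

```python
# function for creating an optimized inception module with 1x1 reductions
from keras.layers import Conv2D, MaxPooling2D, concatenate

def inception_module(layer_in, f1, f2_in, f2_out, f3_in, f3_out, f4_out):
    # 1x1 convolution
    conv1 = Conv2D(f1, (1,1), padding='same', activation='relu')(layer_in)
    # 1x1 reduction followed by 3x3 convolution
    conv3 = Conv2D(f2_in, (1,1), padding='same', activation='relu')(layer_in)
    conv3 = Conv2D(f2_out, (3,3), padding='same', activation='relu')(conv3)
    # 1x1 reduction followed by 5x5 convolution
    conv5 = Conv2D(f3_in, (1,1), padding='same', activation='relu')(layer_in)
    conv5 = Conv2D(f3_out, (5,5), padding='same', activation='relu')(conv5)
    # 3x3 max pooling followed by a 1x1 projection
    pool = MaxPooling2D((3,3), strides=(1,1), padding='same')(layer_in)
    pool = Conv2D(f4_out, (1,1), padding='same', activation='relu')(pool)
    # concatenate the parallel branches along the channel axis
    layer_out = concatenate([conv1, conv3, conv5, pool], axis=-1)
    return layer_out
```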

We can create a model with two of these optimized inception modules to get a concrete idea of how the architecture looks in practice.

In this case, the filter configurations are based on “inception (3a)” and “inception (3b)” from Table 1 in the paper.

The complete example is listed below.
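A sketch of the complete example, with filter counts taken from the “inception (3a)” and “inception (3b)” rows of Table 1:

```python
# example of creating a CNN model with two optimized inception modules
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, concatenate
from keras.utils import plot_model

def inception_module(layer_in, f1, f2_in, f2_out, f3_in, f3_out, f4_out):
    conv1 = Conv2D(f1, (1,1), padding='same', activation='relu')(layer_in)
    conv3 = Conv2D(f2_in, (1,1), padding='same', activation='relu')(layer_in)
    conv3 = Conv2D(f2_out, (3,3), padding='same', activation='relu')(conv3)
    conv5 = Conv2D(f3_in, (1,1), padding='same', activation='relu')(layer_in)
    conv5 = Conv2D(f3_out, (5,5), padding='same', activation='relu')(conv5)
    pool = MaxPooling2D((3,3), strides=(1,1), padding='same')(layer_in)
    pool = Conv2D(f4_out, (1,1), padding='same', activation='relu')(pool)
    return concatenate([conv1, conv3, conv5, pool], axis=-1)

# define model input
visible = Input(shape=(256, 256, 3))
# inception (3a) and inception (3b) filter configurations from Table 1
layer = inception_module(visible, 64, 96, 128, 16, 32, 32)
layer = inception_module(layer, 128, 128, 192, 32, 96, 64)
# create and summarize the model
model = Model(inputs=visible, outputs=layer)
model.summary()
plot_model(model, show_shapes=True, to_file='inception_modules.png')
```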

Running the example creates a linear summary of the layers that does not really help to understand what is going on.

A plot of the model architecture is created that does make the layout of each module clear and shows how the first module feeds the second module.

Note that the first 1×1 convolution in each inception module is on the far right for space reasons, but besides that, the other layers are organized left to right within each module.

Plot of Convolutional Neural Network Architecture With an Efficient Inception Module

The Residual Network, or ResNet, architecture for convolutional neural networks was proposed by Kaiming He, et al. in their 2016 paper titled “Deep Residual Learning for Image Recognition,” which achieved success on the 2015 version of the ILSVRC challenge.

A key innovation in the ResNet was the residual module.

The residual module, specifically the identity residual module, is a block of two convolutional layers with the same number of filters and a small filter size, where the output of the second layer is added to the input of the first convolutional layer.

Drawn as a graph, the path that carries the input of the module around to this addition is called a shortcut connection.

We can implement this directly in Keras using the functional API and the add() merge function.
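A sketch of this direct implementation; the second convolution uses a linear activation so that the rectifier is applied after the addition, a common convention for residual blocks:

```python
# function for creating an identity residual module
from keras.layers import Activation, Conv2D, add

def residual_module(layer_in, n_filters):
    # two 3x3 convolutional layers; the second is linear
    conv1 = Conv2D(n_filters, (3,3), padding='same', activation='relu')(layer_in)
    conv2 = Conv2D(n_filters, (3,3), padding='same', activation='linear')(conv1)
    # shortcut connection: add the block input to the block output
    layer_out = add([conv2, layer_in])
    # apply the activation after the addition
    layer_out = Activation('relu')(layer_out)
    return layer_out
```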

A limitation with this direct implementation is that if the number of filters in the input layer does not match the number of filters in the last convolutional layer of the module (defined by n_filters), then we will get an error.

One solution is to use a 1×1 convolution layer, often referred to as a projection layer, to either increase the number of filters for the input layer or reduce the number of filters for the last convolutional layer in the module.

The former solution makes more sense, and is the approach proposed in the paper, referred to as a projection shortcut.

When the dimensions increase […], we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions.

This option introduces no extra parameter; (B) The projection shortcut […] is used to match dimensions (done by 1×1 convolutions).

— Deep Residual Learning for Image Recognition, 2015.

Below is an updated version of the function that will use the identity shortcut if possible, or a projection if the number of filters in the input does not match the n_filters argument.
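A sketch of the updated function; a check on the channel dimension of the input decides whether to use the identity or a 1×1 projection shortcut:

```python
# residual module with an optional projection shortcut
from keras.layers import Activation, Conv2D, add

def residual_module(layer_in, n_filters):
    merge_input = layer_in
    # project the input with a 1x1 convolution if the channel counts differ
    if layer_in.shape[-1] != n_filters:
        merge_input = Conv2D(n_filters, (1,1), padding='same', activation='relu')(layer_in)
    # two 3x3 convolutional layers; the second is linear
    conv1 = Conv2D(n_filters, (3,3), padding='same', activation='relu')(layer_in)
    conv2 = Conv2D(n_filters, (3,3), padding='same', activation='linear')(conv1)
    # add the shortcut to the block output and apply the activation
    layer_out = add([conv2, merge_input])
    layer_out = Activation('relu')(layer_out)
    return layer_out
```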

We can demonstrate the usage of this module in a simple model.

The complete example is listed below.
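A sketch of the complete example; a 64-filter module on a 3-channel input triggers the projection shortcut:

```python
# example of creating a CNN model with a residual module
from keras.models import Model
from keras.layers import Input, Activation, Conv2D, add
from keras.utils import plot_model

def residual_module(layer_in, n_filters):
    merge_input = layer_in
    # projection shortcut when the channel counts differ
    if layer_in.shape[-1] != n_filters:
        merge_input = Conv2D(n_filters, (1,1), padding='same', activation='relu')(layer_in)
    conv1 = Conv2D(n_filters, (3,3), padding='same', activation='relu')(layer_in)
    conv2 = Conv2D(n_filters, (3,3), padding='same', activation='linear')(conv1)
    layer_out = add([conv2, merge_input])
    return Activation('relu')(layer_out)

# define model input and a single residual module
visible = Input(shape=(256, 256, 3))
layer = residual_module(visible, 64)
# create and summarize the model
model = Model(inputs=visible, outputs=layer)
model.summary()
plot_model(model, show_shapes=True, to_file='residual_module.png')
```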

Running the example first creates the model then prints a summary of the layers.

Because the module is mostly linear, this summary is helpful to see what is going on.

A plot of the model architecture is also created.

We can see the projection that inflates the number of filters in the input and the addition of the two elements at the end of the module.

Plot of Convolutional Neural Network Architecture With a Residual Module

The paper describes other types of residual connections, such as bottlenecks.

These are left as an exercise for the reader; they can be implemented easily by updating the residual_module() function.


In this tutorial, you discovered how to implement key architecture elements from milestone convolutional neural network models, from scratch.

Specifically, you learned:

- How to implement a VGG module used in the VGG-16 and VGG-19 convolutional neural network models.
- How to implement the naive and optimized inception module used in the GoogLeNet model.
- How to implement the identity residual module used in the ResNet model.

Do you have any questions? Ask your questions in the comments below and I will do my best to answer.
