FastAI Image Segmentation

Segment your images using the FastAI deep learning library

Gilbert Tanner · Mar 30

Figure 1: Segmentation results

Image segmentation is the process of taking a digital image and segmenting it into multiple segments of pixels.

The goal of image segmentation is to simplify and/or change the representation of an image into something more meaningful and easier to understand.

The FastAI library allows us to build image segmentation models in only a few lines of code by providing classes and methods both for loading in the data and for creating a model to perform the segmentation (a U-NET).

If you are unfamiliar with the FastAI library, I highly recommend checking out the Practical Deep Learning for Coders course, which not only teaches you about the library but also about the technologies and practices that make it great.

In this article, we will go over where we can get image segmentation data, how we can create our own data, as well as what a U-NET is and how we can use it for image segmentation.

U-NET

A U-NET is a convolutional neural network that was initially developed for biomedical image segmentation but has since proven its value for image segmentation tasks across many domains.

Figure 2: U-NET Architecture

The U-NET architecture consists of two paths: the contraction path (the encoder) and the expansion path (the decoder).

The encoder extracts features which contain information about what is in an image using convolutional and pooling layers.

During encoding, the size of the feature map is progressively reduced.

The decoder is then used to recover the feature map size for the segmentation image, using up-convolution (transposed convolution) layers.

Because the downsampling in the encoder discards some of the fine detail, the U-NET has skip connections.

That means that the outputs of the encoding layers are passed directly to the decoding layers so that all the important pieces of information can be preserved.
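To make the idea concrete, here is a toy PyTorch sketch of that pattern, with one downsampling step, one upsampling step, and a single skip connection (a real U-NET stacks several of these stages and uses many more channels):

    import torch
    import torch.nn as nn

    class MiniUNet(nn.Module):
        """Toy U-NET: one down step, one up step, one skip connection."""
        def __init__(self, n_classes):
            super().__init__()
            self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
            self.down = nn.MaxPool2d(2)                       # halves the feature map
            self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2) # doubles it again
            # 16 skip channels + 16 upsampled channels are concatenated
            self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, n_classes, 1))  # per-pixel scores

        def forward(self, x):
            skip = self.enc(x)               # encoder features at full resolution
            x = self.mid(self.down(skip))    # encode at half resolution
            x = self.up(x)                   # decode back to full resolution
            x = torch.cat([x, skip], dim=1)  # skip connection preserves detail
            return self.dec(x)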

That’s only a high-level overview of what a U-NET is.

For more information, check out the original paper.

Getting our data

For this tutorial, we will use the CamVid data-set, a high-quality road-scene segmentation data-set provided by the University of Cambridge.

Another nice thing about the data-set is that we don't need to download it manually: it is included in the FastAI library's dataset registry, so we can simply download it using the untar_data method.
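A minimal sketch of what that looks like in FastAI v1 (the images/labels folder names match the CamVid download, but treat them as assumptions if your library version differs):

    from fastai.vision import *

    # Download and extract the CamVid data-set (cached after the first call)
    path = untar_data(URLs.CAMVID)
    path_img = path/'images'   # the input photographs
    path_lbl = path/'labels'   # the per-pixel segmentation masks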

Next, we will take a quick look at our data by plotting a random image and its segmentation, using methods provided by FastAI that allow us to both obtain all the image paths and open an image.
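Roughly, that step could look like this:

    fnames = get_image_files(path_img)  # paths of all images in the folder
    img = open_image(fnames[0])         # open a single image
    img.show(figsize=(5, 5))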

Figure 3: Example image

Now we need to create a function that maps from the path of an image to the path of its segmentation.
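In the FastAI CamVid download, a mask shares its image's filename plus a _P suffix, so the mapping can be a one-liner (this naming scheme is specific to this data-set):

    # Map an image path to the path of its segmentation mask
    get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'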

This function returns the path to the segmentation of the chosen image, and we can now use it to open a segmentation image.
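For example, using open_mask, which reads the file as integer class indices rather than as an RGB image:

    mask = open_mask(get_y_fn(fnames[0]))
    mask.show(figsize=(5, 5), alpha=1)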

Figure 4: Segmentation Example

Now that we know what our data looks like, we can create our data-set using the SegmentationItemList class provided by FastAI.
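A sketch of the data block pipeline, assuming the valid.txt split file and codes.txt class list that ship with the data-set; the image size and batch size here are illustrative placeholders:

    # The class name for each integer value in the masks
    codes = np.loadtxt(path/'codes.txt', dtype=str)

    src = (SegmentationItemList.from_folder(path_img)
           .split_by_fname_file('../valid.txt')        # validation split shipped with CamVid
           .label_from_func(get_y_fn, classes=codes))

    # tfm_y=True applies the same augmentations to the masks as to the images
    data = (src.transform(get_transforms(), size=(360, 480), tfm_y=True)
            .databunch(bs=4)
            .normalize(imagenet_stats))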

We can show a few examples using the show_batch method which is available for all sorts of databunches in FastAI.
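For example:

    data.show_batch(rows=2, figsize=(8, 8))  # images with their masks overlaid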

Figure 5: Examples with segmentation overlay

Creating our own data

In order to create your own segmentation data, you first need to make or download some pictures of the objects you want to detect.

Then you need to create the segmentation using some kind of software.

For regular object detection, you would need to annotate the objects in an image using a bounding box, but for segmentation, you need to give every pixel in an image a color specific to its class.

Figure 6: Segmentation example (from Pixel Annotation Tool)

Thankfully there are free tools out there that can help you label your segmentation data.

One of those tools is called Pixel Annotation Tool and it provides you with the ability to color the pixels using different brush sizes.

Figure 7: Pixel Annotation Tool

Creating and training our model

Now that we have our data and know what a U-NET is, we can use the FastAI library to create and train our segmentation model.

But before we create our model, we will define a function that measures its accuracy.

The accuracy on the CamVid data-set should be measured without the void class and therefore we will exclude the void class from our accuracy function.
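A sketch of such a metric, assuming the void class is named 'Void' in codes.txt:

    # Index of the void class, looked up from the class list
    name2id = {v: k for k, v in enumerate(codes)}
    void_code = name2id['Void']

    def acc_camvid(input, target):
        target = target.squeeze(1)   # (bs, 1, H, W) -> (bs, H, W)
        mask = target != void_code   # only score pixels that are actually labeled
        return (input.argmax(dim=1)[mask] == target[mask]).float().mean()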

To create a U-NET in FastAI, the unet_learner method can be used.

We will not only pass it our data, but also specify an encoder network (a ResNet-34 in our case), our accuracy function, and a weight decay of 1e-2.
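Putting that together (models.resnet34 is FastAI's identifier for a pretrained ResNet-34):

    learn = unet_learner(data, models.resnet34, metrics=acc_camvid, wd=1e-2)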

With our model ready to go, we can now search for a fitting learning rate and then start training.

This process is the same for all FastAI models, and if you aren't familiar with it yet, I would highly recommend that you check out my first FastAI article.
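The usual sequence looks like this; the learning rate of 3e-3 is an assumed value read off the plot, not a universal constant:

    learn.lr_find()         # sweep learning rates while recording the loss
    learn.recorder.plot()   # pick a value from the steepest downward slope

    lr = 3e-3
    learn.fit_one_cycle(10, slice(lr))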

Figure 8: Learning rate

Figure 9: Training results

By default, only the decoder is unfrozen, which means that our pretrained encoder hasn't received any training yet, so we will now show some results and then train the whole model.
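A sketch of that second stage; the epoch count and the discriminative learning-rate range below are assumptions:

    learn.show_results(rows=3, figsize=(8, 9))    # inspect the predictions so far

    learn.unfreeze()                              # make the encoder trainable too
    learn.fit_one_cycle(12, slice(lr/400, lr/4))  # lower rates for earlier layers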

Figure 10: Results after the first training run

Figure 11: Training results

As you can see, we reached an accuracy of 92% and almost perfect segmentation results on a seemingly hard problem, which is amazing.

Recommended reading

FastAI Multi-label image classification: Learn how to work with multi-label data (towardsdatascience.com)

Conclusion

Image segmentation is the process of taking a digital image and segmenting it into multiple segments of pixels, with the goal of getting a more meaningful and simplified image.

FastAI makes it easy for us to perform image segmentation by giving us the ability to load in our segmentation data and to use a U-NET model for segmenting the images.

If you liked this article, consider subscribing to my YouTube channel and following me on social media.

The code covered in this article is available as a GitHub repository.

If you have any questions, recommendations or critiques, I can be reached via Twitter or the comment section.
