Deep Learning on Microscopy Imaging

Is it just the spatial structure of the data?

The first thing you notice when running Deep Learning on image data is that life becomes easier compared to doing Deep Learning on Omics numeric data.

There is a lot of literature and numerous online tutorials that can help you.

I am a big fan of Jason Brownlee and Adrian Rosebrock; I go to their blogs and usually find answers to all my questions.

In contrast, when running Deep Learning on Omics numeric data (e.g. RNA sequencing) you are basically on your own.

There are not so many people who know better than you which network architecture, activation function, optimizer etc. fit your particular Omics numeric data, so you really need to experiment a lot.

Microscopy Imaging of Cells

However, entering microscopy imaging you will find a lot of support from the community.

One great resource available is the Human Protein Atlas (HPA) which among other things delivers digital image data showing localization of proteins in single cells.

Human Protein Atlas (HPA) is a great digital image data resource

The data were annotated using a citizen science approach via an online game and used in a Kaggle competition for multi-label image classification.

The data can be downloaded from here; they include ~124 000 train and ~47 000 test 512×512 PNG images for 28 classes, i.e. numbers from 0 to 27 that code for the cell compartments where a certain protein is expressed.

Importantly, multiple classes can be present on the same image since proteins can be expressed in several places of a cell simultaneously.

Looking at the distribution of HPA classes, we can see that the proteins of interest are most often expressed in the Nucleoplasm (class 0) and Cytosol (class 25).

Now let us check what HPA images look like.

It turns out that displaying HPA images is a non-trivial task, as they contain 4 instead of the standard 3 RGB channels: the protein of interest (green channel) plus three cellular landmarks, namely microtubules (red), nucleus (blue) and endoplasmic reticulum (yellow).

One way to merge the 4 channels in Python would be to realize that yellow = red + green and add half of the yellow channel to the red and the other half to the green channel.

Sample Human Protein Atlas (HPA) images

Using the “load_image” function I generated a new training data set comprising ~31 000 images by merging the 4 channels for each image ID.
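A minimal sketch of such a “load_image” function might look like this (assuming the Kaggle file naming convention image_id_color.png; see my github for the exact version used):

```python
import numpy as np
from PIL import Image

def load_image(image_id, img_dir):
    """Merge the four HPA channels (red, green, blue, yellow) into one RGB image."""
    channels = {}
    for color in ("red", "green", "blue", "yellow"):
        # Kaggle HPA files are assumed to be named <image_id>_<color>.png
        img = Image.open(f"{img_dir}/{image_id}_{color}.png")
        channels[color] = np.asarray(img, dtype=np.float32)
    rgb = np.zeros(channels["red"].shape + (3,), dtype=np.float32)
    rgb[..., 0] = channels["red"] + channels["yellow"] / 2    # yellow = red + green,
    rgb[..., 1] = channels["green"] + channels["yellow"] / 2  # so split it between the two
    rgb[..., 2] = channels["blue"]
    return np.clip(rgb, 0, 255).astype(np.uint8)
```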

Build Annotation for Cell Detection

Now that we have learnt how to display HPA images and merge the four channels, it is time to create annotations for cell detection with the Faster-RCNN and Mask-RCNN neural networks.

For demonstration purposes I went through a handful of images and selected ones which contained both cells with the protein expressed in the Nucleoli-related compartments (Nucleoli, Nucleoli fibrillar center, Nuclear speckles, Nuclear bodies) and cells without signs of protein expression in those compartments.

In this way I aimed at having 3 classes (Nucleoli, Not Nucleoli and background) in each of my train/test images.

Next, I spent 2 hours with LabelImg assigning bounding boxes and class labels to each of the cells in 45 HPA train images; 5 more images were reserved as a test data set to be used for making predictions.

Manual annotation of cells with LabelImg

LabelImg records cell annotations in XML format. A typical file stores the width (512 pixels), height (512 pixels) and depth (3 channels) of the image, plus one object per annotated cell with coordinates defined by the bounding box (xmin, ymin, xmax, ymax) and a class label such as “Nucleoli”.
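A representative annotation might look like the sketch below (LabelImg writes the Pascal VOC XML format; the file name and coordinates here are illustrative):

```xml
<annotation>
    <folder>train</folder>
    <filename>hpa_train_image.png</filename>
    <size>
        <width>512</width>
        <height>512</height>
        <depth>3</depth>
    </size>
    <object>
        <name>Nucleoli</name>
        <bndbox>
            <xmin>118</xmin>
            <ymin>66</ymin>
            <xmax>230</xmax>
            <ymax>172</ymax>
        </bndbox>
    </object>
</annotation>
```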

Faster-RCNN needs an annotation file in a special comma-separated (csv) format.

To prepare such a csv-file we need to parse the XML annotations for every image. Once the annotations are parsed, you are almost done: everything is ready for training the Faster-RCNN model.
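A minimal parsing sketch, assuming the XML files live in an “annot” folder next to the images and keras-frcnn's simple parser format of one line per bounding box (filepath,x1,y1,x2,y2,class_name):

```python
import os
import xml.etree.ElementTree as ET

# Convert the LabelImg XML annotations into the comma-separated format
# expected by keras-frcnn: filepath,x1,y1,x2,y2,class_name
with open("annot.txt", "w") as out:
    for xml_file in sorted(os.listdir("annot")):
        if not xml_file.endswith(".xml"):
            continue
        root = ET.parse(os.path.join("annot", xml_file)).getroot()
        filename = root.find("filename").text
        for obj in root.findall("object"):
            label = obj.find("name").text
            box = obj.find("bndbox")
            coords = [box.find(c).text for c in ("xmin", "ymin", "xmax", "ymax")]
            out.write(",".join(["train/" + filename] + coords + [label]) + "\n")
```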

Training Faster-RCNN for Cell Detection

Here I skip explaining how Faster-RCNN works; there is plenty of literature going into the algorithm.

I will only mention that Faster-RCNN uses a Region Proposal Network (RPN) that generates region proposals for a Detector network which performs the actual object detection.

Hence, the loss function of the Faster-RCNN network combines contributions from regression (localize cells with bounding boxes) and classification (assign class to each localized cell) tasks.
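Schematically, following the original Faster-RCNN paper, this multi-task loss can be written as:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^* L_{reg}(t_i, t_i^*)$$

where $p_i$ is the predicted object probability of anchor $i$, $t_i$ its predicted bounding box coordinates, the starred quantities are the ground truth, and $\lambda$ balances the classification and regression terms.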

Faster-RCNN can be installed from https://github.com/kbardool/keras-frcnn.

Training the Faster-RCNN model is as easy as typing the command below, where “annot.txt” is the annotation file created in the previous section.
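With the keras-frcnn scripts, the training call might look as follows (using the repository's simple-parser mode; check its README for the exact flags):

```bash
python train_frcnn.py -o simple -p annot.txt
```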

By default Faster-RCNN uses transfer learning with weights from ResNet50.

Training Faster-RCNN on the 45 images with 322 annotated Nucleoli and 306 Not Nucleoli cells took approximately 6 hours per epoch on my laptop with 4 CPU cores, so I only managed to wait for 35 epochs.

The learning curves seem to demonstrate that the classification task reached saturation while regression (localizing cells) was still far from a plateau.

To make predictions on the test set using the trained Faster-RCNN we simply type the command below, where “test” is the folder containing the images from the test data set.
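Again using the keras-frcnn scripts (a sketch; adjust the path to your setup):

```bash
python test_frcnn.py -p test
```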

Let us display a test image in order to check how successful the model was at detecting cells with the protein expressed in the Nucleoli (Nucleoli class) and cells without it (Not Nucleoli class).

Original image (left) and the image after Faster-RCNN object detection was applied (right)

On the left is the original test image.

Two cells clearly contain bright green spots in the middle; those are the Nucleoli, visible because the protein of interest is strongly expressed in these regions, so those two cells should belong to the class Nucleoli.

The rest of the cells do not seem to have the protein expressed in the Nucleoli regions so they should belong to the class Not Nucleoli.

On the right is the test image with bounding boxes and class labels placed by the trained Faster-RCNN model.

Here we can see that the model correctly detected the two cells with visible green spots in the Nucleoli regions, while the third bounding box seems to be a false positive: it is not clear what exactly the model detected with such high confidence (91% probability), since the box spans multiple cells and, despite the “Nucleoli” label, no cells with visible Nucleoli can be observed inside it.

This image illustrates my general impression of using Faster-RCNN for cell detection: it is not always perfect at cell localization, although more training might improve it.

Let us see if Mask-RCNN can do better.

Training Mask-RCNN for Cell Detection

Mask-RCNN (like Faster-RCNN) belongs to the RCNN family of artificial neural networks, which are known for higher object detection accuracy compared to other families such as YOLO and SSD, which I do not cover here.

In addition to object detection, Mask-RCNN also allows object segmentation; however, we are not going to use it here.

Mask-RCNN can be installed from https://github.com/matterport/Mask_RCNN.

While Faster-RCNN is very easy to run (it basically only needs the annotation file to be prepared), Mask-RCNN requires much more coding, so please check the complete Jupyter notebook on my github for more details.

Here I explain the key steps of the workflow which basically follows this excellent tutorial.

The peculiarity of Mask-RCNN is that the data are handled by a Dataset object, which is used to feed them into Mask-RCNN for training and testing.

Here we aim at object detection rather than segmentation, therefore we will treat bounding boxes as masks, so the “load_mask” function will in fact load bounding box coordinates, as in the sketch below.
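A condensed sketch of such a Dataset subclass, following the matterport API (the class and folder names are my own; the full version is in the notebook on my github):

```python
import os
import numpy as np
import xml.etree.ElementTree as ET
from mrcnn.utils import Dataset

class CellDataset(Dataset):
    """Feed the annotated HPA images into Mask-RCNN, treating bounding boxes as masks."""

    def load_dataset(self, images_dir, annots_dir):
        self.add_class("dataset", 1, "Nucleoli")
        self.add_class("dataset", 2, "Not Nucleoli")
        for fname in sorted(os.listdir(images_dir)):
            image_id = os.path.splitext(fname)[0]
            self.add_image("dataset", image_id=image_id,
                           path=os.path.join(images_dir, fname),
                           annotation=os.path.join(annots_dir, image_id + ".xml"))

    def load_mask(self, image_id):
        info = self.image_info[image_id]
        root = ET.parse(info["annotation"]).getroot()
        w = int(root.find(".//width").text)
        h = int(root.find(".//height").text)
        boxes, class_ids = [], []
        for obj in root.findall(".//object"):
            box = obj.find("bndbox")
            boxes.append([int(box.find(c).text) for c in ("ymin", "xmin", "ymax", "xmax")])
            class_ids.append(self.class_names.index(obj.find("name").text))
        # One binary "mask" per object: the filled rectangle of its bounding box
        masks = np.zeros([h, w, len(boxes)], dtype="uint8")
        for i, (y1, x1, y2, x2) in enumerate(boxes):
            masks[y1:y2, x1:x2, i] = 1
        return masks, np.asarray(class_ids, dtype="int32")
```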

We can display a random annotated training image using the handy “display_instances” function from Mask-RCNN, as shown below.
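For example (a sketch; “train_set” is an instance of the CellDataset defined above, after calling its load_dataset and prepare methods):

```python
import numpy as np
from mrcnn.visualize import display_instances
from mrcnn.utils import extract_bboxes

# Pick a random training image and show its box "masks" and class labels
i = np.random.randint(len(train_set.image_ids))
image = train_set.load_image(i)
masks, class_ids = train_set.load_mask(i)
display_instances(image, extract_bboxes(masks), masks, class_ids, train_set.class_names)
```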

Annotated image prepared for training with Mask-RCNN

Now everything is ready for training the Mask-RCNN model. We will use transfer learning and start from the weights of a Mask-RCNN model pre-trained for object detection on the COCO data set.
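The training call might look like this (a sketch following the matterport API; the config values and the path to the COCO weights file are assumptions):

```python
from mrcnn.config import Config
from mrcnn.model import MaskRCNN

class CellConfig(Config):
    NAME = "cell_cfg"
    NUM_CLASSES = 1 + 2    # background + Nucleoli + Not Nucleoli
    STEPS_PER_EPOCH = 45   # one pass over the 45 annotated training images

config = CellConfig()
model = MaskRCNN(mode="training", model_dir="./", config=config)
# Load the COCO weights, skipping the output layers that depend on the class count
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"])
model.train(train_set, test_set, learning_rate=config.LEARNING_RATE,
            epochs=5, layers="heads")
```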

Training Mask-RCNN was dramatically faster compared to Faster-RCNN: one epoch took only 2 hours (a 3X speed-up) on my laptop with 4 CPU cores, and I stopped training after only 5 epochs because the results of object detection on the test data set were already more than satisfactory.

Here, for comparison, I present the original test image (left) and the image with the high-confidence (over 70% probability) bounding boxes and class labels placed by the trained Mask-RCNN model (right).
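A sketch of the detection step (the model is reloaded in inference mode; 0.7 is the probability cutoff used for the figure, and the weights file name is an assumption):

```python
class PredictionConfig(CellConfig):
    # detect() expects the batch size (GPU_COUNT * IMAGES_PER_GPU) to match len(images)
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

model = MaskRCNN(mode="inference", model_dir="./", config=PredictionConfig())
model.load_weights("mask_rcnn_cell_cfg.h5", by_name=True)  # weights saved during training
r = model.detect([image])[0]
keep = r["scores"] >= 0.7  # keep only the high-confidence detections
display_instances(image, r["rois"][keep], r["masks"][..., keep],
                  r["class_ids"][keep], train_set.class_names, r["scores"][keep])
```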

Original image (left) and the image after Mask-RCNN object detection was applied (right)

We observe a striking increase in the accuracy of cell localization: all bounding boxes seem to almost perfectly embrace the cells.

One cell seems to have the wrong label “Nucleoli” even though no obvious green spots can be observed inside its Nucleus.

Perhaps the model would benefit from more training.

Overall, Mask-RCNN demonstrates a remarkable improvement in cell detection compared to Faster-RCNN.

Such a trained Mask-RCNN model can now be used for high-throughput scanning of images for a particular cellular morphology (in this case, the protein expressed in the Nucleoli regions) without visual inspection.

Summary

In this post we have learnt that automated microscopy produces large amounts of digital image data ideally suited for analysis with Deep Learning.

Detecting cellular morphologies is a challenging task despite plenty of literature and models available.

We tested Faster-RCNN and Mask-RCNN object detection models using annotated images with multiple classes from Human Protein Atlas (HPA).

Mask-RCNN outperformed Faster-RCNN in both quality and speed of cell type detection.

As usual, let me know in the comments if you have a specific favorite area in Life Sciences which you would like to address within the Deep Learning framework.

Follow me on Medium (Nikolay Oskolkov), on Twitter @NikolayOskolkov, connect on LinkedIn, and check out the code for this article on my github.

I plan to write the next post about Deep Learning for Evolutionary Science, stay tuned.

