Detecting breast cancer in histopathological images using Deep Learning

Breast cancer has the second highest mortality rate in women, next to lung cancer.

As per clinical statistics, 1 in every 8 women is diagnosed with breast cancer in their lifetime.

However, periodic clinical checkups and self-tests help in early detection and thereby significantly increase the chances of survival.

Invasive detection techniques cause rupture of the tumor, accelerating the spread of cancer to adjoining areas.

Hence, there arises the need for a more robust, fast, accurate, and efficient noninvasive cancer detection system (Selvathi, D. & Aarthy Poornila, A. (2018). Deep Learning Techniques for Breast Cancer Detection Using Medical Image Analysis).

Early detection can give patients more treatment options.

In order to detect signs of cancer, breast tissue from biopsies is stained to enhance the nuclei and cytoplasm for microscopic examination.

Then, pathologists evaluate the extent of any abnormal structural variation to determine whether there are tumors.

Architectural Distortion (AD) is a very subtle contraction of the breast tissue and may represent the earliest sign of cancer.

Since it is very likely to go unnoticed by radiologists, several approaches have been proposed over the years, but none using deep learning techniques.

I believe AI will become a transformational force in healthcare, and computer vision models will soon reach higher accuracy as researchers gain access to more medical imaging datasets.

I will develop a computer vision model to detect breast cancer in histopathological images.

Two classes will be detected in this project: Benign and Malignant.

Breast Cancer Histopathological Database (BreakHis)

The Breast Cancer Histopathological Image Classification (BreakHis) dataset is composed of 9,109 microscopic images of breast tumor tissue collected from 82 patients using different magnifying factors (40X, 100X, 200X, and 400X).

To date, it contains 2,480 benign and 5,429 malignant samples (700×460 pixels, 3-channel RGB, 8-bit depth in each channel, PNG format).

This database has been built in collaboration with the P&D Laboratory — Pathological Anatomy and Cytopathology, Parana, Brazil.

The dataset BreaKHis is divided into two main groups: benign tumors and malignant tumors.

Histologically, benign is a term referring to a lesion that does not match any criteria of malignancy (e.g., marked cellular atypia, mitosis, disruption of basement membranes, metastasis).

Normally, benign tumors are relatively “innocent”: they grow slowly and remain localized.

A malignant tumor is a synonym for cancer: the lesion can invade and destroy adjacent structures (locally invasive) and spread to distant sites (metastasize), which can cause death.

The dataset currently contains four histologically distinct types of benign breast tumors: adenosis (A), fibroadenoma (F), phyllodes tumor (PT), and tubular adenoma (TA); and four malignant tumors (breast cancer): ductal carcinoma (DC), lobular carcinoma (LC), mucinous carcinoma (MC), and papillary carcinoma (PC).

I only used 2 classes for this project: Benign and Malignant.

We are going to determine if there exists cancer or not.

I divided the full dataset into training and validation sets, 70% and 30% respectively.
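In PyTorch, the split can be done along these lines (a minimal sketch; it assumes the BreakHis images were first sorted into benign/ and malignant/ folders, which is not how the raw archive is organized):

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Assumes the images were pre-sorted into data/benign and data/malignant;
# the raw BreakHis archive groups them by tumor subtype and magnification instead.
dataset = datasets.ImageFolder("data", transform=transforms.ToTensor())

# 70/30 train/validation split, seeded for reproducibility
n_train = int(0.7 * len(dataset))
train_set, val_set = random_split(
    dataset,
    [n_train, len(dataset) - n_train],
    generator=torch.Generator().manual_seed(42),
)
```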

Picking a pre-trained model: AlexNet

I decided to use the pre-trained AlexNet to extract features for classification.

AlexNet is a convolutional neural network that is trained on more than a million images from the ImageNet database.

The network is 8 layers deep and can classify images into 1,000 object categories, such as keyboard, mouse, pencil, and many animals.

As a result, the network has learned rich feature representations for a wide range of images.

The network has an image input size of 227-by-227.

[Figures: AlexNet architecture; AlexNet detailed architecture]

The downside is that this model was trained on a very different kind of images, not tumor images, so we can’t use the entire model as-is for this project.

[Figure: ImageNet dataset]

Since I still need the pre-trained model to extract features from the images, we will remove the last fully connected layer (the one that outputs the 1,000 ImageNet class scores) and use the rest of the network as a feature extractor, which yields a 4,096-dimensional feature vector for every image.

The technique I will use for this is called Transfer Learning.

Transfer Learning is the reuse of a pre-trained model on a different problem.

In this case, classifying histopathological images into benign or malignant.

It’s a very popular technique in Deep Learning, and it allows you to train a deep neural network with a small dataset.

In addition, it’s very difficult to find good labelled data to train such complex models from scratch; for that reason, I recommend using pre-trained models for your image classification projects.
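As a sketch of this setup in PyTorch (the full code is in the repository linked at the end; freezing the convolutional base and swapping in a 2-way output layer is the standard recipe assumed here):

```python
import torch.nn as nn
from torchvision import models

# Load AlexNet pre-trained on ImageNet
model = models.alexnet(pretrained=True)

# Freeze the pre-trained weights: only the new layer will be trained
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer (4096 -> 1000 ImageNet classes)
# with a fresh 4096 -> 2 layer for benign vs. malignant
model.classifier[6] = nn.Linear(4096, 2)
```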

I decided to use data augmentation to get more varied images, applying random rotation, random resized crop, and random horizontal flip.
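In torchvision, these augmentations look roughly like this (the specific rotation angle and crop size are assumptions for illustration):

```python
from torchvision import transforms

# Training-time augmentation: each epoch sees randomly perturbed copies
train_transform = transforms.Compose([
    transforms.RandomRotation(30),        # random rotation, up to ±30 degrees (assumed)
    transforms.RandomResizedCrop(227),    # random crop resized to AlexNet's 227x227 input
    transforms.RandomHorizontalFlip(),    # flip left/right with probability 0.5
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],   # ImageNet mean/std, as expected
                         [0.229, 0.224, 0.225]),  # by the pre-trained weights
])
```

In practice this transform would be attached to the training split (e.g., passed to ImageFolder in the earlier sketch), while the validation set gets a deterministic resize and crop instead.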

The optimizer I used was SGD with a learning rate of 0.001.
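A compressed sketch of the training loop under those settings, continuing the snippets above (the loss function, momentum, batch size, and epoch count are assumptions):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
# SGD with lr=0.001 as stated above; momentum 0.9 is an assumed default
optimizer = torch.optim.SGD(model.classifier[6].parameters(), lr=0.001, momentum=0.9)

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model.train()
for epoch in range(10):  # "some epochs"
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```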

After running some epochs, I got an accuracy of 80%.

Though the model accuracy is not good enough for real clinical use, we can still get better results by fine-tuning the pre-trained model: you can change some hyperparameters or the network structure.
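For example, fine-tuning could mean unfreezing the deeper convolutional layers and training them with a smaller learning rate; a sketch of one common recipe (not the exact approach used here):

```python
# Unfreeze the last convolutional layer of AlexNet's feature extractor
# (index 10 in torchvision's implementation) for fine-tuning
for param in model.features[10:].parameters():
    param.requires_grad = True

# Lower learning rate for the unfrozen pre-trained layers,
# higher for the freshly initialized classifier head
optimizer = torch.optim.SGD([
    {"params": model.features[10:].parameters(), "lr": 1e-4},
    {"params": model.classifier[6].parameters(), "lr": 1e-3},
], momentum=0.9)
```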

I may try different pre-trained models or train for more epochs to get a higher accuracy!

Computer Vision has become my favorite deep learning field, and it’s making a huge impact on the way healthcare is done.

Don’t be surprised when we get to live in a world where people can use mundane devices like smartphones with a camera to make medical diagnoses by themselves.

Medical apps will become a reality and wearables will replace many visits to the doctor’s office.

Don’t get me wrong, I definitely don’t see this technology as a substitute but as an additional help.

I will quote computer vision expert Fei-Fei Li from “How we’re teaching computers to understand pictures”, a talk presented at an official TED conference, because it captures what I think, too:

“First, we teach them to see. Then, they help us to see better. For the first time, human eyes won’t be the only ones pondering and exploring our world. We will not only use the machines for their intelligence, we will also collaborate with them in ways that we cannot even imagine.”

If you want to check the full code, see my GitHub repository viritaromero/Breast-Cancer-Detection (github.com): an AI app to detect breast cancer in histopathological images.

If you want to watch the full video of Fei-Fei Li’s talk, it is available on TED.
