A Simple CNN: Multi Image Classifier

Using TensorFlow and transfer learning, easily make a labeled image classifier with a convolutional neural network

Iftekher Mamun · Apr 7

Computer vision and neural networks are the hot new areas of machine learning techniques.

With advances in neural networks and the ability to read images as arrays of pixel values, numerous companies are relying on this technique to get more out of their data.

For example, a speed camera uses computer vision to take pictures of the license plates of cars going above the speed limit and matches the plate numbers against a known database to send out tickets.

Although this is more related to Optical Character Recognition than to image classification, both use computer vision and neural networks as a base.

A more realistic example of image classification is Facebook's tagging algorithm.

When you upload an album with people in it and tag them on Facebook, the tagging algorithm breaks down each person's picture into pixel locations and stores them in its database.

Because each picture has its own unique pixel pattern, it is relatively easy for the algorithm to recognize who is who based on previous pictures stored in the database.

Of course the algorithm can make mistakes from time to time, but the more you correct it, the better it becomes at identifying your friends and automatically tagging them when you upload.

However, the Facebook tag algorithm is built with artificial intelligence in mind.

This means that the tagging algorithm is capable of learning from our input and making better classifications in the future.

We will not focus on the AI aspect, but rather on the simplest way to make an image classification algorithm.

The only difference between our model and Facebook's will be that ours cannot learn from its mistakes unless we fix them.

However, for a simple neural network project, it is sufficient.

Since it is unethical to use pictures of people, we will be using animals to create our model.

My friend Vicente and I have already made a project on this, so I will be using that as the example to follow through.

The GitHub is linked at the end.

The first step is to gather the data.

This, in my opinion, will be the most difficult and annoying aspect of the project.

Remember that the data must be labeled.

Thankfully, Kaggle has labeled images that we can easily download.

The set we worked with can be found here: animal-10 dataset.

If your dataset is not labeled, this can be time consuming, as you would have to manually create new labels for each category of images.

Another method is to create the new labels yourself, move only about 100 pictures into their proper label folders, train a classifier like the one we will build, and have that machine classify the rest of the images.

This will lead to classification errors, so you may want to check manually after each run, which is where it becomes time consuming.

Now that we have our datasets stored safely in our computer or cloud, let’s make sure we have a training data set, a validation data set, and a testing data set.

Training data set would contain 85–90% of the total labeled data.

This data would be used to train our machine about the different types of images we have.

Validation data set would contain 5–10% of the total labeled data.

This will test how well our machine performs against known labeled data.

The testing data set would contain the rest of the data in an unlabeled format.

This testing data will be used to test how well our machine can classify data it has never seen.

The testing data can also just contain images downloaded from Google, as long as they make sense for the topic you are classifying.
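The split described above can be sketched as a small script. This is a hypothetical helper, not the article's exact code: the 85/10/5 ratios and the `data/<class_name>/<image files>` source layout are assumptions.

```python
import random
import shutil
from pathlib import Path

def split_dataset(source: Path, dest: Path, ratios=(0.85, 0.10, 0.05), seed=42):
    """Copy each class folder's files into train/validation/test subfolders."""
    rng = random.Random(seed)
    for class_dir in sorted(p for p in source.iterdir() if p.is_dir()):
        files = sorted(class_dir.iterdir())
        rng.shuffle(files)
        n = len(files)
        n_train = int(n * ratios[0])
        n_val = int(n * ratios[1])
        splits = {
            "train": files[:n_train],
            "validation": files[n_train:n_train + n_val],
            "test": files[n_train + n_val:],   # the remainder, used unlabeled
        }
        for split_name, split_files in splits.items():
            out_dir = dest / split_name / class_dir.name
            out_dir.mkdir(parents=True, exist_ok=True)
            for f in split_files:
                shutil.copy2(f, out_dir / f.name)
```

Keeping the class subfolders intact matters, because Keras can later infer labels directly from the directory names.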

To start, we need to load all the necessary libraries in a Jupyter notebook.
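A typical import cell for this kind of project might look like the following; this assumes TensorFlow's bundled Keras, and the exact list depends on which of the later steps you run:

```python
import math
import numpy as np
from tensorflow.keras import applications
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.preprocessing.image import (ImageDataGenerator,
                                                  load_img, img_to_array)
from tensorflow.keras.utils import to_categorical
```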

In this step, we are defining the dimensions of the image.

Depending on your image sizes you can change it, but we found that 224 × 224 works best.

Then we created a bottleneck file system.

This will be used to convert all the image pixels into their numerical (NumPy array) equivalents and store them in our storage system.

Once we run this, it will take from half an hour to several hours, depending on the number of classifications and how many images per classification.

Then we simply tell our program where each set of images is located in our storage so the machine knows what is where.

Finally, we define the epoch and batch sizes for our machine.

For neural networks, this is a key step.

We found this pairing optimal for our machine learning models, but again, it needs to be adjusted depending on the number of images.
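As a sketch, the configuration cell might look like this; the directory names, epoch count, and batch size are illustrative placeholders to adjust for your own dataset, not the article's exact values:

```python
# 224x224 matches VGG16's default input size.
img_width, img_height = 224, 224

# Assumed directory layout for the three splits described above.
train_data_dir = "data/train"
validation_data_dir = "data/validation"
test_data_dir = "data/test"

# Illustrative epoch/batch pairing; tune to your number of images.
epochs = 7
batch_size = 50
```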

This is importing the transfer learning aspect of the convolutional neural network.

Transfer learning is handy because it comes with pre-made neural networks and other necessary components that we would otherwise have to create.

There are many transfer learning models.

I particularly like VGG16, as it uses only 16 weight layers (13 of them convolutional) and is pretty easy to work with.

However, if you are working with larger image files, it is best to use more layers, so I recommend ResNet50, which is 50 layers deep.
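Loading the pre-trained network might be sketched like this with Keras Applications; swap in `applications.ResNet50` for the deeper model just mentioned:

```python
from tensorflow.keras import applications

# Load VGG16 pre-trained on ImageNet, without its fully connected top layers,
# so it acts as a fixed feature extractor for our own classifier.
vgg16 = applications.VGG16(include_top=False, weights="imagenet")
vgg16.summary()  # prints the convolutional blocks that produce our features
```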

For our image classifier, we only worked with 6 classifications, so using transfer learning on those images did not take too long; but remember that the more images and classifications you have, the longer this next step will take.

But thankfully, since you only need to convert the image pixels to numbers once, you only have to do the next step once for each of the training, validation, and testing sets, unless you have deleted or corrupted the bottleneck file.

This is also where we incorporate transfer learning from VGG16 into our training and validation models.

Since we are making a simple image classifier, there is no need to change the default settings.

Just follow the above steps for the training, validation, and testing directory we created above.

However, you can add different features such as image rotation, transformation, reflection and distortion.
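For reference, those extra features live on the same `ImageDataGenerator`; the particular values below are illustrative, not tuned:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# On-the-fly augmentation options; the default (rescale only) is what the
# simple classifier uses, and everything else here is optional.
augmenting_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=30,        # random rotation in degrees
    width_shift_range=0.1,    # horizontal translation
    height_shift_range=0.1,   # vertical translation
    shear_range=0.1,          # shear distortion
    horizontal_flip=True)     # random reflection
```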

Once the files have been converted and saved to the bottleneck file, we load them and prepare them for our convolutional neural network.

This is also a good way to make sure all your data has been loaded into the bottleneck files.

Remember to repeat this step for validation and testing set as well.
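One way to sketch the loading step (the helper name is hypothetical; the generator must be the unshuffled one used to create the bottleneck file, so the labels line up with the saved features):

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

def load_bottleneck_split(features_path, generator):
    """Reload saved VGG16 features and rebuild matching one-hot labels."""
    data = np.load(features_path)
    num_classes = len(generator.class_indices)   # classes found on disk
    labels = to_categorical(generator.classes, num_classes=num_classes)
    return data, labels
```

Repeat for the validation and testing sets by passing their own feature files and generators.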

Our six classes are: butterflies, chickens, elephants, horses, spiders, and squirrels.

Now we create our model.

First step is to initialize the model with Sequential().

After that we flatten our data and add our additional 3 (or more) hidden layers.

This step is fully customizable to what you want.

We made several different models with different dropout rates, hidden layers, and activations.

But since this is a labeled categorical classification, the final activation must always be softmax.

It is also best for the loss to be categorical crossentropy, but everything else in model.compile can be changed.
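Put together, the top model might be sketched like this; the layer sizes and dropout rates are illustrative choices, not the exact ones we shipped:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten

num_classes = 6  # butterflies, chickens, elephants, horses, spiders, squirrels

model = Sequential([
    Flatten(input_shape=(7, 7, 512)),    # shape of VGG16 bottleneck features
    Dense(100, activation="relu"),       # hidden layers: fully customizable
    Dropout(0.5),
    Dense(50, activation="relu"),
    Dropout(0.3),
    Dense(num_classes, activation="softmax"),  # softmax for categorical classes
])
model.compile(optimizer="rmsprop",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```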

Then after we have created and compiled our model, we fit our training and validation data to it with the specifications we mentioned earlier.

Finally, we create an evaluation step, to check for the accuracy of our model training set versus validation set.
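A runnable sketch of the fit-and-evaluate step; random arrays stand in for the saved bottleneck features so the snippet is self-contained, and the tiny epoch/batch numbers are only for illustration:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten

num_classes = 6
# Stand-ins for np.load("bottleneck_features_train.npy") and friends.
train_data = np.random.rand(20, 7, 7, 512).astype("float32")
train_labels = np.eye(num_classes)[np.random.randint(0, num_classes, 20)]
validation_data = np.random.rand(10, 7, 7, 512).astype("float32")
validation_labels = np.eye(num_classes)[np.random.randint(0, num_classes, 10)]

model = Sequential([
    Flatten(input_shape=(7, 7, 512)),
    Dense(100, activation="relu"),
    Dropout(0.5),
    Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="rmsprop", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Fit on training data, validating against the validation set each epoch.
history = model.fit(train_data, train_labels,
                    epochs=2, batch_size=10,
                    validation_data=(validation_data, validation_labels))

# The evaluation step: accuracy of the trained model on known labeled data.
eval_loss, eval_accuracy = model.evaluate(validation_data, validation_labels,
                                          verbose=0)
print(f"accuracy: {eval_accuracy:.2%}  loss: {eval_loss:.4f}")
```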

This is our model now training the data and then validating it.

An epoch is one full pass of the model over our whole data set; the epoch count is how many times that pass is repeated.

A batch can be explained as taking in a small amount of data, training on it, then taking some more.

Each epoch must finish all its batches before moving to the next epoch.

Training with too few epochs can lead to underfitting the data, and too many will lead to overfitting.

You also want a loss that is as low as possible.

The pictures below show the accuracy and loss of our data set.

This picture below shows how well the machine we just made can predict against unseen data.

Notice that it says it's testing on test_data.

Accuracy is the second number.

However, this is not the only method of checking how well our machine performed.

There are two great methods to see how well your machine can predict or classify.

One of them is the classification metrics and the other is the confusion matrix.

To use the classification metrics, we had to convert our testing data into a different format, a NumPy array, for them to read.

That is all the first line of code is doing.

The second cell block takes in the converted data and runs it through the built-in classification metrics to give us a neat result.

Please note that unless you manually label your classes here, you will get 0–5 as the classes instead of the animals.
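A small sketch of that metrics step using scikit-learn; the dummy arrays below stand in for `np.argmax` over the real one-hot labels and predictions, and passing `target_names` is what replaces 0–5 with animal names:

```python
import numpy as np
from sklearn.metrics import classification_report

animals = ["butterflies", "chickens", "elephants", "horses", "spiders", "squirrels"]

# In the real notebook these would come from the model, e.g.
#   y_true = np.argmax(test_labels, axis=1)
#   y_pred = np.argmax(model.predict(test_data), axis=1)
y_true = np.array([0, 1, 2, 3, 4, 5, 0, 1])
y_pred = np.array([0, 1, 2, 3, 4, 5, 1, 1])

# Prints per-class precision, recall, and f1-score.
print(classification_report(y_true, y_pred, target_names=animals))
```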

The important factors here are precision and f1-score.

The higher the score the better your model is.

Here is a great blog on medium that explains what each of those are.

Now to make a confusion matrix.

There are lots of online tutorials on how to make a great confusion matrix.

Ours is a variation of some we found online.

The NumPy array we created before is placed inside a dataframe.

Confusion matrix works best on dataframes.

The third cell block, with its multiple iterative loops, is purely for the color visuals.

The only important functionality there is the 'if normalize' line, as it standardizes the data.
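Our variation can be sketched roughly as below (an adaptation of the common online versions, not our exact cell): the `if normalize` branch standardizes each row, and the loop at the end only decides between white and black text for contrast.

```python
import itertools
import numpy as np
import matplotlib
matplotlib.use("Agg")  # draw off-screen; drop this line inside a notebook
import matplotlib.pyplot as plt

def plot_confusion_matrix(cm, classes, normalize=False, cmap=plt.cm.Blues):
    """Draw a confusion matrix; normalize=True standardizes each row."""
    if normalize:
        cm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
    plt.imshow(cm, interpolation="nearest", cmap=cmap)
    plt.colorbar()
    ticks = np.arange(len(classes))
    plt.xticks(ticks, classes, rotation=45)
    plt.yticks(ticks, classes)
    thresh = cm.max() / 2.0
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        # Purely cosmetic: white text on dark cells, black text on light ones.
        plt.text(j, i, format(cm[i, j], ".2f" if normalize else "d"),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.ylabel("True label")
    plt.xlabel("Predicted label")
    plt.tight_layout()
```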

As we can see in our standardized data, our machine is pretty good at classifying which animal is what.

Chickens were misclassified as butterflies, most likely due to the many different types of patterns on butterflies.

In addition, butterflies were also misclassified as spiders, probably for the same reason.

Both elephants and horses are rather big animals, so their pixel distribution may have been similar.

The final phase is testing on images.

The cell blocks below will accomplish that.

The first cell block lets our machine know that it has to load the image, change its size, and convert it to an array.

The second cell block uses the transfer learning model's prediction and an iterative function to help predict the image properly.

The third cell block is where we define the image location and finally the fourth cell block will print out the final result, depending on the prediction from the second cell block.

For this part, I will not post a picture so you can find out your own results.

However, the GitHub link will be right below so feel free to download our code and see how well it compares to yours.

imamun93/animal-image-classifications
Image Classifications using CNN on different types of animals. (github.com)

Animals-10
Animal pictures of 10 different categories taken from Google Images

Accuracy, Precision, Recall or F1?
Often when I talk to organizations that are looking to implement data science into their processes, they often ask the… (towardsdatascience.com)
