Testing TensorFlow Lite image classification model

Making sure that your ML model works correctly on a mobile app (part 1)

By Mirek Stanek · May 31

This post was originally published at thinkmobile.dev — a blog about implementing intelligent solutions in mobile apps (link to article).

Building TensorFlow Lite models and deploying them on mobile applications is getting simpler over time.

But even with easier-to-implement libraries and APIs, there are still at least three major steps to accomplish:

1. Build a TensorFlow model,
2. Convert it to a TensorFlow Lite model,
3. Implement it in the mobile app.

There is a set of information that needs to be passed between those steps — model input/output shape, value formats, etc.

Even if you know them (e.g. thanks to the visualization techniques and tools described in this blog post), there is another problem that many software engineers struggle with:

"Why does the model implemented in a mobile app work differently than its counterpart in a Python environment?" — a software engineer

In this post, we will try to visualize the differences between TensorFlow, TensorFlow Lite, and quantized TensorFlow Lite (with post-training quantization) models.

This should help us with early model debugging when something goes really wrong.

Here, we will focus only on the TensorFlow side. It's worth remembering that this doesn't cover mobile app implementation correctness (e.g. bitmap preprocessing and data transformation). That will be described in one of the future posts.

Important notice — the code presented here and in the Colab notebook shows just some basic ideas for an eye comparison between TensorFlow and TensorFlow Lite models (on a small data batch). It doesn't check them for speed or any other performance factor, and it doesn't do any rigorous side-by-side cross-comparison.

TensorFlow model preparation

If you already have a TF model as a SavedModel, you can skip this paragraph and go directly to the Load TensorFlow model from SavedModel section.

As an example, we will build a simple TensorFlow model that classifies flowers and is built on top of MobileNet v2, thanks to the transfer learning technique. The code was taken from and inspired by Udacity's free TensorFlow course, which I highly recommend to everyone who wants to start working with this machine learning framework (no matter whether you are a machine learning engineer or a software engineer implementing ML solutions on the client side).

Here is the model's structure:

(Figure: model for classifying flowers, built on top of MobileNet v2)

For training, we will use Keras ImageDataGenerators and an example dataset provided by Google. Accuracy after 10 epochs of training is ~87%. For our needs, it is fine.
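Here is a minimal sketch of the model and training setup, assuming the TF Hub MobileNet v2 feature-vector module and the 5-class flower dataset (the module URL, directory path, and hyperparameters are assumptions, not necessarily the original notebook's values):

```python
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Frozen MobileNet v2 feature extractor from TF Hub (assumed module URL).
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
    input_shape=(224, 224, 3),
    trainable=False)

# Transfer learning: only the final classification layer is trained.
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 flower classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Keras ImageDataGenerator over the example dataset (assumed directory layout).
train_generator = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    "flower_photos/train", target_size=(224, 224), batch_size=32)
model.fit(train_generator, epochs=10)
```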

When the model is ready, we will export it to the SavedModel format.

Load TensorFlow model from SavedModel

Now, when we have the TensorFlow model saved in SavedModel format, let's load it.

If you don’t want to spend time building and training your model, it’s perfectly fine to start from here.

Because our model uses a custom layer from TensorFlow Hub, we need to point out its implementation explicitly with the custom_objects param.
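A minimal sketch of both the export and the load steps described above, assuming the Keras model from the previous section (the export path is an assumption, and the exact save/load calls in the original notebook may differ):

```python
import tensorflow as tf
import tensorflow_hub as hub

export_path = "saved_model/flowers"  # assumed path

# Export the trained Keras model in the SavedModel format.
tf.saved_model.save(model, export_path)

# Load it back as a Keras model. The TF Hub layer is a custom layer,
# so its implementation must be passed via custom_objects.
loaded_model = tf.keras.models.load_model(
    export_path, custom_objects={"KerasLayer": hub.KerasLayer})
```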

Check model's prediction

Now we will take a batch of 32 images from the validation dataset and run the inference process on the loaded model. For data visualization, we will use the Pandas library.
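A minimal sketch of this step, assuming a validation ImageDataGenerator named val_generator and a list of the five class names named dataset_labels (both names are assumptions for illustration):

```python
import pandas as pd

# Take one batch of 32 validation images and run inference on it.
val_image_batch, val_label_batch = next(val_generator)
tf_model_predictions = loaded_model.predict(val_image_batch)

# Wrap the raw prediction matrix (32 rows x 5 labels) in a DataFrame.
tf_pred_dataframe = pd.DataFrame(tf_model_predictions)
tf_pred_dataframe.columns = dataset_labels
tf_pred_dataframe.head()
```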

Here is what we can see when we print the values via tf_pred_dataframe.head():

(Figure: prediction results represented as a Pandas DataFrame)

Each row here represents the prediction results for a separate image (our DataFrame has 32 rows). Each cell contains the label's confidence for this image. All values in a row sum up to 1 (because the final layer of our model uses the Softmax activation function).

We can also print those images and predictions, as sketched below.
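A minimal sketch of the plotting code, reusing the names assumed in the previous snippets:

```python
import numpy as np
import matplotlib.pyplot as plt

# Show the 32-image batch in a grid, titled with the predicted labels.
predicted_ids = np.argmax(tf_model_predictions, axis=-1)
plt.figure(figsize=(10, 9))
for n in range(32):
    plt.subplot(7, 5, n + 1)
    plt.imshow(val_image_batch[n])
    plt.title(dataset_labels[predicted_ids[n]])
    plt.axis("off")
plt.suptitle("Model predictions")
plt.show()
```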

The code from above will show the batch of images annotated with their predicted labels.

TensorFlow Lite models

Convert model to TensorFlow Lite

Now we will create two TensorFlow Lite models — non-quantized and quantized — based on the one that we created. Because of TensorFlow 2.0's eager execution, the model needs to be converted to a concrete function before the final conversion to TensorFlow Lite (more about it here).
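A minimal sketch of the conversion, producing both the standard and the quantized model (the TFLiteConverter calls shown here follow the TF 2.0 API of the time and may differ in newer releases):

```python
import tensorflow as tf

# Wrap the Keras model in a tf.function and get a concrete function
# with a fixed input signature.
run_model = tf.function(lambda x: loaded_model(x))
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([1, 224, 224, 3], tf.float32))

# Standard (non-quantized) TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
open("flowers.tflite", "wb").write(converter.convert())

# Quantized model with post-training quantization enabled.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
open("flowers_quant.tflite", "wb").write(converter.convert())
```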

As a result, we will get two files: flowers.tflite (the standard TensorFlow Lite model) and flowers_quant.tflite (the TensorFlow Lite model quantized with post-training quantization).

Run TFLite models

Now let's load the TFLite models into an Interpreter (tf.lite.Interpreter) representation, so we can run the inference process on them. By default, the interpreter can run the inference process on one image (input shape: 1x224x224x3). Before we run inference, we need to resize the input and output tensors to accept a batch of 32 images. Again, we put the data into a Pandas DataFrame.
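A minimal sketch of batch inference with the TFLite Interpreter, reusing val_image_batch and dataset_labels from the earlier snippets (tensor shapes follow the model above; the image batch is assumed to be float32):

```python
import pandas as pd
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="flowers.tflite")
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize input/output tensors from a single image to a batch of 32.
interpreter.resize_tensor_input(input_details[0]["index"], [32, 224, 224, 3])
interpreter.resize_tensor_input(output_details[0]["index"], [32, 5])
interpreter.allocate_tensors()

# Run inference on the whole batch and read the predictions back.
interpreter.set_tensor(input_details[0]["index"], val_image_batch)
interpreter.invoke()
tflite_model_predictions = interpreter.get_tensor(output_details[0]["index"])

tflite_pred_dataframe = pd.DataFrame(tflite_model_predictions)
tflite_pred_dataframe.columns = dataset_labels
```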

Here is what we can see for tflite_pred_dataframe.head():

(Figure: prediction results from the TFLite model represented as a Pandas DataFrame)

We will do exactly the same operations for the second model — flowers_quant.tflite — and preview its DataFrame the same way.

Results comparison

Now, what we can do is concatenate the DataFrames from the TF, TF Lite, and TF Lite quant models, to have an eye comparison between the tables.

Inspiration for this code was taken from StackOverflow (link to the answer).
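A minimal sketch of the comparison, assuming the quantized model's predictions live in a DataFrame named tflite_q_pred_dataframe (the highlighting function here is a simplified stand-in for the StackOverflow-inspired code, not a reproduction of it):

```python
import numpy as np
import pandas as pd

# Put the three prediction tables side by side under model-name keys.
combined = pd.concat(
    [tf_pred_dataframe, tflite_pred_dataframe, tflite_q_pred_dataframe],
    axis="columns", keys=["TF", "TFLite", "TFLite quant"])

# Highlight cells that differ from the TF baseline for the same label.
def highlight_diff(row):
    styles = []
    for model, label in row.index:
        differs = not np.isclose(row[(model, label)], row[("TF", label)])
        styles.append("background-color: yellow" if differs else "")
    return styles

combined.style.apply(highlight_diff, axis=1)
```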

As a result, we can see a DataFrame with highlighted rows that differ between the TF/TF Lite models. As we can see, in most cases the predictions differ between all the models, usually by small amounts. High-confidence predictions from the TensorFlow and TensorFlow Lite models are very close to each other (in some cases they are even identical). The quantized model stands out the most, but this is the cost of optimization (the model weighs 3–4 times less).

To make the prediction results even more readable, let's simplify the DataFrames to show only the highest-score prediction and the corresponding label. Now each DataFrame — TF, TFLite, and TFLite quant — shows only the label and the confidence for this label.
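A minimal sketch of this simplification for one of the DataFrames (the same applies to the other two):

```python
import pandas as pd

# Keep only the top-scoring label and its confidence for each image.
tf_top_dataframe = pd.DataFrame({
    "label": tf_pred_dataframe.idxmax(axis=1),
    "confidence": tf_pred_dataframe.max(axis=1),
})
tf_top_dataframe.head()
```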

Let's concatenate the DataFrames and highlight the differences between them. As you can see, despite the differences, the TFLite model usually points to the same label for an image (in our validation batch of images).

Differences in confidence are usually very small.

The quantized TF Lite model isn't as good here. There are big differences in some confidence scores, and in some cases this model also points to a different label.

Here is a side-by-side comparison of the TFLite and TFLite quant models for our image batch. Now, it's up to us to decide whether the model size reduction (3–4 times in our case) is worth it.

Next steps

In this blog post, we did a side-by-side comparison between TensorFlow, TensorFlow Lite, and quantized TensorFlow Lite models. We noticed small differences between TF and TFLite, and slightly bigger ones in TFLite quant.

But this isn’t everything we can check.

These models were checked in the same environment (a Colab or Jupyter notebook), but problems may also occur further along — in the mobile app implementation, e.g. in image processing or data transformation.

In future blog posts, we will look closely at what we can do to test TF Lite model implementation correctness directly on a mobile device.

Source code

The source code for this blog post is available on GitHub (the Colab notebook, and a mobile application in the future): https://github.com/frogermcs/TFLite-Tester

The notebook with the entire code presented in this post can be run here: https://colab.research.google.com/github/frogermcs/TFLite-Tester/blob/master/notebooks/Testing_TFLite_model.ipynb

Thanks for reading! Please share your feedback below.
