Gaining insights on transfer learning with FlashTorch

If you’ve done the maths already… I would be better off randomly guessing it myself (randomly picking one of 102 classes would be right about 1% of the time).

Intuitively, this perhaps makes sense.

There are only a handful of flower classes included in the original ImageNet dataset, so it’s not too difficult to imagine that asking the model to identify 102 species of flowers is a push.

Intuition is nice, but I want to make this concrete before moving on to training.

Let’s use FlashTorch to create saliency maps and visualise what the network is (not) seeing.

We’re going to use this image of foxgloves as an example.
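To make this concrete, here is a minimal sketch of how such a saliency map can be produced with FlashTorch’s Backprop class. The choice of DenseNet-201, the image path and the target class index are placeholders of mine; substitute whatever matches your setup:

```python
import torchvision.models as models
from flashtorch.saliency import Backprop
from flashtorch.utils import load_image, apply_transforms

# Assumption: an ImageNet-pretrained DenseNet-201; any torchvision CNN works.
model = models.densenet201(pretrained=True)
backprop = Backprop(model)

# Load the foxglove photo and apply the standard ImageNet preprocessing
# (resize, centre-crop, normalise) so it matches what the model expects.
image = load_image('/path/to/foxgloves.jpg')  # hypothetical path
input_ = apply_transforms(image)

target_class = 24  # placeholder index - use the class relevant to your task

# Plots the input image, the gradient-based saliency map, and an overlay.
backprop.visualize(input_, target_class, guided=True)
```

Setting guided=True uses guided backpropagation, which tends to produce visually cleaner saliency maps than vanilla gradients.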

What we can appreciate here is that the network, without additional training, is paying attention to the shape of flower cups.

But there are many flowers with similar shape (think bluebells, for instance).

For us humans, it might be obvious (even if we didn’t know the name of the species) that what makes this flower unique is the mottled pattern inside its flower cups.

However, the network currently doesn’t know where to pay attention, apart from the general shape of the flower, because it never really needed to in its old task (ImageNet classification).

Now that we have an insight into why the network is doing poorly, I feel ready to train it.
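The exact training recipe isn’t spelled out here, so below is a hedged sketch of a typical transfer-learning setup for a 102-class flower task: freeze the pretrained feature extractor and train a fresh classifier head. The backbone, optimiser and hyperparameters are my assumptions, not the precise configuration behind the result that follows:

```python
import torch
import torch.nn as nn
import torchvision.models as models

model = models.densenet201(pretrained=True)

# Freeze the convolutional feature extractor learnt on ImageNet...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the classifier head with a fresh one for 102 flower classes.
model.classifier = nn.Linear(model.classifier.in_features, 102)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)

# Dummy batch to sanity-check one optimisation step; in practice you would
# loop over a DataLoader for the 102 Category Flower dataset.
inputs = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 102, (4,))

optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()
```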

Eventually, after trial and error, the trained model managed to achieve 98.7% test accuracy.

Which is great! … but can we explain why? What is it that the network is seeing now that it wasn’t seeing before?

Pretty different, right? The network has learnt to pay less attention to the shape of the flower, and to focus intensely on those mottled patterns :)

Showing what neural nets have learnt is useful.
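For reference, this comparison can be reproduced by pointing FlashTorch at the fine-tuned model, with the same call as before, reusing the names from the earlier sketches:

```python
# Saliency map from the fine-tuned model, for the same foxglove input.
trained_backprop = Backprop(model)
trained_backprop.visualize(input_, target_class, guided=True)
```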

Taking it to another level and explaining the process of how neural nets learn is another powerful application of feature visualisation techniques.

Step forward (not away!) from accuracy

With feature visualisation techniques, not only can we obtain a better understanding of what neural networks perceive about objects, but we are also better equipped to:

- Diagnose what the network gets wrong and why
- Spot and correct biases in algorithms
- Step forward from only looking at accuracy
- Understand why the network behaves the way it does
- Elucidate mechanisms of how neural nets learn

Use FlashTorch today!

If you have projects which utilise CNNs in PyTorch, FlashTorch can help you make your projects more interpretable and explainable.
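FlashTorch is published on PyPI, so you can install it with pip install flashtorch.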

Please let me know what you think if you use it! I would really appreciate your constructive comments, feedback and suggestions. Thanks, and happy coding!
