We will then calculate predictions for all of these images, take the average, and make that our final prediction.

```python
log_preds, y = learn.TTA()
probs = np.mean(np.exp(log_preds), 0)
accuracy_np(probs, y)
```

```
0.9233333333333333
```

Our final accuracy here is pretty good: 92%!

## Analyzing results

### Confusion matrix

The first common approach to analyzing the results is a confusion matrix.

```python
preds = np.argmax(probs, axis=1)
probs = probs[:, 1]

from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y, preds)
plot_confusion_matrix(cm, data.classes)
```

*Figure: Confusion matrix*

From this plot we can see how many roses/tulips were incorrectly classified.

Let's look at our pictures again:

```python
plot_val_with_title(most_by_correct(0, False), "Most incorrect roses")
plot_val_with_title(most_by_correct(1, False), "Most incorrect tulips")
```

Finally, we retrained our model with:

- SGDR
- No precomputed activations
- Unfreezing all layers
- TTA

The result is a lower training loss and more than 90% accuracy.

## Review

Advice from fast.ai on how to build the best image classifier:

1. Enable data augmentation, and `precompute=True`.
2. Use `lr_find()` to find the highest learning rate where the loss is still clearly improving.
3. Train the last layer from precomputed activations for 1–2 epochs.
4. Train the last layer with data augmentation (i.e. `precompute=False`) for 2–3 epochs with `cycle_len=1`.
5. Unfreeze all layers.
6. Set earlier layers to a 3x–10x lower learning rate than the next higher layer.
7. Use `lr_find()` again.
8. Train the full network with `cycle_mult=2` until over-fitting.
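The averaging that `learn.TTA()` plus `accuracy_np` performs can be sketched in plain NumPy. The `tta_accuracy` helper below is hypothetical (it is not part of fastai); it assumes the model returns log-probabilities for each augmented copy of the validation set, as in the snippet above.

```python
import numpy as np

def tta_accuracy(log_preds, y):
    """Accuracy after test-time augmentation (hypothetical helper).

    log_preds: shape (n_augs, n_samples, n_classes), log-probabilities
               predicted for each augmented copy of the validation set.
    y:         shape (n_samples,), true class indices.
    """
    # Convert back to probabilities and average over the augmentations.
    probs = np.mean(np.exp(log_preds), axis=0)
    # The final prediction is the most probable class per sample.
    preds = np.argmax(probs, axis=1)
    return (preds == y).mean()

# Toy example: 2 augmentations, 3 samples, 2 classes.
log_preds = np.log(np.array([
    [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]],
    [[0.7, 0.3], [0.4, 0.6], [0.3, 0.7]],
]))
y = np.array([0, 1, 0])
print(tta_accuracy(log_preds, y))  # 2 of 3 predictions correct
```

Note how averaging can flip a prediction: the third sample is classified correctly by the first augmentation alone, but the averaged probabilities favor the other class.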