MRNet: Deep-learning-assisted diagnosis for knee MRI scans

And a Kaggle-like competition hosted by Stanford ML Group

Misa Ogura
Jun 26

Last week I visited Estepona, a town in southern Spain, for a week-long coding retreat.

I worked on reproducing the MRNet paper using PyTorch from scratch, as part of participating in the MRNet Competition.

I have open-sourced the code so you can use it as a starting point to participate in the competition too.

You can access all the code and Jupyter notebooks from the MRNet GitHub repo.

Let’s help advance the safe use of AI in medical imaging!

Background

In the paper Deep-learning-assisted diagnosis for knee magnetic resonance imaging: Development and retrospective validation of MRNet, the Stanford ML Group developed an algorithm to predict abnormalities in knee MRI exams, and measured the clinical utility of providing the algorithm’s predictions to radiologists and surgeons during interpretation.

They developed a deep learning model for detecting three conditions: general abnormalities, anterior cruciate ligament (ACL) tears, and meniscal tears.

MRNet Dataset description

The dataset (~5.7 GB) was released along with the publication of the paper.

You can download it by agreeing to the Research Use Agreement and submitting your details on the MRNet Competition page.

It consists of 1,370 knee MRI exams, containing:

- 1,104 (80.6%) abnormal exams
- 319 (23.3%) ACL tears
- 508 (37.1%) meniscal tears

The dataset is split into:

- training set (1,130 exams, 1,088 patients)
- validation set (120 exams, 111 patients), called the tuning set in the paper
- hidden test set (120 exams, 113 patients), called the validation set in the paper

The hidden test set is not publicly available and is used for scoring models submitted for the competition.
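To make the dataset description concrete, here is a minimal sketch of loading one exam. The folder layout (`<split>/<plane>/<case_id>.npy`) and the 256×256 slice size are my assumptions for illustration; the post itself doesn't specify the on-disk format.

```python
import os
import tempfile

import numpy as np

# Hypothetical layout: <split>/<plane>/<case_id>.npy, where each file holds
# a (num_slices, height, width) stack of 2D slices for one imaging plane.
# This layout is an assumption for illustration, not taken from the post.
def load_exam(root: str, split: str, plane: str, case_id: str) -> np.ndarray:
    return np.load(os.path.join(root, split, plane, f"{case_id}.npy"))

# Simulate one exam on disk: the number of slices varies between exams,
# so only the first dimension of the array differs from file to file.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "train", "axial"))
np.save(os.path.join(root, "train", "axial", "0000.npy"),
        np.zeros((32, 256, 256), dtype=np.float32))

vol = load_exam(root, "train", "axial", "0000")
print(vol.shape)  # (32, 256, 256)
```

Because slice counts vary per exam, any batching strategy has to handle variable-depth volumes, which is one of the first things the EDA below surfaces.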

N.B. Stratified random sampling was used to ensure that at least 50 positive examples of abnormalities, ACL tears and meniscal tears were present in each set.

All exams from each patient were put in the same split.

In the paper, an external validation was performed on a publicly available dataset.
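The patient-level constraint above (all exams from one patient stay in the same split) can be sketched as a small group-aware split. The helper below is hypothetical, since the authors' exact procedure isn't given in this post; it only illustrates the key idea that splitting happens over patients, not exams.

```python
import random

# Hypothetical sketch of a patient-level split: every exam belonging to a
# patient lands in the same set, so no patient leaks across train/validation.
# This is an illustration of the idea, not the authors' actual procedure.
def split_by_patient(exam_to_patient, val_fraction=0.1, seed=42):
    patients = sorted(set(exam_to_patient.values()))
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_val = max(1, int(len(patients) * val_fraction))
    val_patients = set(patients[:n_val])
    train, val = [], []
    for exam, patient in exam_to_patient.items():
        (val if patient in val_patients else train).append(exam)
    return train, val

# Toy example: patient "p1" has two exams; both must end up together.
exams = {"e1": "p1", "e2": "p1", "e3": "p2", "e4": "p3"}
train, val = split_by_patient(exams, val_fraction=0.34)
```

Splitting over patients rather than exams matters because two exams of the same knee are highly correlated; an exam-level split would quietly inflate validation scores.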

Exploratory data analysis (EDA)

It’s crucial to gain domain knowledge by exploring and familiarising yourself with the data before attempting to train a model.

For this reason I performed an EDA on the dataset provided.

You can access the publicly hosted version of the notebook here.
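The class-balance figures quoted earlier (80.6% abnormal, 23.3% ACL, 37.1% meniscal) are exactly the kind of summary a first EDA pass produces from the label files. Below is a minimal sketch, assuming two-column (case_id, label) CSVs per task; that format is my guess for illustration, not something stated in the post.

```python
import csv
import io

# Hypothetical label file: the two-column (case_id, label) CSV format is an
# assumption about the dataset, illustrated here with inline toy data.
fake_csv = "0000,1\n0001,0\n0002,1\n0003,1\n"

def positive_rate(csv_text: str) -> float:
    """Fraction of exams labelled positive in a (case_id, label) CSV."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    positives = sum(int(label) for _, label in rows)
    return positives / len(rows)

print(positive_rate(fake_csv))  # 0.75
```

Knowing the positive rate per task up front also tells you whether the training loss needs class weighting.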

Code implementation

All the code needed to train and evaluate models is published in the MRNet GitHub repo.

I highly recommend using a GPU for training.

Model submission

Once you have your model, you can submit it for an official evaluation by following the tutorial provided by the authors.

According to them, it takes around two weeks for the score to appear on the leaderboard. I’m still waiting for mine to appear!

Feedback

The code is in active development under the MIT License, as I continue to improve the code and model.

Feel free to clone it, fork it, and use it. Please let me know what you think if you do! I would really appreciate your constructive comments, feedback and suggestions. Thanks, and happy coding!

