Stand Up for Best Practices:

My coworkers told me to just tweet it and let it go, but I wanted to stand up for good modeling practices.

I thought reason and best practices would prevail, so I started a 6-month process of writing up my results and sharing them with Nature.

Upon sharing my results, I received a note from Nature in January 2019 saying that, despite serious concerns about data leakage and model selection that invalidate their experiment, they saw no need to correct the errors, because “Devries et al. are concerned primarily with using machine learning as [a] tool to extract insight into the natural world, and not with details of the algorithm design”.
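To make the data leakage concern concrete, here is a minimal, hypothetical sketch of how it can inflate test scores. This is not the authors' actual pipeline; the events, features, and labels below are invented for illustration. When correlated samples (for example, grid cells from the same mainshock) are split at random, a model can score well on the test set simply by recognizing events it has already seen in training, and a group-aware split makes that advantage disappear.

    # Hypothetical illustration of leakage from event-correlated samples.
    # Cells from the same "event" are noisy copies of an event fingerprint;
    # labels are assigned per event with NO real relationship to the features.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split, GroupShuffleSplit

    rng = np.random.default_rng(0)
    n_events, cells_per_event, n_features = 300, 20, 5
    groups = np.repeat(np.arange(n_events), cells_per_event)

    fingerprints = rng.normal(size=(n_events, n_features))
    X = fingerprints[groups] + rng.normal(scale=0.3, size=(groups.size, n_features))
    y = rng.integers(0, 2, size=n_events)[groups]  # per-event labels, pure noise

    model = KNeighborsClassifier(n_neighbors=5)

    # Leaky evaluation: a random split puts cells from the same event on
    # both sides, so the model can match test cells to training neighbors.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    leaky = roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])

    # Honest evaluation: whole events are held out together.
    tr, te = next(GroupShuffleSplit(test_size=0.3, random_state=0).split(X, y, groups=groups))
    honest = roc_auc_score(y[te], model.fit(X[tr], y[tr]).predict_proba(X[te])[:, 1])

    print(f"AUC with leaky random split: {leaky:.2f}")   # far above chance
    print(f"AUC with grouped split:      {honest:.2f}")  # ~0.50

Holding out whole events (here via scikit-learn's GroupShuffleSplit) is the standard remedy when observations cluster by event or region.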

The authors provided a much harsher response.

You can read the entire exchange on my GitHub.

It’s not enough to say that I was disappointed.

This was a major journal (it’s Nature!) that bought into the AI hype and published a paper despite its flawed methods.

Then, just this week, I ran across articles by Arnaud Mignan and Marco Broccardo on shortcomings that they found in the aftershocks article.

Here are two more data scientists with expertise in earthquake analysis who also noticed flaws in the paper.

I have also placed my analysis and reproducible code on GitHub.

Go run the analysis yourself and see the issue.

Standing Up for Predictive Modeling Methods:

I want to make it clear: my goal is not to villainize the authors of the aftershocks paper.

I don’t believe that they were malicious, and I think they would argue that their goal was simply to show how machine learning could be applied to aftershocks.

Devries is an accomplished earthquake scientist who wanted to apply the latest methods to her field of study and found exciting results.

But here’s the problem: their insights and results were based on fundamentally flawed methods.

It’s not enough to say, “This isn’t a machine learning paper, it’s an earthquake paper.” If you use predictive modeling, then the quality of your results is determined by the quality of your modeling.

Your work becomes data science work, and you are on the hook for your scientific rigor.

There is a huge appetite for papers that use the latest technologies and approaches.

It becomes very difficult to push back on these papers.

But if we allow papers or projects with fundamental issues to advance, it hurts all of us.

It undermines the field of predictive modeling.

Please push back on bad data science.

Report bad findings to the journals that published them.

And if they don’t take action, go to Twitter, post about it, share your results, and make noise.

This type of collective action worked to raise awareness of p-values and combat the epidemic of p-hacking.

We need good machine learning practices if we want our field to continue to grow and maintain credibility.

Acknowledgments: I want to thank all the great data scientists at DataRobot who collaborated with me and supported me this past year, including Lukas Innig, Amanda Schierz, Jett Oristaglio, Thomas Stearns, and Taylor Larkin.
