Accuracy Fallacy: The Media’s Coverage of AI Is Bogus

Spoiler: They can’t.

However, in the book “The Bestseller Code: Anatomy of the Blockbuster Novel,” the authors claim they’ve “written an algorithm that can tell whether a manuscript will hit the New York Times bestseller list with 80% accuracy,” as The Guardian (U.K.) put it.

The Wall Street Journal and The Independent (U.K.) also reported this level of accuracy.

Yet the authors conveniently established this accuracy level over a manufactured test set of books that was half bestsellers and half non-bestsellers.

Since in reality only one in 200 of the books included in this study was destined to become a bestseller, it turns out that a manuscript predicted by the model as a “future bestseller” actually has less than a 2% probability of becoming one.
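
To make the arithmetic concrete, here is a minimal sketch (mine, not the book’s) that assumes 80% accuracy on the balanced test set translates to roughly 80% sensitivity and 80% specificity, then applies the real-world base rate of one bestseller per 200 books:

```python
# Back-of-the-envelope check of the "less than 2%" figure.
# Assumption (not from the book): 80% accuracy on a half-and-half test set
# corresponds to roughly 80% sensitivity and 80% specificity.
base_rate = 1 / 200    # real-world share of manuscripts that become bestsellers
sensitivity = 0.80     # fraction of true bestsellers the model flags
specificity = 0.80     # fraction of non-bestsellers the model correctly rejects

flagged_bestsellers = sensitivity * base_rate                   # true positives
flagged_non_bestsellers = (1 - specificity) * (1 - base_rate)   # false positives

# Probability that a manuscript flagged as a "future bestseller" actually becomes one
precision = flagged_bestsellers / (flagged_bestsellers + flagged_non_bestsellers)
print(f"P(bestseller | flagged) = {precision:.2%}")   # 1.97%, just under 2%
```

The 80% figure describes performance on the balanced test set; the sub-2% figure describes what a writer submitting a flagged manuscript should actually expect.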

And many more.

The accuracy fallacy pervades, with researchers perpetrating it in reports of spotting legal issues in non-disclosure agreements, IBM’s claim that it can predict which employees will quit with 95% accuracy, classifying which news headlines are “clickbait,” detecting fraudulent dating profile scams, spotting cyberbullies, predicting the need for first responders after an earthquake, detecting diseases in banana crops, distinguishing high- and low-quality embryos for in vitro fertilization, predicting heart attacks, predicting heart issues by eye scan, detecting anxiety and depression in children, diagnosing brain tumors from medical images, detecting brain tumors with a new blood test, predicting the development of Alzheimer’s, and more.

For a machine learning researcher seeking publicity, the accuracy fallacy scheme features some real advantages: excitement from the crowds and yet, perhaps, some plausible deniability of the intent to mislead.

After all, if the research process is ultimately clear to an expert who reads the technical report in full, that expert is unlikely to complain that the word “accuracy” is used loosely on the first page but then technically clarified on later pages – especially since “accuracy” in non-technical contexts can more vaguely denote “degree of correctness.”

But this crafty misuse of the word “accuracy” cannot stand.

The deniability isn’t really plausible.

In the field of machine learning, accuracy unambiguously means “how often the predictive model is correct – the percent of cases it gets right in its intended real-world usage.” When a researcher uses the word to mean anything else, they’re at best adopting willful ignorance and at worst consciously laying a trap to ensnare the media.

Frankly, the evidence points toward the latter verdict.

Researchers dramatically misinform the public by using “accuracy” to mean AUC – or, similarly, by reporting accuracy over an artificially balanced test bed that’s half positive examples and half negative – without spelling out the severe limits of that performance measure right up front.
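
As an illustration only (the data below are simulated, not taken from any of the studies above), the following sketch scores a hypothetical model on a rare outcome: its AUC comes out near 0.80, the kind of number that ends up reported as “80% accuracy,” yet only a small fraction of the cases it flags are actually positive:

```python
# Illustrative simulation: a hypothetical scoring model for a rare outcome,
# showing that an AUC near 0.80 can coexist with very low real-world precision.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

n = 200_000
y = rng.random(n) < 0.005                          # 0.5% prevalence: 1 positive in 200
scores = rng.normal(np.where(y, 1.2, 0.0), 1.0)    # positives score higher on average

print(f"AUC: {roc_auc_score(y, scores):.2f}")      # about 0.80, regardless of prevalence

threshold = np.quantile(scores, 0.80)              # flag the top-scoring 20% of cases
flagged = scores >= threshold
precision = (y & flagged).sum() / flagged.sum()
print(f"Share of flagged cases that are true positives: {precision:.1%}")  # only 1-2%
```

AUC measures how well the model ranks cases and is unaffected by how rare the positive class is, which is exactly why it says so little about how often a flagged case turns out to be truly positive.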

The accuracy fallacy plays an integral part in the harmful hyping of “AI” in general.

By conveying unrealistic levels of performance, researchers exploit – and simultaneously feed into – the population’s fear of awesome, yet fictional, powers held by machine learning (commonly calling it artificial intelligence instead).

Making matters worse, machine learning is further oversold because artificial intelligence is “over-souled” by proselytizers – they credit it with its own volition and humanlike intelligence (thanks to Eric King of “The Modeling Agency” for that pun).

Some things are too hard to reliably predict.

“Gaydar” as a popular notion refers to an unattainable form of human clairvoyance (especially when applied to still images).

We shouldn’t expect machine learning to attain supernatural abilities either.

For important, noteworthy classification problems, predictive models just can’t “tell” with reliability.

This challenge goes with the territory, since important things happen more rarely and are more difficult to predict, including bestselling books, criminality, psychosis, and death.

The responsibility falls first on the researcher to communicate unambiguously and without misleading journalists, and second on the journalists to make sure they actually understand the predictive proficiency about which they’re reporting.

But in lieu of that, unfortunately, readers at large must hone a certain vigilance: Be wary about claims of “high accuracy” in machine learning.

If it sounds too good to be true, it probably is.

A shorter version of this article was originally published by Scientific American.
