Were 21% of New York City residents really infected with the novel coronavirus?

 The moment I saw this Business Insider headline from April 23, 2020, I knew it would be a perfect case study for a lesson about statistical bias.

“A statewide antibody study estimates that 21% of New York City residents have had the coronavirus, Cuomo says.”

I couldn’t have asked for a better one.

COVID-19 is no laughing matter, and as a New York City resident who spent her birthday this year battling pneumonia that almost killed her, I’m painfully aware of that.

However, the creative ways people find to misinterpret data are an eternal source of hilarity for statisticians like myself; I'll take my laughs where I can get them these days.


Someone is about to get criticized… but who? Grab your schadenfreudean popcorn while I crack my knuckles.

Ready? Let’s begin.

What does bias mean? It depends on where you hear the word.

I’ve made a tongue-in-cheek laundry list of various bias usages for your amusement, but in this article, we’ll focus on the statistical species of bias.

In statistics, bias is all about systematic lopsidedness.

If lopsided results are misleading, that doesn’t necessarily mean that they were born out of the intent to mislead.

Perhaps they were, perhaps they weren’t.

Statistical bias can come about through negligence, ignorance, expediency, or shenanigans.

Let's talk about conclusions that are off the mark, shall we?

Statisticians may use the word bias to refer to a few different (overlapping) concepts, and we'll look at our little case study from each of these perspectives.


In statistics, bias is the difference between the expected value of an estimator and its estimand.

That’s awfully technical, so allow me to translate.

Bias refers to results that are systematically off the mark.

Think archery where your bow is sighted incorrectly.

High bias doesn't mean you're shooting all over the place (that's high variance), but it can cause a perfect archer to hit below the bullseye every single time.
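If you like code better than archery, here's a minimal sketch in Python (numbers invented for illustration) of an estimator that's systematically off the mark: the plug-in variance estimator that divides by n lands low on average, while the n-1 version doesn't.

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0            # the estimand: population variance (std dev = 2)
n, trials = 5, 100_000    # tiny samples make the bias easy to see

biased, unbiased = [], []
for _ in range(trials):
    sample = rng.normal(loc=0.0, scale=2.0, size=n)
    biased.append(sample.var(ddof=0))    # divides by n   -> systematically low
    unbiased.append(sample.var(ddof=1))  # divides by n-1 -> unbiased

# Bias = E[estimator] - estimand, approximated by averaging many repetitions
print("biased estimator averages:  ", round(np.mean(biased), 2))    # ~3.2, off the mark
print("unbiased estimator averages:", round(np.mean(unbiased), 2))  # ~4.0, on target
```

Both estimators wobble from sample to sample (that's variance); only the first one is aimed at the wrong spot.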

The headline says the study estimates that 21% of New York City residents have had the coronavirus.

My guess is that this number is biased upwards.

21%? I suspect the real number is lower.

Why? I smell the pungent odor of randomization issues with how the data were obtained, which brings me to statistical subdefinition #2.

  A special way to trigger results that are systematically off the mark is to collect your data in a problematic manner.

For statisticians who love having things to be grumpy about, selection bias is a cherished frenemy.

It visits so often!

Selection bias occurs when different members of your population of interest have different probabilities of arriving in your sample.

In other words, you’re making conclusions from your sample as if it were drawn randomly while it was drawn, er, “randomly” instead.


In that case, your sample isn’t representative of your population… which makes your conclusions untrustworthy.

If your population of interest is all New York City residents, then you don't have a simple random sample (SRS) unless every single New York City resident has an equal probability of being included.

Is that requirement met by the NY antibody study? Definitely not.

The study did not represent everyone equally.

Before I even opened the article, I was thinking, “Yeah, right.

What clever thing did they do to collect data from people who stay indoors?” As it turns out, no clever thing.

What’s the probability the study measured someone who is fully self-quarantined? Zero.

How many NYC residents are keeping themselves entirely to themselves? We don’t know.

Undercoverage bias: When your approach can’t cover the whole thing, so some uncovered parts are left out.


This type of selection bias is called undercoverage bias.

Your sample cannot cover your population if some parts have no chance of being sampled.
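To make the undercoverage problem concrete, here's a toy back-of-the-envelope calculation (Python; every number below is a hypothetical assumption, not a fact from the study): if some residents have zero chance of being sampled, the study can only measure prevalence among the people it can reach.

```python
# Hypothetical illustration of undercoverage -- none of these numbers come from the study.
share_fully_home = 0.30   # assumed fraction of residents with zero chance of being sampled
prev_outside     = 0.21   # stand-in for what a store-based study could at best measure
prev_fully_home  = 0.02   # assumed (much lower) prevalence among people who never go out

# What the undercovered sample reports vs. the citywide quantity we care about
reported      = prev_outside
true_citywide = (1 - share_fully_home) * prev_outside + share_fully_home * prev_fully_home

print(f"study reports:  {reported:.1%}")        # 21.0%
print(f"citywide truth: {true_citywide:.1%}")   # 15.3% under these made-up assumptions
```

Swap in different made-up shares and the gap changes; the point is that the sample simply cannot speak for the people it never had a chance to reach.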

One pragmatic quick fix for undercoverage bias is to settle for a less ambitious population definition.

Instead of trying to make inferences about "all NYC residents," you could choose to talk about "all NYC residents who go outside" instead. Problem solved!

Not quite.

It gets worse.

What if we have more interesting sampling biases? What if the nonzero probabilities are systematically messed up too? What if there's something special that made some outside-goers more likely to be tested than others?

New Yorkers shopping for pandemic groceries.

Let’s see how the data were gathered.

The study tested people "at grocery and big-box stores."

If you'd like to increase your probability of exposure, where do you go? To places with a higher density of people, like grocery and big-box stores.

Where was the study done? Yup.

People who take bigger risks with the virus had a higher probability of winding up in the antibody study.

How about if you really, really, really want to get the virus? You might go to grocery and big-box stores frequently… more frequently than someone who’s trying to reduce their probability of infection.

Of these two kinds of people, which kind of person would be more likely to have COVID-19 antibodies? Which do you think would be more likely to be in the right place at the right time to participate in the study? Hello, selection bias!

Because there's no difference between a person who thinks this is a good idea and everyone else.

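Here's a small simulation sketch of that mechanism (Python; all the rates are invented for illustration): when the same behavior drives both exposure and the chance of being intercepted by a store-based survey, the naive sample prevalence overshoots the truth.

```python
import numpy as np

rng = np.random.default_rng(42)
n_people = 1_000_000

# Hypothetical behavior: weekly store visits per person (many go rarely, a few go a lot)
visits = rng.poisson(lam=2.0, size=n_people)

# Hypothetical exposure model: each visit carries some chance of infection
p_infected = 1 - (1 - 0.03) ** visits            # more visits -> more likely infected
infected = rng.random(n_people) < p_infected

# A store-based survey intercepts people roughly in proportion to how often they show up
sample_prob = visits / visits.sum()
sampled_idx = rng.choice(n_people, size=3_000, replace=False, p=sample_prob)

print(f"true population prevalence: {infected.mean():.1%}")
print(f"store-sample 'prevalence':  {infected[sampled_idx].mean():.1%}")  # biased upward
```

Under these made-up settings the store sample reads a few points higher than the population truth, and nothing in the sample itself tells you by how much.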

In fact, the design of this study is a bingo sheet for the various breeds of selection bias — sampling bias, undercoverage bias, self-selection bias, convenience bias, volunteer bias, and others.

If you’d like me to write a follow-up article that takes you on a tour of those (plus tips for how to battle them), retweets are my favorite motivation.

Biased archers have it easy — if you keep hitting the target above the center, at least you can see it and make adjustments.

Researchers with selection bias aren’t so lucky.

Selection bias means all your results are wrong, and you don’t know how wrong.


Does that scare you? It should scare you! All I can do is guess that the results are biased upwards by the sampling procedure, but there’s no way to know what the real number is.

But wait, there’s more! It gets even worse.

  What if unequal representation isn’t the only thing messing with our ability to make sane conclusions? There’s a whole cornucopia of other biases that might impair your statistical conclusions.

For example, information bias occurs when measurements are systematically incorrect. What if the antibody tests themselves have problems that the researchers are unaware of? What if they only detect antibodies above a strict threshold to avoid false alarms? Then those tests will miss real cases and bias the estimate downward.
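For a feel for how test error alone distorts prevalence, here's the standard apparent-prevalence arithmetic (the sensitivity and specificity values below are illustrative guesses, not properties of the actual test):

```python
# Apparent prevalence from an imperfect test:
#   apparent = sensitivity * true + (1 - specificity) * (1 - true)
# The numbers below are illustrative guesses, not measured properties of the NY antibody test.

def apparent_prevalence(true_prev, sensitivity, specificity):
    return sensitivity * true_prev + (1 - specificity) * (1 - true_prev)

true_prev = 0.15
print(apparent_prevalence(true_prev, sensitivity=0.85, specificity=0.99))  # ~0.136: strict test reads low
print(apparent_prevalence(true_prev, sensitivity=0.99, specificity=0.90))  # ~0.234: leaky test reads high
```

The same underlying 15% can show up as 14% or 23% depending on how the test trades off false alarms against misses.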


If information bias and selection bias pull invisibly in opposite directions, is the estimate too high or too low? Impossible to know.

What do we know for sure? Some people at grocery and big-box stores got an exciting readout from something called an antibody test.

What do we know about NYC residents' actual exposure rate? *shrug*

Among the many other ways that humans might use the word "bias" are several interdisciplinary ones that statisticians find especially relevant to our favorite way of making conversation: pointing out that someone is wrong about something.

I’ll only mention confirmation bias and reporting bias here.

To be fair to Business Insider, I think they did a pretty good job of reporting.

They even called the results “preliminary” and mentioned some of the same sampling issues I talked about.

Kudos! These are the same properly cautious noises originally made by the governor of NY and the team who ran the study.

I have no beef with them either.

Instead, my complaint is with the broken telephone game that the rest of the internet is playing.

This sloth didn’t read the article.

Just like some of the folks who will comment after only looking at the title.

We see you.


Some people won’t take the time to read the whole article.

 Fine, I get it, you’re busy.

Alas, instead of applying appropriate lol-did-not-read humility, some folks treat that title as if it’s the whole story.

When they share what they’ve “learned” with others, they’ll be creating a textbook example of reporting bias.

Reporting bias occurs when people come to a conclusion other than the one they would have made if given all the information their source had.

Whenever people transmit only the most extreme or “juicy” bits of information, and leave behind the boring bits that weaken their conclusions, expect reporting bias.

You'll find it wherever people have an incentive to pass along only part of the story. Whatever the intent behind reporting bias, its presence decapitates the validity of your conclusions.

Does everyone who’s guilty of it know that they’re doing it? Not if they’ve fallen prey to confirmation bias.

Confirmation bias tampers with your ability to perceive/notice/remember evidence that disagrees with your opinion.

Bringing up this cognitive bias moves us from the realm of statistics to the jungle of psychology, so I’ll be brief.

(Related: Overcoming confirmation bias during COVID-19.)

Confirmation bias is a problem of perception, attention, and memory.

To put it in the simplest terms, whether or not a piece of evidence “sticks” for you is influenced by the opinion you have beforehand.

If you’re not careful, you’ll mostly notice and remember information that confirms what you already believe.

If you can’t see all sides of a story, you might not even know you’ve only reported your favorite, infecting the people who trust you with falsehoods.

  I’m guessing there are plenty of folks who will wind up concluding unsupported nonsense thanks to this NY antibodies study.

As usual, the least data-literate readers will “learn” the most from it.

Does this mean that the study is worthless? No, but it’s only as good as the assumptions you’ll make about it.

Since there’s very little that we know for sure from its data, the only way to make inferences beyond the facts is to bridge the gap with assumptions.

That’s all statistics is—assumptions, not magic.
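As a toy illustration of how much the conclusion hinges on those assumptions, here's a sketch (Python; every input is a made-up assumption rather than a known fact about the study) that takes the same 21% raw readout and "corrects" it under two different assumption sets, landing in two very different places.

```python
# Two hypothetical assumption sets applied to the same 21% raw readout.
# None of these inputs are known facts about the NY study -- that's the point.

def corrected_estimate(raw, sensitivity, specificity, selection_multiplier):
    """Rogan-Gladen style correction for test error, followed by a crude
    scaling for how much store-goers over-represent exposure."""
    test_adjusted = (raw + specificity - 1) / (sensitivity + specificity - 1)
    return test_adjusted / selection_multiplier

raw = 0.21
# "Great test, heavy selection effect" assumptions -> roughly 11%
print(corrected_estimate(raw, sensitivity=0.95, specificity=0.999, selection_multiplier=2.0))
# "Test misses many cases, negligible selection effect" assumptions -> roughly 35%
print(corrected_estimate(raw, sensitivity=0.60, specificity=0.999, selection_multiplier=1.0))
```

Same data, wildly different answers; the only thing that changed is which assumptions you were willing to make.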


Unfortunately, we’re not all equally qualified to make good assumptions that lead to useful conclusions.

For example, while I am a statistician with plenty of real-world data collection experience, I’m not an expert in antibody tests, so you shouldn’t trust me to make wise assumptions about their accuracy.

Excellent! I don’t trust me either, so I’ll end up learning nothing about the virus exposure rate of NYC.

The study is worthless in my hands.


I can suspect whatever I like about selection bias causing an overestimate, but all I know is that the results are probably wrong, and we don’t know how wrong.

If you tell your friends that I said the number is below 21%, you've just given us a prime example of reporting bias.

But when experts who have been studying viruses their whole lives team up with medical professionals and psychologists who are well-versed in the behavior of New Yorkers… and join forces with those who know all the practical details about what actually happened during the development and deployment of those antibody tests to grocery stores, well, perhaps those folks are sitting pretty to make the assumptions that unlock the nutritional goodness of the tasty data collected.


In their competent hands, the study might be very valuable indeed.


Perhaps the rest of us should be quiet and let the grown-ups get on with their jobs.

 Original.

Reposted with permission.
