Data Divisions: Projections, Uncertainty, And Unknowns, Part 2

by Benjamin Radford | Apr 23, 2020 | Health and Medicine, Investigation, Media Appearances, Media Literacy, News, Research, Science

My article examines uncertainties in COVID-19 data, from infection rates to death rates. While some complain that pandemic predictions have been exaggerated for social or political gain, that’s not necessarily true; journalism has always exaggerated dangers and highlighted dire predictions. But models are only as good as the data that go into them, and collecting valid data on disease is inherently difficult. People act as if they have solid data underlying their opinions but fail to recognize that we don’t have enough information to reach a valid conclusion…

You can read Part 1 here.


Certainty and the Unknown Knowns

The fact that our knowledge is incomplete doesn’t mean that we don’t know anything about the virus; quite the contrary, we have a pretty good handle on the basics, including how it spreads, what it does to the body, and how the average person can minimize their risk.

Humans crave certainty and binary answers, but science can’t offer them. The truth is that we simply don’t know what will happen or how bad it will get. For many aspects of COVID-19, we don’t have enough information to make accurate predictions. In a New York Times interview, one victim of the disease reflected on the measures being taken to stop the spread of the disease: “We could look back at this time in four months and say, ‘We did the right thing’—or we could say, ‘That was silly … or we might never know.’”

There are simply too many variables, too many factors involved. Even hindsight won’t be 20/20 but will instead be seen by many through a partisan prism. We can never know alternative history or what would have happened; it’s like the concern over the “Y2K bug” two decades ago. Was it all over nothing? We don’t know, because steps were taken to address the problem.

But uncertainty has been largely ignored by pundits and social media “experts” alike, who routinely discuss and debate statistics while glossing over—or entirely ignoring—the fact that much of it is speculation and guesswork, unanchored by any hard data. It’s like hotly arguing over what exact time a great-aunt’s birthday party should be on July 4, when all she knows is that she was born sometime during the summer.

So, if we don’t know, why do people think they know or act as if they know? 

Part of this is explained by what in psychology is known as the Dunning-Kruger effect: “in many areas of life, incompetent people do not recognize—scratch that, cannot recognize—just how incompetent they are … . Logic itself almost demands this lack of self-insight: For poor performers to recognize their ineptitude would require them to possess the very expertise they lack. To know how skilled or unskilled you are at using the rules of grammar, for instance, you must have a good working knowledge of those rules, an impossibility among the incompetent. Poor performers—and we are all poor performers at some things—fail to see the flaws in their thinking or the answers they lack.” 

Most people don’t know enough about epidemiology, statistics, or research design to have a good idea of how valid disease data and projections are. And of course, there’s no reason they would have any expertise in those fields, any more than the average person would be expected to have expertise in dentistry or theater. But the difference is that many people feel confident enough in their grasp of the data—or, often, confident enough in someone else’s grasp of the data, as reported via their preferred news source—to comment on it and endorse it (and often argue about it).  

Psychology of Uncertainty

Another factor is that people are uncomfortable admitting when they don’t know something or don’t have enough information to make a decision. If you’ve taken any standardized multiple-choice tests, you probably remember that some of the questions offered a tricky option, usually after three or four possibly correct specific answers. This is some version of “The answer cannot be determined from the information given.” This response (usually Option D) is designed in part to thwart guessing and to see whether test-takers recognize that the question is insoluble or the premise incomplete.

The principle applies widely in the real world. It’s difficult for many people—and especially experts, skeptics, and scientists—to admit they don’t know the answer to a question. Even if it’s outside our expertise, we often feel as if not knowing (or even not having a defensible opinion) is a sign of ignorance or failure. Real experts freely admit uncertainty about the data; Dr. Anthony Fauci has been candid about what he knows and what he doesn’t, responding, for example, when asked how many people could be carriers: “It’s somewhere between 25 and 50%. And trust me, that is an estimate. I don’t have any scientific data yet to say that. You know when we’ll get the scientific data? When we get those antibody tests out there.”

Yet there are many examples in our everyday lives when we simply don’t have enough information to reach a logical or valid conclusion about a given question, and often we don’t recognize that fact. We routinely make decisions based on incomplete information, and unlike on standardized tests, in the real world of messy complexities there are not always clear-cut objectively verifiable answers to settle the matter. 

This is especially true online and in the context of a pandemic. Few people bother to chime in on social media discussions or threads to say that there’s not enough information given in the original post to reach a valid conclusion. People blithely share information and opinions without having the slightest clue as to whether it’s true. But recognizing that we don’t have enough information to reach a valid conclusion demonstrates a deeper, more nuanced understanding of the issue. Noting that a premise needs more evidence or information to complete a logical argument and reach a valid conclusion is a form of critical thinking.

One element of conspiracy thinking is that those who disagree are either stupid (that is, gullible “sheeple” who believe and parrot everything they see in the news—usually specifically the “mainstream media” or “MSM”) or simply lying (experts and journalists across various media platforms who know the truth but are intentionally misleading the public for political or economic gain). This “If You Disagree with Me, Are You Stupid or Dishonest?” worldview has little room for uncertainty or charity and misunderstands the situation. 

The appropriate position to take on most coronavirus predictions is one of agnosticism. It’s not that epidemiologists and other health officials have all the data they need to make good decisions and projections about public health and are instead carefully devising ways to fake that data to deceive the public and journalists. It’s that they don’t have all the data they need to make better predictions, and as more information comes in, the projections will become more accurate. The solution is not to vilify or demonize doctors and epidemiologists but instead to understand the limitations of science and the biases of news and social media.


This article first appeared at the Center for Inquiry Coronavirus Resource Page; please check it out for additional information. 
