May 15 2020

With all the recent news, here’s a timely passage from a recent article I wrote:

“One element of conspiracy thinking is that those who disagree are either stupid (gullible ‘sheeple’ who believe and parrot everything they see in the ‘mainstream media’) or simply lying (experts and journalists who know the truth but are intentionally misleading the public). This ‘If You Disagree with Me, Are You Stupid or Dishonest?’ worldview has little room for uncertainty or charity and misunderstands the situation. It’s not that epidemiologists and other health officials have all the data they need to make good decisions and projections about public health and are instead carefully considering ways to fake data to deceive the public. It’s that they don’t have all the data they need to make better predictions, and as more information comes in, the advice will get more accurate.”

You can read the piece HERE. 

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

May 14 2020

So this is cool… I’m quoted in a recent article in Rolling Stone about rumors of “coronavirus parties.”

On the surface, this sounds fairly straightforward: parties where people are intentionally trying to get a potentially deadly illness? Scary! They even used a Trump-esque exclamation point to drive the point home, so you know they mean business. And to be fair, the concept of “coronavirus parties” had previously gotten ink in none other than the New York Times, in an op-ed by epidemiologist Greta Bauer referring to “rumblings” about people hosting events “where noninfected people mingle with an infected person in an effort to catch the virus.” The piece enumerates the many reasons why such parties are a bad idea, including the fact that researchers know very little about coronavirus immunity, without citing direct evidence of the existence of these parties to begin with.

There’s good reason for this, says urban folklorist Benjamin Radford: “coronavirus parties” are probably BS. “They’re a variation of older disease urban legends such as the ‘bug chaser’ stories about people trying to get AIDS,” he tells Rolling Stone, referring to a brief spate in the early-aughts when so-called “bug-chasing” parties were subject to extensive media coverage (including a controversial story by this magazine). Such stories fed into a general sense of “moral panic” over the disease, resulting in it sticking around in the public imagination regardless of the lack of supporting evidence.

You can read the piece HERE. 

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

May 10 2020

This installment finishes our discussion of three missing persons cases that Ben, Celestia, and Kenny followed in real time, tracking psychic detectives’ predictions about how the cases would play out. Part 2 features Ben’s examination of the Harley Dilly case, a teenager who went missing in December of 2019. The same content warning applies: we must discuss some details of the case that may be disturbing to some listeners. With these three case studies, it becomes clear how well-meaning (and sometimes not-so-well-meaning) psychics gum up the works at police departments and cause distress to families as tragedies unfold. Social media amplifies this with a second-wave effect, as followers send and resend a psychic’s prediction to authorities.

 

You can listen HERE!

 

May 01 2020

The current issue of Skeptical Inquirer magazine features an investigation I did into a famous mystery, the Chase Vault in Barbados. Coffins were said to have mysteriously moved while sealed in the vault, attributed to curses, ghosts, flooding, and more.

I visited the site twice and solved the mystery; you can read about it, and listen to our episode of Squaring the Strange: http://squaringthestrange.libsyn.com/episode-97-the-dancing…

Apr 23 2020

My article examines uncertainties in covid-19 data, from infection to death rates. While some complain that pandemic predictions have been exaggerated for social or political gain, that’s not necessarily true; journalism always exaggerates dangers, highlighting dire predictions. But models are only as good as the data that goes into them, and collecting valid data on disease is inherently difficult. People act as if they have solid data underlying their opinions, but fail to recognize that we don’t have enough information to reach a valid conclusion…

You can read Part 1 Here.

 

Certainty and the Unknown Knowns

The fact that our knowledge is incomplete doesn’t mean that we don’t know anything about the virus; quite the contrary, we have a pretty good handle on the basics including how it spreads, what it does to the body, and how the average person can minimize their risk. 

Humans crave certainty and binary answers, but science can’t offer them. The truth is that we simply don’t know what will happen or how bad it will get. For many aspects of COVID-19, we don’t have enough information to make accurate predictions. In a New York Times interview, one victim of the disease reflected on the measures being taken to stop the spread of the disease: “We could look back at this time in four months and say, ‘We did the right thing’—or we could say, ‘That was silly … or we might never know.’”

There are simply too many variables, too many factors involved. Even hindsight won’t be 20/20; instead it will be seen by many through a partisan prism. We can never know alternative history or what would have happened; it’s like the concern over the “Y2K bug” two decades ago. Was it all over nothing? We don’t know, because steps were taken to address the problem.

But uncertainty has been largely ignored by pundits and social media “experts” alike, who routinely discuss and debate statistics while glossing over—or entirely ignoring—the fact that much of what they cite is speculation and guesswork, unanchored by any hard data. It’s like hotly arguing over what exact time a great-aunt’s birthday party should be on July 4, when all she knows is that she was born sometime during the summer.

So, if we don’t know, why do people think they know or act as if they know? 

Part of this is explained by what in psychology is known as the Dunning-Kruger effect: “in many areas of life, incompetent people do not recognize—scratch that, cannot recognize—just how incompetent they are … . Logic itself almost demands this lack of self-insight: For poor performers to recognize their ineptitude would require them to possess the very expertise they lack. To know how skilled or unskilled you are at using the rules of grammar, for instance, you must have a good working knowledge of those rules, an impossibility among the incompetent. Poor performers—and we are all poor performers at some things—fail to see the flaws in their thinking or the answers they lack.” 

Most people don’t know enough about epidemiology, statistics, or research design to have a good idea of how valid disease data and projections are. And of course, there’s no reason they would have any expertise in those fields, any more than the average person would be expected to have expertise in dentistry or theater. But the difference is that many people feel confident enough in their grasp of the data—or, often, confident enough in someone else’s grasp of the data, as reported via their preferred news source—to comment on it and endorse it (and often argue about it).  

Psychology of Uncertainty

Another factor is that people are uncomfortable admitting when they don’t know something or don’t have enough information to make a decision. If you’ve taken any standardized multiple-choice tests, you probably remember that some of the questions offered a tricky option, usually after three or four possibly correct specific answers. This is some version of “The answer cannot be determined from the information given.” This response (usually Option D) is designed in part to thwart guessing and to see when test-takers recognize that the question is insoluble or the premise incomplete. 

The principle applies widely in the real world. It’s difficult for many people—and especially experts, skeptics, and scientists—to admit they don’t know the answer to a question. Even if it’s outside our expertise, we often feel as if not knowing (or even not having a defensible opinion) is a sign of ignorance or failure. Real experts freely admit uncertainty about the data; Dr. Anthony Fauci has been candid about what he knows and what he doesn’t, responding for example when asked how many people could be carriers, “It’s somewhere between 25 and 50%. And trust me, that is an estimate. I don’t have any scientific data yet to say that. You know when we’ll get the scientific data? When we get those antibody tests out there.” 

Yet there are many examples in our everyday lives when we simply don’t have enough information to reach a logical or valid conclusion about a given question, and often we don’t recognize that fact. We routinely make decisions based on incomplete information, and unlike on standardized tests, in the real world of messy complexities there are not always clear-cut objectively verifiable answers to settle the matter. 

This is especially true online and in the context of a pandemic. Few people bother to chime in on social media discussions or threads to say that there’s not enough information given in the original post to reach a valid conclusion. People blithely share information and opinions without having the slightest clue as to whether they’re true or not. But recognizing that we don’t have enough information to reach a valid conclusion demonstrates a deeper, more nuanced understanding of the issue. Noting that a premise needs more evidence or information to complete a logical argument and reach a valid conclusion is a form of critical thinking.

One element of conspiracy thinking is that those who disagree are either stupid (that is, gullible “sheeple” who believe and parrot everything they see in the news—usually specifically the “mainstream media” or “MSM”) or simply lying (experts and journalists across various media platforms who know the truth but are intentionally misleading the public for political or economic gain). This “If You Disagree with Me, Are You Stupid or Dishonest?” worldview has little room for uncertainty or charity and misunderstands the situation. 

The appropriate position to take on most coronavirus predictions is one of agnosticism. It’s not that epidemiologists and other health officials have all the data they need to make good decisions and projections about public health and are instead carefully considering ways to fake data to deceive the public and journalists. It’s that they don’t have all the data they need to make better predictions, and as more information comes in, the projections will get more accurate. The solution is not to vilify or demonize doctors and epidemiologists but instead to understand the limitations of science and the biases of news and social media.

 

This article first appeared at the Center for Inquiry Coronavirus Resource Page; please check it out for additional information. 

 

 

Apr 20 2020

My new article examines uncertainties in covid-19 data, from infection to death rates. While some complain that pandemic predictions have been exaggerated for social or political gain, that’s not necessarily true; journalism always exaggerates dangers, highlighting dire predictions. But models are only as good as the data that goes into them, and collecting valid data on disease is inherently difficult. People act as if they have solid data underlying their opinions, but fail to recognize that we don’t have enough information to reach a valid conclusion…

 

There’s nothing quite like an international emergency—say, a global pandemic—to lay bare the gap between scientific models and the real world, between projections and speculations and what’s really going on in cities and hospitals around the world. 

A previous article discussed varieties of information about COVID-19, including information that’s true; information that’s false; information that’s trivially true (true but unhelpful); and speculation, opinion, and conjecture. Here we take a closer look at the role of uncertainty in uncertain times. 

Dueling Projections and Predictions

The record of wrong predictions about the coronavirus is long and grows by the hour. Around Valentine’s Day, the director of policy and emergency preparedness for the New Orleans health department, Sarah Babcock, said that Mardi Gras celebrations two weeks later should proceed, predicting that “The chance of us getting someone with coronavirus is low.” That projection was wrong, dead wrong: a month later the city would have one of the worst outbreaks of COVID-19 in the country, with correspondingly high death rates. Other projections have overestimated the scale of infections, hospitalizations, and/or deaths. 

It’s certainly true that many, if not most, news headlines about the virus are scary and alarmist; and that many, if not most, projections and predictions about COVID-19 are wrong to a greater or lesser degree. There’s a plague of binary thinking, and it’s circulating in many forms. One was addressed in the previous article: that of whether people are underreacting or overreacting to the virus threat. A related claim involves a quasi-conspiracy that news media and public health officials are deliberately inflating COVID-19 statistics. Some say it’s being done to make President Trump look incompetent at handling the pandemic; others say it’s being done on Trump’s behalf to justify coming draconian measures including Big Brother tracking. 

Many have suggested that media manipulation is to blame, claiming that numbers are being skewed by those with social or political agendas. There’s undoubtedly a grain of truth to that—after all, information has been weaponized for millennia—but there are more parsimonious (and less partisan) explanations for much of it, rooted in critical thinking and media literacy.

The Media Factors

In many cases, it’s not experts and researchers who skew information but instead news media who report on them. News and social media, by their nature, highlight the aberrant extremes. Propelled by human nature and algorithms, they selectively show the worst in society—the mass murders, the dangers, the cruelty, the outrages, and the disasters—and rarely profile the good. This is understandable, as bad things are inherently more newsworthy than good things.

To take one example, social media was recently flooded with photos of empty store shelves due to hoarding, and newscasts depicted long lines at supermarkets. They’re real enough—but are they representative? Photos of fully stocked markets and calm shopping aren’t newsworthy or share-worthy, so they’re rarely seen (until recently, when they in turn became unusual). The same happens when news media cover natural disasters; journalists (understandably) photograph and film the dozens of homes that were flooded or wrenched apart by a tornado, not the tens or hundreds of thousands of neighboring homes that were unscathed. This isn’t some conspiracy by the news media to emphasize the bad; it’s just the nature of journalism. But it often leads the public to overestimate how terrible the world—and those in it—really is, and it fuels fear and panic.

Another problem is news stories (whether about dire predictions or promising new drugs or trends) that are reported and shared without sufficient context. An article in Health News Review discussed the problem of journalists stripping out important caveats: “Steven Woloshin, MD, co-director of the Center for Medicine and Media at The Dartmouth Institute, said journalists should view preprints [rough drafts of journal studies that have not been published nor peer-reviewed] as ‘a big red flag’ about the quality of evidence, similar to an animal study that doesn’t apply to humans or a clinical trial that lacks a control group. ‘I’m not saying the public doesn’t have the right to know this stuff,’ Woloshin said. ‘But these things are by definition preliminary. The bar should be really high’ for reporting them. In some cases, preprints have shown to be completely bogus … . Readers might not heed caveats about ‘early’ or ‘preliminary’ evidence, Woloshin said. ‘The problem is, once it gets out into the public it’s dangerous because people will assume it’s true or reliable.’”

One notable example of an unvetted COVID-19 news story circulating widely “sprung from a study that ran in a journal. The malaria medicine hydroxychloroquine, touted by President Trump as a potential ‘cure,’ gained traction based in part on a shaky study of just 42 patients in France. The study’s authors concluded that the drug, when used in combination with an antibiotic, decreased patients’ levels of the virus. However, the findings were deemed unreliable due to numerous methodological flaws. Patients were not randomized, and six who received the treatment were inappropriately dropped from the study.” Recently, a Brazilian study of the drug was stopped when some patients developed heart problems. 

Uncertainties in Models and Testing

In addition to media biases toward sensationalism and simplicity, experts and researchers often have limited information to work with, especially for predictions. There are many sources of error in the epidemiological data about COVID-19. Models are only as good as the information that goes into them; as they say: Garbage In, Garbage Out. This is not to suggest that all the data is garbage, of course; it’s more a case of Incomplete Data In, Incomplete Data Out. As a recent article noted, “Models aren’t perfect. They can generate inaccurate predictions. They can generate highly uncertain predictions when the science is uncertain. And some models can be genuinely bad, producing useless and poorly supported predictions … .” But as to the complaint that the outbreak hasn’t been as bad as some earlier models predicted, “earlier projections showed what would happen if we didn’t adopt a strong response, while new projections show where our current path sends us. The downward revision doesn’t mean the models were bad; it means we did something.”

One example of the uncertainty of data is the number of COVID-19 deaths in New York City, one of the hardest-hit places. According to The New York Times, “the official death count numbers presented each day by the state are based on hospital data. Our most conservative understanding right now is that patients who have tested positive for the virus and die in hospitals are reflected in the state’s official death count.” 

All well and good, but “The city has a different measure: Any patient who has had a positive coronavirus test and then later dies—whether at home or in a hospital—is being counted as a coronavirus death, said Dr. Oxiris Barbot, the commissioner of the city’s Department of Health. A staggering number of people are dying at home with presumed cases of coronavirus, and it does not appear that the state has a clear mechanism for factoring those victims into official death tallies. Paramedics are not performing coronavirus tests on those they pronounce dead. Recent Fire Department policy says that death determinations on emergency calls should be made on scene rather than having paramedics take patients to nearby hospitals, where, in theory, health care workers could conduct post-mortem testing. We also don’t really know how each of the city’s dozens of hospitals and medical facilities are counting their dead. For example, if a patient who is presumed to have coronavirus is admitted to the hospital, but dies there before they can be tested, it is unclear how they might factor into the formal death tally. There aren’t really any mechanisms in place for having an immediate, efficient method to calculate the death toll during a pandemic. Normal procedures are usually abandoned quickly in such a crisis.”

People who die at home without having been tested of course won’t show up in the official numbers: “Counting the dead after most disasters—a plane crash, a hurricane, a gas explosion, a terror attack or a mass shooting, for example—is not complex. A virus raises a whole host of more complicated issues, according to Michael A.L. Balboni, who about a decade ago served as the head of the state’s public safety office. ‘A virus presents a unique set of circumstances for a cause of death, especially if the target is the elderly, because of the presence of comorbidities,’ he said—multiple conditions. For example, a person with COVID-19 may end up dying of a heart attack. ‘As the number of decedents increase,’ Mr. Balboni said, ‘so does the inaccuracy of determining a cause of death.’”

So while it might seem inconceivably Dickensian (or suspicious) to some that in 2020 quantifying something as seemingly straightforward as death is complicated, this is not evidence of deception or anyone “fudging the numbers” but instead an ordinary and predictable lack of uniform criteria and reporting standards. The international situation is even more uncertain; different countries have different guidelines, making comparisons difficult. Not all countries have the same criterion for who should be tested, for example, or even have adequate numbers of tests available. 

In fact, there’s evidence suggesting that if anything the official numbers are likely undercounting the true infections. Analysis of sewage in one metropolitan area in Massachusetts that officially has fewer than 500 confirmed cases revealed that there may be exponentially more undetected cases. 

Incomplete Testing

Some people have complained that everyone should be tested, suggesting that only the rich are being tested for the virus. There’s a national shortage of tests, and in fact many in the public are being tested (about 1 percent of the public so far), but such complaints rather miss a larger point: testing is of limited value to individuals.

Testing should be done in a coordinated way, starting not with the general public but instead with the most seriously ill. Those patients should be quarantined until the tests come back, and if the result is positive, further measures should be taken including tracking down people who that patient may have come in contact with; in Wuhan, for example, contacts were asked to check their temperature twice a day and stay at home for two weeks. 

But testing people who may be perfectly healthy is a waste of very limited resources and testing kits; most of the world is asymptomatic for COVID-19. Screening the asymptomatic public is neither practical nor possible. Furthermore, though scientists are working on creating tests that yield faster and more accurate results, the ones so far have taken days. Because many people who carry the virus show no symptoms (or mild symptoms that mimic colds or even seasonal allergies), it’s entirely possible that a person was infected between the time they took the test and the time they got a negative result back. So it may have been true that a few days, or a week, earlier they hadn’t been infected, but they are now and don’t know it because they are asymptomatic or presymptomatic. The point is not that the tests are flawed or that people should be afraid, but that testing, by itself, is of little value to the patient because of these uncertainties. If anything, it could provide a false sense of security and put others at risk.

As Dr. Paul Offit noted in a recent interview, testing for the virus is mainly of use to epidemiologists. “From the individual level, it doesn’t matter that much. If I have a respiratory infection, stay home. I don’t need to find out whether I have COVID-19 or not. Stay home. If somebody gets their test and they find out they have influenza, they’ll be relieved, as compared to if they have COVID-19, where they’re going to assume they’re going to die no matter how old they are.”

If you’re ill, on a practical level—unless you’re very sick or at increased risk, as mentioned above—it doesn’t really matter whether you have COVID-19 or not because a) there’s nothing you can do about it except wait it out, like any cold or flu; and b) you should take steps to protect others anyway. People should assume that they are infected and act as they would for any communicable disease: isolate, get rest, avoid unnecessary contact with others, wash hands, don’t touch your face, and so on. 

 

A version of this article appeared on the CFI Coronavirus Response Page, here.

Part 2 will be posted in a few days.

Apr 18 2020

The new episode of Squaring the Strange is now out!

We chat about various Covid-related topics and then dive into a few examples of bad or misleading polls. First we go over a couple that don’t really set off alarm bells, like whether beards are sexy or what determines people’s beer-buying habits.

Then I dissect some bad reporting on polls and surveys that relate to much more important topics like Native American discrimination or the Holocaust, and we see how a bit of media literacy on how polls can be twisted around is a vital part of anyone’s skeptical toolbox.

 

Listen HERE!

 

 

Apr 15 2020

For those who didn’t see it: A recent episode of Squaring the Strange revisited a classic mystery: The Bermuda Triangle! The Bermuda Triangle is a perennial favorite for seekers of the strange; Ben still gets calls regularly from students eager to ask him questions about this purported watery grave in the Caribbean. We look into the history of this mysterious place and a few factors that influenced its popularity. What does the Bermuda Triangle have to do with a college French class? And what does a new bit of 2020 shipwreck sleuthing have to do with the legend? The one thing the Bermuda Triangle does seem to suck in like a vortex is a kitchen sink of very weird theories, from Atlantis and UFOs to rogue tidal waves and magnetic time-space anomalies.

You can listen to it here! 

Apr 12 2020

In February a headline widely shared on social media decried poor reviews of the new film Birds of Prey and blamed them on male film critics hating the film for real or perceived feminist messages (and/or skewed expectations; it’s not clear). The article, by Sergio Pereira, was headlined “Birds of Prey: Most of the Negative Reviews Are from Men.”

The idea that the film was getting bad reviews because hordes of trolls or misogynists hated it was certainly plausible; similar claims were widely discussed, for example, in the case of the all-female Ghostbusters reboot a few years ago. As a media literacy educator and a film buff, I was curious to read more, and when I saw it on a friend’s Facebook wall I duly did what the writer wanted me (and everyone else) to do: I clicked on the link.

I half expected the article to contradict its own headline (a frustratingly common occurrence, even in mainstream news media stories), but in this case Pereira’s text accurately reflected its headline: “Director Cathy Yan’s Birds of Prey (and the Fantabulous Emancipation of One Harley Quinn), starring Margot Robbie as the Clown Princess of Crime, debuted to a Fresh rating on Rotten Tomatoes. While the score has dropped as more reviews pour in, the most noticeable thing is that the bulk of the negative reviews come from male reviewers. Naturally, just because the film is a first for superhero movies—because it’s written by a woman, directed by a woman and starring a mostly all-female cast—doesn’t absolve it from criticism. It deserves to be judged for both its strengths and weaknesses like any other piece of art. What is concerning, though, is how less than 10% of the negative reviews are from women.” In the article and later on Twitter Pereira attributed the negative reviews to an alleged disparity between what male film reviewers expected from the film and what they actually saw, describing it as “literally… where a bunch of fools got upset about the movie they THOUGHT it was, instead of what it ACTUALLY was.”

I was reminded of the important skeptical dictum that before trying to explain why something is the case, be sure that it is the case; in other words question your assumptions. This is a common error on social media, in journalism, and of course in everyday life. We shouldn’t just believe what people tell us—especially online. To be fair, the website was CBR.com (formerly known as Comic Book Resources) and not, for example, BBC News or The New York Times. It’s pop culture news, but news nonetheless.

Curious to see what Pereira was describing, I clicked the link to the Rotten Tomatoes listing and immediately knew that something wasn’t right. The film had an 80% Fresh rating—meaning that most of the reviews were positive. In fact, according to MSNBC, “The film charmed critics [and is] the third-highest rating for any movie in the DCEU, just behind Wonder Woman and Shazam.” Birds of Prey may not have lived up to expectations, but the film was doing fairly well, and hardly bombing—because of male film reviewers or for any other reason.

I know something about film reviewing; I’ve been a film reviewer since 1994, and attended dozens of film festivals, both as an attendee and a journalist. I’ve also written and directed two short films and taken screenwriting courses. One thing I’ve noticed is that for whatever reason most film critics are male (a fact I double checked, learning that the field is about 80% male). So, doing some very basic math in my head, I knew there was something very wrong with the headline—and not just the headline, but the entire premise of Pereira’s article.

Here’s a quick calculation: Say there are 100 reviewers. 80 of them are men; 20 are not. If half of each gender give it a positive review, that’s 40 positive reviews from men and 10 from women, for a total of 50% approval (or “Fresh”) rating. If half of men (40) and 100% of the women (20) gave it a positive review, that’s a 60% Fresh rating. If three-quarters of men (60) and 100% of the women (20) gave it a positive review, that’s an 80% Fresh rating.
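For anyone who wants to check that arithmetic, here’s a minimal sketch in Python (the 100-critic population and the three scenarios are the hypothetical figures from the paragraph above, not actual Rotten Tomatoes data):

```python
# Hypothetical population from the example above: 80 male critics, 20 female critics.
MEN, WOMEN = 80, 20

def fresh_rating(male_positive_share: float, female_positive_share: float) -> float:
    """Overall 'Fresh' percentage, given the share of each group reviewing positively."""
    positive = MEN * male_positive_share + WOMEN * female_positive_share
    return 100 * positive / (MEN + WOMEN)

print(fresh_rating(0.50, 0.50))  # half of each group positive      -> 50.0
print(fresh_rating(0.50, 1.00))  # half of men, all women positive  -> 60.0
print(fresh_rating(0.75, 1.00))  # 3/4 of men, all women positive   -> 80.0
```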

Birds of Prey had an 80% Fresh rating. So even if every single female reviewer gave the film a positive review—which we know didn’t happen just from a glance at the reviews on RottenTomatoes—then that means that at least three out of four men gave it a positive review. Therefore it might statistically be true that “most of the negative reviews are from men,” but the exact opposite is also true: most of the positive reviews are from men, simply because there are more male reviewers. The article’s claim that “the bulk of the negative reviews come from male reviewers” is misleading at best and factually wrong at worst.

I also thought it strange that Pereira didn’t specify which male reviewers he was talking about. By “fools” did he mean professional film critics such as Richard Roeper and Richard Brody were overwhelmingly writing scathing reviews of the film? Or did he mean reviews from random male film fans? And if the latter, how did he determine the gender of the anonymous reviewers? It was possible that his statistic was correct, but readers would need much more information about where he got his numbers. Did he take the time to gather data, create a spreadsheet, and do some calculations? Did he skim the reviews for a minute and do a rough estimate? Did he just make it up?

I was wary of assuming the burden of proof regarding this claim. After all, Sergio Pereira was the one who claimed that most of the negative reviews were by men. The burden of proof is always on the person making the claim; it’s not up to me to show he’s wrong, but up to him to show he’s right. Presumably he got that number from somewhere—but where?

I contacted Pereira via Twitter and asked him how he arrived at the calculations. He did not reply, so I later contacted CBR directly, emailing the editors with a concise, polite note saying that the headline and articles seemed to be false, and asking them for clarification: “He offers no information at all about how he determined that, nor that less than 10% of the negative reviews are from women. The RottenTomatoes website doesn’t break reviewers down by gender (though named and photos offer a clue), so Pereira would have to go through one by one to verify each reviewer’s gender. It’s also not clear whether he’s referring to Top Reviewers or All Reviewers, which are of course different datasets. I spent about 20 minutes skimming the Birds of Prey reviews and didn’t see the large gender imbalance reported in your article (and didn’t have hours to spend verifying Pereira’s numbers, which I couldn’t do anyway without knowing what criterion he used). Any clarification about Pereira’s methodology would be appreciated, thank you.”  They also did not respond.

Since neither the writer nor the editors would respond, I resignedly took a stab at trying to figure out where Pereira got his numbers. I looked at the Top Critics and did a quick analysis. I found 41 of them whose reviews appeared at the time: 26 men and 15 women. As I suspected, men had indeed written the statistical majority of both the positive and negative reviews.

Reactions

On my friend’s Facebook page where I first saw the story being shared I posted a comment noting what seemed to be an error, and offering anyone an easy way to assess whether the headline was plausible: “A quick-and-dirty way to assess whether the headline is plausible is to note that 1) 80% of film critics are male, and that 2) Birds of Prey has a 80% Fresh rating, with 230 Fresh (positive) and 59 Rotten (negative). So just glancing at it, with 80% of reviewers male, how could the film possibly have such a high rating if most of the men gave it negative reviews?”
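To spell out that quick-and-dirty check, here’s a short sketch using the figures quoted in my comment (the 230/59 counts are from the Rotten Tomatoes listing; the roughly-80%-male share of critics is the estimate discussed above). It computes the smallest possible share of positive male reviews, reached only in the extreme case where every female critic scored the film Fresh:

```python
# Review counts from the Rotten Tomatoes listing quoted above.
fresh, rotten = 230, 59
total = fresh + rotten            # 289 reviews

men = round(total * 0.80)         # ~231 male critics (assumes ~80% of critics are men)
women = total - men               # ~58 female critics

# Most favorable case for the headline's claim: every woman reviewed it positively,
# so all remaining Fresh reviews must have come from men.
male_fresh_floor = fresh - women  # 172
print(f"At least {male_fresh_floor}/{men} male critics positive "
      f"({100 * male_fresh_floor / men:.0f}%) even in that extreme case.")
# -> roughly 74% of male critics gave positive reviews
```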

The reactions from women on the post were interesting—and gendered: one wrote, “did you read the actual article or just the headline? #wellactually,” and “haaaaard fucking eyeroll* oh look y’all. The dude who thinks he’s smarter than the author admits its maybe a little perhaps possible that women know what they’re talking about.”

The latter comment was puzzling, since Sergio Pereira is a man. It wasn’t a man questioning whether women knew what they were talking about; it was a man questioning whether another man’s harmful stereotypes about women highlighted in his online article were true. I was reminded of the quote attributed to Mark Twain: “It’s easier to fool people than to convince them they’ve been fooled.” Much of the reaction I got was critical of me for questioning the headline and the article. I got the odd impression that some thought I was somehow defending the supposed majority male film critics who didn’t like the film, which was absurd. I hadn’t (and haven’t) seen the film and have no opinion about it, and couldn’t care less whether most of the male critics liked or didn’t like the film. My interest is as a media literacy educator and someone who’s researched misleading statistics.

To scientists, journalists, and skeptics, asking for evidence is an integral part of the process of parsing fact from fiction, true claims from false ones. If you want me to believe a claim—any claim, from advertising claims to psychic powers, conspiracy theories to the validity of repressed memories—I’m going to ask for evidence. It doesn’t mean I think (or assume) you’re wrong or lying, it just means I want a reason to believe what you tell me. This is especially true for memes and factoids shared on social media and designed to elicit outrage or scorn.

The problem is that when someone who shares such material does occasionally encounter a person who is sincerely trying to understand an issue or get to the bottom of a question, their knee-jerk reaction is often to assume the worst about them. They are blinded by their own biases, and they project those biases onto others. This is especially true when the subject is controversial, such as with race, gender, or politics. To them, the only reason a person would question a claim is if they are trying to discredit that claim, or a larger narrative it’s being offered in support of.

Of course that’s not true; people should question all claims, and especially claims that conform to their pre-existing beliefs and assumptions; those are precisely the ones most likely to slip under the critical thinking radar and become incorporated into your beliefs and opinions. I question claims from across the spectrum, including those from sources I agree with. To my mind the other approach has it backwards: How do you know whether to believe a claim if you don’t question it?

If the reviews are attributable to sexism or misogyny due to feminist themes in the script—instead of, for example, lackluster acting, clunky dialogue, lack of star power, an unpopular title (the film was renamed during its release, a very unusual marketing move), or any number of other factors unrelated to its content—then presumably that same effect would be clear in other similar films.

Ironically, another article on CBR by Nicole Sobon published a day earlier—and linked to in Sergio Pereira’s piece—offers several reasons why Birds of Prey wasn’t doing as well at the box office, and misogyny was conspicuously not among them: “Despite its critical success, the film is struggling to take flight at the box office…. One of the biggest problems with Birds of Prey was its marketing. The trailers did a great job of reintroducing Robbie’s Quinn following 2017’s Suicide Squad, but they failed to highlight the actual Birds of Prey. Also working against it was the late review embargo. It’s widely believed that when a studio is confident in its product, it will hold critics screenings about two weeks before the film’s release, with reviews following shortly afterward to buoy audience anticipation and drive ticket sales. However, Birds of Prey reviews didn’t arrive until three days before the film’s release. And, while the marketing finally fully kicked into gear by that point, for the general audience, it might’ve been too late to care. Especially when Harley Quinn’s previous big-screen appearance was in a poorly received film.”

The Cinematic Gender Divide

A gender divide in positive versus negative reviews of ostensibly feminist films (however you may want to measure that, whether by the Bechdel Test or some other way, such as an all-female cast or female writer/directors) is eminently provable. It’s not a subject that I’ve personally researched and quantified, but since Pereira didn’t reference any of this in his article, I did some research on it.

For example Salon did a piece on gender divisions in film criticism, though not necessarily finding that it was rooted in sexism or a reaction to feminist messages: “The recent Ghostbusters reboot, directed by Paul Feig, received significantly higher scores from female critics than their male counterparts. While 79.3 percent of women who reviewed the film gave it a positive review, just 70.8 percent of male critics agreed with them. That’s a difference of 8.5 percent… In total, 84 percent of the films surveyed received more positive reviews from female reviewers than from men. The movies that showed the greatest divide included A Walk to Remember, the Nicholas Sparks adaptation; Twilight, the 2008 vampire romance; P.S. I Love You, a melodrama about a woman (Hilary Swank) grieving the loss of her partner; Divergent, the teen dystopia; and Divine Secrets of the Ya-Ya Sisterhood… Men tended to dislike Young Adult literary adaptations and most films marketed to teenage girls. Pitch Perfect, which was liked by 93.8 percent of female critics, was rated much lower by men—just 76.9 percent of male reviewers liked it.” (There was nothing supporting Pereira’s assertion that critics, male or female, didn’t like films marketed to the opposite gender because of a perceived gap between what the reviewers expected from a film versus what the film delivered.)

The phrasing “just 76.9 percent of male reviewers liked Pitch Perfect” of course invites an ambiguous comparison (how many should have liked the film? 90%? 95%? 100%?). More to the point, if over three-quarters of men liked the obviously female-driven film Pitch Perfect, that rather contradicts Pereira’s thesis. In fact Pitch Perfect has an 80% Fresh rating on RottenTomatoes—exactly the same score that Birds of Prey did. We have two female-driven films with the majority of male reviewers giving both films a positive review—yet Pereira suggests that male reviewers pilloried Birds of Prey.

Perpetuating Harmful Stereotypes

Journalists making errors and writing clickbait headlines based on those errors is nothing new, of course. I’ve written dozens of media literacy articles about this sort of thing. As I’ve discussed before, the danger is that these articles mislead people, and reinforce harmful beliefs and stereotypes. In some cases I’ve researched, misleading polls and surveys create the false impression that most Americans are Holocaust deniers—a flatly false and highly toxic belief that can only fuel fears of anti-Semitism (and possibly comfort racists). In other cases these sorts of headlines exaggerate fear and hatred of the transgender community.

As noted, Pereira’s piece could have been titled, “Birds of Prey: Most of the Positive Reviews Are from Men.” That would have empowered and encouraged women—but gotten fewer outrage clicks.

In many cases what people think other people think about the world is just as important as what they personally think. This is due to what’s called the third-person effect, or pluralistic ignorance. We are of course intimately familiar with our own likes and desires—but where do we get our information about the 99.99% of the world we don’t and can’t directly experience or evaluate? When it comes to our understanding and assumptions about the rest of the world, our sources of information quickly dwindle. Outside of a small circle of friends and family, much of our information about what others in the world think and believe comes from the media, specifically social and news media. These sources often misrepresent the outside world, distorting it in predictable and systematic ways, always highlighting the bad and the outrageous. The media magnifies tragedy, exploitation, sensationalism, and bad news, and thus we assume that others embody and endorse those traits.

We’re seeing this at the moment with shortages of toilet paper and bottled water in response to Covid-19 fears. Neither is key to keeping safe or preventing the spread of the virus, yet people are reacting because other people are reacting. There’s a shortage because people believe there’s a shortage—much in the way that the Kardashians are famous for being famous. Likewise, when news and social media exaggerate (or in some cases fabricate) examples of toxic behavior, it creates the false perception that such behavior is more pervasive (and widely accepted) than it actually is. Whether or not male film reviewers mostly hated Birds of Prey as Pereira suggested (and they didn’t), the perception that they did can itself cause harm.

I don’t think it was done intentionally or with malice. But I hope Pereira’s piece doesn’t deter an aspiring female filmmaker who may read his widely-shared column and assume that no matter how great her work is, the male-dominated film critic field will just look for ways to shut her out and keep her down merely because of her gender.

There certainly are significant and well-documented gender disparities in the film industry, on both sides of the camera, from actor pay disparity to crew hiring. But misogynist men hating on Birds of Prey simply because it’s a female-led film isn’t an example of that. I note with some irony that Pereira’s article concludes by saying that “Birds of Prey was meant to be a celebration, but it sadly experienced the same thing as every other female-driven film: a host of negativity about nothing.” That “host of negativity” is not reflected in male film reviews but instead in Sergio Pereira’s piece. His CBR article is itself perpetuating harmful stereotypes about female-driven films, which is unfortunate given the marginalization of women and minorities in comic book and gaming circles.

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

A longer version of this article appeared on my Center for Inquiry blog; you can read it here. 

Apr 08 2020

As the world enters another month dealing with the deadly coronavirus that has dominated headlines, killed hundreds, and sickened thousands, misinformation is running rampant. For many, the medical and epidemiological aspects of the outbreak are the most important and salient elements, but there are other prisms through which we can examine this public health menace. 

There are many facets to this outbreak, including economic damage, cultural changes, and so on. However, my interest and background is in media literacy, psychology, and folklore (including rumor, legend, and conspiracy), and my focus here is a brief overview of some of the lore surrounding the current outbreak. Before I get into the folkloric aspects of the disease, let’s review the basics of what we know so far. 

First, the name is a bit misleading; it’s a coronavirus, not the coronavirus. Coronavirus is a category of viruses; this one is dubbed “Covid-19.” Two of the best known and most deadly other coronaviruses are SARS (Severe Acute Respiratory Syndrome, first identified in 2003) and MERS (Middle East Respiratory Syndrome, identified in 2012). 

The symptoms of Covid-19 are typical of influenza and include a cough, sometimes with a fever, shortness of breath, nausea, vomiting, and/or diarrhea. Most (about 80 percent) of infected patients recover within a week or two, like patients with a bad cold. The other 20 percent contract severe infections such as pneumonia, sometimes leading to death. The virus Covid-19 is spreading faster than either MERS or SARS, but it’s much less deadly than either of those. The death rate for Covid-19 is 2 percent, compared to 10 percent for SARS and 35 percent for MERS. There’s no vaccine, and because it’s not bacterial, antibiotics won’t help. 

The first case was reported in late December 2019 in Wuhan, China. About a month later the Health and Human Services Department declared a U.S. public health emergency. The average person is at very low risk, and Americans are at far greater risk of getting the flu—about 10 percent of the public gets it each year.

The information issues can be roughly broken down into three (at times overlapping) categories: 1) Lack of information; 2) Misinformation; and 3) Disinformation. 

Lack of Information

The lack of information stems from the fact that scientists are still learning about this specific virus. Much is known about it from information gathered so far (summarized above), but much remains to be learned. 

The lack of information has been complicated by a lack of transparency by the Chinese government, which has sought to stifle early alarms about it raised by doctors, including Li Wenliang, who recently died. As The New York Times reported:

On Friday, the doctor, Li Wenliang, died after contracting the very illness he had told medical school classmates about in an online chat room, the coronavirus. He joined the more than 600 other Chinese who have died in an outbreak that has now spread across the globe. Dr. Li “had the misfortune to be infected during the fight against the novel coronavirus pneumonia epidemic, and all-out efforts to save him failed,” the Wuhan City Central Hospital said on Weibo, the Chinese social media service. Even before his death, Dr. Li had become a hero to many Chinese after word of his treatment at the hands of the authorities emerged. In early January, he was called in by both medical officials and the police, and forced to sign a statement denouncing his warning as an unfounded and illegal rumor.

Chinese officials were slow to share information and admit the scope of the outbreak. This isn’t necessarily evidence of a conspiracy—governments are often loath to admit bad news or potentially embarrassing or damaging information (recall that it took nearly a week for Iran to admit it had unintentionally shot down a passenger airliner over its skies in January)—but part of the Chinese government’s long-standing policies of restricting news reporting and social media. Nonetheless, China’s actions have fueled anxiety and conspiracies; more on that presently.

Misinformation

There are various types of misinformation, revolving around a handful of central concerns typical of disease rumors. In his book An Epidemic of Rumors: How Stories Shape Our Perceptions of Disease, Jon D. Lee notes:

People use certain sets of narratives to discuss the presence of illness, mediate their fears of it, come to terms with it, and otherwise incorporate its presence into their daily routines … Some of these narratives express a harsher, more paranoid view of reality than others, some are openly racist and xenophobic, and some are more concerned with issues of treatment and prevention than blame—but all revolve around a single emotion in all its many forms: fear. (169) 

As Lee mentions, one common aspect is xenophobia and contamination fears. Many reports, in news media but on social media especially, focus on the “other”: the dirty, aberrant outsiders who “created” or spread the menace. Racism is a common theme in rumors and urban legends—what gross things “they” eat or do. As Prof. Andrea Kitta notes in her book The Kiss of Death: Contagion, Contamination, and Folklore:

The intriguing part of disease legends is that, in addition to fear of illness, they express primarily a fear of outsiders … Patient zero [the assumed origin of the “new” disease] not only provides a scapegoat but also serves as an example to others: as long as people do not act in the same way as patient zero, they are safe. (27–28)

In the case of Covid-19, rumors have suggested that seemingly bizarre (to Americans anyway) eating habits of Chinese were to blame, specifically bats. One video circulated allegedly showing Chinese preparing bat soup, suggesting it was the cause of the outbreak, though it was later revealed to have been filmed in Palau, Micronesia. 

The idea of disease and death coming from “unclean” practices has a long history. One well-known myth is that AIDS originated when someone (presumably an African man) had sex with a monkey or ape. This linked moralistic views of sexuality with the later spread of the disease, primarily among the homosexual community. More likely, chimps infected with simian immunodeficiency virus were killed and eaten as game meat, a documented practice that transferred the virus to humans and gave rise to HIV (human immunodeficiency virus), which in turn causes AIDS.

The fear of foreigners and immigrants bringing disease to the country was of course raised a few years ago, when a Fox News contributor suggested without evidence that a migrant caravan from Honduras and Guatemala coming through Mexico carried leprosy, smallpox, and other dreaded diseases. This claim was quickly debunked.

Disinformation and Conspiracies

Then there are the conspiracies, prominent among them the disease’s origin. Several are circulating, claiming for example that Covid-19 is in fact a bioweapon that has either been intentionally deployed or escaped/stolen from a secure top secret government lab. Some have claimed that it’s a plot (by the Bill and Melinda Gates Foundation or another NGO or Big Pharma) to sell vaccines—apparently unaware that there is no vaccine available at any price. 

This is a classic conspiracy trope, evoked to explain countless bad things, ranging from chupacabras to chemtrails and diseases. It is similar to urban legends and rumors in the African American community claiming that AIDS was created by the American government to kill blacks, or that soft drinks and foods (Tropical Fantasy soda and Church’s Fried Chicken, for example) contained ingredients that sterilized the black community (for more on this, see Patricia Turner’s book I Heard It Through the Grapevine: Rumor in African-American Culture). In Pakistan and India, public health workers have been attacked and even killed trying to give polio vaccinations, rumored to be part of an American plot.

Of course such conspiracies go back centuries. As William Naphy notes in his book Plagues, Poisons, and Potions: Plague Spreading Conspiracies in the Western Alps c. 1530-1640, people were accused of intentionally spreading the bubonic plague. Most people believed that the plague was a sign of God’s wrath, a pustular and particularly punitive punishment for the sin of straying from Biblical teachings. “Early theories saw causes in: astral conjunctions, the passing of comets; unusual weather conditions … noxious exhalations from the corpses on battlefields” and so on (vii). Naphy notes that “In 1577, Claude de Rubys, one of the city’s premier orators and a rabid anti-Protestant, had openly accused the city’s Huguenots of conspiring to destroy Catholics by giving them the plague” (174). Confessions, often obtained under torture, implicated low-paid foreigners who had been hired to help plague victims and disinfect their homes. 

Other folkloric versions of intentional disease spreading include urban legends of AIDS-infected needles placed in payphone coin return slots. Indeed, that rumor was part of an older and larger tradition; as folklorist Gillian Bennett notes in her book Bodies: Sex, Violence, Disease, and Death in Contemporary Legend, in Europe and elsewhere “Stories proliferated about deliberately contaminated doorknobs, light switches, and sandboxes on playgrounds” (115).

How to Get, Prevent, or Cure It

Various theories have surfaced online suggesting ways to prevent the virus. They include avoiding spicy food (which doesn’t work); eating garlic (which also doesn’t work); and drinking bleach (which really, really doesn’t work). 

In addition, there’s also something called MMS, or “miracle mineral solution,” and the word miracle in the name should be a big red flag about its efficacy. The solution is 28 percent sodium chlorite mixed in distilled water, and there are reports that it’s being sold online for $900 per gallon (or if that’s a bit pricey, you can get a four-ounce bottle for about $30).

The FDA takes a dim view of this, noting that it 

has received many reports that these products, sold online as “treatments,” have made consumers sick. The FDA first warned consumers about the products in 2010. But they are still being promoted on social media and sold online by many independent distributors. The agency strongly urges consumers not to purchase or use these products. The products are known by various names, including Miracle or Master Mineral Solution, Miracle Mineral Supplement, MMS, Chlorine Dioxide Protocol, and Water Purification Solution. When mixed according to package directions, they become a strong chemical that is used as bleach. Some distributors are making false—and dangerous—claims that Miracle Mineral Supplement mixed with citric acid is an antimicrobial, antiviral, and antibacterial liquid that is a remedy for autism, cancer, HIV/AIDS, hepatitis, flu, and other conditions. 

It’s true that bleach can kill viruses—when used full strength on surfaces, not when diluted and ingested. They’re two very different things; confuse the two at your great peril. 

Folk remedies such as these are appealing because they are something that victims (and potential victims) can do—some tangible way they can take action and assume control over their own health and lives. Even if the treatment is unproven or may be just a rumor, at least they feel like they’re doing something.

There have been several false reports and rumors of outbreaks in local hospitals across the country, including in Los Angeles, Santa Clarita, and in Dallas County, Texas. In all those cases, false social media posts have needlessly alarmed the public—and in some cases spawned conspiracy theories. After all, some random, anonymous mom on Facebook shared a screen-captured Tweet from some other random person who had a friend of a friend with “insider information” about some anonymous person in a local hospital who’s dying with Covid-19—but there’s nothing in the news about it! Who are you going to believe? 

Then there’s Canadian rapper/YouTube cretin James Potok, who stood up near the end of his WestJet flight from Toronto to Jamaica and announced loudly to the 240 passengers that he had just come from Wuhan, China, and “I don’t feel too well.” He recorded it with a cell phone, planning to post it online as a funny publicity stunt. Flight attendants reseated him, and the plane returned to Toronto where police and medical professionals escorted him off the plane. Of course he tested negative and was promptly arrested.

When people are frightened by diseases, they cling to any information and often distrust official sources. These fears are amplified by the fact that the virus is invisible to the naked eye, and they are fueled by ambiguity and uncertainty about who’s a threat. The incubation period for Covid-19 seems to be between two days and two weeks, during which time asymptomatic carriers could potentially infect others. The symptoms are common and indistinguishable from those of other viral illnesses unless confirmed with lab testing, which of course requires time, equipment, a doctor visit, and so on. Another factor is that people are generally poor at assessing relative risk (for example, fearing plane travel over statistically far more dangerous car travel). They often panic over alarmist media reports and underestimate their risk of more mundane threats.

The best medical advice for dealing with Covid-19: Thoroughly cook meat, wash your hands, and stay away from sick people … basically the same advice you get for avoiding any cold or airborne virus. Face masks don’t help much, unless you are putting them on people who are already sick and coughing. Most laypeople use the masks incorrectly anyway, and hoarding has led to a shortage for medical workers. 

Hoaxes, misinformation, and rumors can cause real harm during public health emergencies. When people are sick and desperately afraid of a scary disease, any information will be taken seriously by some people. False rumors can not only kill but also hinder public health efforts. The best advice is to keep threats in perspective, recognize the social functions of rumors, and heed advice from medical professionals instead of your friend’s friend on Twitter. 

Further Reading

An Epidemic of Rumors: How Stories Shape Our Perceptions of Disease, Jon D. Lee

Bodies: Sex, Violence, Disease, and Death in Contemporary Legend, Gillian Bennett

I Heard It Through the Grapevine: Rumor in African-American Culture, Patricia Turner

Plagues, Poisons, and Potions: Plague Spreading Conspiracies in the Western Alps c. 1530-1640, William Naphy

The Global Grapevine: Why Rumors of Terrorism, Immigration, and Trade Matter, Gary Alan Fine and Bill Ellis

The Kiss of Death: Contagion, Contamination, and Folklore, Andrea Kitta

 

 

A longer version of this article appeared in my CFI blog; you can find it here. 

Apr 052020
 

In recent weeks, rumors, myths, and misinformation about the current coronavirus pandemic, Covid-19, have proliferated. One of the most curious is the recent resurrection of a posthumous prediction by psychic Sylvia Browne.

In her 2008 book End of Days, Browne predicted that “In around 2020 a severe pneumonia-like illness will spread throughout the globe, attacking the lungs and the bronchial tubes and resisting all known treatments. Almost more baffling than the illness itself will be the fact that it will suddenly vanish as quickly as it arrived, attack again ten years later, and then disappear completely.”

This led to many on social media assuming that Browne had accurately predicted the Covid-19 outbreak, and Kim Kardashian among others shared such posts, causing the book to become a best-seller once again.

There’s a lot packed into these two sentences, and I recently did a deep dive into it. First, we have an indefinite date range (“in around 2020”), which depends on how loosely you interpret the word “around” but plausibly covers seven (or more) years. Browne predicted “a severe pneumonia-like illness,” but Covid-19 is not a severe pneumonia-like illness (though it can in some cases lead to pneumonia). Most of those infected (about 80 percent) have mild symptoms and recover just fine, and the disease has a mortality rate of between 2 percent and 4 percent. Browne claims it “will spread throughout the globe, attacking the lungs and the bronchial tubes,” and Covid-19 has indeed spread throughout the globe, but Browne also says the disease she’s describing “resists all known treatments.” This does not describe Covid-19; in fact, doctors know how to treat the disease—the treatment is essentially the same as for influenza or other similar respiratory infections. This coronavirus is not “resisting” all (or any) known treatments.

She further describes the illness: “Almost more baffling than the illness itself will be the fact that it will suddenly vanish as quickly as it arrived, attack again ten years later, and then disappear completely.” This is false; Covid-19 has not “suddenly vanished as quickly as it arrived,” and even if it eventually does, its emergence pattern would have to be compared with other typical epidemiology data to know whether it’s “baffling.”

You can read my full piece at the link above, but basically we have a two-sentence prediction written in 2008 by a convicted felon with a long track record of failures. Half of the prediction has demonstrably not happened. The other half describes an infectious respiratory illness, supposedly arriving within a few years of 2020, that does not resemble Covid-19 in its particulars. At best, maybe one-sixth of what she said is accurate, depending again on how much latitude you’re willing to give her in terms of dates and vague descriptions.

Browne’s 2004 Prediction

But there’s more to the story, because as it turns out Browne made at least one other similar prediction, with some significant differences; I discovered this a few days ago. I have several books by Browne in my library (mostly bought at Goodwill and library sales), among them her 2004 book Prophecy: What the Future Holds for You (written with Lindsay Harrison and published by Dutton).

On p. 214, I found an earlier, somewhat different version of this same prophecy. Details and exact words matter, so here’s her 2004 prediction verbatim: “By 2020 we’ll see more people than ever wearing surgical masks and rubber gloves in public, inspired by an outbreak of a severe pneumonia-like illness that attacks both the lungs and the bronchial tubes and is ruthlessly resistant to treatment. This illness will be particularly baffling in that, after causing a winter of absolute panic, it will seem to vanish completely until ten years later, making both its source and its cure that much more mysterious.” 

Comparing this to her later 2008 version (“In around 2020 a severe pneumonia-like illness will spread throughout the globe, attacking the lungs and the bronchial tubes and resisting all known treatments. Almost more baffling than the illness itself will be the fact that it will suddenly vanish as quickly as it arrived, attack again ten years later, and then disappear completely”) we can see a few differences. 

It’s not uncommon for writers to revise and republish their work in different forms, sometimes changing or summarizing material for different formats and purposes. But in the case of predictions, it also serves an important, albeit unrecognized, purpose: It greatly increases the chances of a prediction—or, more accurately, some version of that prediction—being retroactively “right.” It’s one thing to make a single (seemingly specific) prediction about a future event; it’s another to make several different versions of that prediction so that your followers can pick and choose which details they think better fit the situation. 

Note that the earlier prediction—which said “By 2020” (a limited, much more specific date than “In around 2020,” which as I noted spans several years)—focused on “more people than ever wearing surgical masks and rubber gloves in public,” which was “inspired by an outbreak of a severe pneumonia-like illness that attacks both the lungs and the bronchial tubes and is ruthlessly resistant to treatment.”

It’s certainly true that surgical masks (and, to a much lesser extent, gloves) are much more common today than, say, in 2019, but that’s an obvious and predictable reaction—as Browne herself admits—to the outbreak she mentions. Had Browne predicted any respiratory disease outbreak and specified that more people would not react by wearing masks and gloves, that would itself be an amazing (if nonsensical) prophecy. While we’re on the topic of self-evident revelations, note that Browne’s phrase “both the lungs and the bronchial tubes” is redundant and nonsensical, providing only the illusion of specificity, since bronchial tubes are inside the lungs; saying “both … and” is meaningless, like saying “both the face and the nose will be affected,” or “both the West Coast and California will be affected.” Either Browne doesn’t know where bronchial tubes are, or she assumes her readers don’t.

Note that “This illness will be particularly baffling in that, after causing a winter of absolute panic, it will seem to vanish completely until ten years later, making both its source and its cure that much more mysterious” was changed to “Almost more baffling than the illness itself will be the fact that it will suddenly vanish as quickly as it arrived, attack again ten years later, and then disappear completely.”

Note that the qualifier “after causing a winter of absolute panic” in the earlier version was dropped from the later one—which is convenient for Browne, because the widespread panic surrounding Covid-19 began not in winter but in mid-March. (Of course, I’m not suggesting that Browne predicted in 2008 that her 2004 prediction would be wrong and changed the phrasing!)

Another noteworthy element dropped from the later version was the claim that the disease’s apparent decade-long disappearance would make “both its source and its cure that much more mysterious.” But “seeming to vanish completely” for a decade has nothing to do with whether “its source and its cure” are mysterious. The source of the outbreak has been pretty well established: likely a meat market in Wuhan, China. The exact source, the name of the very first person who had it (and where he or she got it from), the so-called Patient Zero, may never be known—not because it’s inherently “mysterious” but merely because epidemiology is a difficult task. 

It’s not clear what Browne means by a “cure,” because viruses themselves can’t be “cured,” though the diseases they lead to can be prevented with vaccination. Like the common cold, influenza, and most other contagious respiratory illnesses, Covid-19 is “cured” when people recover from it. In any event, neither the source nor the “cure” (whatever that would mean in the context of Covid-19) is “mysterious” by medical and epidemiological standards.

Browne’s Other Predictions

After I wrote a piece about Browne’s failed predictions, I soon received hate mail from many of her fans who defended the accuracy of her prophecy and demanded that I take a closer look at her predictions. So I did; many of her predictions are set far in the future, but I did find a few dozen in her book Prophecy: What the Future Holds for You that referred to events between the time the book was published (2004) and this year. Here’s a sampling, in chronological order:

• “There will be a significant vaccine against HIV/AIDS in 2005” (p. 211).

•  “The common cold will be over with by about 2009 or 2010” (p. 204).

•  “By around 2010, law enforcement’s use of psychics will ‘come out of the closet’ and be a commonplace, widely accepted collaboration” (p. 180).

•  “By around 2010, it will be mandated that a DNA sample of every infant born in the United States is taken and recorded at the time of the baby’s birth” (p. 182).

•  “In about 2011, home security systems start becoming common … The windows are unbreakable glass, able to be opened only by the homeowner … Doors and windows will no longer have visible, traditional locks that can be picked or tampered with. Instead the security system allows access … by ‘eyeprints’” (p. 169).

•  “There won’t be a successful manned exploration of Mars until around 2012” (p. 128).

•  “In around 2012 or 2013 a coalition of five major international corporations … will combine their almost limitless resources and mobilize a vast, worldwide, ultimately successful movement to revitalize the rainforests” (p. 105).

•  “By around 2014, pills, capsules, and even most liquid medicine will be replaced by readily accessible vaporized heat and oxygen chambers that can infuse every pore of the body with the recommended medications” (p. 209). 

•  “By the year 2015 invasive surgery involving cutting and scalpels and stitches and scars will be virtually unheard of” (p. 205).

•  “Telemarketers will have long since vanished by 2015” (p. 171).

•  “To give law enforcement one more added edge, by 2015 their custom-designed high-speed vehicles will be atomically powered and capable of becoming airborne enough to fly several feet above other traffic” (p. 190).

•  “The search for extraterrestrials will ultimately end in around 2018 … because they begin stepping forward and identifying themselves to various international organizations and heads of state, particularly the United Nations, NATO, Scotland Yard, NASA, and a summit being held at Camp David” (p. 127).

•  “By the year 2020 researchers will have created a wonderful material … able to perfectly duplicate the eardrum and will be routinely implanted, to restore hearing to countless thousands … Long before 2020, blindness will become a thing of the past” (p. 203).

•  “By 2020 we’re going to see an end to the institution of marriage as we know it” (p. 255).

•  “By about 2020 we’ll see the end of the one-man presidency and the costly, seemingly perpetual cycle of presidential campaigns and elections” (p. 135). 

•  “The year 2020 will spark an amazing resurgence in the popularity of the barter system throughout the United States, with goods and services almost becoming a more common form of payment than cash” (pp. 140–141).

 

There’s more—oh, so much more—but you get the idea. Hundreds of predictions: mostly wrong, vague, or unverifiable; many set in the distant future; a few right; and so on. On p. 97 of Prophecy: What the Future Holds for You, Browne claims that “my accuracy rate is somewhere between 87 and 90 percent if I’m recalling correctly.” Yet another failed prediction. 

 

Mar 292020
 

The new episode of Squaring the Strange is out! The first of a two-parter, this episode features Celestia and Kenny Biddle each examining a recent (then-current) missing person case and looking closely at how psychic detectives “helped” (or interfered) with each. A sobering topic, but we think you’ll find it enlightening. Please check it out HERE!

 

Mar 232020
 

Our recent episode of Squaring the Strange is about literary hoaxes!

I discuss some “misery memoirs,” stories of victims triumphing over incredible hardships (Spoiler: “Go Ask Alice” was fiction). Celestia discusses newspaper reports of horny bat-people on the moon, and we break down the cultural factors that contribute to the popularity and believability of hoaxes. We end with the heart-wrenching story of a literary version of Munchausen by proxy, one that moved both Oprah and Mr. Rogers. Check it out HERE! 

 

Mar 082020
 

So this is cool: I’m quoted in an article on Bigfoot in The Mountaineer: 

If you wear a size 14 shoe, chances are some of your high-school classmates called you “Bigfoot.” But that doesn’t mean you are an ape-like beast who may — or may not — just be a myth. A 1958 newspaper column began the whole thing. The Humboldt Times received a letter from a reader reporting loggers in California who had stumbled upon mysterious and excessively large footprints. The two journalists who reported the discovery treated it as a joke. But to their great surprise, the story caught on and soon spread far and wide. Bigfoot was born. Of course, reports of large beasts were not exactly new. The Tibetans had a Yeti, familiarly known as the “Abominable Snowman,” and an Indian Nation in Canada had its “Sasquatch.”

Guess what? Cochran found out the hair did not belong to Bigfoot. It was sent back to Byrne, with the conclusion it belonged to the deer family. Four decades later, the FBI declassified the “bigfoot file” about having done this analysis. “Byrne was one of the more prominent Bigfoot researchers,” said Benjamin Radford, deputy editor of the Skeptical Inquirer magazine. “In the 1970s, Bigfoot was very popular.”

You can read the rest of the article HERE! 

 

As my awesome podcast Squaring the Strange (co-hosted by Pascual Romero and Celestia Ward) nears its three-year anniversary, I will be posting episode summaries from the past year to remind people of some of the diverse topics we’ve covered on the show, ranging from ghosts to folklore to mysteries and topical skepticism. If you haven’t heard it, please give a listen!

Mar 032020
 

Did you catch our recent bonus episode of Squaring the Strange? I gather some myths and misinformation going around about the Wuhan Coronavirus, aka Novel Coronavirus, aka “we’re all gonna die,” aka COVID-19. Then special guest Doc Dan breaks down some virus-busting science for us and talks about the public health measures in place. Check it out HERE! 

 

Feb 252020
 

As advertised, the Oscar-nominated World War I film 1917 takes place in April 1917, when two British soldiers, William Schofield (George MacKay) and Tom Blake (Dean-Charles Chapman), are rousted from a weary daytime slumber. They’re ordered to cross enemy territory (a no man’s land littered with death and decay) and deliver an urgent message to another brigade to call off an attack. It seems that the other soldiers—including the brother of one of the men—are falling into a trap set by their German enemies who have cut the lines of communication.

1917 is reminiscent of other war films such as Saving Private Ryan and Gallipoli, but I was also reminded of a Roger Waters song from his album Amused to Death titled “The Ballad of Bill Hubbard,” a spoken account by World War I veteran Alfred Razzell, who describes finding a mortally injured soldier, Hubbard, on the battlefield and being forced to abandon him.

1917 is about many things, and like most films can be viewed through many prisms. It’s a war movie, of course, but it’s also about friendship, loyalty, sacrifice, and so on. But the theme I saw most clearly in 1917 was information: what it is, how it’s used, and the inherent difficulties in its transmission.

Too often in fictional entertainment, information is treated as certain, easily accessed, and easily transferred. Countless films, and especially spy thrillers such as the Jason Bourne series, have scenes in which the hero needs some vital piece of information, which is instantly produced with a few keyboard taps, in dramatic infographic fashion, usually on giant, easy-to-understand wall screens. The Star Trek Enterprise computers are notorious for this: they predict (seemingly with unerring accuracy) when, for example, a planet or ship will explode. It’s always annoyed me as a deus ex machina cheat.

I understand why screenwriters do that; they want to get the exposition and premises out of the way so we can move on with the plot. No need to question the accuracy or validity of the information; the characters—and by extension, the audience—just need to accept it at face value and move on. (Imagine a dramatic countdown scene in which the hero fails to defuse a bomb at the last second—but it still doesn’t go off and everyone is saved simply because a wire got loose or a battery died. Such scenes, though realistic, are dramatically unsatisfying, and thus rarely if ever depicted. They’re certain to raise the ire of audiences as much as an “it was all a dream” conclusion—and for the same reasons.)

Whether it’s a character in a fantasy or horror film being told exactly what words to say or what to do when confronting some great evil at the film’s climax, as a natural skeptic, I’m often left wondering, “How exactly do you know that? Where did your information come from? Who told you that, and how do you know it’s true? What if they’re lying or just made a mistake?” (Or, in a Shakespearean context, “Um, so, MacBeth: How do you know those are prophetic witches, not just three crazy old ladies putting you on?”). Army of Darkness (1992) and The Woman in Black (2012) are two of the few movies that actually take this issue seriously.

1917 takes the matter deadly seriously, depicting the decidedly unglamorous horrors of warfare. Though the events depicted happened a century ago, the basics of war have not changed in millennia; the goal is still to defeat, maim, and kill the other bastards—often while acting on wrong or incomplete information. It’s been said that truth is the first casualty of war, though that’s not always by design. Sometimes truth (or, more broadly, true information) can’t get from those who know to those who need to know in time to save lives. Sometimes that’s by design, such as when enemies cut off communications (as in this case); other times the truth is hidden with encryption, as in The Imitation Game (2014). Often it’s merely the result of chaos and miscommunication. Isolation (including isolation from information) is an effective tool for building dramatic tension; that’s why many horror films are set in remote areas out of cell phone service. When dialing 911 or just asking Siri or Alexa could presumably save the day, screenwriters need to find ways to keep the heroes vulnerable. 

The film—co-written by director Sam Mendes and dedicated to his grandfather, a veteran “who told us the stories”—also doesn’t give much depth to the two soldiers. Blake and Schofield are given the barest of backstories, and the actors do what they can to flesh them out. The acting is good overall, but the real reason to see 1917 is the immersive and compelling filmmaking.

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

 

A longer version of this piece was published on my CFI blog. 

Feb 182020
 

I’ve investigated hundreds—probably thousands—of things in my career as a skeptic and researcher, from misleading polls to chupacabra vampire legends. Some investigations take hours or days; others take weeks or months; and a rare few take years. It all depends on the scope of the investigation and how much information you have to analyze. In some cases a mystery can’t be solved until some other information is released or revealed, such as the results of a medical or forensic test.

However, there are some mysteries that can be solved in less than a minute. This short piece offers one quick example.

I came across a “news” headline on several Facebook friends’ walls stating that “85% of People Hate Their Jobs, Gallup Poll Finds.”

It’s a false story. In this case, three clicks of a mouse, each on a different link seeking a source, led me to the origin of the myth. If you’re a slow reader, of course, it will take longer than if you can quickly skim the page or report, but with practice this could be done in just a few minutes.

The first step is a sort of skeptical sense that there’s maybe something to investigate, some claim that is false or exaggerated. After all, we see countless news stories and social media posts online daily, and the average person rarely if ever fact-checks them. One red flag is the source: where did the information come from? Is it a reputable, known news organization or is it some random website you’ve never heard of? To be clear: reputable news organizations sometimes get information wrong, and perfectly valid and accurate information often appears on obscure sites and blogs.

But in this case the information was clearly presented as a news story. It is formatted as a news headline and offers a surprising or alarming statistic from a reputable polling organization, Gallup.

When I clicked on the link I went to some site called Return to Now. The red flags popped up when the “news” article was revealed to be three years old. As I’ve written about before, old news is often fake news. But this “news” story was also uncredited. Someone wrote it, obviously, but who? A respected journalist or someone cranking out clickbait copy?

There’s no name attached to the piece, and the About section of the site isn’t any more helpful: “Return to Now is dedicated to helping humans live fully in the present, while gleaning tips on how to do so from our distant past. It’s a new kind of ‘news’ website, whose contributors are not as concerned with current events as we are with the whole of the human experience. Topics will include rewilding, primitivism, modern ‘tribal’ living, tips for getting off the grid, sustainability, natural health, peaceful parenting, and sexual and spiritual awakening.” It’s not clear what that means, though the fact that they put the word news in quote marks is revealing; I want—and readers deserve—news, not “news.”

In this case the piece offered a link to the Gallup poll it referenced. Many people would likely stop there and assume that the existence of the link meant that whatever it points to had been accurately and fairly characterized. After all, someone must have at least looked at the content at that link in order to write the sentence describing it, and the headline. Unless, of course, the person is lying to you and intentionally misleading readers (or perhaps has reading comprehension issues and badly misunderstood what it said).

If the article had not provided a link at all, that too would have been a red flag. Not everyone embeds links in their writing, but those who don’t at least provide a source or reference for their information, such as a book or published journal article. Otherwise it looks an awful lot like you’re just making it up.

In this case the link did work, and it led me to the promised Gallup poll referenced in the headline and article. It’s a one-page blog post, and I skimmed it for the alleged statistic: that 85% of workers hate their jobs. It didn’t appear anywhere. Nor, for that matter, did any reference to “hate” or “hating” a job. Always be skeptical of polls and survey results reported in news pieces, and when possible consult the original reports; they often contradict the headlines they generate.

But I quickly realized that there was another prominent statistic that seemed to be the other half of the 85% figure: 15% (since 85% and 15% together account for everyone polled). The Gallup poll found that 15% of the world’s one billion full-time workers “are engaged at work.”

But not being “engaged at work” is not at all the same thing as “hating your job.” You can love or hate your job, and be engaged or not engaged at it. The two measures have little or nothing to do with each other, and it’s clear that someone saw the poll and decided to mischaracterize its results and spin it into a clickbait article, recognizing that few would read a piece headlined, “15% of People Are Engaged At Work, Gallup Poll Says.”
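To make the logical gap concrete, here’s a toy illustration in Python with numbers I made up (Gallup reported nothing like this breakdown; it just shows that the two measures answer different questions):

```python
# Hypothetical workers, purely for illustration: engagement and
# job satisfaction are separate dimensions, so "not engaged"
# does not imply "hates the job".
workers = [
    {"engaged": True,  "hates_job": False},  # loves the work, absorbed in it
    {"engaged": False, "hates_job": False},  # content but coasting
    {"engaged": False, "hates_job": False},  # indifferent, doesn't hate it
    {"engaged": False, "hates_job": True},   # checked out AND miserable
]

pct_not_engaged = 100 * sum(not w["engaged"] for w in workers) / len(workers)
pct_hate_job = 100 * sum(w["hates_job"] for w in workers) / len(workers)

print(f"Not engaged at work: {pct_not_engaged:.0f}%")  # 75%
print(f"Hate their jobs:     {pct_hate_job:.0f}%")     # 25%
# A careless headline writer could spin this 75% into
# "75% of People Hate Their Jobs" -- the same sleight of hand.
```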

The problem of misinformation and fake news on social media is of course made worse when people share information without checking it. Those who share bogus stories like this are both victims of manipulation and promoters of misinformation. It’s a good reminder to think before you share. You don’t need to invest hours fact-checking information; as this case shows, sometimes you can do it in seconds. Or better yet, avoid the problem entirely by sharing only news stories from reputable news organizations.

 

Note: This piece, originally appearing in a different form on my CFI blog, was inspired in part by a FB friend named Rich, one of those whose post caught my eye. After the quick search described above I diplomatically pointed this out to him, and Rich not only thanked me for doing the research, but quickly corrected the headline and vowed to do better. Be like Rich.

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

Feb 152020
 

My buddy Kenny Biddle recently did a great investigation into a new book by TV ghost hunter Zak Bagans. You can read it HERE. 

Below is an excerpt: 

I really did not want to read the book this article is about. I know that will likely give away the tone of this overall piece, but it’s just my honest reaction. When I saw the first announcements on social media that semi-celebrity Zak Bagans was releasing a new book titled Ghost Hunting for Dummies, I immediately groaned, deciding I’d pass on reviewing it. I’ve amassed quite a collection of “How to Ghost Hunt” type books since the 1990s, and I didn’t see any possibility of Bagans offering anything new—especially given his spotless track record of completely failing to find good evidence of ghosts during his decade-plus on television. At the time, I had no idea how right I’d be about that.

A close friend and colleague, Mellanie Ramsey, mentioned she was going to review the book on a podcast. After a brief conversation, she urged me to read it and participate in the podcast. I reluctantly agreed, placing an Amazon order and receiving my copy of Ghost Hunting for Dummies two days later. The book is over 400 pages and published by John Wiley & Sons, Inc., under the For Dummies brand, which boasts over 2,400 titles (Wiley 2020a). The “Dummies” books are meant to “transform the hard-to-understand into easy-to-use,” according to the company’s website (Wiley 2020b).

My first impression comes from the front cover, which I found to be an overall poor design compared to the Dummies format I was used to seeing: the slanted title, a pronounced and stylized Dummies logo, and either a character with a triangle-shaped head or a photo representing the content of the book. The cover of Ghost Hunting features the title printed straight across with a much smaller and less stylized version of the Dummies logo. The word for is so small that when I showed the book to my wife, she asked “Why did you buy a book called ‘Ghost Hunting Dummies’?” The cover also features a photograph of a basement stairway and door, along with an odd photograph of Bagans with his right hand extended toward the camera, like he’s reaching out to take your money. Overall, it’s just not an attractive cover.

Inside the book, the first thing I noticed was a lack of references; there are no citations or references listed anywhere and no bibliography at the end of the book. For me, this is a red flag; references tell us where the author obtained their information, quotes, study results, and so on. When a book is supposed to be educating you on a specific topic (or in this case, multiple topics), I expect to know the source material from which the information came. However, because this is the first book from the Dummies brand that I’ve purchased, I wasn’t sure if the lack of a bibliography was the standard format. I headed over to my local Barnes & Noble store and flipped through more than forty different Dummies titles, none of which contained references. I also noticed that all of the titles I checked, from Medical Terminology to 3D Printing, were copyrighted by Wiley Publishing/John Wiley & Sons. Ghost Hunting for Dummies is instead copyrighted by Zak Bagans.

There are several indications this book was rushed into publication for the 2019 holiday season. Chief among them are the extensive number of errors: typos, misspellings, repeated words, and missing words are littered throughout the pages. Another indication of premature release comes from the lack of the classic Dummies icons. On page 2, it’s explained that “Throughout the margins of this book are small images, known as icons. These icons mark important tidbits of information” (Bagans 2020). We are presented with four icons: the Tip (a lightbulb), the Remember (hand with string tied around one finger), the Warning (triangle with exclamation point inside), and the “Zak Says” (Zak’s face), which “Highlights my [Zak’s] words of wisdom or personal experiences” (Bagans 2020, 3). Over the 426 pages, there are only thirteen icons to be found throughout: five Tips, four Remembers, three Warnings, and one “Zak Says.” I guess Bagans didn’t have much wisdom to impart upon his readers.

Throughout much of the book, Bagans displays a strong bias against skeptics and scientists, even going as far as to claim to understand scientific concepts better than actual scientists. For example, while relating why he believes human consciousness can exist outside of the body, Bagans mentions Albert Einstein’s well-known quote, “Energy cannot be created or destroyed; it can only be changed from one form to another.” Bagans follows this with, “it’s baffling why this concept is so easy to understand for a paranormal investigator but not for a mainstream scientist” (Bagans 2020, 108). It’s actually mainstream scientists who understand this and Bagans who’s confused. The answer is very simple. Ben Radford addressed this common mistake in his March/April 2012 Skeptical Inquirer column “Do Einstein’s Laws Endorse Ghosts?”:

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

Feb 052020
 

This week we were joined by Erik Kristopher Myers to discuss a short history of a particular sort of easter egg: the dreaded “hidden subversive element” stuck into a kids’ show or game, either by a perverse animator or a much more sinister coalition bent on corrupting the youth of America. Disney has made a cottage industry of hiding adult content in cartoons, whether real or simply rumored. And the rumors of subversive dangers in D&D both plagued and popularized the once-obscure RPG. From pareidolia to pranks to the people who wring their hands over such dangers, we break down a long list of memorable legends.

You can listen to it HERE. 

Feb 032020
 

This is the second of a two-part piece. 

The recent Clint Eastwood film Richard Jewell holds interesting lessons about skepticism, media literacy, and both the obligations and difficulties of translating real events into fictional entertainment.

Reel vs. Real

The film garnered some offscreen controversy when the Atlanta Journal-Constitution issued a statement complaining about the film, specifically how the paper and its journalism were portrayed. The AJC and other critics complained particularly that the film unfairly maligns reporter Kathy Scruggs, who (in real life) co-wrote the infamous AJC newspaper article that wrongly implicated Jewell in the public’s mind based on unnamed insider information. Scruggs, who isn’t alive to respond, is depicted as sleeping with FBI agent Shaw—with whom she had a previous relationship, at least according to Olivia Wilde, who plays her—in return for information about Jewell.

The AJC letter to Warner Bros. threatened legal action and read in part, “Significantly, there is no claim in Ms. Brenner’s Vanity Fair piece on which the film is based that the AJC’s reporter unethically traded sex for information. It is clear that the film’s depiction of an AJC reporter trading sex for stories is a malicious fabrication contrived to sell movie tickets.” Such a depiction, the newspaper asserts, “makes it appear that the AJC sexually exploited its staff and/or that it facilitated or condoned” such behavior.

The newspaper’s response was widely seen in the public (and by many journalists) as a full-throated defense of Scruggs’s depiction in the film as being baseless and a sexist trope fabricated by Clint Eastwood and screenwriter Billy Ray to bolster the screenplay.

Richard Brody of The New Yorker writes that “It’s implied that she has sex with a source in exchange for a scoop; those who knew the real-life Scruggs deny that she did any such thing. It’s an ignominious allegation, and one that Eastwood has no business making, particularly in a movie about ignominious allegations.”

Becca Andrews, assistant news editor at Mother Jones, had a similar take: “Wilde plays Kathy Scruggs, who was, by all accounts, a hell-on-wheels shoe-leather reporter who does not appear to have any history of, say, sleeping with sources…. Despite Scruggs’ standing as a respected reporter who, to be clear, does not seem to have screwed anyone for a scoop over the course of her career, the fictional version of her in the film follows the shopworn trope.”

It all seems pretty clear-cut and outrageous: the filmmakers recklessly and falsely depicted a female reporter (based on a real person, using her real name) behaving unethically, in a way that had no basis in fact.

A Closer Look

But a closer look reveals a somewhat different situation. It is true, as the AJC letter to Warner Bros. states several times, that the film was based on Brenner’s Vanity Fair article. However, the letter conspicuously fails to mention that the film was not based only on Brenner’s article: there was a second source credited in the film—one that does in fact suggest that Scruggs had (or may have had) sex with her sources.

Screenwriter Ray didn’t make that detail up; one of the sources the film credits, The Suspect, by two respected journalists, Kent Alexander and Kevin Salwen, specifically refers to Scruggs’s “reputation” for sleeping with sources (though not necessarily in the Jewell case specifically), according to The New York Times. Ray fictionalized and dramatized that part of the story, in the same way that all the events and characters are fictionalized to some degree. This explains why Scruggs was depicted as she was: that’s what the source material suggested.

The defense is pretty weak: yes, colleagues may have believed she had affairs with some of her sources, just not necessarily in this specific case. It’s not as if there was no basis whatsoever for her depiction in the film, with Eastwood and Ray carelessly and maliciously manufacturing a sexist trope out of thin air. Ironically, the book that refers to Scruggs’s reputation for sleeping with her sources was described by the Atlanta Journal-Constitution itself as “exhaustively researched” and “unsparing but not unfair.” It’s not clear why mentioning her reputation for sleeping with sources was “not unfair” when Alexander and Salwen did it in their (nonfiction) book about Richard Jewell, but is “false” and “extraordinarily reckless” when Ray and Eastwood did it in their (fictional) screenplay based in part on that very book.

True Stories in Fiction

The issues surrounding the portrayal of Scruggs in Richard Jewell—just like the portrayal of Jewell himself in the film—are more nuanced and complex than they first appear. Eastwood and Ray were not accused of tarnishing a dead reporter’s image by including a true-but-unseemly aspect of her real life in her fictional depiction. Nor were they accused of failing to confirm that information contained in one of their sources. Instead they were accused of completely fabricating that aspect of Scruggs’s life to sensationalize their film—which is demonstrably not true.

More fundamentally, complaints that the film isn’t the “real story” miss the point. It is not—and was never claimed to be—the “real story.” The film is not a documentary; it’s a scripted fictional narrative film (as it says on posters) “based on the true story.” (The full statement that appears in the film reads, “The film is based on actual historical events. Dialogue and certain events and characters contained in the film were created for the purposes of dramatization.”) That is, the film is based on some things that actually happened; that doesn’t mean that everything that really happened is in the film, and it doesn’t mean that everything in the film really happened. It means exactly what it says: the movie is “based on actual historical events.” Complaints about historical inaccuracy are of course very common in movies about real-life people and events.

Similar complaints were raised about Eastwood’s drama American Sniper and its historical accuracy as it relates to the true story of the real-life Chris Kyle; these pedantic protests rather miss the point. Much of the “controversy” over whether it’s a 100% historically accurate account of Kyle’s life is manufactured, born of a misunderstanding: a straw-man challenge to a strict historicity no one claimed.

In an interview with The New York Times, “Kelly McBride, a onetime police reporter who is the senior vice president of the Poynter Institute, a nonprofit organization that supports journalism, said the portrayal of Ms. Scruggs did not reflect reality” (emphasis added). It’s not clear why McBride or anyone else would believe or assume that a scripted film would “reflect reality.” There is of course no reason why fictional entertainment should necessarily accurately reflect real life—in dialogue, plot, or in any other way. Television and film are escapist entertainment, and the vast majority of characters in scripted shows and films lead far more interesting, dramatic, and glamorous lives than the audiences who watch them. While fictional cops on television shows regularly engage in gunfire and shootouts, in reality over 90% of police officers in the United States never fire their weapons at another person during the course of their career. TV doctors seem to leap from one dramatic, life-saving situation to another, while most real doctors spend their careers diagnosing the flu and filling out paperwork. I wrote about this a few years ago.

Richard Jewell is one of many “based on a true story” films currently out, including Bombshell, Ford v. Ferrari, A Beautiful Day in the Neighborhood, Seberg, Dark Waters, Midway, Honey Boy, Harriet, and others. Every one of these has scenes, dialogue, and events that never really happened, and characters that either never existed or existed but never did some of the specific things they’re depicted as having done on the screen.

It’s understandable for audiences to wonder what parts of the film are historically accurate and which parts aren’t, but making that distinction and parsing out exactly which characters are real and which are made up, and which incidents really happened and precisely when and how, is not the responsibility of the film or the filmmakers. The source material is clearly and fully credited and so anyone can easily see for themselves what the true story is. There are many books (such as Based on a True Story—But With More Car Chases: Fact and Fiction in 100 Favorite Movies, by Jonathan Vankin and John Whalen) and websites devoted specifically to parsing out what’s fact and what’s fiction in movies. There are also a handful of online articles comparing the true story of Richard Jewell with the fictional one.

There’s no deception going on, no effort to “trick” audiences into mistaking the film for a documentary. It is a scripted drama, with events carefully chosen for dramatic effect and dialogue written by a screenwriter and performed by actors. It’s similar in some ways to the complaint that a film adaptation of a book doesn’t follow the same story. That’s because books and films are very different media that have very different storytelling structures and demands. It’s not that one is “right” and the other is “wrong;” they’re different ways of telling roughly the same story.

Similarly, to ask “how accurate” a film is doesn’t make sense. A fictional film is not judged based on how “accurate” it is (whatever that would mean) but instead how well the story is told. Screenwriters taking dramatic license with bits and pieces of something that happened in real life in order to tell an effective story is their job. Writers can add characters, combine several real-life people into a single character, play with the chronology of events, and so on.

Ray certainly could—and arguably should—have changed the name of the character, but since in real life it was Scruggs specifically who broke the news about Jewell, and it was Scruggs specifically who in real life was rumored to have been romantically involved with sources, the decision not to do so is understandable. It’s likely, of course, that complaints would still have arisen even if her name had been changed, since Scruggs’s name is so closely connected to the real story.

The question of fictional representation is a valid and thorny one. Films and screenplays based (however loosely) on real events and people are, by definition, fictionalized and dramatized (this seems obvious, but may be clearer to me because I have attended several screenwriting workshops taught by Hollywood screenwriters). Plots need conflict, and in stories based on things that actually happened, there will be heroes (who really existed in some form) and there will be villains (who also really existed in some form). The villains in any story will, by definition—and rightly or wrongly—typically not be happy with their depiction; villains are heroes in their own story.

The question is instead what obligations a screenwriter has to the real-life people cast in that villain role—keeping in mind of course that interesting fictional characters are a blend of hero and villain, good and bad. Heroes will have flaws and villains will have positive attributes, and may even turn out to be heroes in some cases.

You can argue that if Ray was going to suggest that Scruggs’s character slept with an FBI agent (as The Suspect suggested), he should have confirmed it. But screenwriters, like nonfiction writers, typically don’t fact-check the sources of their sources. In other words, they assume that the information in a seemingly reputable source (such as a Vanity Fair article, or a well-reviewed book by a former U.S. Attorney for the Northern District of Georgia and a former Wall Street Journal columnist) is accurate as written. If those sources report that Scruggs had a reputation for sleeping with sources, or hid in the back of Jewell’s lawyer’s car hoping for an interview, or met with FBI agents in a bar, or any number of other things, then the screenwriter believing that she did so—or may have done so—is neither unreasonable nor malicious.

In the end, the dispute revolves around a minor plot point in a single scene, and the sexual quid pro quo is implied, not explicit. Reasonable people can disagree about whether or not Scruggs was portrayed fairly in the film (and if not, where the blame lies) as well as the ethical limits of dramatic license in portraying real historical events and figures in fictional films, but the question here is more complex than has been portrayed—about, ironically, a film with themes of rushing to judgment and binary thinking—and should not detract from what is overall a very good film.

Those interested in the real, true story of how Richard Jewell was railroaded, bullied, and misjudged—instead of the obviously fictionalized version portrayed in the film—can consult Marie Brenner’s book Richard Jewell: And Other Tales of Heroes, Scoundrels, and Renegades, based on her 1997 Vanity Fair article, and The Suspect, mentioned above.

The Social Threat of Richard Jewell

In addition to the potential harm to Scruggs’s memory, several critics have expressed concern about presumed social consequences of the film, suggesting, for example, that Richard Jewell could potentially change the way Americans think about journalism (and female journalists in particular), as well as undermine public confidence in investigative institutions such as the FBI.

There is of course a long history of fears about the consequences of fictional entertainment on society. I’ve previously written about many examples, such as the concern that the 50 Shades of Grey book and film franchise would lead to harmful real-world effects, and that the horror film Orphan, about a murderous dwarf posing as a young girl, would literally lead to a decline in international adoptions. Do heavy metal music, role-playing games, and “occult” Halloween costumes lead to Satanism and drug use? Does exposure to pornography lead to increased sexual assault? Does seeing Richard Jewell decrease trust in journalism and the FBI? All these are (or were) plausible claims to many.

The public need not turn to a fictional film—depicting events that happened nearly 25 years ago—to find reasons to be concerned with the conduct of (today’s) Federal Bureau of Investigation. Earlier this month, a story on the front page of The New York Times reported that “The Justice Department’s inspector general… painted a bleak portrait of the F.B.I. as a dysfunctional agency that severely mishandled its surveillance powers in the Russia investigation, but told lawmakers he had no evidence that the mistakes were intentional or undertaken out of political bias rather than ‘gross incompetence and negligence.’”

No one would suggest that fictional entertainment has no effect at all on society, of course—there are clear examples of copycat acts, for example—and the topic of media effects is far beyond the scope here. I’ll just note that the claim that Richard Jewell (or any other film) affects public opinion about its subjects is a testable hypothesis, one that could be measured using pre- and post-exposure measures such as questionnaires. This would be an interesting topic to explore, but of course it’s much easier to simply assume that a film has a specific effect than to go to the considerable time, trouble, and expense of actually testing it. Who needs the hassle of creating and implementing a scientific research design (and tackling thorny causation issues) when you can just baldly assert and assume that it does?
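For the curious, a bare-bones version of such a test might look like the following sketch in Python, with hypothetical 1–10 “trust in the FBI” ratings I invented purely for illustration (none of this is real data about any film):

```python
# Sketch of a pre/post exposure design: the same respondents rate
# their trust before and after seeing the film. All numbers invented.
from scipy import stats

trust_before = [7, 6, 8, 5, 7, 6, 9, 7, 6, 8]  # before viewing
trust_after  = [6, 6, 7, 5, 7, 5, 9, 6, 6, 7]  # same viewers, after

# Paired t-test: did mean trust change for the same respondents?
t_stat, p_value = stats.ttest_rel(trust_before, trust_after)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# Even a significant drop wouldn't settle causation; a real study
# would also need a control group shown an unrelated film.
```

Even that simple design would be more evidence than the bald assertion offers.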

There are certainly valid reasons to criticize the film, including its treatment of Scruggs, the FBI, and Jewell himself (who is also not alive to comment or defend himself). Good films provoke conversation, and those conversations should be informed by facts and thoughtful analysis instead of knee-jerk reactions and unsupported assumptions. Richard Jewell is a moving, important, and powerful film about a rush to judge and an otherwise ordinary guy—flawed and imperfect, just like the rest of us—who was demonized by institutional indifference and a slew of well-meaning but self-serving people in power.

 

A longer version of this piece appeared on my CFI blog; you can read it HERE. 

 

Jan 302020
 

The recent Clint Eastwood film Richard Jewell holds interesting lessons about skepticism, media literacy, and both the obligations and difficulties of translating real events into fictional entertainment.

It’s no secret that non-police security officers get little or no respect. They’re universally mocked and ignored in malls, security checkpoints, and airports. The stereotype is the self-important, dim, chubby one, typified by Kevin James in Paul Blart: Mall Cop and—shudder—its sequel. Of course the stereotype extends to sworn officers as well, from rotund doughnut aficionado Chief Wiggum in The Simpsons to Laverne Hooks in the Police Academy franchise. They’re usually played for laughs, but there’s nothing funny about what happened to Richard Jewell.

Richard Jewell tells the story of just such a security guard, who finds a bomb at the 1996 Atlanta Olympics celebration. He spots a suspicious bag underneath a bench and alerts authorities, helping to clear the area shortly before the bomb goes off. The unassuming Jewell (played by a perfectly cast Paul Walter Hauser) is soon seen as a hero and asked to make the media rounds of TV talk shows and possible book deals. There’s no evidence connecting Jewell to the crime, but the FBI, without leads and under increasing public pressure to make an arrest, turns its attention to him. Things take a turn when Jewell is named in the press as the FBI’s main suspect, a tip leaked by agent Tom Shaw (Jon Hamm) to hard-driving Atlanta Journal-Constitution reporter Kathy Scruggs (Olivia Wilde). But when he becomes the target of unrelenting attacks as an unstable and murderous “wannabe cop,” he seeks out a lawyer named Watson Bryant (Sam Rockwell) to defend him.

What’s the case against him? FBI “experts” assured themselves (and the public) that the bomber fit a specific profile—one that Jewell himself fit as well (a loner with delusions of grandeur and a checkered past; the fact that he was single and living with his mother didn’t help). Psychological profiling is inherently more art than science, and to the degree to which it can be called a science, it’s an inexact one. At best it can provide potentially useful (if general and somewhat obvious) guidelines for who investigators should focus on, but cannot be used to include or exclude anyone from a list of suspects.

Bob Carroll, in his Skeptic’s Dictionary, notes that “FBI profiles are bound to be inaccurate. I noted some of these in a newsletter five years ago. Even if the profilers got a representative sample of, say, serial rapists, they can never interview the ones they don’t catch nor the ones they catch but don’t convict. Also, it would be naive to believe that serial rapists or killers are going to be forthright and totally truthful in any interview.” For more on this see “Myth #44: Criminal Profiling is Helpful in Solving Cases,” in 50 Great Myths of Popular Psychology, by Scott Lilienfeld, Steven Jay Lynn, John Ruscio, and Barry Beyerstein; and Malcolm Gladwell’s New Yorker article “Dangerous Minds: Criminal Profiling Made Easy.”

Psychologists will readily acknowledge these caveats, and their assessments are typically heavily qualified—much in the way that a good science journal report about an experiment will be candid about its limitations.

Journalists, however, are less interested in important nuances and caveats, and their readers are exponentially less so. The public wants binary certainty: Is this the bomber, or not? If not, why is the FBI investigating him, and why wouldn’t it explicitly announce that he isn’t a suspect? Complicating matters, the public often misunderstands criminal justice issues and procedures. People widely assume, for example, that lie detectors actually detect lies (they don’t), or that an innocent person would never confess to a crime he or she didn’t really commit (they do). (In the film Jewell passes a polygraph, though little is made of it.)

When agent Shaw is confronted with evidence suggesting that Jewell does not, in fact, fit the profile and is likely innocent, instead of questioning his assumptions he doubles down, rationalizing away inconsistencies and stating that no one is going to fit the profile perfectly.

Jewell, a by-the-books type, is especially heartbroken to realize that his faith in the FBI’s integrity was sorely misplaced. All his life he’d looked up to federal law enforcement, until they turned on him. He isn’t angry or upset that he’s being investigated; he’s familiar enough with law enforcement procedures to understand that those closest to a murder victim (or a bomb) will be investigated first. But his initial openness and cooperation wanes as he sees FBI agents attempting to deceive and entrap him.

As Bryant tells Jewell, every comment he makes, no matter how innocuous or innocent, can be twisted into something nefarious that will put him in a bad light, and provide dots for others to (mis)connect. The fact that a friend as a teenager built homemade pipe bombs for throwing down gopher holes (long before he met Jewell) could be characterized as either a piece of evidence pointing to his guilt—or completely irrelevant. The fact that he has an impressive stash of weapons in his home could similarly be seen (if not by a jury, then certainly by a story-hungry news media) as being evidence of an obsession with guns—or, as he says with a shrug to Bryant, “This is Georgia.”

The film doesn’t paint the villains with too broad a brush; before an interview with the FBI, Bryant reminds Jewell that the handful of agents harassing and persecuting him don’t represent the FBI in general; the entire U.S. government isn’t out to get him—no matter what it feels like. The news media is seen as a pack of vultures, camping out in front of his house, robbing him and his mother of privacy and dignity. You can probably guess what would have happened to Jewell in today’s age of internet-driven social media outrage; if not, see Jon Ronson’s book So You’ve Been Publicly Shamed. Shaw and the other FBI agents, as well as Scruggs, presumably sincerely believed they’d named the right man—at least until a more thorough investigation reveals otherwise. The film is not anti-FBI, anti-government, nor anti-press; it is pro-due process and sympathetic to those who are denied it.

Ironically but predictably, even not talking to the police can be seen as incriminating. Those ignorant of the criminal justice system may ask, “What do you have to hide?” or even “Why do you need a lawyer if you’re innocent?” These are the sorts of misguided souls who would presumably be happy to let police search their property without a warrant because, well, a person should be fine with it if they have nothing to hide.

The result is a curious and paradoxical situation in which a completely innocent person is (rightfully) afraid to speak openly and honestly: not out of fear of self-incrimination but out of fear that those with agendas will take anything they say out of context. This is not an idle fear; it happens on a daily basis to politicians, movie stars, and anyone else in the spotlight (however tangentially and temporarily). Newspaper and gossip reporters salivate, waiting for an unguarded moment when—god forbid—someone of note expresses an opinion. A casual, honest, and less-than-charitable but otherwise mild remark about a film co-star can easily be twisted into fodder for a Twitter war. For example, Reese Witherspoon laughing and reminiscing casually in an interview that, years ago, at a dinner party Jennifer Aniston’s steak was “tasty but a bit overcooked” can easily spawn headlines such as “Reese Witherspoon Hates Jennifer Aniston’s Cooking.” A flustered Oscar winner who forgets to thank certain people (such as a mentor or spouse) can set tongues wagging about disrespect or even infidelity—which is one reason why nominees write out an acceptance speech ahead of time, even if they don’t expect to win. The fewer things you say, the fewer bits of information you provide, the less fodder you give those who would do you harm. As Richard Jewell demonstrates, this is, ironically, a system that prevents people from being totally open and forthcoming.

Eastwood’s past half-dozen or so films have been based on real events and actual historical people: American Sniper (about Navy SEAL sniper Chris Kyle); The 15:17 to Paris (Spencer Stone, Anthony Sadler, and Alek Skarlatos, who stopped a 2015 train terrorist attack); The Mule (Leo Sharp, a World War II veteran-turned-drug mule); Jersey Boys (the musical group The Four Seasons); and J. Edgar (as in FBI director Hoover). The complex, sometimes ambiguous nature and myriad facets of heroism clearly interest Eastwood, arguably dating back over a half century to his spaghetti Westerns (and, later, Unforgiven), where he played a reluctant gunslinger.

This is not the first biographical film Eastwood has made about a falsely accused hero. His 2016 film Sully, for example, was about Chesley “Sully” Sullenberger (played by Tom Hanks), who became a hero after landing his damaged plane on the Hudson River and saving lives. Where Jewell was lauded—and then demonized—in public, Sullenberger was a hero in public but behind closed doors was suspected of having made poor decisions. National Transportation Safety Board (NTSB) officials second-guessed his actions based, as it turned out, partly on flawed flight simulator data, and Sullenberger was eventually cleared. (In another parallel, just as the Atlanta Journal-Constitution complained about its portrayal in Richard Jewell—more on that later—the NTSB complained about its portrayal in Sully.)

Just as we have imperfect victims, we have imperfect heroes. Bryant eventually realizes that Jewell has an admittedly spotty past, including impersonating an officer and being overzealous in enforcing rules on a campus. Jewell, like many social heroes, humbly denies he’s a hero; he was just doing his job. And he is exactly correct: Jewell didn’t do anything particularly heroic. He didn’t use his body to shield anyone from the bomb; he didn’t bravely charge at an armed gunman, or risk his life rushing to pull a stranded motorist from an oncoming train (as happened recently in Utah).

He’s not a chiseled and battle-hardened Navy SEAL; he’s an ordinary guy who did what he was trained and encouraged to do in all those oft-ignored public security PSAs: he saw something, and he said something. This is not to take anything away from him but instead to note that mundane actions can be heroic. Any number of other security guards and police officers could have been the first to spot the suspicious package; he just happened to be the right guy at the right (or wrong) time. One theme of the film is rule following; Jewell saved many people by following the rules and insisting that the backpack be treated as a suspicious package instead of another false alarm. But the FBI did not follow the rules in either its pursuit of Jewell or its leaking information to a reporter.

Jewell’s life was turned upside down and, if not destroyed, at least severely damaged. The damage didn’t end some three months later when he was finally, formally cleared: the news media had spent many weeks saturating the country with his name and face, strongly suggesting—though not explicitly saying, for legal reasons—that he was a domestic terrorist bomber.

Who’s responsible for an innocent man being falsely accused, bullied, and harassed? In the real case, apparently no one: though an FBI agent was briefly disciplined for misconduct in connection with the case, the agency insisted that it had done nothing wrong; after all, Jewell was a suspect, and the investigation did eventually clear him. The Atlanta Journal-Constitution also got off scot-free, with a judge later determining (in dismissing a defamation suit filed by Jewell) that its reporting, though ultimately flawed, was “substantially true” given the information known at the time it was published. Richard Jewell is having none of it and points fingers at misconduct in both law enforcement and the news media (though the film depicts no consequences for anyone responsible).

A longer version of this article first appeared on my CFI blog; you can read it HERE. 

Part 2 will follow soon…

Jan 222020
 

A June 6, 2018, article from ChurchandState.org titled “Propaganda Works – 58 Percent of Republicans Believe Education Is Bad” was shared on social media by liberals and Democrats, gleeful that their assumptions about conservative anti-intellectualism had been borne out in objective, quantifiable data from a respected polling organization.

The widely-shared article states that “Fox News, like Republicans and Trump, could never succeed as an ultra-conservative propaganda outlet without an ignorant population. The only way to continue having success at propagandizing is convincing Americans that being educated and informed is detrimental to the nation and its citizens; something Fox and Republicans have been very successful at over the past two years. The idea that education is bad for the country is contrary to the belief of the Declaration of Independence’s author, Founding Father and third American President Thomas Jefferson…. Obviously, the current administration and the Republican Party it represents wholly disagrees with the concept of a well-educated citizenry, or that it is beneficial to America if the populace is educated and informed. Likely because the less educated the people are, the more electoral support Republicans enjoy and the more success Trump has as the ultimate purveyor of ‘fake news.’”

There are a few things to unpack here. The first is that, like UFO coverups, this blanket anti-education effort allegedly spans many administrations and generations. Democrat and Republican alike are allegedly participants in this institutional conspiracy. The effects of Republican education cuts would not be seen in general education levels for years—if at all—and thus any “benefit” to keeping the public ignorant would of course boost the goals of future Democratic presidents and administrations as well. If true, it’s heartening to see this common ground and shared agenda between otherwise deeply polarized political parties.

Americans are in fact better educated than at any other time in history. In July 2018, the U.S. Census Bureau revealed that 90% of Americans twenty-five and older had completed at least high school—an all-time high and a remarkable increase from 24% in 1940. Education has risen dramatically in the population as a whole, across genders, races, and economic statuses. In 1940, 3.8% of women and 5.5% of men had completed four years or more of college; by 2018 it was 35.3% for women and 34.6% for men. The United States spends $706 billion on education, according to the U.S. Department of Education (2019), which comes to about $13,850 per public school student. Not only does the government provide free, mandatory grade school education, but it also offers low-interest student loans for those who wish to pursue higher education. All this is puzzling behavior for a government that wants to keep its citizens ignorant, but perhaps someone didn’t get the memo. I discussed—and debunked—this idea in my recent column in Skeptical Inquirer magazine (see “Is America a Sheeple Factory?” in the January/February 2020 issue).
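Incidentally, those two Department of Education figures hang together arithmetically; here is a minimal back-of-the-envelope sketch (the comparison to enrollment is my own illustration, not from the original article):

```python
# A back-of-the-envelope check (mine, not the article's) of the cited
# Department of Education figures: total spending divided by the
# per-student figure should imply a plausible enrollment count.
total_spending = 706e9   # $706 billion total U.S. education spending (2019)
per_student = 13_850     # cited spending per public school student, in dollars

implied_enrollment = total_spending / per_student
print(f"Implied enrollment: {implied_enrollment / 1e6:.1f} million students")
# Prints ~51.0 million, roughly in line with U.S. public K-12 enrollment
```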

The article states, “Seriously, it is beyond the pale that anyone in this country thinks education is bad for America, but nearly 60 percent of Republicans is a mind boggling number. It is not entirely unexpected, but it is stunning that they have no issue telling a pollster that they believe education is a negative for the country.”

It seems damning indeed… but did 60% of Republicans really tell pollsters that? As always, it’s best to consult original sources, and in this case we can easily look at the Pew poll to find out. The question the 2017 Pew poll asked was not “Do you believe education is bad?” but instead “Do you believe that _____ (colleges and universities / churches / labor unions / banks / news media) have a positive or negative effect on the country?”

These are, obviously, very different measures; you can’t generalize “colleges and universities” to “education,” or “having a negative effect on the country” to “bad.” If you’re going to report on how people respond to certain questions in a poll or survey, you can’t retroactively change the question—even slightly—and keep the same results. That is both misleading and dishonest. Curiously, the fact that one in five Democrats (and left-leaning independents) believed that higher education has a negative effect on the country was completely ignored. That’s about a third the number of Republicans who responded that way, but still a surprisingly high rate for self-identified social progressives (depending, of course, on how they interpreted the question).

I have often written about the perils of bad journalism in reporting poll and survey results. I have researched and written about misleading polls and news articles on many topics, including hatred of transgendered people (see, for example, my article “Do 60% Of People Misgender Trans People To Insult Them?”) and Holocaust denial (see, for example, my article “Holocaust Denial Headlines: Hatred, Ignorance, Or Innumeracy?”). I also recently debunked a Global News article by reporter Josh K. Elliott that falsely claimed that “Nearly 50% of Canadians Think Racist Thoughts Are Normal.”

A closer look at this poll finds that the most likely reason for this response is not that Republicans think education is inherently bad (as the headline suggests) but instead that they see colleges and universities (which is what the question asked about) as bastions of liberal politics. (The original piece eventually and briefly acknowledges that—contrary to its clickbait headline—Republicans don’t actually think education is inherently bad.)

One recent example is Tennessee senator Kerry Roberts, who on a conservative radio talk show stated that higher education should be eliminated because it would cut off the “breeding ground” of ideas, a proposal that he said would “save America.” When asked about his comments, Roberts later stated that “his listeners understood it was hyperbole” and that he “was clearly joking” in his comments that eliminating higher education would save the country from ruin. Sen. Roberts has neither proposed nor supported any legislation that would in fact eliminate higher education. He said, however, that he absolutely stood by his criticism of American liberal arts education “one hundred percent.” So it’s pretty clear both that (a) he was indeed joking about wanting to abolish higher education in America (and that his words were mischaracterized for political gain, a routine occurrence), and (b) he nevertheless believes that colleges and universities spawn liberal beliefs anathema to his own.

There is a very real and demonstrable anti-education and anti-science streak among many conservatives, on topics ranging from creationism to medical research (see for example, Chris Mooney’s book The Republican War on Science). But this headline is not clear evidence of that.

While it may be fun for liberals and progressives to paint conservatives as inherently opposed to education, doing so creates an intellectually dishonest straw man fallacy. It is also ultimately counterproductive, obscuring the real forces behind those beliefs. If you approach the subject wrongly believing that many Republicans fundamentally disapprove of the idea of education, then you will badly misunderstand the problem. You can’t hope to meaningfully address a problem if you’re operating on flawed assumptions.

The marketplace of ideas is served when data and polls are presented honestly, and opposing views are portrayed fairly. Skepticism and media literacy are more important than ever, so check your sources—especially when you agree with the message—to avoid sharing misinformation.

 

A version of this article first appeared on my Center for Inquiry blog; you can see it HERE. 

Dec 282019
 

In a previous blog I discussed my research into an ugly episode of racial hatred that tainted the 2016 holiday season. The Mall of America hired its first African-American Santa Claus, an Army veteran named Larry Jefferson. A local newspaper, the Minneapolis Star Tribune, carried a story about it on Dec. 1. Later that night an editorial page editor for the Tribune, Scott Gillespie, tweeted: “Looks like we had to turn comments off on story about Mall of America’s first black Santa. Merry Christmas everyone!” Overnight and the next morning his tweet went viral and served as the basis for countless news stories with headlines such as “Paper Forced to Close Comments On Mall Of America’s First Black Santa Thanks to Racism” (Jezebel) and “Racists Freak Out Over Black Santa At Mall Of America” (Huffington Post).

George Takei responded the next day via Twitter: “Watching people meltdown over a black Santa in the Mall of America. ‘Santa is white!’ Well, in our internment camp he was Asian. So there.” It was also mocked by Trevor Noah on Comedy Central, and elsewhere.

Yet every major news outlet missed the real story. They failed to check facts. My research (including an interview with Gillespie) eventually revealed that the racial incident never actually occurred, and that–despite public opinion and nearly two million news articles to the contrary–the Star Tribune did not receive a single hate-filled message in the comments section of its story on Jefferson. What happened was the product of a series of misunderstandings and a lack of fact-checking, fueled in part by confirmation bias and amplified by the digital age (for a detailed look at the case see my CFI blog “The True, Heartwarming Story of the Mall of America’s Black Santa.”)

I’ve been writing about journalism errors and media literacy for two decades (including in my book Media Mythmakers: How Journalists, Activists, and Advertisers Mislead Us), and usually there’s relatively little pushback (except, perhaps, from journalists reluctant to acknowledge errors). However, a curious part of this story was the criticism I received on social media for even researching it. Perhaps the best example was when I responded to a post about the initial story on a fellow skeptic’s Facebook page. She and all of her friends on the thread took the erroneous news story at face value (which didn’t surprise me, as virtually everyone did), but what did surprise me was the suggestion that trying to uncover the truth was unseemly or even “a distraction tactic.”

One person wrote, “I actually can’t believe that a self proclaimed skeptic is even having this argument in a country that just elected Donald Trump. It’s not skepticism when it disregards the proven fact that a great deal of the country, enough to elect a president, are straight up racist.” Of course I never questioned whether many or most Americans were racist. My question was specific and clear, and concerned the factual basis for this one incident. Neither Trump’s election nor the existence of racism in America is relevant to whether the Tribune had to shut down its comments section in response to a deluge of hatred against a black Santa.

The ‘Distraction’ Tactic

One person wrote that my asking how many people objected to the black Santa was “a distraction tactic–now we can talk about how most people are not racist and change the subject from racism.” I was stunned. I had no idea that asking how many people complained would or could be construed as an attempt to distract people (from what to what?). I replied, “Trying to quantify and understand an issue is not a ‘distraction tactic.’ I have no interest in distracting anyone from anything.” No one–and certainly not me–was suggesting that a certain number of racists upset over a black Santa was okay or acceptable. I never suggested or implied that if it was “only” ten or twenty or a hundred, everyone should be fine with it.

But knowing the scope of the issue does help us understand the problem: Is it really irrelevant whether there were zero, ten, or ten thousand racist commenters? If Trump can be widely (and rightly) criticized for exaggerating the crowd at his inauguration as “the largest audience to ever witness an inauguration–period” when in fact it was far smaller, why is asking how many people complained about a mall Santa so beyond the pale?

Usually when I encounter the claim that investigating something is a “distraction,” the accusation is itself the distraction tactic: an attempt to head off inquiry that might debunk a claim or show that some assumption or conclusion was made in error–not unlike the Wizard of Oz pleading with Dorothy and her gang not to look behind the curtain. (“Why are you asking questions about where I suddenly got this important UFO-related document?” or “Asking for evidence of my faith healer’s miracle healings is just a distraction from his holy mission” doesn’t deter any journalist or skeptic worth his or her salt.) If a claim is valid and factual, there’s no reason why anyone would object to confirming it; as Thomas Paine noted, “It is error only, and not truth, that shrinks from inquiry.”

I tried to remember where else I’d heard the phrase used–someone waving off questions by calling them a “distraction.” Finally I realized where the tactic had become common: in the Trump administration. When Donald Trump was asked about a leaked Access Hollywood recording of him bragging about groping women sexually, he dismissed the questions–and indeed the entire issue–as “nothing more than a distraction from the important issues we’re facing today.”

Similarly, when Vice-President Pence was asked in January 2017 about whether the Trump campaign had any contacts with Russia during the campaign, he replied, “This is all a distraction, and it’s all part of a narrative to delegitimize the election.” Others in the Trump administration (including White House spokespeople) have repeatedly waved off journalists’ questions as distractions as well.

This is not particularly surprising, but it was odd to see some of my most virulent anti-Trump friends (and Facebook Friends) using and embracing exactly the same tactics Trump does to discourage questions.

There is one important difference: In my judgment Trump and his surrogates use the tactic cynically (knowing full well that the issues and questions being asked are legitimate), while those who criticized me were using it sincerely; being charitable, I have no reason to think they realized that the black Santa story and its reportage had been widely (if not universally) misunderstood. But the intention and effect were the same: an attempt to discourage someone from looking beyond the surface to see what’s really going on, and from separating fact from fiction.

Importance of Due Diligence

A recent news story highlights the value and importance of bringing at least some skepticism to claims. A woman approached reporters at The Washington Post with a potentially explosive story: that embattled Republican Senate candidate Roy Moore had impregnated her as a teenager and forced her to have an abortion. This would of course have been a devastating revelation for the conservative Moore, already under fire for dating (and allegedly sexually assaulting) teenagers.

According to the Post, “In a series of interviews over two weeks, the woman [Jaime T. Phillips] shared a dramatic story about an alleged sexual relationship with Moore in 1992 that led to an abortion when she was 15. During the interviews, she repeatedly pressed Post reporters to give their opinions on the effects that her claims could have on Moore’s candidacy if she went public. The Post did not publish an article based on her unsubstantiated account. When Post reporters confronted her with inconsistencies in her story and an Internet posting that raised doubts about her motivations, she insisted that she was not working with any organization that targets journalists. Monday morning, Post reporters saw her walking into the New York offices of Project Veritas, an organization that targets the mainstream news media and left-leaning groups. The organization sets up undercover ‘stings’ that involve using false cover stories and covert video recordings meant to expose what the group says is media bias.”

The Post reporter, Beth Reinhard, “explained to Phillips that her claims would have to be fact-checked. Additionally, Reinhard asked her for documents that would corroborate or support her story.” Reinhard and the Washington Post asked for evidence not because they doubted that sexual assaults occur, or that Phillips might indeed have been assaulted by Moore (quite the opposite, since the Post was the first to break the story and publish accusations by Moore’s accusers), but because they were doing their due diligence as journalists. Investigative journalists and skeptics don’t question claims and ask for evidence because they necessarily doubt what they’re being told; they do it because they want to be sure they understand the facts.

Had The Washington Post not questioned the story–or been deterred by accusations that trying to establish the truth of Phillips’s claims was some sort of “distraction” tactic–the paper’s credibility would have been damaged when Phillips’s false accusation would have quickly been revealed, and the Post’s failure to do basic research used to cast doubt on the previous women’s accusations against Moore. Martin Baron, the Post‘s executive editor, said that the false accusations were “the essence of a scheme to deceive and embarrass us. The intent by Project Veritas clearly was to publicize the conversation if we fell for the trap. Because of our customary journalistic rigor, we weren’t fooled.”

What Happened?

There are several critical thinking and media literacy failures here. Perhaps the most basic is where the burden of proof lies: with the person making the claim. In fact I wasn’t making a claim at all; I was merely asking for evidence of a widely-reported claim. I honestly had no idea how many or how few Tribune readers had complained about Jefferson, and I wouldn’t have even thought to question it if Gillespie hadn’t issued a tweet that contradicted the thesis of the then-viral news story.

The black Santa outrage story is full of assumptions, mostly about the bad intentions of other people. To the best of my knowledge I’m the only person who dug deeper into the story to uncover what really happened–and for that I was told I was causing a “distraction” and even subjected to hints that I had some unspecified, unseemly motive.

It’s also important to understand why a person’s questions are being challenged in the first place. It’s often due to tribalism and a lack of charity. CSICOP cofounder Ray Hyman, in his influential short piece “Proper Criticism,” discusses eight principles, including the principle of charity: “The principle of charity implies that, whenever there is doubt or ambiguity about a paranormal claim, we should try to resolve the ambiguity in favor of the claimant until we acquire strong reasons for not doing so. In this respect, we should carefully distinguish between being wrong and being dishonest. We often can challenge the accuracy or validity of a given paranormal claim. But rarely are we in a position to know if the claimant is deliberately lying or is self-deceived. Furthermore, we often have a choice in how to interpret or represent an opponent’s arguments. The principle tells us to convey the opponent’s position in a fair, objective, and non-emotional manner.”

To scientists, journalists, and skeptics, asking for evidence is an integral part of the process of parsing fact from fiction, true claims from false ones. If you want me to believe a claim–any claim, from advertising claims to psychic powers, conspiracy theories to the validity of repressed memories–I’m going to ask for evidence. It doesn’t mean I think (or assume) you’re wrong or lying, it just means I want a reason to believe what you tell me. This is especially true for memes and factoids shared on social media and designed to elicit outrage or scorn.

But to most people who don’t have a background in critical thinking, journalism, skepticism, or media literacy, asking for evidence is akin to a challenge to their honesty. Theirs is a world in which personal experience and anecdote are self-evidently more reliable than facts and evidence. And it’s also a world in which much of the time when claims are questioned, it’s in the context of confrontation. To a person invested in the truth of a given narrative, any information that seems to confirm that idea is much more easily seen and remembered than information contradicting the idea; that’s the principle of confirmation bias. Similarly, when a person shares information on social media it’s often because they endorse the larger message or narrative, and they get upset if that narrative is questioned or challenged. From a psychological point of view, this heuristic is often accurate: Much or most of the time when a person’s statement or claim is challenged (in informal settings or social media for example), the person asking the question does indeed have a vested interest.

The problem is that when such a person encounters someone who is sincerely trying to understand an issue or get to the bottom of a question, the knee-jerk reaction is often to assume the worst about them. They are blinded by their own biases and project those biases onto others. This is especially true when the subject is controversial, such as race, gender, or politics. To them, the only reason a person would question a claim is to discredit that claim, or a larger narrative it’s being offered in support of.

Of course that’s not true; people should question all claims, especially claims that conform to their pre-existing beliefs and assumptions; those are precisely the ones most likely to slip under the critical thinking radar and become incorporated into one’s beliefs and opinions. I question claims from across the spectrum, including those from sources I agree with. To my mind the other approach has it backwards: How do you know whether to believe a claim if you don’t question it?

My efforts to research and understand this story were born not of any doubt that racism exists, nor that Jefferson was subjected to it, but instead of a background in media literacy and a desire to reconcile two contradictory accounts of what happened. Outrage-provoking stories on social media–especially viral ones based on a single, unconfirmed, informal tweet–should concern all of us in this age of misinformation and “fake news.”

The real tragedy in this case is what was done to Larry Jefferson, whose role as the Mall of America’s first black Santa has been tainted by this social media-created controversy. Instead of being remembered for bringing hope, love, and peace to girls and boys, he will forever be known for enduring a (fictional) deluge of bilious racist hatred.

The fact that Jefferson was bombarded by love and support from the general public (and most whites) should offer hope and comfort this holiday season. A few anonymous cranks, trolls, and racists complained on social media posts from the safety of their keyboards, but there was very little backlash–and certainly nothing resembling what the sensational headlines originally suggested.

The true story of Jefferson’s stint as Santa is diametrically the opposite of what most people believe: He was greeted warmly and embraced by people of all colors and faiths as the Mall of America’s first black Santa. I understand that “Black Santa Warmly Welcomed by Virtually Everyone” isn’t a headline that any news organization is going to see as newsworthy or eagerly promote, nor would it go viral. But it’s the truth–and the truth matters.

This piece appeared in a slightly different form in my Center for Inquiry blog. 

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

Dec 242019
 

Amid the encroaching commercialization of Christmas, Black Friday sales, and annual social media grumblings about the manufactured controversy over whether “Merry Christmas” or “Happy Holidays” is appropriate, an ugly episode of racial hatred tainted the beginning of the 2016 holiday season.


It began when the Mall of America hired a jolly bearded man named Larry Jefferson as one of its Santas. Jefferson, a retired Army veteran, is black–a fact that most kids and their parents neither noticed nor cared about. The crucial issue for kids was whether a PlayStation might be on its way or some Plants vs. Zombies merchandise was in the cards, given the particular child’s status on Santa’s naughty-or-nice list. The important thing for parents was whether their kids were delighted by the Santa, and all evidence suggests that the answer was an enthusiastic Yes. “What [the children] see most of the time is this red suit and candy,” Jefferson said in an interview. “[Santa represents] a good spirit. I’m just a messenger to bring hope, love, and peace to girls and boys.”

The fact that Santa could be African-American seemed self-evident (and either an encouraging sign or a non-issue) for all who encountered him. Few if any people at the Mall of America made any negative or racist comments. It was, after all, a self-selected group; any parents who harbored reservations about Jefferson simply wouldn’t wait in line with their kids to see him; they’d go somewhere else or wait for another Santa. Like anything that involves personal choice, people who don’t like something (a news outlet, a brand of coffee, or anything else) will simply go elsewhere–not erupt in protest that it’s available to those who want it.

However, a black Santa was a first for that particular mall, and understandably made the news. On December 1 the local newspaper, the Minneapolis Star Tribune, carried a story by Liz Sawyer titled “Mall of America Welcomes Its First Black Santa.”

Scott Gillespie, the editorial page editor for the Tribune, tweeted later that night (at 9:47 PM): “Looks like we had to turn comments off on story about Mall of America’s first black Santa. Merry Christmas everyone!” The tweet’s meaning seemed both clear and disappointing: on a story that the Star Tribune posted about an African-American Santa, the racial hostility in the comments section had become so pervasive that the paper had to put an end to it, out of respect for Jefferson and/or Star Tribune readers. He ended with a sad and sarcastic “Merry Christmas” and sent the tweet into cyberspace.

Overnight and the next morning his tweet went viral and served as the basis for countless news stories with titles such as “Paper Forced to Close Comments On Mall Of America’s First Black Santa Thanks to Racism” (Jezebel); “‘Santa is WHITE. BOYCOTT Mall of America’: Online Racists Are Having a Meltdown over Mall’s Black Santa” (RawStory); “Racists Freak Out Over Black Santa At Mall Of America” (Huffington Post); “Mall of America Hires Its First Black Santa, Racists of the Internet Lose It” (Mic.com), and so on. If you spend any time on social media you get the idea. It was just another confirmation of America’s abysmal race relations.

There’s only one problem: It didn’t happen.

At 1:25 PM the following day Gillespie, after seeing the stories about the scope and nature of the racist backlash the Tribune faced, reversed himself in a follow-up tweet. Instead of “we had to turn off comments,” Gillespie stated that the commenting was never opened for that article in the first place: “Comments were not allowed based on past practice w/stories w/racial elements. Great comments on FB & Instagram, though.”

This raised some questions for me: If the comments had never been opened on the story, then how could there have been a flood of racist comments? Where did that information come from? How many racist comments did the paper actually get? Fewer than a dozen? Hundreds? Thousands? Something didn’t add up about the story, and as a media literacy educator and journalist I felt it was important to understand the genesis of this story.

It can serve as an object lesson and help the public understand the role of confirmation bias, unwarranted assumptions, and failure to apply skepticism. In this era of attacks on “fake news” it’s important to distinguish intentional misinformation from what might be simply a series of mistakes and assumptions.

While I have no doubt that the Tribune story on Jefferson would likely have been the target of some racist comments at some point, the fact remains that the main point of Gillespie’s tweet was false: the Tribune had not in fact been forced to shut down the comments on its piece about the Mall of America’s black Santa because of a deluge of racist comments. That false information was the centerpiece of the subsequent stories about the incident.

The idea that some might be upset about the topic is plausible; after all, the question of a black Santa had come up a few times in the news and social media (perhaps most notably Fox News host Megyn Kelly’s infamous incredulity at the notion three years earlier–which she later described as an offhand jest). Racist, sexist, and otherwise obnoxious comments are common in the comments sections of many articles online, on any number of subjects, and are not generally newsworthy. There were of course some racists and trolls commenting on the secondary stories about the Star Tribune‘s shutting down its comment section due to racist outrage (RawStory collected about a dozen drawn from social media), but the fact remains that the incident at the center of the controversy that spawned outrage across social media simply did not happen.

A few journalists added clarifications and corrections to the story after reading Gillespie’s second tweet or being contacted by him. The Huffington Post, for example, added at the bottom of its story: “CLARIFICATION: This story has been updated to reflect that the Minneapolis Star Tribune‘s comment section was turned off when the story was published, not in response to negative comments.” But most journalists didn’t, and as of this writing nearly two million news articles still give a misleading take on the incident.

The secondary news reports could not, of course, quote from the original non-existent rage-filled comments section in the Star Tribune, so they began quoting from their own comments sections and those of other news media. This became a self-fulfilling prophecy, wherein the worst comments from hundreds of blogs and websites were then selected and quoted, generating another round of comments. Many people saw racist comments about the story and assumed that they had been taken from the Star Tribune page at the center of the story, and couldn’t be sure if they were responding to the original outrage or the secondary outrage generated by the first outrage. As with those drawn to see and celebrate Jefferson as the mall’s first black Santa, this was also a self-selected group of people–namely those who were attracted to a racially charged headline and had some emotional stake in the controversy, enough to read about it and comment on it.

Unpacking the Reporting

I contacted Gillespie and he kindly clarified what happened and how his tweet inadvertently caused some of the world’s most prominent news organizations to report on an ugly racial incident that never occurred.

Gillespie–whose beat is the opinion and editorial page–was at home on the evening of December 1 and decided to peruse his newspaper’s website. He saw the story about Larry Jefferson and clicked on it to see if the black Santa story was getting any comments. He noticed that there were no comments at all and assumed that the Star Tribune‘s web moderators had shut them off due to inflammatory posts, as had happened occasionally on previous stories.

Understandably irritated and dismayed, he tweeted about it and went to bed, thinking no more of it. The next day he went into work and a colleague noticed that his tweet had been widely shared (his most shared post on social media ever) and asked him about it. Gillespie then spoke with the newspaper’s web moderators, who informed him that the comments had never been turned on for that particular post–a standing practice at the newspaper for articles on potentially sensitive subjects such as race and politics, one also applied to many other topics that a moderator thinks might generate counterproductive comments.

“I didn’t know why the comments were off,” he told me. “In this case I assumed we followed past practices” about removing inflammatory comments. It was a not-unreasonable assumption that in this case just happened to be wrong. Gillespie noted during our conversation that a then-breaking Star Tribune story about the death of a 2-year-old girl at a St. Paul foster home also had its commenting section disabled–presumably not in anticipation of a deluge of racist or hateful comments.

“People thought–and I can see why, since I have the title of editorial page editor–that I must know what I’m talking about [in terms of web moderation],” Gillespie said. He was commenting on a topic about his newspaper but outside his purview, and to many his tweet was interpreted as an official statement and explanation of why comments did not appear on the black Santa story.

When Gillespie realized that many (at that time dozens and, ultimately, millions) of news stories were (wrongly) reporting that the Star Tribune‘s comments section had been shut down in response to racist comments based solely on his (admittedly premature and poorly phrased) Dec. 1 tweet, he tried to get in touch with some of the journalists to correct the record (hence the Huffington Post clarification), but by that time the story had gone viral and the ship of fools had sailed. The best he could do was issue a second tweet trying to clarify the situation, which he did.

“I can see why people would jump to the conclusion they did,” he told me. Gillespie is apologetic and accepts responsibility for his role in creating the black Santa outrage story, and it seems clear that his tweet was not intended as an attempt at race-baiting for clicks.

In the spirit of Christmas maybe one lesson to take from this case is charity. Instead of assuming the worst about someone or their intentions, give them the benefit of the doubt. Assuming the worst about other people runs all through this story. Gillespie assumed that racists deluged his newspaper with racist hate, as did the public. The web moderator(s) at the Star Tribune who chose not to open the comments on the Santa story may (or may not) have assumed that they were pre-empting a deluge of racism (which may or may not have in fact followed). I myself was assumed to have unsavory and ulterior motives for even asking journalistic questions about this incident (a topic I’ll cover next week).

In the end there are no villains here (except for the relative handful of racists and trolls who predictably commented on the secondary stories). What happened was the product of a series of understandable misunderstandings and mistakes, fueled in part by confirmation bias and amplified by the digital age.

The Good News

Gillespie and I agreed that this is, when fact and fiction are separated, a good-news story. As noted, Gillespie had initially assumed that the newspaper’s moderators were inundated with hostile and racist comments and finally turned the comments off after wading through the flood of hateful garbage to find and approve the positive ones. He need not have feared, because exactly the opposite occurred: Gillespie said that the Star Tribune was instead flooded with positive comments applauding Jefferson as the Mall of America’s first black Santa (he referenced this in his Dec. 2 tweet). The tiny minority of nasty comments were drowned out by holiday cheer and goodwill toward men–of any color. He echoed Jefferson, who in a December 9 NPR interview said that the racist comments he heard were “only a small percentage” of the reaction, and that he was overwhelmed by support from the community.

The fact that Jefferson was bombarded by love and support from the general public (and most whites) should offer hope and comfort. Gillespie said that he had expected people to attack and criticize the Mall of America for succumbing to political correctness, but the imagined hordes of white nationalists never appeared. A few anonymous cranks and racists complained on social media posts from the safety of their keyboards, but there was very little backlash–and certainly nothing resembling what the sensational headlines originally suggested.

The real tragedy is what was done to Larry Jefferson, whose role as the Mall of America’s first black Santa has been tainted by this social media-created controversy. Instead of being remembered for, as he said, bringing “hope, love, and peace to girls and boys,” he will forever be known for enduring a (fictional) deluge of bilious racist hatred. The true story of Jefferson’s stint as Santa is diametrically the opposite of what most people believe: He was greeted warmly and embraced by people of all colors and faiths as the Mall of America’s first black Santa.

Some may try to justify their coverage of the story by saying that even though in this particular case Jefferson was not in fact inundated with racist hate, it still symbolizes a very real problem and was therefore worthy of reporting if it raised awareness of the issue. The Trump administration adopted this tactic earlier this week when the President promoted discredited anti-Muslim videos via social media; his spokeswoman Sarah Huckabee Sanders acknowledged that at least some of the hateful videos Trump shared were bogus (and did not happen as portrayed and described), but insisted that their truth or falsity was irrelevant because they supported a “larger truth”–that Islam is a threat to the country’s security: “I’m not talking about the nature of the video,” she told reporters. “I think you’re focusing on the wrong thing. The threat is real, and that’s what the President is talking about.”

This disregard for truth has been a prominent theme in the Trump administration. Yes, some tiny minority of Muslims are terrorists; no one denies that, but that does not legitimize the sharing of bogus information as examples supposedly illustrating the problem. Similarly, yes, some tiny minority of Americans took exception to Jefferson as a black Santa, but that does not legitimize sharing false information about how a newspaper had to shut down its comments because of racist rage. There are enough real-life examples of hatred and intolerance that we need not invent new ones.

In this Grinchian and cynical ends-justifies-the-means worldview, there is no such thing as good news and the import of every event is determined by how it can be used to promote a given narrative or social agenda–truth be damned.

I understand that “Black Santa Warmly Welcomed by Virtually Everyone” isn’t a headline that any news organization is going to see as newsworthy or eagerly promote, nor would it go viral. But it’s the truth.

Merry Christmas.

 

This piece originally appeared on my Center for Inquiry blog in 2017; you can see it HERE! 

 

 

Dec 072019
 

The issue of racism in Canada was recently brought into sharp focus when, shortly before the Canadian election, photos and videos of Prime Minister Justin Trudeau in blackface and brownface emerged. They had been taken on at least three occasions in the 1990s and early 2000s. Trudeau—widely praised for his socially progressive agendas—quickly apologized and promised to do better. 

Trudeau’s repeated use of blackface (and his subsequent re-election despite public knowledge of it) angered many and left Canadians wondering just how common racism is in their country. Veteran hockey commentator Don Cherry was recently fired by Sportsnet following contentious comments about immigrants. The broadcaster issued a statement that “Following further discussions with Don Cherry after Saturday night’s broadcast, it has been decided that it is the right time for him to immediately step down. During the broadcast, he made divisive remarks that do not represent our values or what we stand for.” 

Americans—and the Trump administration specifically—are often characterized as inherently racist; New York Times writer Brent Staples, for example, wrote on Twitter (on January 12, 2018) that “Racism and xenophobia are as American as apple pie.” Whether racism and xenophobia are as Canadian as poutine is of course another question. Earlier this year, on May 21, 2019, Canadian news organization Global News reported on a survey that seemed to shed light on that question. The article was titled “Nearly 50% of Canadians Think Racist Thoughts Are Normal: Ipsos poll.” 

The article began, “Almost half of Canadians will admit to having racist thoughts, and more feel comfortable expressing them today than in years past, a new Ipsos poll reveals … The poll, conducted exclusively for Global News, found that 47 per cent of respondents thought racism was a serious problem in the country, down from 69 per cent in 1992. More than three-quarters of respondents said they were not racist, but many acknowledged having racist thoughts they did not share with others. (All of the Ipsos poll data is available online.) ‘We found that (almost) 50 per cent of Canadians believe it’s OK and actually normal to have racist thoughts,’ said Sean Simpson, vice-president of Ipsos Public Affairs.” 

Having researched and written about misleading polls and news articles on many topics, including hatred of transgendered people (see, for example, my article “Do 60% Of People Misgender Trans People To Insult Them?”); Holocaust denial (see, for example, my article “Holocaust Denial Headlines: Hatred, Ignorance, Or Innumeracy?”); and even whether or not the public believes that Native Americans exist, something about that headline struck me as off. I didn’t necessarily doubt the statistic—racism is a serious problem in Canada, America, and elsewhere—but my journalistic skeptical sense urged a closer look. The poll was conducted between April 8 and 10, 2019, sampling 1,002 Canadian adults, with a reported margin of error of ±3.5 percentage points. 
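For readers curious where figures like that come from, here is a minimal sketch of the textbook margin-of-error calculation for a simple random sample. It is an illustration only, not Ipsos’s actual method: Ipsos surveys online panels and reports a “credibility interval,” which runs slightly wider than the classic estimate below.

```python
# A minimal, illustrative sketch of the classic 95% margin-of-error formula
# for a polled proportion. Ipsos uses a "credibility interval" for its online
# panels, so its published figure (+/-3.5 points) is slightly wider than this.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(1002):.1%}")  # prints ~3.1% (worst case, p = 0.5)
```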

I clicked through the link to the original poll by the Ipsos organization. Their About Us page explains that “In our world of rapid change, the need for reliable information to make confident decisions has never been greater. At Ipsos we believe our clients need more than a data supplier, they need a partner who can produce accurate and relevant information and turn it into actionable truth.” 

The Ipsos page referencing the poll displayed a large headline “Nearly half (47%) of Canadians think racism is a serious problem in Canada today, down 22 points since 1992 (69%).” Just below this, in much smaller size, was the line “Even so, almost half (49%) admit to having racist thoughts.” 

That seemed to provide a clue, as of course 49 percent may be the “nearly half” referred to in the Global News headline, but I noticed that the wording had changed: The headline stated that about half of “Canadians think racist thoughts are normal”—not that half of Canadians say they have racist thoughts. Just because you acknowledge having a racist thought does not logically mean that you think it’s “normal” or acceptable to do so; plenty of surveys and polls ask about socially and morally unacceptable behavior, ranging from infidelity to murder (a 2018 survey in Japan found that more than one in four Japanese workers admitted that the thought of killing their boss had crossed their mind on at least one occasion). 

But I know that sometimes headlines are misleading, and I assumed that the statistic was contained in the poll. Many people of course don’t read past the headline; of those who do read the full article, very few will bother to click on the link to the polling organization’s data page; even fewer will actually open the original report; of those who do, most will read only the executive summary or highlights section. Vanishingly few people—if anyone—will read the full report. 

This is understandable, as audiences naturally assume that a journalist, news organization, or pollster is accurately reporting the results of a poll or survey. If a news headline says that 40 percent of hockey fans drink beer during games or 85 percent of airplane pilots have college degrees, we assume that’s what the survey or research found. As I discuss in my media literacy book Media Mythmakers: How Journalists, Activists, and Advertisers Mislead Us, that’s not always the case. 

Like a game of Telephone, each step away from the original findings may change (usually toward simplifying and/or sensationalizing) that information. Whether intentionally or accidentally, errors can creep in every time the data are explained, summarized, or “clarified.” Usually these changes are minor and go unnoticed, because of course a person would have to check the original report to catch any discrepancies. But now and then another journalist, pedant, or researcher will take the time to check and see that something’s amiss.

Because the poll is available online, I read through it. There were many questions about many facets of racism among the Canadian respondents, but I found no reference whatsoever to the statistic mentioned in the headline. I checked again and still found nothing. 

I reached out to the author of the piece, Global News Senior National Online Journalist Josh K. Elliott, and the author of the report, Sean Simpson, the Ipsos vice-president of public affairs, asking for clarifications, including which specific question item was referred to in the article. 

I wrote, in part:

I read through the original Ipsos report but was unable to find the poll results you referenced in the headline, and that Sean Simpson references in your quote. I did a document search for the specific term used, “normal,” assuming that it would appear in the survey question. I found three matches, on pages 3, 19, and 20, but in none of the cases was I able to find results suggesting that “nearly 50% of Canadians think racist thoughts are normal.”

I have been unable to find that data anywhere in the Ipsos report. The closest I could find was the statistic that half of Canadians say they sometimes have racist thoughts (Question 7). But of course just because you acknowledge having racist thoughts does not logically mean that you think it’s “normal” or acceptable to do so; plenty of surveys and polls ask about socially and morally unacceptable behavior, ranging from infidelity to murder. Question wording is of course critically important in interpreting polls and surveys, and I’m concerned that “having racist thought” was mistakenly mistranslated to “think it’s normal to have racist thought” in your piece. If that statistic appears in the Ipsos report cited, please direct me to it, either by question or page number. If that statistic does not appear in the report, please clarify where it came from. Thank you.  

After repeated inquiries, I was informed that Mr. Elliott no longer worked at that desk, but I got a response from Drew Hasselback, a copy editor at Global News (and, eventually, a cursory and seemingly reluctant reply from Mr. Simpson).

I was directed to four questions that they said were used as the basis for the headline. I looked again at each of them.

• The first, Question 7.6, asks “To what extent do you agree or disagree that racism is a terrible thing?” In response, nearly nine in ten Canadians (88 percent) agree that racism is terrible. It didn’t speak to whether Canadians think racist thoughts are normal; if anything, it seemed to contradict the claim. 

• The second, Question 7.5, asked “To what extent do you agree or disagree with the following: I can confidently say that I am not racist.” Of those polled, over three quarters (78 percent) agree that they can confidently say they’re not racist. Again, this hardly suggests that racism is considered normal among the respondents, and it contradicts the reporting and the headline associated with it.

Frankly, I’m surprised the number is that high. Why might a minority of otherwise non-racist Canadians not be able to “confidently” say that they are not racist? In part because there is a presumption that everyone is racist, whether they realize it or not. This is a widely held view among many, especially progressives and liberals (it’s so common, in fact, that it serves as the basis for Question 7 in the poll). In other words, even if they sincerely and truly don’t consider themselves racist and have no racist thoughts ever, they would be reluctant to go so far as to state categorically and confidently to others that they are not at all racist. (You see the same issue with polls asking women if they would use the word beautiful to describe themselves; very few do, though they will call themselves pretty, attractive, etc. Doing so is seen as vain, just as stating “I’m confident I’m not racist” would be considered by many to be boasting or virtue signaling.)

• The third was Question 7.3, which asks to what extent people agree or disagree with the statement, “While I sometimes think racist thoughts, I wouldn’t talk about them in public.” This, once again, does not support the news headline. It is vitally important when interpreting polls and surveys to parse out the precise question asked. Note that it is a compound question framed in a very specific way (asking about whether one would express a thought in public); the question was not “Do you sometimes think racist thoughts?” But even if it were, you cannot generalize “people sometimes do X” to “it’s normal for people to do X.” Merriam-Webster, for example, defines normal as “average” or “a widespread or usual practice.” Thus, a poll or survey question trying to capture the incidence of a normal behavior or event would use the word usually instead of sometimes

• Finally, we come to Question 7.1, the only question that specifically uses the word normal, asking whether Canadians agree that “It’s perfectly normal to be prejudiced against people of other races.” 

As I noted, this question and its response do not accurately capture the question of whether or not “X% of Canadians think racist thoughts are normal” (as the Global News headline reads), but even if it did, we find that the headline is still wrong. From this statistic alone, the correct headline would be “22% of Canadians think racist thoughts are normal”—which is less than half the number reported in the headline. About one in five whites and one in three minorities said that it’s normal to be prejudiced against people of other races, as did one in four men and one in five women. Instead of nearly half of Canadians thinking racism is normal, nearly 70 percent of Canadians disagreed that racial prejudice is normal

The Ipsos poll itself seems well researched and sound, and it contains important information. Unfortunately, its conclusions got mangled along the way. The question is not whether specific Canadians (such as Trudeau or Cherry) are racist but instead whether or not those views are widely held; it’s the difference between anecdote and data. Polls and surveys can provide important information about the public’s beliefs. But to be valid, they must be based on sound methodologies, and media-literate news consumers should always look for information about the sample size, representativeness of the population, whether the participants were random or self-selected, and so on. And, when possible, read the original research data. News reports, such as the one I’ve focused on here, leave the false impression that racism is more widespread and socially acceptable than it really is. Racism is a serious issue, and understanding its nature is vital to stemming it; indeed, as Ipsos notes, “In our world of rapid change, the need for reliable information to make confident decisions has never been greater.”

 

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

This article has been adapted from my Center for Inquiry blog, available HERE. 

Dec 042019
 

Last month a Maine man made national news for claiming to have found tampered Halloween candy. He posted on social media that he had found a needle in candy his son had bitten into. Police investigated and determined that he had lied, hoaxing the whole thing (probably for attention).

He has now been charged, according to news reports.

 

 

 

 

Dec 022019
 

When families of missing people are desperate, they often listen to psychics.

Here’s a new article on the topic: 

“The psychic, Juanita Szafranski, led a search effort focused on a five-mile area surrounding the East Rock neighborhood. Peter Recchia, 59, went missing seven weeks ago, and was last seen in the area where Szafranski led the search. Szafranski says she got a strong feeling Peter, who suffers from mental illness, may no longer be alive, and she gave police specific areas to search in the coming days.”

I hope her information is accurate (though it’s obvious to search near where he was last seen), but psychic detective success rates are at chance levels. I’ll keep tabs on this case to see what comes of it. I’ve previously followed real-time searches for missing persons, such as in the Holly Bobo case; we discussed it on a recent episode of Squaring the Strange.

 

 

Nov 232019
 

In a recent episode of Squaring the Strange we look back at pop culture aspects of the Satanic Panic of the 1980s and 1990s, including Dungeons & Dragons, Geraldo Rivera, heavy metal, “Satanic Yoda,” and how technology influenced the panic…You can find it HERE. 

Oct 252019
 

In case you missed it, our recent episode of Squaring the Strange has all sorts of weirdness!

 

From Celestia:

 Ben recounts his latest TV appearance and chupacabra follow-up. The AlienStock / Storm Area 51 thing happened, or tried to happen. And two movies open this week that are unsettling audiences due to clown content–one of the films contains Ben! Lastly, we take a cursory look at a tabloid story that mirrors the film Orphan. Then, for the last half of the episode, Ben takes us on a deep dive into the Ica Stones, a hoax wrapped in a riddle tucked into a quaint little museum-shrine in Peru. What impressed a doctor so much that he gave up medicine to collect these peculiar little tchotchkes, believing them to be proof of aliens, or a Biblical young earth, or both?

For those who love skeptical ear candy, you can listen HERE!

Oct 222019
 

I’m quoted in a new article on the true stories behind many classic horror films… 

How do you make a horror tale scarier? Just say it’s “based on a true story.” That’s a technique book publishers and movie producers have been using for decades, whether or not the supposedly “true story” adds up. Some movies are inspired by what might be called “real hoaxes”—made-up stories that people have believed. Others draw inspiration from unexplained behavior or folklore. Read about how the story of a troubled teen inspired a movie about demon possession, how a series of hoaxes launched a major movie franchise and how centuries-old folklore about disease gave way to a classic Hollywood villain…

The Amityville Horror tale raised the profile of Ed and Lorraine Warren, a couple who got involved with the Amityville story and helped promote it. “They set themselves up as psychics and clairvoyants who investigate ghosts and hauntings,” says Benjamin Radford, deputy editor of Skeptical Inquirer magazine. “They would hear about stories either in the news or just sort of through the grapevine, and they would sort of introduce themselves into the story.” But more on them later.

You can read the rest HERE!

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

Oct 182019
 

Earlier this year a West Virginia mother called 911 to report that an Arab man tried to abduct her five-year-old daughter at a mall. Police and mall security arrested the man (an engineer from Egypt who spoke little English), but surveillance footage showed no abduction attempt, nor even any interaction between the man and the girl. The mother was arrested for making a false report, and her trial date has now been set.

You can read my original article on it below and HERE. 

Social and news media have unfortunately seen a rise in two distinct toxic phenomena over the past year or so.

The first is a steady stream of white women calling police on minorities minding their own business in public spaces, in dozens of cases including a Starbucks, a public park, swimming pools, street corners, and the common area of university housing.

The second is a series of false rumors of child abductions, both across the United States and around the world; for more on this see my piece “Social Media-Fueled Child Abduction Rumors Lead to Killings” in the January/February 2019 issue of Skeptical Inquirer.

For example, in June 2018, Joshua Hatley, a Kansas man, posted a message on Facebook claiming that a black woman had attempted to abduct his child at a local Walmart. Police first heard about the incident not from the panicked mother or father but from concerned citizens who had shared the urgent warning on Facebook over 11,000 times and wanted to know if their children were also in danger. Police investigated the attempted abduction and reviewed the store’s surveillance camera footage but were unable to find any attempted abduction at all. Detectives showed the footage to Hatley, who eventually admitted that he hadn’t personally witnessed the incident—it had been reported to him by his sister-in-law. As more and more questions arose, police became concerned about the woman photographed and publicly accused on social media of trying to abduct a child. For more on this, see my blog on the topic.

Another recent incident with lessons about eyewitness testimony, social media rumor, and racial bias has surfaced. Santana Renee Adams, 24, a mother in Barboursville, West Virginia, called 911 to report that an Egyptian man tried to abduct her five-year-old daughter at a mall.

According to a news story,

“WSAZ reported that 54-year-old Mohamed Fathy Hussein Zayan, of Alexandria, Egypt, tried to grab the young girl by her hair while at an Old Navy store inside the mall at around 6 pm on Monday. The girl ‘dropped to the floor with the male still pulling her away,’ prompting the child’s mother to pull out a handgun and warning Zayan to let her daughter go. Zayan subsequently let go of the girl and ran out of the store into the mall. The Barboursville Police, who were called to the scene following the incident, said that a short time later, deputies and mall security spotted the 54-year-old walking near the food court area in the mall. After the mother confirmed that he was the man who tried to nab her daughter, the deputies moved in and arrested him.”

 

Police, however, could find no witnesses to the incident, and there were inconsistencies in Adams’s statements when they interviewed her a second time. After being confronted with these inconsistencies, Adams conceded that what she had interpreted as an attempted abduction may in fact have been a cultural misunderstanding: the man had perhaps simply touched her daughter on the head—instead of grabbing it and throwing her to the floor, as she’d described—in a display that, while inappropriate, was neither an assault nor an attempt at an abduction.

However, that, too, was a lie. Zayan’s attorney, Michelle Protzman, reviewed security footage obtained from Old Navy and found “absolutely no evidence that Zayan touched the girl.” In fact, Zayan and the girl weren’t even near each other in the store—and furthermore, the mother was not seen pulling out a gun, as she’d claimed. Video surveillance showed Adams and Zayan walking out of the store, calm and seemingly unperturbed, about half a minute apart and in opposite directions. Soon after that, however, Adams apparently—and retroactively—decided that the foreign man had (a few minutes earlier) tried to abduct her daughter, and she called police.

The accused man is an engineer employed at a local construction job and speaks little English. After Zayan’s mug shot and the accusations against him were shared widely on news and social media, the charges were eventually dropped. “Instead of caring about facts and caring about evidence and the truth, I think the court of public opinion and social media don’t care about innocent until proven guilty and everyone jumps right on as soon as somebody makes an accusation,” Protzman said.

So we have an innocent Muslim man who never even touched the girl being falsely accused of an attempted kidnapping by the girl’s mother. Why would anyone—especially a mother—make up a false accusation of attempted abduction against a total stranger?

It’s not clear; the motivation could be racism, a misunderstanding caused by drugs or mental illness, or maybe just a desire to get attention and sympathy by casting herself as a heroic mom bravely brandishing a gun in defending her child from a stranger abduction (on social media she was hailed as a hero and as an example of why guns are needed when in public).

Whatever the motivation, last week Adams was arrested for filing a false report, a misdemeanor. Most people who make false accusations are not charged; of those who are charged, most are dismissed (the Jussie Smollett case being a recent example); and of those that are not dismissed, the penalties are usually very light, such as a fine or probation.

Though false accusations (of all crimes) are rare, they are especially egregious when they are used as a weapon against minorities, and a measure of skepticism is always important when facts don’t add up.

 

Oct 022019
 

I was recently on “Expedition Unknown” with Josh Gates on the Discovery Channel, talking about my chupacabra research in Puerto Rico. Watch for dead fowl, vampire legends, and roaches!

You can find it HERE! 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

Sep 302019
 

There are many scientific and skeptical objections to astrology, including the fact that the constellations have shifted since astrology was devised, that many real-world tests have failed to find statistically meaningful patterns in the lives of people born under certain zodiac signs, and that there are multiple—and in fact contradictory—versions of astrology that adherents fervently believe. For more information see the Skeptoid podcast, the Skeptics Dictionary, and of course many articles for Skeptical Inquirer.

But what may be even more disturbing is astrology’s close similarity to racism. The basic premise of astrology is that people who were born at certain times and places share specific, distinguishing personality characteristics. Libras like myself, for example, are said to be diplomatic, refined, idealistic, and sociable; Cancers are emotional, sensitive, and domestic; those born under the Taurus sign are stubborn, analytical, and methodical, and so on. Hundreds of millions of people read their daily horoscopes, or at least know something about their sun signs.

Astrology and racism share many of the same ideas. For one thing, in both cases a person is being judged by factors beyond their control. Just as a person has no control over their race or skin color, they also have no control over when and where they were born. Both astrology and racial stereotypes are based on a framework of belief that basically says, “Without even meeting you, I believe something about you: I can expect this particular sort of behavior or trait (stubbornness, laziness, arrogance, etc.) from members of this particular group of people (Jews, blacks, Aries, Pisces, etc.)”

When an astrologer finds out a person’s astrological sign, he or she will bring to that experience a pre-existing list of assumptions (prejudices) about that person’s behavior, personality, and character. In both cases, the prejudices will cause people to seek out and confirm their expectations. Racists will look for examples of characteristics and behaviors in the groups they dislike, and astrologers will look for the personality traits that they believe the person will exhibit. Since people have complex personalities (all of us are lazy some of the time, caring at other times, etc.), both racists and astrologers will find evidence confirming their beliefs.

As Carl Sagan wrote, “It’s like racism or sexism: you have twelve little pigeonholes, and as soon as you type someone as a member of that particular group… you know his characteristics. It saves you the effort of getting to know him individually.” Others, of course, have noted the same thing, including The Friendly Atheist blog.

Astrology has long been used to discriminate against people. According to a job listing in Wuhan, China, a language training company there sought qualified applicants—as long as they’re not Scorpios or Virgos. The Toronto Sun reported that Xia, a spokeswoman for the company, said that in her experience Scorpios and Virgos are often “feisty and critical.” Xia said, “I hired people with those two star signs before, and they either liked quarrelling with colleagues or they could not do the job for long.” She preferred potential applicants who were Capricorns, Libras, and Pisces. To some it may seem like a bad joke, but it’s not funny to qualified applicants desperate for a job who get turned away because of the company’s credence in astrology. In 2009 an Austrian insurance company advertised, “We are looking for people over 20 for part-time jobs in sales and management with the following star signs: Capricorn, Taurus, Aquarius, Aries and Leo.”

Of course, astrologers are not necessarily racists. But the belief systems underlying both viewpoints are identical: prejudging individuals based on general beliefs about a group. If we do not assume that African-Americans are lazy, Arabs are terrorists, or Asians are scholastic geniuses, why would we assume that Cancers are emotional, Aries are born leaders, or Geminis are optimistic non-conformists? People should be judged as unique, individual persons, not based on what arbitrary group they belong to. To paraphrase Martin Luther King, Jr., a person should be judged not by the color of their skin—nor the date and time of their birth—but by the content of their character.

I wrote about this topic for Discovery News in 2011, and it caused quite a stir. It generated a then-new record for the number of comments (I even received a t-shirt from my editors in honor of the occasion; see below).

In honor of over 100 comments, most of them cranky.

Astrologers, as you can imagine, were not happy with me either. One responded:

The deputy editor of the Skeptical Inquirer, Benjamin Radford penned an article entitled “How Astrology is like Racism.” He justifies this claim by arguing that people use astrology to classify individuals according to stereotypes based on their Star Sign (Sun Sign) and therefore “a person is being judged by factors beyond their control” … Radford’s claim rests on a belief that people are being judged. However, modern astrologers don’t consider signs, planets or even horoscopes to be ‘good’ or ‘bad’. Certainly some horoscopes are more challenging than others, but this can drive a person towards a successful and fulfilling life.

After a weak-sauce and largely strawmanned rebuttal (sample: “Astrologers do not make moral judgements or assumptions about people based on their birth data”—I never claimed they did), the article turned ad hominem:

“And who is Benjamin Radford? The deputy editor of Skeptical Inquirer self-styled ‘science’ magazine and a so-called ‘Research Fellow’ with the Committee for Skeptical Inquiry (formerly CSICOP). The Skeptical Inquirer is simply not scientific and copying the term ‘research fellow’ could seem like a ploy to make this vigilante operation look more ‘sciency’ and their eyes respectable. A research fellow is an academic research position at a university and CSI is not an academic body or even a research institution. CSI abandoned all attempts at scientific research after their disastrous investigation into the work of Michel Gauquelin that ended up supporting astrology. They later wisely dropped the word ‘scientific’ from their name (previously CSICOP) to become Committee for Skeptical Investigation. So their focus is not on critical thinking and research, but on preaching and promoting their beliefs. Is it appropriate for a senior member of a predominantly male and almost exclusively white sceptical group (CSI/CSICOP) to use the “racist card” to justify his personal beliefs? This seems hypocritical when the sceptical movement has been widely criticised for being sexist and patriarchal. Distancing themselves from this type of unfounded nonsense would help to clean up their collective act and from the author a retraction and an apology to all those who have suffered and still suffer from racial abuse for trying to hijack and downgrade racism.”

Yeah, I think I hit a nerve.

But all the hand waving and goalpost moving in the world will not erase the parallels between racism and astrology.

 

You can see the original article on the CFI website HERE. 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

Sep 252019
 

The new episode of Squaring the Strange is out! First we talk with Dr. Hans House about infectious diseases (flu, ebola, measles, etc.), as well as how to deal with vaccine deniers. Then we’re joined by Kenny Biddle to talk about faked credentials, and I talk about an undercover investigation I did exposing a Canadian college professor who faked his diploma!

You can listen HERE! 

Sep 212019
 

How a fictional missing girl led to a massive police search that terrified a community. My recent article takes a closer look at a child abduction rumor and panic in England…

 

A mother recently claimed in a Facebook post that an eight-year-old girl had been abducted by two men in a white van in Felling, England, and asked those in her neighborhood to be on alert. It understandably caused alarm—despite the fact that the victim did not exist. 

Rumors about the kidnapping were widely shared on social media; for example, one post began with the requisite plea for viral status: “XXXXX URGENT SHARE SHARE SHARE XXXXX: A 7-year-old girl has been kidnap from outside of the church next to Carl gills in Felling, Gateshead. Police have helicopters and officers in the area. She is wearing black leggings and pink purple top hair is blonde brown. She has been taken by two men in a white van. Registration is for a black car but it is a white van using false plates  AP04 USH” 

 

Fig 1

Indeed, thirty police officers swarmed the area in cars and helicopters looking for the missing girl but found nothing. Police spent hours reviewing CCTV footage from the area—England is the most heavily surveilled country in the world, with four to six million CCTV cameras in public places—and saw no abduction or even any attempted abduction. Even more puzzling, no children were reported missing from the area.

While police investigated, social media buzzed with thoughts and prayers for the girl and her (nonexistent) distraught family, with hundreds of sentiments such as “Hope she’s found soon” and “I hate this world and the horrid people who live in it.” 

Fig 2

Tracing the Social Media Abduction

The incident is an interesting one from folkloric and investigational points of view and merits a brief analysis. Reporting from The ChronicleLive website is especially useful, as it provides a rough timeline of its coverage:

  • “There is a police presence in the area and widespread reports circulating on social media about a young girl being ‘kidnapped’ in a van outside a church near Coldwell Street.”
  • “One man told us: ‘I’ve been told they’re looking for an eight year old girl who has gone missing.’ Most people don’t seem to know what has happened but many reference a post on Facebook which claims a girl was kidnapped from near the Felling Methodist Church… A mother finishing her shopping said: ‘We don’t know what’s happened, just hearsay about a little girl. The police seem to be talking to people.’” 
  • “The police then released a brief statement: ‘Police are carrying out enquiries after reports of a suspicious incident in Gateshead. At about 8.15pm police received a report of a girl being forced into a white van on Coldwell Street, Felling.’”
  • “Duty Inspector Pete Dedes spoke to the crowd: ‘We have 30 officers actively looking behind the scenes, more doing community reassurance. We are trying to get to the bottom of this. At present there is not a child reported missing in the Northumbria Police area’ Minutes later the police clarify the vehicle they’re searching for, in response to a query: he said there had been rumours of a black car, but that their report had been about a white van.” 
  • “Gina Willison says police have taken CCTV from her newsagents’ shop. She’s one of the many people who have been really shaken up by this. She said: ‘I’ve got a son myself and I texted his dad saying Don’t even let him into the garden—feels like you’re not safe on your own doorstep. On the one hand I hope to god it’s a hoax, but on the other hand if it is a hoax it’s sick.’”
  • “Rumours are flying at the scene and on social media; there are some people online claiming they know someone who saw what happened, others say they’ve heard that the girl who was reported to be ‘abducted’ has been found. It’s a panicky atmosphere at the scene and people are desperate to know what has happened, but no one does yet. We’ll update you as soon as we know anything concrete and official, but in the meantime an extensive police investigation remains ongoing and it is important to continue to note that no child has been confirmed as missing at this stage. We are reporting only what has been confirmed from official sources and will continue to do so.” 

 

A Closer Look

The mother who posted the item, Angela Wilson, eventually admitted to police that she didn’t know if anyone had been abducted or not but had merely repeated what her young daughter told her: “My child was outside playing with three friends, they all thought they saw, or heard, an abduction…. She didn’t see it herself, but that she heard the screaming.” 

Thus screaming was interpreted as an abduction. Shouting and yelling children are of course common in playgrounds, parks, and anywhere else children gather to play, and the mere sound of a scream—if indeed it was a scream—would not necessarily indicate that anything bad was going on. A shriek of terror can sound exactly like a shriek of glee. But Wilson’s daughter and her friends duly mentioned it to her, and she in turn took to social media to warn others. 

This solves one mystery but raises additional questions: If the children didn’t actually see any abduction—or even anything that could have been mistaken for an abduction such as perhaps a man putting his tantruming child into a vehicle—then how could they (or anyone else) have described the girl or her abductors and their vehicle? A shriek may or may not indicate an abduction, but it doesn’t offer any description. 

It’s not clear who introduced the “black leggings and pink purple top” and “blonde brown hair” descriptions, much less the license plate number of the abducting vehicle. Thankfully it seems that no actual specific person was falsely accused in this incident—though it does happen; more on that later—but what if that license plate happened to be registered to a van (especially a white or black one) and a mob surrounded its presumed pedophilic, child-snatching occupants? 

While some people (including journalists) recognized that much of what was circulating was unverified rumor, misleading news headlines seemed to officially confirm that a kidnapping had indeed occurred, regardless of whether the particulars were correct. For example, one headline read “Police Confirm Report of ‘Girl Being Forced Into Van’ On Street,” which surely led many readers to believe that the abduction they’d been hearing about had been “confirmed.” Yet a closer reading notes merely that the police “confirmed” that they were investigating the incident (which frankly was obvious from the police presence)—not affirming that it happened. 

Fig 3

 

Once the report was determined to be false, a predictable mixture of relief and anger flooded social media. The abduction (seemingly validated by headlines and the very public police presence) had caused considerable alarm in the community, and the news that evil men in vans weren’t lurking in the neighborhood to snatch children was widely welcomed. But others criticized Wilson for lying or perpetrating a hoax. As is often the case, people fell into the false-choice fallacy of assuming either a) that the abduction was real and had happened more or less as described, or b) that someone was intentionally lying about it or hoaxing. Yet there is a third option—one that’s more common than either of the others: a misperception or misunderstanding, amplified and twisted by social media.

It’s interesting to note that no one involved felt they had done anything wrong. The daughter and her friends were, understandably, not punished for making a false report. Wilson received some criticism on social media for her role but defended her actions: “We were all very worried, obviously I’ve panicked and, thinking I’m doing a good thing, put it on Facebook, because at that time I was convinced it had happened. Imagine if something had happened and I hadn’t done anything. I wouldn’t have been able to forgive myself… I didn’t expect it to get out of hand so quickly… I’m not lying, I would not make that up, it would be absolutely sick for somebody to do that. I’ve got my own children, I was doing what I thought was right, any mother would do the same. We were out in the back lane searching like everybody else.”

Wilson’s error was not in reporting it to police (apparently she herself did not contact the police; someone who saw her post on social media did it for her) but in posting the warning on social media, essentially circumventing proper investigational procedures. Rumors and gossip have always circulated informally, outside official channels of information; the fact that they now appear as typed words on a smartphone or computer screen, instead of being whispered over a backyard fence or a round of beers, lends them undeserved credibility—and gives them an unprecedented potential audience. Had there been an actual abduction, Wilson’s actions would likely have hindered the effort to recover the little girl, because police had to dedicate resources to pursuing spurious reports, rumors, and dead ends.

At each stage people justified their lack of skepticism by erring on the side of caution; even those who had some reason to doubt that anyone had been kidnapped likely took a “better safe than sorry” approach, seeing no harm in sharing the information on social media—and potentially saving a girl’s life if the information was true and the right person happened to see it and be in the right place at the right time. 

But as with all social media posts, people should exercise critical thinking and judgment before sharing information. Some of it may be true, but often seemingly credible information—especially “breaking news updates” about child abductions—is false and in many cases has led to innocent people being accused or even attacked by vigilantes. In 2018 and 2019, dozens of people were killed in India when mobs set upon suspected child abductors they’d been warned about in bogus messages on social media (for more on this, see my article “Social Media–Fueled Child Abduction Rumors Lead to Killings” in the January/February 2019 issue of Skeptical Inquirer magazine). The ChronicleLive newspaper, to its credit, recognized that much of the information being circulated was rumor, unverified, or flat-out false and stated as much.

In classic rumor and moral panic pattern, the specifics of the story constantly changed, sometimes by the minute. Was it a white van or a black sedan? Was it one or two men? Were the men white in a white van, or black in a white van, or white men in a white sedan, or black men in a black sedan? Such crucial details can and do easily become confused: urgency, not accuracy, is the mandate in such circumstances. Fact-checking be damned, we’ve got vans of child-snatching pedophiles to be alert for, and often one scapegoat looks as good as the next. 

 

False Rumors Often Target Minorities

False rumors of child abductions have become increasingly common over the past few years as more and more people turn to social media to share warnings. 

Fig 4

Unfortunately these false warnings often target racial and religious minorities. In June 2018, a Kansas man posted a message on Facebook with information claiming that an African American woman had attempted to abduct his child at a local Walmart. When contacted by police and shown video evidence that nothing had happened, he admitted that he hadn’t personally witnessed the attempted kidnapping he described but was merely reporting what his wife had told him about what his sister-in-law had told her about what she claimed she saw. This game of rumor telephone might be cute in a classroom but had real consequences: photos of the falsely accused African American (and her vehicle) were widely shared on social media, branding her as a potential child abductor (or worse). 

A year later, in April 2019, a mother called 911 to report that an Egyptian man had tried to abduct her five-year-old daughter at a mall in West Virginia. The man, Mohamed Fathy Hussein Zayan, was confronted at gunpoint by police and mall security and arrested. Further investigation and review of video surveillance revealed that Zayan had never even touched the girl, much less tried to abduct her; an innocent Muslim man had been falsely accused of attempted kidnapping by the girl’s Caucasian mother. There are countless other examples, but it’s important to recognize the harm that false reports can do to innocent people, and especially people of color.

These are not isolated incidents. Yet despite common “Stranger Danger” warnings, child abductions are very rare. Not only are children rarely kidnapped, but the vast majority of abductions are carried out by one of the child’s parents, relatives, or a caregiver. The image of white vans carrying men (people of color or otherwise) lurking around town to abduct kids is more of a social Boogeyman than a reality, and false abduction rumors only fuel fear and panic. As always, the best defense against misinformation is skepticism and media literacy.

Adapted from my CFI blog “A Skeptic Reads the Newspaper.”

Aug 122019
 

This is part two of a three-part series. You can read the rest of the series here.

With the recent tragic attacks in El Paso and Dayton, the world once again turned its attention to mass shootings. It’s a subject that has captivated America for years with little progress in understanding the nature of the problem.

The topic of mass shootings is fraught not only with political agendas but also with rampant misinformation. Facile comparisons and snarky memes dominate social media, crowding out objective, evidence-based analysis. This is effective for scoring political points but wholly counterproductive for understanding the nature of the problem and its broader issues.

The public’s perception of mass shootings is heavily influenced by mass media, primarily news media and social media. In my capacity as a media literacy educator (and author of several books on the topic, including Media Mythmakers: How Journalists, Activists, and Advertisers Mislead Us), I have in past articles for the Center for Inquiry attempted to unpack thorny and contentious social issues such as the labeling of terrorists (see, for example, my April 2, 2018, Special Report “Why ‘They’ Aren’t Calling It ‘Terrorism’–A Primer”) and the claim that “the media” isn’t covering certain news stories because of some social or political agenda (see my November 9, 2018, piece “’Why Isn’t The Media Covering This Story?’—Or Are They?”).

In this three-part series I focus on myths about mass shootings in America, as they represent a common concern. My focus is not on the politics of gun control or criminology but instead misinformation and media literacy, specifically as it is spread through news and social media (“the media” in this article). A comprehensive analysis of the phenomenology of mass shootings is beyond the scope of this short article series; my goal is to help separate facts from myths about mass shootings so that the public can better understand the true nature of the problem.

Specifically, in this series I tackle 1) the nature and frequency of mass shootings and 2) the demographics of mass shooters, concluding with 3) the application of media literacy to mass shooting statistics. You can find Part 1 here.

In this part, I examine truths and myths about the demographics of mass shooters. In the previous article I discussed why mass shootings statistics can be contradictory and confusing, especially because of differing definitions of what constitutes a mass shooting (for example numbers of victims involved).

Different Types of Mass Shootings

Just as there are differing definitions of mass shootings, there are different types of mass shootings. One recent analysis by Emma Fridel in the Journal of Interpersonal Violence (discussed in more depth later) identified the three most common: family killings, felony killings, and public mass killings.

 

[Illustration: Fridel’s Journal of Interpersonal Violence study]

 

  • Familicides represent the most common form of mass murder and are principally defined by a close victim-offender relationship. Perpetrators are typically White, middle-aged males who target their spouse or intimate partner, children, and other relatives (Fridel 2017, 3).
  • Felony killings are distinguished by motive. Murder is used to achieve some primary criminal objective, typically involving financial gain. … Due to their general lack of sensationalism, felony killings are not widely publicized despite representing the second largest category of mass murder. Perpetrators of felony mass murders tend to be young black or Hispanic males with extensive criminal records (Fridel 2017, 7).
  • Despite their extreme visibility, public mass killings account for the smallest proportion of all mass murders. Formally, these incidents are defined by attack location. Public mass killers are a heterogeneous group and are frequently delineated into several subtypes. Public murderers are often stereotyped as middle-aged white men who have suffered a series of failures in different areas of life, though some research indicates a disproportionate number of immigrants commit public massacres (Fridel 2017, 5). These public mass shootings are what most people (wrongly) consider typical of mass shootings.

Fridel found that blacks commit more than twice as many felony mass shootings as whites (50.49 percent versus 22.33 percent), so it’s not surprising that blacks are overrepresented in this group:

In most instances, the murders serve to eliminate witnesses of a robbery, drug crime, or gang-related attack. Due to their general lack of sensationalism, felony killings are not widely publicized despite representing the second largest category of mass murder (Krouse & Richardson, 2015). Perpetrators of felony mass murders tend to be young Black or Hispanic males with extensive criminal records (Lankford, 2016b). With frequent ties to the drug trade or gangs, they operate in pairs or small groups in urban areas (Fox & Levin, 2015; Petee et al., 1997). As the primary purpose of murder is to cover up another crime, felony killers leave few survivors and generally claim four or five victims on average, similar to family killers (Duwe, 2007). … As with homicide in general, most victims are the same race as the offender(s). [References can be found in the original article.]

 

[Illustration: table from Fridel 2017]

 

One of the highest-profile recent mass shootings was a felony killing: the murder of a young African American girl, Jazmine Barnes. On December 30, 2018, the seven-year-old Houston girl was killed when a gunman drove up next to the vehicle she was in and opened fire on its occupants. Her mother, LaPorsha Washington, was wounded; Jazmine was struck in the head and died on the way to the hospital. The investigation carried over into the new year as the public and police searched desperately for her killers. The Harris County Sheriff’s Office announced that Eric Black Jr., a twenty-year-old black man, had been arrested for the shooting. Black admitted to being the driver of the car, while Larry Woodruffe—also a black man in his twenties—fired the fatal shots into the vehicle. It was a gang-related drive-by shooting; the pair had mistaken Washington’s vehicle for their intended target.

More than 80 percent of all crime involves victims and perpetrators of the same race. Whites and African Americans of course can and do attack each other, but such cases are the exception, not the rule. As Lois Beckett noted in The Guardian:

A new analysis of 358 mass shootings in America in 2015 found that three-quarters of the victims whose race could be identified were black. Roughly a third of the incidents with known circumstances were drive-by shootings or were identified by law enforcement as gang-related. Another third were sparked by arguments, often among people who were drunk or high. The analysis, conducted by the New York Times with data collected by Reddit’s mass shooting tracker and the Gun Violence Archive, used law enforcement reports on shootings that left four or more people injured or dead in 2015. Few of the incidents resembled the kinds of planned massacres in schools, churches and movie theaters that have attracted intense media and political attention. Instead, the analysis, defined purely by the number of victims injured, revealed that many were part of the broader burden of everyday gun violence on economically struggling neighborhoods. … Many gang-related mass shootings began as fights over small incidents of perceived disrespect.

As noted, truly random violence (involving mass murder or otherwise) is quite rare; shootings almost always emerge from personal conflicts and grievances, between friends, lovers, coworkers, and so on.  

Dueling Demographics

But that doesn’t tell the whole story. Many news headlines suggest instead that white males account for most mass shootings. Newsweek, for instance, ran a story with the headline, “White men have committed more mass shootings than any other group.”

Politifact examined this claim and found it to be technically true, with some important caveats:

Newsweek based its claim on data from Mother Jones, which defines a public mass shooting as an incident in which the motive appeared to be indiscriminate killing and a lone gunman took the lives of at least three people. Under this definition, Mother Jones found that non-Hispanic white men have been responsible for 54 percent of mass shootings since August 1982. Another tally, with a longer timeline and a different definition of mass shooting, found non-Hispanic white men make up 63 percent of these attacks. Under both definitions and datasets, white men have committed more mass shootings than any other ethnicity group. Newsweek’s claim is literally accurate. But it’s worth noting the imprecision of this data, and the percentage of mass shootings by white men is lower than their share of the male population, according to Mother Jones.

It’s also important to note that the Newsweek and Mother Jones analysis only examined one of the three types of mass shootings—public mass killings—which also happens to be the rarest type, though the kind most conforming to social assumptions and expectations.

Despite the widespread perception that mass shooters are overwhelmingly white males, researchers have found that white men are not overrepresented among mass shooters. In other words, white men are no more likely than any other male demographic to engage in a mass shooting. Daniel Engber, writing for Slate, noted that mass shooters are not disproportionately white male: “the notion that white men of privilege are disproportionately represented among mass shooters—indeed, that they make up ‘nearly all’ of them—is a myth.” Citing the widely referenced Mother Jones analysis (mentioned earlier), Engber found that “white people weren’t overrepresented among mass shooters”: the outlet had determined that roughly 70 percent of the shooters in mass killings were white, certainly a majority; but according to Census Bureau estimates for 2012, whites accounted for 73.9 percent of all Americans. In other words, there are more white men in America than there are Asian, black, or Hispanic men, and therefore there are more white shooters. This, too, is unremarkable and expected, though the nuance is lost on many who claim, for example, that “90% of mass shootings are committed by whites.”
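The overrepresentation check Engber describes is simple arithmetic: compare a group’s share of shooters to its share of the population. Here is a minimal sketch, using only the two figures quoted above:

```python
# Representation-ratio check, using the figures cited in the text.
share_of_shooters = 0.70     # ~70% of mass-killing shooters were white (Mother Jones)
share_of_population = 0.739  # whites were 73.9% of Americans (2012 Census estimate)

ratio = share_of_shooters / share_of_population
print(f"Representation ratio: {ratio:.2f}")
# A ratio below 1.0 means the group is slightly *under*represented,
# even though it commits a majority of the shootings in absolute terms.
```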

The Slate article goes into some detail about differing statistical analyses, and I recommend it for an insightful glimpse into just how different methodologies—each as valid as the next—can result in different numbers. In the end, Engber notes:

The whites-are-overrepresented-among-mass-shooters meme does serve a useful purpose in that it helps displace another myth about mass shootings: that they’re most often perpetrated by angry immigrants from travel-banned countries, and that nothing is more dangerous to America than the scourge of Islamic terrorism. … These are worthy ends, but we shouldn’t have to build another myth to reach them.

In other words, as skeptics and critical thinkers know, debunking a myth with another myth is a problematic path. We can all agree that mass shootings are a serious social problem—and that the threat posed by immigrants and Muslims is often greatly exaggerated—without fabricating factoids about how common white (or black) male mass shooters are. It’s not a zero-sum game.

Men in general and across cultures commit more violence than women do—whether in the context of a mass shooting or a fistfight—so that’s no surprise. Beyond that, the collective data suggest that, across all three types of mass shootings, each race commits mass shootings at roughly the rate we’d expect given its share of the population. No single race emerges as an obvious mass shooter threat.

Nevertheless, some memes circulating on social media go so far as to claim that white males are solely responsible for mass shootings; one from Occupy Democrats circulating in July 2018 claimed “154 mass shootings this year and not one committed by a black man or an illegal alien. Let that sink in.” It’s a bold and damning claim—and it’s also completely false.

 

Mass misinformation on mass shootings

 

As we saw in the first article in this series, there is no single universal definition of “mass shooting,” so there is not a single “correct” number of mass shootings in America. As with “school shooting,” it depends on how you count them. Do you mean armed adults or teenagers showing up at a school with the intent to kill students, or do you mean a police officer’s accidental weapon discharge after hours in an empty college parking lot in which no one was injured? Or gunfire at a bar near campus in a drunken altercation?

Looking at school shootings specifically, a recent New York Times analysis identified 111 cases since 1970 “that met the F.B.I.’s definition for an active-shooter scenario, in which an assailant is actively engaged in killing or attempting to kill people, on school property or inside school buildings. It excluded episodes that fit more typical patterns of gun violence such as targeted attacks, gang shootings and suicides.” It also excluded incidents at colleges and universities.

It found that the majority of shooters were young white males (average age about fifteen), many of them current or former students of the schools where they opened fire. The analysis noted that such “active shooter” incidents, though generating much media coverage, “account for only a small fraction of the episodes of gun violence that children experience in American schools. Other cases might include a student showing off a gun to friends in the hallway, the accidental discharge of a school resource officer’s gun, or a gang-related drive-by shooting at a school bus stop.”

Examining January 2019 Mass Shootings

To independently investigate a limited sample of mass shooter demographics, I chose a widely referenced database, the Gun Violence Archive. The Gun Violence Archive (GVA) is “an online archive of gun violence incidents collected from over 2,500 media, law enforcement, government and commercial sources daily in an effort to provide near-real time data about the results of gun violence. GVA is an independent data collection and research group with no affiliation with any advocacy organization.” I chose GVA for several reasons: it is continually updated and provides not just a summary of incidents but links to original news reports, which can be analyzed for additional information about locations, circumstances, demographics, and so on. In addition, the GVA is open-sourced, so anyone can easily confirm the results.

[Illustration: Gun Violence Archive]

A full year of mass shootings would be too many to quickly and efficiently analyze, so I chose the most recent full month (in this case, January 2019), which would presumably be fairly representative of other months. The crime rates for many specific offenses vary by season (for example, summer nights provide more hours of social interactions—and by extension robberies and assaults—than winter nights), but there seemed no reason why the number and nature of mass shootings in January, for example, would be dramatically different from those in March or May. (Should other researchers believe that month was unrepresentative for some reason, I welcome similar analyses of other months or the full year.)

I found a total of twenty-seven American mass shootings in January 2019. Of those, two were home-invasion shootings in Houston, Texas: one in which several would-be robbers breaking into a home were shot by the homeowner, and a second in which police raided the wrong house and came under fire from the (innocent) occupants within. Neither of these fits the typical image or categories of a “mass shooter” threat, so both were omitted from the dataset, bringing the total to twenty-five. I read news reports about the incidents and recorded the race of the suspect when it was mentioned. There were four categories: white, black, other (Hispanic, Asian, etc.), and unknown.

Of the twenty-five mass shootings in the Gun Violence Archive database for January 2019, 16 percent (four) of them were committed by white males; 4 percent (one) was committed by a Hispanic man; 64 percent (sixteen) were committed by African Americans; and in 16 percent, or four cases, the attacker’s race is unknown. As described by Fridel, most of these incidents fell into the felony and familicide categories, and the profile of perpetrators seems to track well with those demographics.
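For transparency, the percentages are easy to recompute from the raw counts. A minimal sketch (mine, not the GVA’s):

```python
# Recompute the January 2019 percentages from the counts reported above.
from collections import Counter

# One label per incident: 4 white, 16 black, 1 other, 4 unknown (25 total).
incidents = ["white"] * 4 + ["black"] * 16 + ["other"] * 1 + ["unknown"] * 4

tally = Counter(incidents)
total = sum(tally.values())
for race, n in tally.most_common():
    print(f"{race:>8}: {n:2d} of {total} ({100 * n / total:.0f}%)")
```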

Interestingly, a meme circulating on January 27, 2019, highlighted three mass shooters from that month—all of them white males, and in fact three of the month’s four white shooters. They were likely chosen to make a specific political point—in service of debunking myths about “dangerous” immigrants and minorities—but they were cherry-picked and not representative of mass shooters generally. Thus, it’s not surprising that social media users are misled; they are seeing intentionally misleading information.

[Illustration: misleading social media meme]

There Is No ‘Typical’ Mass Shooter

There is no single accurate profile of a mass shooter; it really depends on what type of mass shooting you’re talking about. Several of the highest-profile mass shootings in recent memory (the rare “public mass killing” category) were committed by white males, such as the 2017 Las Vegas attack by Stephen Paddock. But much beyond that, the stereotype breaks down: Omar Mateen, a Muslim man, killed forty-nine people at a Florida nightclub in 2016 in the name of a terrorist group; white male Adam Lanza killed twenty-seven people at an elementary school in 2012; and Asian student Seung-Hui Cho killed thirty-two people on the Virginia Tech campus in 2007. And so on.

The New York Times noted that “As convenient as it would be, there is no one-size-fits-all profile of who carries out mass shootings in the United States. About the only thing almost all of them have in common is that they are men. But those men come from varying backgrounds, with different mental health diagnoses and criminal histories.” Mass shootings with white victims tend to get more attention, both from journalists and those on social media, than those with victims who are people of color. This is a well-known pattern and explains why the public is quicker to react to a missing young blonde girl than a missing young black girl (for more on this see my book Media Mythmakers).

Focusing on the statistically rare but high-profile mass shootings makes for sensational news coverage and concern but doesn’t address far greater dangers. Similarly, focusing on the handful of high-profile mass shootings in which dozens are killed at a time—or for that matter serial killers, who prey on multiple victims over months, years, or decades—doesn’t help the public determine their individual risk. Any one of us could be killed at any moment by a mass shooter or serial killer, but the chances of it happening are so remote that it’s pointless to worry about, and there’s not much we can do to prevent it anyway.

The question of the “typical mass shooter profile” is a red herring. As simplistic and satisfying as it would be, no single demographic emerges from the data as “the typical mass shooter.” It depends on what type of mass shooting you’re looking at, but in any event, focusing on the race or gender of mass shooters is not helpful for the general public; it is not predictive of who is likely to engage in gun violence. Singling out any specific race as being dangerous—or, worse yet, highlighting rare anecdotal violent incidents as representative of larger groups—is more likely to fuel racism than to help the public. Unless you’re a criminologist or social scientist aggregating data, it doesn’t really tell you anything useful. It doesn’t help you decide whom to watch out for and whom to avoid. The percentage of mass shooters in any demographic is vanishingly small, and the chances of being killed in a mass shooting are even smaller.

In the last installment of this series, I’ll examine the ways in which media literacy and critical thinking can help the public sort fact from fiction regarding mass shootings.

Reference

Fridel, Emma E. 2017. A multivariate comparison of family, felony, and public mass murders in the United States. Journal of Interpersonal Violence (November 1).

 

Part 3 will appear soon. 

Aug 102019
 

This is the first part of a three-part series examining mass shootings from a critical thinking and media literacy perspective.

With the recent tragic attacks in Dayton and El Paso, the world once again turns its attention to mass shootings. It’s a subject that has captivated America for years, with little progress in understanding the nature of the problem.

The topic of mass shootings is fraught, not only with political agendas but also with rampant misinformation. Facile comparisons and snarky memes dominate social media, crowding out objective, evidence-based analysis. This is effective for scoring political points but wholly counterproductive for understanding the nature of the problem and its broader issues.

The public’s perception of mass shootings is heavily influenced by mass media, primarily news media and social media. In my capacity as a media literacy educator (and author of several books on the topic including Media Mythmakers: How Journalists, Activists, and Advertisers Mislead Us), I have in past articles for the Center for Inquiry attempted to unpack thorny and contentious social issues such as the labeling of terrorists (see, for example, my April 2, 2018 Special Report “Why ‘They’ Aren’t Calling It ‘Terrorism’–A Primer”) and the claim that “the media” isn’t covering certain news stories because of some social or political agenda (see my November 9, 2018 piece “‘Why Isn’t The Media Covering This Story?’—Or Are They?”).

In this three-part series, I will focus on myths about mass shootings in America specifically. My focus is not on the politics of gun control or criminology but instead misinformation and media literacy, specifically spread through news and social media (“the media” in this article). A comprehensive analysis of the phenomenology of mass shootings is beyond the scope of this short article series; my goal is to help separate facts from myths about mass shootings so that the public can better understand the true nature of the problem.

Specifically, in this series I will tackle 1) the nature and frequency of mass shootings and 2) the demographics of mass shooters, concluding with 3) the application of media literacy to mass shooting statistics. As with any topic, the best place to start is with definitions, so I will begin by taking a closer look at the nature and frequency of mass shootings.

How Common Are Mass Shootings?

Mass shootings, and especially the subset of shootings at schools, are often portrayed in the media as “horrifyingly common” and “the new normal.” Sarcastic phrases and memes such as “another day, another school shooting” reinforce the idea that they happen all the time. Following many outrages—ranging from school shootings to real or perceived un-American actions by Donald Trump and others—it’s common to hear concerns that Americans are “numb” to terrors and that the transgressions are becoming so routine and “normal” that citizens have lost their ability to be outraged.

However, the reaction to school shootings suggests that Americans are anything but numb or indifferent to the violence. People do not protest against events, situations, and conditions that they consider normal or ones that they are numb to. Protests and boycotts have become common following school shootings (whether those have resulted in political action is another question).

The concern that Americans are numb to violence is widespread and often shared on social and news media. It’s a common claim among pundits and politicians. For example, in an October 1, 2015, speech shortly after a shooting in Roseburg, Oregon, President Obama said that given the frequency of mass shootings, people had “become numb to this. … And what’s become routine, of course, is the response of those who oppose any kind of common-sense gun legislation.”

The Washington Post followed up two months later with an article titled “President Obama’s Right: Americans Might Be Growing Numb to Mass Shootings. Here’s Why.” The piece explores a few reasons a steady stream of violence could desensitize the public. The author, Colby Itkowitz, did herself no favors by referencing dubious and discredited theories about the influence of video game violence on real-world violence (Donald Trump was widely and rightly ridiculed for suggesting just such a link).

So are mass shootings common or not?

Dueling Headlines

The public is understandably confused about how common mass shootings are because they get their information about such events from the media, which distorts the true nature and frequency of these attacks.

Most of us, thankfully, have no direct experience with mass shootings or school shootings; they happen occasionally and result in dead bodies, trials, news coverage, and often convictions—but there are also 325 million people in America. The chance of some person, or a few dozen people, being a victim of a mass shooting somewhere in the country on any given day is nearly 100 percent, but the chance of any given specific person—say you or me—being a victim is remote.
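To see how both of these claims can be true at once, it helps to run the numbers. Here is a minimal back-of-envelope sketch in Python; the figures are round assumptions chosen for illustration (the average-victims number in particular is mine, not an official statistic):

    # Back-of-envelope sketch; the counts are rounded assumptions
    # for illustration, not official statistics.
    US_POPULATION = 325_000_000      # approximate U.S. population
    SHOOTINGS_PER_YEAR = 365         # the broad "one per day" tally
    AVG_VICTIMS_PER_SHOOTING = 5     # assumed average, killed plus injured

    victims_per_year = SHOOTINGS_PER_YEAR * AVG_VICTIMS_PER_SHOOTING
    individual_annual_risk = victims_per_year / US_POPULATION

    print(f"Assumed victims per year: {victims_per_year:,}")
    print(f"Chance a given person is a victim this year: "
          f"about 1 in {round(1 / individual_annual_risk):,}")

Under these assumptions there are thousands of victims nationwide every year, so a shooting somewhere is a near-certainty on any given day; yet a specific individual's annual risk works out to roughly one in 178,000.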

Let’s briefly sample prominent headlines from the past few years describing the frequency of mass shootings.

2015

The Washington Post’s Christopher Ingraham wrote on August 26, 2015, that “We’re now averaging more than one mass shooting per day in 2015.” The New York Times headlined on December 2, 2015, “How often do mass shootings occur? On average, every day, records show.”

The verdict: about one each day, or 365 per year.

2016

In 2016 The Economist, using information from Mother Jones, determined that there were fifty mass shootings through June 2016, which would come to about 100 for the year. Mark Hay, a writer for Vice.com, tracked American mass shootings for 2016 and concluded it was over three times as many, 370. (Note that the Pulse nightclub shooting, which occurred in 2016, is treated as a single mass shooting despite its then-unprecedented number of victims.)

The verdict: between one every third day and one each day, or 100 to 370 per year.

2017


A CBS News headline from October 2, 2017, by Graham Kates stated “Report: U.S. averages nearly one mass shooting per day so far in 2017.” Newsweek’s John Haltiwanger echoed the statistic the same day with the headline “There’s a mass shooting almost every day in the U.S.”


The verdict: about one each day, or 365 per year.

2018

Which brings us to last year, when on November 29, Meghan Keneally of ABC News noted that "2018 has seen more than 1 mass shooting per month in the US."

The verdict: about one each month, or roughly thirteen per year.

This is of course startlingly good news. It means that mass shootings dropped by about 96 percent from the previous years, from about 365 per year to about thirteen per year.

Except that the numbers are misleading.

The Washington Post’s Christopher Ingraham, who had reported in 2015 that mass shootings were happening about once a day, revisited the subject the following year, taking a closer look at the numbers. He offered an insightful analysis:

On Thursday, a gunman shot and killed three people and injured 14 more in Hesston, Kan., before he was killed by police.

It was the 49th mass shooting of 2016.

No, scratch that: it was the 33rd mass shooting.

Actually, wait: It was only the second mass shooting this year, and it barely made the cut.

It’s said that the Inuit people have 50 words for snow. Sometimes it seems like Americans have nearly as many definitions for “mass shooting.” Which definition is correct? They all are—it just depends on what you want to measure.

Limiting mass shootings in this way [using Mother Jones' narrower criteria] is useful because it tends to filter out all but the big, headline-grabbing incidents that most people think of when they think "mass shooting": Kalamazoo, Charleston, Umpqua.

But the definition omits a number of shootings that many reasonable people would consider a mass shooting. The man who shot up a theater in Lafayette, La., last summer killed only two people and wounded nine others—not a mass shooting, per Mother Jones’ definition. The killing of three people and shooting of 16 others at Fort Hood in 2014 isn’t included because not enough people died. Ditto the rampage at a Colorado Springs Planned Parenthood clinic last year.

[Image: screen capture from the Ingraham article]

Vice’s Mark Hay agrees:

It seems that many mass shootings are an extension of other types of violence. Some of the bloodiest stem from domestic violence incidents, while some of the most common occur in the tight confines of nightclubs or just outside their doors. Many more stem from drive-bys or other street or home shootings, frequently pegged as gang related but often just interpersonal conflicts carried out on an opportunistic basis (often on holidays and weekends when people are out and about—and perhaps angry and liquored up) and made disproportionally deadly by the spray-and-pray style and culture of much of our gun violence. Only a few incidents fall under the indiscriminate rampage category, with which we often associate mass shootings in the US … Yet the only mass shootings that regularly grab our attention and drive national conversations are the indiscriminate public rampages. And when we talk about them, we focus on the perpetrators … This focus makes sense. Humans are drawn to the unusual—news isn't news unless there's something new about it, and common forms of gun violence don't hack it compared to boogeymen we can project all our fears onto. However this focus has a nasty habit, in many jurisdictions, of increasing gun sales and loosening gun laws, and may in fact contribute to the ongoing increase in rampage shootings by giving perpetrators the infamy so many seem to be seeking.

Why Mass Shootings Seem More Common Than They Are

Why do shootings seem so common? Much of the answer lies in the news media and psychology. John Ruscio, a social psychologist at Elizabethtown College in Pennsylvania, describes “the media paradox”: The more we rely on the popular media to inform us, the more apt we are to misplace our fears. The paradox is the combined result of two biases, one inherent in the news-gathering process, the other inherent in the way our minds organize and recall information. As Ruscio explains:

For a variety of reasons—including fierce competition for our patronage within and across the various popular media outlets—potential news items are rigorously screened for their ability to captivate an audience. … The stories that do make it through this painstaking selection process are then often crafted into accounts emphasizing their concrete, personal, and emotional content.

In turn, the more emotional and vivid the account is, the more likely we are to remember the information. This is the first element, the vividness bias: our minds easily remember vivid events. The second bias lies in what psychologists term the availability heuristic: our judgments of frequency and probability are heavily influenced by the ease with which we can imagine or recall instances of an event. So the more often we hear reports of plane crashes, school shootings, or train wrecks, the more often we think they occur. But the bias that selects those very events makes them appear more frequent than they really are.

Imagine, for example, that a consumer group dedicated to travel safety established a network of correspondents in every country that reported every train and bus wreck, no matter how minor, and broadcast daily pictures. Anyone watching that broadcast would see dozens of wrecks and crashes every day, complete with mangled metal and dead bodies, and would likely grow to fear such transportation. No matter that in general trains and buses are very safe; if you screen the news to emphasize certain vivid events, accidents will seem more dangerous and common than they actually are. That explains, in part, why many people fear flying even though they know that statistically it’s one of the safest modes of transport. Though crashes are very rare, the vividness and emotion of seeing dramatic footage of crashed planes drowns out the rational knowledge of statistical safety.

As The New York Times reported:

James Alan Fox, a criminologist at Northeastern University, said his research showed the number of such shootings has roughly held steady in recent decades. He said that if analysts added a single year, 2014, and looked at four-year intervals instead of five-year intervals, the average number of annual mass shootings actually declined slightly from 2011 to 2014, compared with the previous four-year period. … While the numbers shift from year to year, there has been no discernible trend in the numbers or in the characteristics of the assailants, said Professor Fox, who is also a co-author of Extreme Killing: Understanding Serial and Mass Murder. “The only increase has been in fear, and in the perception of an increase,” he said. “A lot of that has been because of the nature of media coverage.”
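Fox's point about intervals deserves to be made concrete, because it shows how sensitive "trends" are to arbitrary framing choices. Here is a short Python sketch using invented yearly counts (chosen purely to demonstrate the effect; they are not real data):

    # Hypothetical yearly counts, invented to show how interval choice
    # can flip an apparent trend. These are not real statistics.
    counts = {2004: 2, 2005: 3, 2006: 2, 2007: 5, 2008: 3,
              2009: 4, 2010: 7, 2011: 4, 2012: 5, 2013: 4, 2014: 3}

    def avg(years):
        return sum(counts[y] for y in years) / len(years)

    # Five-year intervals ending in 2013: an apparent increase
    print(avg(range(2004, 2009)), "->", avg(range(2009, 2014)))  # 3.0 -> 4.8

    # Four-year intervals ending in 2014: an apparent decline
    print(avg(range(2007, 2011)), "->", avg(range(2011, 2015)))  # 4.75 -> 4.0

With these made-up numbers, five-year bins suggest shootings are rising while four-year bins that include 2014 suggest they are falling. The underlying data never changes; only the framing does.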

School Shootings

Another aspect of the phenomenon is that people see (and share) misleading statistics. For example, a widely shared meme circulating in mid-February 2018 stated that there had been eighteen "school shootings" so far in 2018. This may help explain the sentiment that Americans have gotten used to school shootings or become "numb" to them: when you hear an alarming statistic like "eighteen school shootings already this year," it's easy to wonder why you heard about so few of them, and how so many shootings could have escaped your attention or failed to make more of an emotional impact on you.

Both USA Today and a researcher for the Snopes website investigated and debunked the claim of eighteen school shootings, with Snopes noting:

When we looked into it, we found that although all the incidents involved the firing of weapons on school grounds, some bore little resemblance to what most of us would think of when we hear that a school shooting has taken place. Two were solely suicides, for example (one of which Everytown retracted on 15 February after the Washington Post pointed out that it occurred at a school that had been closed for several months). Three involved the accidental firing of a weapon. Eight resulted in no injuries. Only seven were intentional shootings that occurred during normal school hours.

When we examine this reaction, however, the very fact that such a meme can elicit its (intended) effect undermines the notion of our numbness: the meme's message is startling—as it was designed to be—precisely because viewers are alarmed to learn that so many shootings escaped their notice. The meme would have no effect at all if viewers truly did not care about shootings; it would be met with a shrug and scrolled past rather than inducing self-reflection. Instead, it caused many to wonder how they had missed so many important news events—but did they?

It's important to understand that the number reflects a very broad definition of "school shooting." When you look at the breakdown, you realize that many were not incidents you're likely to have heard about on national news, or cared much about if you had: a suicide in a school parking lot, a gun that accidentally discharged into a wall, a school bus window shot out with no injuries, and so on. The phrase, as defined by the organization Everytown for Gun Safety—whose statistics are widely quoted—includes not only active shooters targeting students at school (i.e., what most people think of when they hear the phrase) but also accidents, suicides, incidents outside normal school hours, non-injury events, and so on. People shouldn't feel bad that they don't remember details of events they likely never heard about.
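The definitional point can be made concrete with a toy sketch. The incident records below are invented for illustration (the fields and values are my assumptions, not real cases), but they show how the same list yields very different counts under a broad versus a narrow definition:

    # Toy illustration of how the chosen definition drives the count.
    # These incident records are invented, not real cases.
    incidents = [
        {"intent": "attack",   "school_hours": True,  "injuries": True},
        {"intent": "suicide",  "school_hours": False, "injuries": True},
        {"intent": "accident", "school_hours": True,  "injuries": False},
        {"intent": "attack",   "school_hours": True,  "injuries": False},
        {"intent": "unknown",  "school_hours": False, "injuries": False},
    ]

    # Broad definition: any firearm discharge on school grounds
    broad = len(incidents)

    # Narrow definition: intentional attacks during normal school hours
    narrow = sum(1 for i in incidents
                 if i["intent"] == "attack" and i["school_hours"])

    print(f"Broad count: {broad}; narrow count: {narrow}")  # 5 vs. 2

Change the filter and the headline number changes with it, which is exactly what happens when Everytown's broad tally is compared against counts of deliberate attacks during school hours.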

Some have suggested that it doesn't matter whether there were one, three, eleven, or twenty shootings at schools over the first two months of 2018; "even one is too many." This is a common retort, but it is misguided: quantifying a threat is essential to understanding it. The "even one is too many" framing is, after all, the same rhetorical move Trump has used to make Americans fearful of various threats, including attacks by Muslim extremists; it underlies statements such as his claim that Mexicans are "bringing crime. They're rapists. And some, I assume, are good people." Framing the scenario dishonestly as "one Mexican rapist is too many" clouds the issue rather than clarifying it with reliable data (such as the fact that immigrants are far less likely to commit serious crimes than natural-born Americans). Putting threats in perspective is one role of journalists and skeptics, and a first step in trying to address or solve a problem is determining its scope and nature.

In Part 2 of this series I will examine the different types of mass shootings and the demographics of mass shooters.