I’m a guest on The Skeptic Zone podcast #604. This week we hear from me, Mandy-Lee Noble with a Pete Evans update, Michelle Bijkersma with logical fallacies, and News from Australian Skeptics; check it out here: www.skepticzone.tv
In the recent episode, we discuss a few pandemic-related things that set off some skeptical alarms over social media this past week. Then we are joined by Southern California-based comedian and film editor Emery Emery to talk about his soon-to-be-released project with Brian Dunning. With the help of many science communicators and experts (me among them), Emery and Dunning have crafted a documentary called Science Friction, revealing the myriad ways experts have been manipulated, maligned, and misrepresented by producers of questionable documentaries.
You can listen HERE.
With all the recent news, here’s a timely passage from a recent article I wrote:
“One element of conspiracy thinking is that those who disagree are either stupid (gullible ‘sheeple’ who believe and parrot everything they see in the ‘mainstream media’) or simply lying (experts and journalists who know the truth but are intentionally misleading the public). This ‘If You Disagree with Me, Are You Stupid or Dishonest?’ worldview has little room for uncertainty or charity and misunderstands the situation. It’s not that epidemiologists and other health officials have all the data they need to make good decisions and projections about public health and are instead carefully considering ways to fake data to deceive the public. It’s that they don’t have all the data they need to make better predictions, and as more information comes in, the advice will get more accurate.”
You can read the piece HERE.
I recently wrote a column for Russ Dobler at Adventures in Poor Taste…
When I encounter people raising questions or researching topics such as ghosts or Bigfoot, I’m often disappointed (and a bit baffled) by their seeming lack of genuine interest in establishing the truth behind these claims. They act as if these topics are urgent and important, that establishing the truth behind them is paramount and worthy of devoting lives and fortunes to, but when it comes down to implementing good practices or doing scientific research, they lose interest.
I’ve done hundreds of investigations over the years: journalistic investigations, folkloric research, paranormal investigations, and so on. Though topics have varied greatly, from mass hysteria to chupacabras, media literacy to psychic detectives, the common theme is that my goal is to solve the mystery and understand what’s going on. If I’m going to devote time and effort into looking into a subject, I want to take it seriously and investigate it to the best of my ability (within financial and other practical constraints).
These topics — if real — are important. If psychic powers exist, that would be incredibly important to know, and for science to understand (not to mention put to practical use finding missing persons and preventing pandemics, for example). If Bigfoot exists, that too is of legitimate interest to biologists, zoologists, and others; we’d need to understand what these animals are, where they fit into the tree of life, and how thousands of them have somehow managed to survive into the modern day while leaving no physical traces of their presence.
You can read the rest HERE!
In our recent episode special guest Kenny Biddle joins Ben and Celestia (remotely!) to look at some current missing person cases as they unfold in real time, and how psychics have “helped” (or interfered) in the progress of each case. This turned into a rather long study, so this part has cases brought by Celestia and Kenny, with some discussion of why checking and debunking this type of psychic activity necessarily falls to grassroots skeptical activists. Content warning: missing child cases, as well as psychic pronouncements on what happened to these victims, are discussed in detail.
You can listen HERE!
For anyone who has been craving an episode with far less research, facts, or formality, this is the episode for you! Ben, Pascual, and Celestia reflect on their various circumstances during the coronavirus national emergency, and then we talk about this, our third year in podcasting. We dish on past guests and future guests we have in the works, answer a couple of listener questions, and Ben quizzes Pascual about the finer points of air guitar. Enjoy the podcast–it’s almost as fun as other people!
Check it out HERE!
The current issue of Skeptical Inquirer magazine features an investigation I did into a famous mystery, the Chase Vault in Barbados. Coffins were said to have mysteriously moved while sealed in the vault, attributed to curses, ghosts, flooding, and more.
I visited the site twice and solved the mystery; you can read about it, and listen to our episode of Squaring the Strange: http://squaringthestrange.libsyn.com/episode-97-the-dancing…
There’s a natural—almost Pavlovian—tendency to follow the news closely, especially during times of emergency such as wars, terrorism, and natural disasters. People are understandably desperate for information to keep their friends and family safe, and part of that is being informed about what’s going on.
News and social media are awash with information about the COVID-19 pandemic. But not all the information is equally valid, useful, or important. Much of what’s shared on social media about COVID-19 is false, misleading, or speculative. That’s why it’s important to get information from reputable sources such as the Center for Inquiry (CFI), not random YouTube videos, health bloggers, conspiracy theorists, and so on.
It’s easy to become overwhelmed, and science-informed laypeople are likely suffering this information overload keenly, as we absorb the firehose of information from a wide variety of sources: from the White House to the CDC, and from conspiracy cranks to Goop contributors. It’s a never-ending stream—often a flood—of information, and those charged with trying to sort it out are quickly inundated. As important as news is, there is such a thing as medical TMI.
We have a Goldilocks situation when it comes to COVID-19 material. There’s too little, too much, and just the right amount of information about the COVID-19 virus in the news and social media. This sounds paradoxical until we break down each type of information.
In thinking about the COVID-19 outbreak and the deluge of opinion, rumor, and news out there, it’s helpful to parse out the different types of information.
1) Information that’s true
This includes the most important, practical information—how to avoid it: Wash your hands, avoid crowds, don’t touch your face, sanitize surfaces, and so on. This type of information has been proven accurate and consistent since the outbreak began. This is of course the smallest category of information: mundane but vital.
2) Information that’s false
Information that’s false includes a wide variety of rumors, miracle cures, misinformation, and so on. The Center for Inquiry’s COVID-19 Resource Center has been set up precisely to help journalists and the public debunk this false information. The problem is made worse by the fact that Russian disinformation organizations—which have a long and proven history of sowing false and misleading information in social media around the world, and particularly in the United States—have seized on the COVID-19 pandemic.
As CNN reported recently, “Russian state media and pro-Kremlin outlets are waging a disinformation campaign about the coronavirus pandemic to sow ‘panic and fear’ in the West, EU officials have warned. … The European Union’s External Action Service, which researches and combats disinformation online, said in an internal report that since January 22 it had recorded nearly 80 cases of disinformation about the COVID-19 outbreak linked to pro-Kremlin media. ‘The overarching aim of Kremlin disinformation is to aggravate the public health crisis in Western countries, specifically by undermining public trust in national healthcare systems—thus preventing an effective response to the outbreak,’ according to the report. … The disinformation has targeted international audiences in English, Italian, Spanish, Arabic as well as Russian and other languages, the report states. European Commission spokesperson Peter Stano said the center has seen a ‘flurry’ of disinformation about the spread of novel coronavirus over the past weeks.”
3) Speculation, opinion, and conjecture
In times of uncertainty, prediction and speculation are rampant. Dueling projections about the outbreak vary by orders of magnitude as experts and social media pundits alike share their speculation. Of course, epidemiological models are only as good as the data that goes into them and are based on many premises, variables, and numerous unknowns.
Wanting to accurately know the future is of course a venerable tradition. But as a recent post on Medium written by an epidemiologist noted: “Here is a simple fact: every prediction you’ve read on the numbers of COVID-19 cases or deaths is almost certainly wrong. All models are wrong. Some models are useful. It is very easy to draw a graph using an exponential curve and tell everyone that there will be 10 million cases by next Friday. It is far harder to model infectious disease epidemics with any accuracy. Stop making graphs and putting them online. Stop reading the articles by well-meaning people who have no idea what they are doing. The real experts aren’t posting random Excel graphs on twitter, because they are working flat-out to try and get a handle on the epidemic.”
4) Information that’s true but not helpful
Finally, there’s another, less-recognized category: information that is true but not helpful on an individual level, or what might be called “trivially true.” We usually think of false information being shared as harmful—and it certainly is—but trivially true information can also be harmful to public health. Even when it’s not directly harmful, it adds to the background of noise.
News media and social media are flooded with information and speculation that—even if accurate—is of little practical use to the average person. Much of the information is not helpful, useful, actionable, or applicable to daily life. It’s similar to what in medicine and psychology is called “clinical significance”: the practical importance of a treatment effect—whether it has a real, genuine, palpable, and noticeable effect on daily life. A finding may be true, and may even be statistically significant, yet be insignificant in the real world. A new medicine may reduce pain by 5 percent, but nobody would create or market it because it’s not clinically significant; a 5 percent reduction in pain isn’t useful compared to other pain relievers with better efficacy.
One example might include photos of empty store shelves widely shared on social media, depicting the run on supplies such as sanitizer and toilet paper. The information is both true and accurate; it’s not being faked or staged. But it’s not helpful, because it leads to panic buying, social contagion, and hoarding as people perceive a threat to their welfare and turn an artificial scarcity into a real one.
Another example is Trump’s recent reference to the COVID-19 virus as “the China virus.” Setting aside the fact that, under current guidelines, diseases aren’t named for where they emerge, we can acknowledge that it’s technically accurate that, as Trump claimed, COVID-19 was first detected in China—and also that it’s not a relevant or useful detail. It doesn’t add to the discussion or help anyone’s understanding of what the disease is or how to deal with it. If anything, referring to it by terms such as “the China virus” or “Wuhan flu” is likely to cause confusion and even foment racism.
Before believing or sharing information on social media, ask yourself questions such as: Is it true? Is it from a reliable source? But there are other questions to ask: Even if it may be factually true, is it helpful or useful? Does it promote unity or encourage divisiveness? Are you sharing it because it contains practical information important to people’s health? Or are you sharing it just to have something to talk about, some vehicle to share your opinions about? The signal-to-noise ratio is already skewed against useful information, being drowned out by false information, speculation, opinion, and trivially true information.
While self-isolating from the disease (and those who might carry it) is vital to public health, there’s a less-discussed aspect: self-distancing from social media information on the virus, which is a form of social media hygiene. Six feet is enough distance in physical space, but it doesn’t apply to cyberspace, where viral misinformation spreads unchecked (until it hits this site).
The analogy between disease and misinformation is apt. Just as you can be a vector for a virus if you get and spread it, you can be a vector for misinformation and fear. But you can stop it by removing yourself from it. You don’t need hourly updates on most aspects of the pandemic. Most of what you see and read isn’t relevant to you. The idea is not to ignore important and useful information about the coronavirus; in fact, it’s exactly the opposite: to better distinguish the news from the noise, the relevant from the irrelevant.
Doctors around the world have been photographed sharing signs that say “We’re at work for you. Please stay home for us.” That’s excellent advice, but we can take it further. While at home not becoming a vector for disease, also take steps not to become a vector for misinformation. After all, doing so can have just as much of an impact on public health.
During a time when people are isolated, it’s cathartic to vent on social media. Humans are social creatures, and we find ways to connect even when we can’t physically. Especially during a time of international crisis, it’s easy to become outraged about one or another aspect of the pandemic. Everyone has opinions about what is (or isn’t) being done, what should (or shouldn’t) be done. Everyone’s entitled to those opinions, but they should be aware that those opinions expressed on social media have consequences and may well harm others, albeit unintentionally. Just as it feels good to physically hang out with other people (but may in fact be dangerous to them), it feels good to let off steam to others in your social circles (but may be dangerous to them). Your steam makes others in your feed get steamed too, and so on. Again, it’s the disease vector analogy.
You don’t know who will end up seeing your posts and comments (such is the nature of “viral” posts and memes), and while you may think little of it, others may be more vulnerable. Just as people take steps to protect those with compromised immune systems, it may be wise to take similar steps to protect those with compromised psychological defenses on social media—those suffering from anxiety, depression, or other issues who are especially vulnerable at this time.
This isn’t about self-censorship; there are many ways to reach out to others and share concerns and feelings in a careful and less public way through email, direct messaging, video calls, and even—gasp—good old fashioned letters. Like anything else, people can express feelings and concerns in measured, productive ways, ways that are more (or less) likely to harm others (referring to it as “COVID-19” instead of “the Chinese virus” is one example).
Though the public loves to blame the news media for misinformation—and deservedly so—we are less keen to see the culprit in the mirror. Many people, especially on social media, fail to recognize that they have become de facto news outlets through the stories and posts they share. The news media helps spread myriad “fake news” stories—gleefully aided by ordinary people like us. We cannot control what news organizations (or anyone else) publish or put online. But we can—and indeed we have an obligation to—help stop the spread of misinformation in all its forms.
It’s overwhelming; it’s too much. In psychology there’s a concept called the locus of control: the extent to which people feel they have control over events in their lives—themselves, their immediate family, their pets, most aspects of their daily routines, and so on. It’s psychologically healthy to focus on those things you can do something about. You can’t do anything about how many deaths there are in China or Italy. You can’t do anything about whether or not medical masks are being manufactured and shipped quickly enough. But you can do something about bad information online.
It can be as simple as not forwarding, liking, or sharing that dubious news story before checking the facts, especially if that story seems crafted to encourage social outrage. The Center for Inquiry can act as a clearinghouse for accurate information about the pandemic, but it’s up to each person to heed that advice. We can help separate the truth from the myths, but we can’t force people to believe the truth. Be safe, practice social and cyber distancing, and wash your hands.
This is the first in a series of original articles on the COVID-19 pandemic by the Center for Inquiry as part of its COVID-19 Resource Center, created to help the public address the crisis with evidence-based information. Please check back periodically for updates and new information.
There have been many pandemics throughout history, but none have taken place during such a connected time—both geographically and via social media. There’s a tendency to follow the news closely during times of emergency; especially when separated during isolation and quarantines, people are understandably desperate for information to keep their friends and family safe.
Overreacting and Underreacting
While scientists, doctors, nurses, epidemiologists, and others struggle to contain the disease, many are spending their self-isolating time on social media, sharing everything from useful information to dangerous misinformation to idle speculation. One thing most people can agree on is that other people and institutions aren’t handling the crisis correctly.
There’s much debate about whether Americans and governments are underreacting or overreacting to the pandemic threat. This framing is of course a false dilemma, because there are some 330 million Americans: some are doing one or the other, but most Americans are doing neither.
As The New York Times noted, “contrarian political leaders, ethicists and ordinary Americans have bridled at what they saw as a tendency to dismiss the complex trade-offs that the measures collectively known as ‘social distancing’ entail. Besides the financial ramifications of such policies, their concerns touch on how society’s most marginalized groups may fare and on the effect of government-enforced curfews on democratic ideals. Their questions about the current approach are distinct from those raised by some conservative activists who have suggested the virus is a politically inspired hoax, or no worse than the flu. Even in the face of a mounting coronavirus death toll, and the widespread adoption of the social distancing approach, these critics say it is important to acknowledge all the consequences of decisions intended to mitigate the virus’s spread.”
Recently the governor of Georgia, Brian Kemp, joined much of the country in finally ordering citizens to stay at home to slow the spread of the disease, after suggesting that other states were unnecessarily overreacting to the threat. Kemp inexplicably claimed to have only recently learned that the virus can be spread by asymptomatic carriers—something widely known and reported by health officials for well over a month.
On social media, the issue of how and whether the threat is being exaggerated often breaks along political party lines, with conservatives seeing the danger as exaggerated or an outright hoax. There are countless examples of divisive rhetoric, and many are framing the pandemic in terms of class warfare (for example, pitting the rich against the poor) or spinning the outbreak to suit other social and political agendas. It’s understandable, but not helpful. Pointing out that the wealthy universally have better access to health care than the poor is merely stating the obvious—like much pandemic information, true but unhelpful. It’s not going to prevent someone’s family member from catching the virus, and it’s not going to open schools or businesses any faster. This isn’t a time for what-about-ism or “they’re doing it too” replies; this is a time for unity and cooperation. Liberals, conservatives, independents, and everyone else would benefit from putting aside the blame-casting, demonizing rhetoric and uniting against the real enemy: the COVID-19 virus that’s sickening and killing people across races and social strata.
At the same time, it’s important to recognize that the measures taken to slow the spread of the coronavirus in America and around the world—while necessary and effective—have taken a disproportionate toll on minorities. As Charles Blow wrote in The New York Times, “social distancing is a privilege … this virus behaves like others, screeching like a heat-seeking missile toward the most vulnerable in society. And this happens not because it prefers them, but because they are more exposed, more fragile and more ill. What the vulnerable portion of society looks like varies from country to country, but in America, that vulnerability is highly intersected with race and poverty … . It is happening with poor people around the world, from New Delhi to Mexico City. If they go to work, they must often use crowded mass transportation, because low-wage workers can’t necessarily afford to own a car or call a cab.”
While each side likes to paint the other in extreme terms as under or overreacting, there’s plenty of common ground between these straw man positions. Most people are neither blithely and flagrantly ignoring medical advice (and the exceptions—such as widely maligned Spring Breakers on Florida beaches, some of whom have since been diagnosed with COVID-19—are newsworthy precisely because of their rarity) nor spending their days in masks and containment suits, terrified to go anywhere near others.
Idiots and Maniacs, Cassandras and Chicken Littles
People can take prudent precautions and still reasonably think or suspect that at least some of what’s going on in the world is an overreaction or underreaction. Policing other people’s opinions or shaming them because they’re taking the situation more (or less) seriously than we are is unhelpful. It’s like the classic George Carlin joke: “Anybody driving slower than you is an idiot, and anyone going faster than you is a maniac.”
Instead of seeing others as idiots and maniacs, panicky ninnies and oblivious fools, perhaps we can recognize that everyone is different. Some people are in poorer health than others; some people listen to misinformation more than others; and so on. People who were mocked online for wearing masks in public may be following their doctor’s orders; they may be sick or immunocompromised or have some other health issue that’s not apparent in the milliseconds we spend judging the situation before commenting. Or they may be ahead of the curve, with changing medical advice. Why not give them the benefit of the doubt and treat them as we’d like to be treated?
Whether people are underreacting or overreacting is a matter of opinion, not fact. The truth is that we simply don’t know what will happen and how bad it will get. In many cases, we simply don’t have enough information to make accurate predictions, and when it comes to life-and-death topics such as disease outbreaks, the medical community wisely adopts a better-safe-than-sorry approach.
Both positions argue from a false certainty, a smugness that they know better than others do, that the Cassandras and Chicken Littles will get their comeuppance. Humans crave certainty, but science can’t offer it. Certainty is why psychic predictions such as Sylvia Browne’s (supposedly foretelling the outbreak, which I recently debunked) have such popular appeal. The same is true for conspiracy theories and religion: All offer certainty—the idea that whatever happens is being directed by hidden powers and all part of God’s plan (or the Illuminati’s schemes, take your pick).
Instead of bickering over how stupid or silly others are for however they’re reacting, it may be best to let them do their thing as long as it’s not hurting others. Polarization is a form of intolerance. Maybe this is a time to come together instead of mocking those who don’t share your opinions and fears. We all have different backgrounds and different tolerances for uncertainty.
This doesn’t mean that governments should be given license to do whatever they want, of course. Citizens differ in their opinions about everything the government does; there’s never universal agreement on anything (from gun control to education funding), and there’s no reason to assume that responses to COVID-19 would be any different. If you don’t like what measures state and federal governments are taking to stop the virus, welcome to the club. Some are of the opinion that too much is being done, while others think too little is being done. While the public noisily argues and points fingers on social media, scientists around the world are working hard to develop better treatments and vaccines.
Before believing or sharing information on social media, ask yourself questions such as: Is it true? Is it from a reliable source? But also, is it helpful or useful? Does it promote unity or encourage divisiveness? Are you sharing it because it contains practical information important to people’s health? Or are you sharing it just to have something to talk about, some vehicle to share your opinions about? Be safe, practice social and cyber distancing, and wash your hands.
This article originally appeared as part of a series of original articles on the COVID-19 pandemic by the Center for Inquiry as part of its Coronavirus Resource Center, created to help the public address the crisis with evidence-based information. You can find it HERE.
My article examines uncertainties in COVID-19 data, from infection rates to death rates. While some complain that pandemic predictions have been exaggerated for social or political gain, that’s not necessarily true; journalism routinely exaggerates dangers, highlighting dire predictions. But models are only as good as the data that goes into them, and collecting valid data on disease is inherently difficult. People act as if they have solid data underlying their opinions, but fail to recognize that we don’t have enough information to reach a valid conclusion…
You can read Part 1 Here.
Certainty and the Unknown Knowns
The fact that our knowledge is incomplete doesn’t mean that we don’t know anything about the virus; quite the contrary, we have a pretty good handle on the basics including how it spreads, what it does to the body, and how the average person can minimize their risk.
Humans crave certainty and binary answers, but science can’t offer them. The truth is that we simply don’t know what will happen or how bad it will get. For many aspects of COVID-19, we don’t have enough information to make accurate predictions. In a New York Times interview, one victim of the disease reflected on the measures being taken to stop the spread of the disease: “We could look back at this time in four months and say, ‘We did the right thing’—or we could say, ‘That was silly … or we might never know.’”
There are simply too many variables, too many factors involved. Even hindsight won’t be 20/20 but will instead be seen by many through a partisan prism. We can never know alternative history or what would have happened; it’s like the concern over the “Y2K bug” two decades ago. Was it all over nothing? We don’t know, because steps were taken to address the problem.
But uncertainty has been largely ignored by pundits and social media “experts” alike who routinely discuss and debate statistics while glossing over—or entirely ignoring—the fact that much of it is speculation and guesswork, unanchored by any hard data. It’s like hotly arguing over what exact time a great-aunt’s birthday party should be on July 4, when all she knows is that she was born sometime during the summer.
So, if we don’t know, why do people think they know or act as if they know?
Part of this is explained by what in psychology is known as the Dunning-Kruger effect: “in many areas of life, incompetent people do not recognize—scratch that, cannot recognize—just how incompetent they are … . Logic itself almost demands this lack of self-insight: For poor performers to recognize their ineptitude would require them to possess the very expertise they lack. To know how skilled or unskilled you are at using the rules of grammar, for instance, you must have a good working knowledge of those rules, an impossibility among the incompetent. Poor performers—and we are all poor performers at some things—fail to see the flaws in their thinking or the answers they lack.”
Most people don’t know enough about epidemiology, statistics, or research design to have a good idea of how valid disease data and projections are. And of course, there’s no reason they would have any expertise in those fields, any more than the average person would be expected to have expertise in dentistry or theater. But the difference is that many people feel confident enough in their grasp of the data—or, often, confident enough in someone else’s grasp of the data, as reported via their preferred news source—to comment on it and endorse it (and often argue about it).
Psychology of Uncertainty
Another factor is that people are uncomfortable admitting when they don’t know something or don’t have enough information to make a decision. If you’ve taken any standardized multiple-choice tests, you probably remember that some of the questions offered a tricky option, usually after three or four possibly correct specific answers. This is some version of “The answer cannot be determined from the information given.” This response (usually Option D) is designed in part to thwart guessing and to see whether test-takers recognize that the question is insoluble or the premise incomplete.
The principle applies widely in the real world. It’s difficult for many people—and especially experts, skeptics, and scientists—to admit they don’t know the answer to a question. Even if it’s outside our expertise, we often feel as if not knowing (or even not having a defensible opinion) is a sign of ignorance or failure. Real experts freely admit uncertainty about the data; Dr. Anthony Fauci has been candid about what he knows and what he doesn’t, responding for example when asked how many people could be carriers, “It’s somewhere between 25 and 50%. And trust me, that is an estimate. I don’t have any scientific data yet to say that. You know when we’ll get the scientific data? When we get those antibody tests out there.”
Yet there are many examples in our everyday lives when we simply don’t have enough information to reach a logical or valid conclusion about a given question, and often we don’t recognize that fact. We routinely make decisions based on incomplete information, and unlike on standardized tests, in the real world of messy complexities there are not always clear-cut objectively verifiable answers to settle the matter.
This is especially true online and in the context of a pandemic. Few people bother to chime in on social media discussions or threads to say that there’s not enough information given in the original post to reach a valid conclusion. People blithely share information and opinions without having the slightest clue as to whether it’s true or not. But recognizing that we don’t have enough information to reach a valid conclusion demonstrates a deeper and nuanced understanding of the issue. Noting that a premise needs more evidence or information to complete a logical argument and reach a valid conclusion is a form of critical thinking.
One element of conspiracy thinking is that those who disagree are either stupid (that is, gullible “sheeple” who believe and parrot everything they see in the news—usually specifically the “mainstream media” or “MSM”) or simply lying (experts and journalists across various media platforms who know the truth but are intentionally misleading the public for political or economic gain). This “If You Disagree with Me, Are You Stupid or Dishonest?” worldview has little room for uncertainty or charity and misunderstands the situation.
The appropriate position to take on most coronavirus predictions is one of agnosticism. It’s not that epidemiologists and other health officials have all the data they need to make good decisions and projections about public health and are instead carefully considering ways to fake data to deceive the public and journalists. It’s that they don’t have all the data they need to make better predictions, and as more information comes in, the projections will get more accurate. The solution is not to vilify or demonize doctors and epidemiologists but instead to understand the limitations of science and the biases of news and social media.
This article first appeared at the Center for Inquiry Coronavirus Resource Page; please check it out for additional information.
The new episode of Squaring the Strange is now out!
We chat about various Covid-related topics and then dive into a few examples of bad or misleading polls. First we go over a couple that don’t really set off alarm bells, like whether beards are sexy or what determines people’s beer-buying habits.
Then I dissect some bad reporting on polls and surveys that relate to much more important topics like Native American discrimination or the Holocaust, and we see how a bit of media literacy on how polls can be twisted around is a vital part of anyone’s skeptical toolbox.
For those who didn’t see it: A recent episode of Squaring the Strange revisited a classic mystery: The Bermuda Triangle! The Bermuda Triangle is a perennial favorite for seekers of the strange; Ben still gets calls regularly from students eager to ask him questions about this purported watery grave in the Caribbean. We look into the history of this mysterious place and a few factors that influenced its popularity. What does the Bermuda Triangle have to do with a college French class? And what does a new bit of 2020 shipwreck sleuthing have to do with the legend? The one thing the Bermuda Triangle does seem to suck in like a vortex is a kitchen sink of very weird theories, from Atlantis and UFOs to rogue tidal waves and magnetic time-space anomalies.
You can listen to it here!
In February a headline widely shared on social media decried poor reviews of the new film Birds of Prey and blamed it on male film critics hating the film for real or perceived feminist messages (and/or skewed expectations; it’s not clear). The article, by Sergio Pereira, was headlined “Birds of Prey: Most of the Negative Reviews Are from Men.”
The idea that the film was getting bad reviews because hordes of trolls or misogynists hated it was certainly plausible, and widely discussed for example in the case of the all-female Ghostbusters reboot a few years ago. As a media literacy educator and a film buff, I was curious to read more, and when I saw it on a friend’s Facebook wall I duly did what the writer wanted me (and everyone else) to do: I clicked on the link.
I half expected the article to contradict its own headline (a frustratingly common occurrence, even in mainstream news media stories), but in this case Pereira’s text accurately reflected its headline: “Director Cathy Yan’s Birds of Prey (and the Fantabulous Emancipation of One Harley Quinn), starring Margot Robbie as the Clown Princess of Crime, debuted to a Fresh rating on Rotten Tomatoes. While the score has dropped as more reviews pour in, the most noticeable thing is that the bulk of the negative reviews come from male reviewers. Naturally, just because the film is a first for superhero movies—because it’s written by a woman, directed by a woman and starring a mostly all-female cast—doesn’t absolve it from criticism. It deserves to be judged for both its strengths and weaknesses like any other piece of art. What is concerning, though, is how less than 10% of the negative reviews are from women.” In the article and later on Twitter Pereira attributed the negative reviews to an alleged disparity between what male film reviewers expected from the film and what they actually saw, describing it as “literally… where a bunch of fools got upset about the movie they THOUGHT it was, instead of what it ACTUALLY was.”
I was reminded of the important skeptical dictum that before trying to explain why something is the case, be sure that it is the case; in other words question your assumptions. This is a common error on social media, in journalism, and of course in everyday life. We shouldn’t just believe what people tell us—especially online. To be fair, the website was CBR.com (formerly known as Comic Book Resources) and not, for example, BBC News or The New York Times. It’s pop culture news, but news nonetheless.
Curious to see what Pereira was describing, I clicked the link to the Rotten Tomatoes listing and immediately knew that something wasn’t right. The film had an 80% Fresh rating—meaning that most of the reviews were positive. In fact, according to MSNBC, “The film charmed critics [and is] the third-highest rating for any movie in the DCEU, just behind Wonder Woman and Shazam.” Birds of Prey may not have lived up to expectations, but the film was doing fairly well, and hardly bombing—because of male film reviewers or for any other reason.
I know something about film reviewing; I’ve been a film reviewer since 1994 and have attended dozens of film festivals, both as a fan and as a journalist. I’ve also written and directed two short films and taken screenwriting courses. One thing I’ve noticed is that, for whatever reason, most film critics are male (a fact I double-checked, learning that the field is about 80% male). So, doing some very basic math in my head, I knew there was something very wrong—not just with the headline, but with the entire premise of Pereira’s article.
Here’s a quick calculation: Say there are 100 reviewers; 80 of them are men and 20 are women. If half of each gender gives the film a positive review, that’s 40 positive reviews from men and 10 from women, for a 50% approval (or “Fresh”) rating. If half of the men (40) and all of the women (20) give it a positive review, that’s a 60% Fresh rating. If three-quarters of the men (60) and all of the women (20) give it a positive review, that’s an 80% Fresh rating.
Birds of Prey had an 80% Fresh rating. So even if every single female reviewer gave the film a positive review—which we know didn’t happen just from a glance at the reviews on RottenTomatoes—then that means that at least three out of four men gave it a positive review. Therefore it might statistically be true that “most of the negative reviews are from men,” but the exact opposite is also true: most of the positive reviews are from men, simply because there are more male reviewers. The article’s claim that “the bulk of the negative reviews come from male reviewers” is misleading at best and factually wrong at worst.
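For readers who want to check the arithmetic themselves, here’s a minimal Python sketch of the calculation above. The 80/20 critic split and the approval rates are the illustrative figures from my example, not actual Rotten Tomatoes data:

```python
# Back-of-the-envelope: with an 80/20 male/female split among critics,
# what overall "Fresh" rating results from different approval rates?

def fresh_rating(male_critics, female_critics, male_approval, female_approval):
    """Overall percentage of positive reviews, given critic counts and
    the fraction of each group that reviewed the film positively."""
    positive = male_critics * male_approval + female_critics * female_approval
    total = male_critics + female_critics
    return 100 * positive / total

print(fresh_rating(80, 20, 0.50, 0.50))  # half of each gender positive -> 50.0
print(fresh_rating(80, 20, 0.50, 1.00))  # half of men, all women      -> 60.0
print(fresh_rating(80, 20, 0.75, 1.00))  # 3/4 of men, all women       -> 80.0
```

The last line makes the point: to reach an 80% Fresh rating, at least three-quarters of the (majority-male) critic pool must have liked the film even if every single woman did.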
I also thought it strange that Pereira didn’t specify which male reviewers he was talking about. By “fools” did he mean that professional film critics such as Richard Roeper and Richard Brody were overwhelmingly writing scathing reviews of the film? Or did he mean reviews from random male film fans? And if the latter, how did he determine the gender of the anonymous reviewers? It was possible that his statistic was correct, but readers would need much more information about where he got his numbers. Did he take the time to gather data, create a spreadsheet, and do some calculations? Did he skim the reviews for a minute and do a rough estimate? Did he just make it up?
I was wary of assuming the burden of proof regarding this claim. After all, Sergio Pereira was the one who claimed that most of the negative reviews were by men. The burden of proof is always on the person making the claim; it’s not up to me to show he’s wrong, but up to him to show he’s right. Presumably he got that number from somewhere—but where?
I contacted Pereira via Twitter and asked him how he arrived at the calculations. He did not reply, so I later contacted CBR directly, emailing the editors with a concise, polite note saying that the headline and article seemed to be false, and asking for clarification: “He offers no information at all about how he determined that, nor that less than 10% of the negative reviews are from women. The RottenTomatoes website doesn’t break reviewers down by gender (though names and photos offer a clue), so Pereira would have to go through one by one to verify each reviewer’s gender. It’s also not clear whether he’s referring to Top Reviewers or All Reviewers, which are of course different datasets. I spent about 20 minutes skimming the Birds of Prey reviews and didn’t see the large gender imbalance reported in your article (and didn’t have hours to spend verifying Pereira’s numbers, which I couldn’t do anyway without knowing what criterion he used). Any clarification about Pereira’s methodology would be appreciated, thank you.” They also did not respond.
Since neither the writer nor the editors would respond, I resignedly took a stab at trying to figure out where Pereira got his numbers. I looked at the Top Critics and did a quick analysis. I found 41 of them whose reviews appeared at the time: 26 men and 15 women. As I suspected, men had indeed written the statistical majority of both the positive and negative reviews.
On my friend’s Facebook page where I first saw the story being shared I posted a comment noting what seemed to be an error, and offering anyone an easy way to assess whether the headline was plausible: “A quick-and-dirty way to assess whether the headline is plausible is to note that 1) 80% of film critics are male, and that 2) Birds of Prey has an 80% Fresh rating, with 230 Fresh (positive) and 59 Rotten (negative). So just glancing at it, with 80% of reviewers male, how could the film possibly have such a high rating if most of the men gave it negative reviews?”
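The same point can be made with a worst-case check using the actual counts quoted above (230 Fresh, 59 Rotten, and the roughly 80% male share of critics). This quick Python sketch is my own back-of-the-envelope estimate, not a figure from Pereira or Rotten Tomatoes:

```python
# Worst case for the headline: assume EVERY one of the 59 negative reviews
# came from a man. What share of male critics still reviewed it positively?

fresh, rotten = 230, 59
total = fresh + rotten                 # 289 reviews in all
male = round(0.8 * total)              # ~231 male reviewers, using the ~80% share
male_positive_floor = male - rotten    # 172 positive male reviews, even worst case
share = male_positive_floor / male
print(f"{share:.0%}")                  # prints 74%: roughly 3 in 4 men positive
```

Even granting the headline every possible benefit of the doubt, about three-quarters of the male critics must have liked the film.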
The reactions from women on the post were interesting—and gendered: one wrote, “did you read the actual article or just the headline? #wellactually,” and another, “*haaaaard fucking eyeroll* oh look y’all. The dude who thinks he’s smarter than the author admits its maybe a little perhaps possible that women know what they’re talking about.”
The latter comment was puzzling, since Sergio Pereira is a man. It wasn’t a man questioning whether women knew what they were talking about; it was a man questioning whether another man’s harmful stereotypes about women highlighted in his online article were true. I was reminded of the quote attributed to Mark Twain: “It’s easier to fool people than to convince them they’ve been fooled.” Much of the reaction I got was critical of me for questioning the headline and the article. I got the odd impression that some thought I was somehow defending the supposed majority male film critics who didn’t like the film, which was absurd. I hadn’t (and haven’t) seen the film and have no opinion about it, and couldn’t care less whether most of the male critics liked or didn’t like the film. My interest is as a media literacy educator and someone who’s researched misleading statistics.
To scientists, journalists, and skeptics, asking for evidence is an integral part of the process of parsing fact from fiction, true claims from false ones. If you want me to believe a claim—any claim, from advertising claims to psychic powers, conspiracy theories to the validity of repressed memories—I’m going to ask for evidence. It doesn’t mean I think (or assume) you’re wrong or lying, it just means I want a reason to believe what you tell me. This is especially true for memes and factoids shared on social media and designed to elicit outrage or scorn.
The problem is that when people who share such content occasionally encounter someone who is sincerely trying to understand an issue or get to the bottom of a question, their knee-jerk reaction is often to assume the worst about them. Blinded by their own biases, they project those biases onto others. This is especially true when the subject is controversial, such as with race, gender, or politics. To them, the only reason a person would question a claim is to discredit that claim, or a larger narrative it’s being offered in support of.
Of course that’s not true; people should question all claims, and especially claims that conform to their pre-existing beliefs and assumptions; those are precisely the ones most likely to slip under the critical thinking radar and become incorporated into your beliefs and opinions. I question claims from across the spectrum, including those from sources I agree with. To my mind the other approach has it backwards: How do you know whether to believe a claim if you don’t question it?
If the reviews are attributable to sexism or misogyny due to feminist themes in the script—instead of, for example, lackluster acting, clunky dialogue, lack of star power, an unpopular title (the film was renamed during its release, a very unusual marketing move), or any number of other factors unrelated to its content—then presumably that same effect would be clear in other similar films.
Ironically, another article on CBR by Nicole Sobon published a day earlier—and linked to in Sergio Pereira’s piece—offers several reasons why Birds of Prey wasn’t doing as well at the box office, and misogyny was conspicuously not among them: “Despite its critical success, the film is struggling to take flight at the box office…. One of the biggest problems with Birds of Prey was its marketing. The trailers did a great job of reintroducing Robbie’s Quinn following 2017’s Suicide Squad, but they failed to highlight the actual Birds of Prey. Also working against it was the late review embargo. It’s widely believed that when a studio is confident in its product, it will hold critics screenings about two weeks before the film’s release, with reviews following shortly afterward to buoy audience anticipation and drive ticket sales. However, Birds of Prey reviews didn’t arrive until three days before the film’s release. And, while the marketing finally fully kicked into gear by that point, for the general audience, it might’ve been too late to care. Especially when Harley Quinn’s previous big-screen appearance was in a poorly received film.”
The Cinematic Gender Divide
A gender divide in positive versus negative reviews of ostensibly feminist films (however you may want to measure that, whether by the Bechdel Test or some other way—such as an all-female cast, or female writers and directors) is eminently possible. It’s not a subject that I’ve personally researched and quantified, but since Pereira didn’t reference any of this in his article, I did some research on it.
For example Salon did a piece on gender divisions in film criticism, though not necessarily finding that it was rooted in sexism or a reaction to feminist messages: “The recent Ghostbusters reboot, directed by Paul Feig, received significantly higher scores from female critics than their male counterparts. While 79.3 percent of women who reviewed the film gave it a positive review, just 70.8 percent of male critics agreed with them. That’s a difference of 8.5 percent… In total, 84 percent of the films surveyed received more positive reviews from female reviewers than from men. The movies that showed the greatest divide included A Walk to Remember, the Nicholas Sparks adaptation; Twilight, the 2008 vampire romance; P.S. I Love You, a melodrama about a woman (Hilary Swank) grieving the loss of her partner; Divergent, the teen dystopia; and Divine Secrets of the Ya-Ya Sisterhood… Men tended to dislike Young Adult literary adaptations and most films marketed to teenage girls. Pitch Perfect, which was liked by 93.8 percent of female critics, was rated much lower by men—just 76.9 percent of male reviewers liked it.” (There was nothing supporting Pereira’s assertion that critics, male or female, didn’t like films marketed to the opposite gender because of a perceived gap between what the reviewers expected from a film versus what the film delivered.)
The phrasing “just 76.9 percent of male reviewers liked Pitch Perfect” of course invites an ambiguous comparison (how many should have liked the film? 90%? 95%? 100%?). More to the point, if over three-quarters of men liked the obviously female-driven film Pitch Perfect, that rather contradicts Pereira’s thesis. In fact Pitch Perfect has an 80% Fresh rating on RottenTomatoes—exactly the same score as Birds of Prey. We have two female-driven films with the majority of male reviewers giving both a positive review—yet Pereira suggests that male reviewers pilloried Birds of Prey.
Perpetuating Harmful Stereotypes
Journalists making errors and writing clickbait headlines based on those errors is nothing new, of course. I’ve written dozens of media literacy articles about this sort of thing. As I’ve discussed before, the danger is that these articles mislead people, and reinforce harmful beliefs and stereotypes. In some cases I’ve researched, misleading polls and surveys create the false impression that most Americans are Holocaust deniers—a flatly false and highly toxic belief that can only fuel fears of anti-Semitism (and possibly comfort racists). In other cases these sorts of headlines exaggerate fear and hatred of the transgender community.
As noted, Pereira’s piece could have been titled, “Birds of Prey: Most of the Positive Reviews Are from Men.” That would have empowered and encouraged women—but gotten fewer outrage clicks.
In many cases what people think other people think about the world is just as important as what they personally think. This is due to what’s called the third-person effect, or pluralistic ignorance. We are of course intimately familiar with our own likes and desires—but where do we get our information about the 99.99% of the world we don’t and can’t directly experience or evaluate? When it comes to our understanding and assumptions about the rest of the world, our sources of information quickly dwindle. Outside of a small circle of friends and family, much of what we know about what others think and believe comes from the media, specifically social and news media. These sources don’t faithfully represent the outside world; instead they distort it in predictable and systematic ways, highlighting the bad and the outrageous. The media magnify tragedy, exploitation, sensationalism, and bad news, and thus we assume that others embody and endorse those traits.
We’re seeing this at the moment with shortages of toilet paper and bottled water in response to Covid-19 fears. Neither is key to keeping safe or preventing the spread of the virus, yet people are reacting because other people are reacting. There’s a shortage because people believe there’s a shortage—much in the way that the Kardashians are famous for being famous. In much the same way, when news and social media exaggerate (or in some cases fabricate) examples of toxic behavior, it creates the false perception that such behavior is more pervasive (and widely accepted) than it actually is. Whether or not male film reviewers mostly hated Birds of Prey as Pereira suggested—and they didn’t—the perception that they did can itself cause harm.
I don’t think it was done intentionally or with malice. But I hope Pereira’s piece doesn’t deter an aspiring female filmmaker who may read his widely-shared column and assume that no matter how great her work is, the male-dominated film critic field will just look for ways to shut her out and keep her down merely because of her gender.
There certainly are significant and well-documented gender disparities in the film industry, on both sides of the camera, from actor pay disparity to crew hiring. But misogynist men hating on Birds of Prey simply because it’s a female-led film isn’t an example of that. I note with some irony that Pereira’s article concludes by saying that “Birds of Prey was meant to be a celebration, but it sadly experienced the same thing as every other female-driven film: a host of negativity about nothing.” That “host of negativity” is not reflected in male film reviews but instead in Sergio Pereira’s piece. His CBR article is itself perpetuating harmful stereotypes about female-driven films, which is unfortunate given the marginalization of women and minorities in comic book and gaming circles.
A longer version of this article appeared on my Center for Inquiry blog; you can read it here.
In recent weeks there has been a flood of rumors, myths, and misinformation about the current coronavirus pandemic, Covid-19. One of the most curious examples is the recent resurrection of a posthumous prediction by psychic Sylvia Browne.
In her 2008 book End of Days, Browne predicted that “In around 2020 a severe pneumonia-like illness will spread throughout the globe, attacking the lungs and the bronchial tubes and resisting all known treatments. Almost more baffling than the illness itself will be the fact that it will suddenly vanish as quickly as it arrived, attack again ten years later, and then disappear completely.”
This led to many on social media assuming that Browne had accurately predicted the Covid-19 outbreak, and Kim Kardashian among others shared such posts, causing the book to become a best-seller once again.
There’s a lot packed into these two sentences, and I recently did a deep dive into it. First, we have an indefinite date range (“in around 2020”), which depends on how loosely you interpret the word “around,” but plausibly covers seven (or more) years. Browne predicted “a severe pneumonia-like illness,” but Covid-19 is not “a severe pneumonia-like illness” (though it can in some cases lead to pneumonia). Most of those infected (about 80 percent) have mild symptoms and recover just fine, and the disease has a mortality rate of between 2 percent and 4 percent. Browne claims it “will spread throughout the globe, attacking the lungs and the bronchial tubes,” and Covid-19 has indeed spread throughout the globe, but Browne also says the disease she’s describing “resists all known treatments.” This does not describe Covid-19; in fact, doctors know how to treat the disease—treatment is essentially the same as for influenza or other similar respiratory infections. This coronavirus is not “resisting” all (or any) known treatments.
She further describes the illness: “Almost more baffling than the illness itself will be the fact that it will suddenly vanish as quickly as it arrived, attack again ten years later, and then disappear completely.” This is false; Covid-19 has not “suddenly vanished as quickly as it arrived,” and even if it eventually does, its emergence pattern would have to be compared with other typical epidemiology data to know whether it’s “baffling.”
You can read my full piece at the link above, but basically we have a two-sentence prediction written in 2008 by a convicted felon with a long track record of failures. Half of the prediction has demonstrably not happened. The other half of the prophecy describes an infectious respiratory illness that does not resemble Covid-19 in its particulars that would supposedly happen within a few years of 2020. At best, maybe one-sixth of what she said is accurate, depending again on how much latitude you’re willing to give her in terms of dates and vague descriptions.
But there’s more to the story, because as it turns out Browne made at least one other similar prediction with some significant differences. I discovered this a few days ago. I have several books by Browne in my library (mostly bought at Goodwill and library sales), among them Browne’s 2004 book Prophecy: What the Future Holds for You (written with Lindsay Harrison, from Dutton Publishing).
On p. 214, I found an earlier, somewhat different version of this same prophecy. Details and exact words matter, so here’s her 2004 prediction verbatim: “By 2020 we’ll see more people than ever wearing surgical masks and rubber gloves in public, inspired by an outbreak of a severe pneumonia-like illness that attacks both the lungs and the bronchial tubes and is ruthlessly resistant to treatment. This illness will be particularly baffling in that, after causing a winter of absolute panic, it will seem to vanish completely until ten years later, making both its source and its cure that much more mysterious.”
Comparing this to her later 2008 version (“In around 2020 a severe pneumonia-like illness will spread throughout the globe, attacking the lungs and the bronchial tubes and resisting all known treatments. Almost more baffling than the illness itself will be the fact that it will suddenly vanish as quickly as it arrived, attack again ten years later, and then disappear completely”) we can see a few differences.
It’s not uncommon for writers to revise and republish their work in different forms, sometimes changing or summarizing material for different formats and purposes. But in the case of predictions, it also serves an important, albeit unrecognized, purpose: It greatly increases the chances of a prediction—or, more accurately, some version of that prediction—being retroactively “right.” It’s one thing to make a single (seemingly specific) prediction about a future event; it’s another to make several different versions of that prediction so that your followers can pick and choose which details they think better fit the situation.
Note that the earlier prediction—which said “By 2020” (a limited, much more specific date than “In around 2020,” which as I noted spans several years)—focused on “more people than ever wearing surgical masks and rubber gloves in public,” which was “inspired by an outbreak of a severe pneumonia-like illness that attacks both the lungs and the bronchial tubes and is ruthlessly resistant to treatment.”
It’s certainly true that surgical masks (and, to a much lesser extent, gloves) are much more common today than, say, in 2019, but that’s an obvious and predictable reaction—as Browne herself admits—to the outbreak she mentions. Had Browne predicted any respiratory disease outbreak and specified that more people would not react by wearing masks and gloves, that would itself be an amazing (if nonsensical) prophecy. While we’re on the topic of self-evident revelations, note that Browne’s phrase “both the lungs and the bronchial tubes” is redundant, providing only the illusion of specificity, since bronchial tubes are inside the lungs; saying “both … and” is meaningless, like saying “both the face and the nose will be affected,” or “both the West Coast and California will be affected.” Either Browne doesn’t know where bronchial tubes are, or she assumes her readers don’t.
Note that “This illness will be particularly baffling in that, after causing a winter of absolute panic, it will seem to vanish completely until ten years later, making both its source and its cure that much more mysterious” was changed to “Almost more baffling than the illness itself will be the fact that it will suddenly vanish as quickly as it arrived, attack again ten years later, and then disappear completely.”
Note that the qualifier “after causing a winter of absolute panic,” present in the earlier version, was dropped from the later one—which is convenient for Browne, because the widespread panic surrounding Covid-19 didn’t begin in winter but instead in mid-March. (Of course, I’m not suggesting that Browne predicted in 2008 that her 2004 prediction would be wrong and changed the phrasing!)
Another noteworthy element dropped in the later edition was the claim that the disease’s decade-long disappearance would make “both its source and its cure that much more mysterious.” But “seeming to vanish completely” for a decade has nothing to do with whether “its source and its cure” are mysterious. The source of the outbreak has been pretty well established: likely a meat market in Wuhan, China. The exact source—the specific name of the very first person who had it (and where he or she got it from), the so-called Patient Zero—may never be known, not because it’s inherently “mysterious” but merely because epidemiology is a difficult task.
It’s not clear what Browne means by a “cure” because viruses themselves can’t be “cured,” though the diseases they lead to can be prevented with vaccination. Like the common cold, influenza, and most other contagious respiratory illnesses, people are “cured” of Covid-19 when they recover from it. In any event, neither the source nor the “cure” (whatever that would mean in the context of Covid-19) is “mysterious” by medical and epidemiological standards.
After I wrote a piece about Browne’s failed predictions, I soon received hate mail from many of her fans who defended the accuracy of her prophecy and demanded that I take a closer look at her predictions. So I did; many of her predictions are set far in the future, but I did find a few dozen in her book Prophecy: What the Future Holds for You that referred to events between the time the book was published (2004) and this year. Here’s a sampling, in chronological order:
• “There will be a significant vaccine against HIV/AIDS in 2005” (p. 211).
• “The common cold will be over with by about 2009 or 2010” (p. 204).
• “By around 2010, law enforcement’s use of psychics will ‘come out of the closet’ and be a commonplace, widely accepted collaboration” (p. 180).
• “By around 2010, it will be mandated that a DNA sample of every infant born in the United States is taken and recorded at the time of the baby’s birth” (p. 182).
• “In about 2011, home security systems start becoming common … The windows are unbreakable glass, able to be opened only by the homeowner … Doors and windows will no longer have visible, traditional locks that can be picked or tampered with. Instead the security system allows access … by ‘eyeprints’” (p. 169).
• “There won’t be a successful manned exploration of Mars until around 2012” (p. 128).
• “In around 2012 or 2013 a coalition of five major international corporations … will combine their almost limitless resources and mobilize a vast, worldwide, ultimately successful movement to revitalize the rainforests” (p. 105).
• “By around 2014, pills, capsules, and even most liquid medicine will be replaced by readily accessible vaporized heat and oxygen chambers that can infuse every pore of the body with the recommended medications” (p. 209).
• “By the year 2015 invasive surgery involving cutting and scalpels and stitches and scars will be virtually unheard of” (p. 205).
• “Telemarketers will have long since vanished by 2015” (p. 171).
• “To give law enforcement one more added edge, by 2015 their custom-designed high-speed vehicles will be atomically powered and capable of becoming airborne enough to fly several feet above other traffic” (p. 190).
• “The search for extraterrestrials will ultimately end in around 2018 … because they begin stepping forward and identifying themselves to various international organizations and heads of state, particularly the United Nations, NATO, Scotland Yard, NASA, and a summit being held at Camp David” (p. 127).
• “By the year 2020 researchers will have created a wonderful material … able to perfectly duplicate the eardrum and will be routinely implanted, to restore hearing to countless thousands … Long before 2020, blindness will become a thing of the past” (p. 203).
• “By 2020 we’re going to see an end to the institution of marriage as we know it” (p. 255).
• “By about 2020 we’ll see the end of the one-man presidency and the costly, seemingly perpetual cycle of presidential campaigns and elections” (p. 135).
• “The year 2020 will spark an amazing resurgence in the popularity of the barter system throughout the United States, with goods and services almost becoming a more common form of payment than cash” (p. 140–141).
There’s more—oh, so much more—but you get the idea. Hundreds of predictions: most wrong, vague, or unverifiable, some set in the distant future, a few correct, and so on. On p. 97 of Prophecy: What the Future Holds For You, Browne claims that “my accuracy rate is somewhere between 87 and 90 percent if I’m recalling correctly.” Yet another failed prediction.
The new episode of Squaring the Strange is now out, for those who want a break from virus bad news. Celestia, Kenny Biddle, and I tracked psychic detectives’ involvement in current cases. In this part 2 of the show, I talk about my examination of the case of a 14-year-old Ohio boy who went missing last Christmas and the psychic detectives who got involved. Details of the case might be disturbing to some, but it’s an important topic.
Please check it out, you can listen HERE!
The new episode of Squaring the Strange is out! The first of a two-parter, in this episode Celestia and Kenny Biddle each took a recent (then-current) missing person case and closely examined how psychic detectives “helped” (or interfered) with each. A sobering topic, but we think you’ll find it enlightening. Please check it out HERE!
For my Russian-speaking friends, I present my appearance on a Russian television show talking about monster folklore. Though Moscow paid for it, I promise my part (around 19 minutes in) is not Putin’s dezinformatsiya, and I got no Shill Rubles for it!
You can see it HERE!
As my awesome podcast Squaring the Strange (co-hosted by Pascual Romero and Celestia Ward) nears its three-year anniversary, I will be posting episode summaries from the past year to remind people some of the diverse topics we’ve covered on the show, ranging from ghosts to folklore to mysteries and topical skepticism. If you haven’t heard it, please give a listen!
The recent Squaring the Strange is even more awesome than most! We talk with expert Ron Pine about the Minnesota Iceman, a “sasquatchcicle” hoax of truly epic proportions. How did a sideshow gaff fool two prominent cryptid researchers, and make it all the way to the Smithsonian for (limited) examination? What does J. Edgar Hoover have to do with this? Or a reclusive California millionaire? Listen and find out!
So this is cool: I’m quoted in an article on Bigfoot in The Mountaineer:
If you wear a size 14 shoe, chances are some of your high-school classmates called you “Bigfoot.” But that doesn’t mean you are an ape-like beast who may — or may not — just be a myth. A 1958 newspaper column began the whole thing. The Humboldt Times received a letter from a reader reporting loggers in California who had stumbled upon mysterious and excessively large footprints. The two journalists who reported the discovery treated it as a joke. But to their great surprise, the story caught on and soon spread far and wide. Bigfoot was born. Of course, reports of large beasts were not exactly new. The Tibetans had a Yeti, familiarly known as the “Abominable Snowman,” and an Indian Nation in Canada had its “Sasquatch.”
Guess what? Cochran found out the hair did not belong to Bigfoot. It was sent back to Byrne, with the conclusion it belonged to the deer family. Four decades later, the FBI declassified the “bigfoot file” about having done this analysis. “Byrne was one of the more prominent Bigfoot researchers,” said Benjamin Radford, deputy editor of the Skeptical Inquirer magazine. “In the 1970s, Bigfoot was very popular.”
You can read the rest of the article HERE!
As the world enters its third full month dealing with the deadly coronavirus, misinformation is running rampant. For many, the medical and epidemiological aspects of the outbreak are the most important and salient elements, but there are other prisms through which we can examine this public health menace.
There are many facets to this outbreak, including economic damage, cultural changes, and so on. However, my interest and background is in media literacy, psychology, and folklore (including rumor, legend, and conspiracy), and my focus here is a brief overview of some of the lore surrounding the current outbreak. Before I get into the folkloric aspects of the disease, let’s review the basics of what we know so far.
First, the name is a bit misleading; it’s a coronavirus, not the coronavirus. Coronavirus is a category of viruses; the disease this one causes is dubbed “Covid-19.” Two of the best known and most deadly other coronaviruses are SARS (Severe Acute Respiratory Syndrome, first identified in 2003) and MERS (Middle East Respiratory Syndrome, identified in 2012).
The symptoms of Covid-19 are typical of influenza and include a cough, sometimes with a fever, shortness of breath, nausea, vomiting, and/or diarrhea. Most (about 80 percent) of infected patients recover within a week or two, like patients with a bad cold. The other 20 percent contract severe infections such as pneumonia, sometimes leading to death. The virus Covid-19 is spreading faster than either MERS or SARS, but it’s much less deadly than either of those. The death rate for Covid-19 is 2 percent, compared to 10 percent for SARS and 35 percent for MERS. There’s no vaccine, and because it’s not bacterial, antibiotics won’t help.
The first case was reported in late December 2019 in Wuhan, China. About a month later the Health and Human Services Department declared a U.S. public health emergency. The average person is at very low risk, and Americans are at far greater risk of getting the flu—about 10 percent of the public gets it each year. Three cruise ships and several airplanes have been quarantined. There are about a dozen confirmed cases in the U.S., and most of the infected are in China or are people who visited there. Though the number of people infected in China sounds alarming, keep in mind the country’s population of 1.4 billion.
The information issues can be roughly broken down into three (at times overlapping) categories: 1) Lack of information; 2) Misinformation; and 3) Disinformation.
The lack of information stems from the fact that scientists are still learning about this specific virus. Much is known about it from information gathered so far (summarized above), but much remains to be learned.
The lack of information has been complicated by a lack of transparency by the Chinese government, which has sought to stifle early alarms about it raised by doctors, including Li Wenliang, who recently died. As The New York Times reported:
On Friday, the doctor, Li Wenliang, died after contracting the very illness he had told medical school classmates about in an online chat room, the coronavirus. He joined the more than 600 other Chinese who have died in an outbreak that has now spread across the globe. Dr. Li “had the misfortune to be infected during the fight against the novel coronavirus pneumonia epidemic, and all-out efforts to save him failed,” the Wuhan City Central Hospital said on Weibo, the Chinese social media service. Even before his death, Dr. Li had become a hero to many Chinese after word of his treatment at the hands of the authorities emerged. In early January, he was called in by both medical officials and the police, and forced to sign a statement denouncing his warning as an unfounded and illegal rumor.
Chinese officials were slow to share information and admit the scope of the outbreak. This isn’t necessarily evidence of a conspiracy—governments are often loath to admit bad news or potentially embarrassing or damaging information (recall that it took nearly a week for Iran to admit it had unintentionally shot down a passenger airliner over its skies in January)—but is part of the Chinese government’s long-standing policies of restricting news reporting and social media. Nonetheless, China’s actions have fueled anxiety and conspiracies; more on that presently.
There are various types of misinformation, revolving around a handful of central concerns typical of disease rumors. In his book An Epidemic of Rumors: How Stories Shape Our Perceptions of Disease, Jon D. Lee notes:
People use certain sets of narratives to discuss the presence of illness, mediate their fears of it, come to terms with it, and otherwise incorporate its presence into their daily routines … Some of these narratives express a harsher, more paranoid view of reality than others, some are openly racist and xenophobic, and some are more concerned with issues of treatment and prevention than blame—but all revolve around a single emotion in all its many forms: fear. (169)
As Lee mentions, one common aspect is xenophobia and contamination fears. Many reports, in news media but on social media especially, focus on the “other,” the dirty aberrant outsiders who “created” or spread the menace. Racism is a common theme in rumors and urban legends—what gross things “they” eat or do. As Prof. Andrea Kitta notes in her book The Kiss of Death: Contagion, Contamination, and Folklore:
The intriguing part of disease legends is that, in addition to fear of illness, they express primarily a fear of outsiders … Patient zero [the assumed origin of the “new” disease] not only provides a scapegoat but also serves as an example to others: as long as people do not act in the same way as patient zero, they are safe. (27–28)
In the case of Covid-19, rumors have suggested that seemingly bizarre (to Americans anyway) eating habits of Chinese were to blame, specifically bats. One video circulated allegedly showing Chinese preparing bat soup, suggesting it was the cause of the outbreak, though it was later revealed to have been filmed in Palau, Micronesia.
The idea of disease and death coming from “unclean” practices has a long history. One well-known myth is that AIDS originated when someone (presumably an African man) had sex with a monkey or ape. This linked moralistic views of sexuality with the later spread of the disease, primarily among the homosexual community. More likely, however, chimps infected with simian immunodeficiency virus were killed and eaten as game meat, a documented practice, which transferred the virus to humans and spawned HIV (human immunodeficiency virus), which in turn causes AIDS.
The fear of foreigners and immigrants bringing disease to the country was of course raised a few years ago when a Fox News contributor suggested without evidence that a migrant caravan from Honduras and Guatemala coming through Mexico carried leprosy, smallpox, and other dreaded diseases. This claim was quickly debunked.
Then there are the conspiracies, prominent among them the disease’s origin. Several are circulating, claiming for example that Covid-19 is in fact a bioweapon that has either been intentionally deployed or escaped/stolen from a secure top secret government lab. Some have claimed that it’s a plot (by the Bill and Melinda Gates Foundation or another NGO or Big Pharma) to sell vaccines—apparently unaware that there is no vaccine available at any price.
This is a classic conspiracy trope, evoked to explain countless bad things, ranging from chupacabras to chemtrails and diseases. This is similar to urban legends and rumors in the African American community, claiming that AIDS was created by the American government to kill blacks, or that soft drinks and foods (Tropical Fantasy soda and Church’s Fried Chicken, for example) contained ingredients that sterilized the black community (for more on this, see Patricia Turner’s book I Heard It Through the Grapevine: Rumor in African-American Culture). In Pakistan and India, public health workers have been attacked and even killed trying to give polio vaccinations, rumored to be part of an American plot.
Of course such conspiracies go back centuries. As William Naphy notes in his book Plagues, Poisons, and Potions: Plague Spreading Conspiracies in the Western Alps c. 1530-1640, people were accused of intentionally spreading the bubonic plague. Most people believed that the plague was a sign of God’s wrath, a pustular and particularly punitive punishment for the sin of straying from Biblical teachings. “Early theories saw causes in: astral conjunctions, the passing of comets; unusual weather conditions … noxious exhalations from the corpses on battlefields” and so on (vii). Naphy notes that “In 1577, Claude de Rubys, one of the city’s premier orators and a rabid anti-Protestant, had openly accused the city’s Huguenots of conspiring to destroy Catholics by giving them the plague” (174). Confessions, often obtained under torture, implicated low-paid foreigners who had been hired to help plague victims and disinfect their homes.
Other folkloric versions of intentional disease spreading include urban legends of AIDS-infected needles placed in payphone coin return slots. Indeed, that rumor was part of an older and larger tradition; as folklorist Gillian Bennett notes in her book Bodies: Sex, Violence, Disease, and Death in Contemporary Legend, in Europe and elsewhere “Stories proliferated about deliberately contaminated doorknobs, light switches, and sandboxes on playgrounds” (115).
Various theories have surfaced online suggesting ways to prevent the virus. They include avoiding spicy food (which doesn’t work); eating garlic (which also doesn’t work); and drinking bleach (which really, really doesn’t work).
In addition, there’s also something called MMS, or “miracle mineral solution,” and the word miracle in the name should be a big red flag about its efficacy. The solution is 28 percent sodium chlorite mixed in distilled water, and there are reports that it’s being sold online for $900 per gallon (or if that’s a bit pricey, you can get a four-ounce bottle for about $30).
The FDA takes a dim view of this, noting that it
has received many reports that these products, sold online as “treatments,” have made consumers sick. The FDA first warned consumers about the products in 2010. But they are still being promoted on social media and sold online by many independent distributors. The agency strongly urges consumers not to purchase or use these products. The products are known by various names, including Miracle or Master Mineral Solution, Miracle Mineral Supplement, MMS, Chlorine Dioxide Protocol, and Water Purification Solution. When mixed according to package directions, they become a strong chemical that is used as bleach. Some distributors are making false—and dangerous—claims that Miracle Mineral Supplement mixed with citric acid is an antimicrobial, antiviral, and antibacterial liquid that is a remedy for autism, cancer, HIV/AIDS, hepatitis, flu, and other conditions.
It’s true that bleach can kill viruses—when used full strength on surfaces, not when diluted and ingested. They’re two very different things; confuse the two at your great peril.
Folk remedies such as these are appealing because they are something that victims (and potential victims) can do—some tangible way they can take action and assume control over their own health and lives. Even if the treatment is unproven or may be just a rumor, at least they feel like they’re doing something.
There have been several false reports and rumors of outbreaks in local hospitals across the country, including in Los Angeles, Santa Clarita, and in Dallas County, Texas. In all those cases, false social media posts have needlessly alarmed the public—and in some cases spawned conspiracy theories. After all, some random, anonymous mom on Facebook shared a screen-captured Tweet from some other random person who had a friend of a friend with “insider information” about some anonymous person in a local hospital who’s dying with Covid-19—but there’s nothing in the news about it! Who are you going to believe?
Then there’s Canadian rapper/YouTube cretin James Potok, who last week stood up near the end of his WestJet flight from Toronto to Jamaica and announced loudly to the 240 passengers that he had just come from Wuhan, China, and “I don’t feel too well.” He recorded it with a cell phone, planning to post it online as a funny publicity stunt. Flight attendants reseated him, and the plane returned to Toronto where police and medical professionals escorted him off the plane. Of course he tested negative and was promptly arrested.
When people are frightened by diseases, they cling to any information and often distrust official information. These fears are amplified by the fact that the virus is of course invisible to the eye, and the fears are fueled by ambiguity and uncertainty about who’s a threat. The incubation period for Covid-19 seems to be between two days and two weeks, during which time asymptomatic carriers could potentially infect others. The symptoms are common and indistinguishable from other viruses, except when confirmed with lab testing, which of course requires time, equipment, a doctor visit, and so on. Another factor is that people are very poor at assessing relative risk in general anyway (for example, fearing plane travel over statistically far more dangerous car travel). They often panic over alarmist media reports and underestimate their risk of more mundane threats.
The best medical advice for dealing with Covid-19: Thoroughly cook meat, wash your hands, and stay away from sick people … basically the same advice you get for avoiding any cold or airborne virus. Face masks don’t help much, unless you are putting them on people who are already sick and coughing. Most laypeople use the masks incorrectly anyway, and hoarding has led to a shortage for medical workers.
Hoaxes, misinformation, and rumors can cause real harm during public health emergencies. When people are sick and desperately afraid of a scary disease, any information will be taken seriously by some people. False rumors can not only kill but can hinder public health efforts. The best advice is to keep threats in perspective, recognize the social functions of rumors, and heed advice from medical professionals instead of your friend’s friend on Twitter.
An Epidemic of Rumors: How Stories Shape Our Perceptions of Disease, Jon D. Lee
Bodies: Sex, Violence, Disease, and Death in Contemporary Legend, Gillian Bennett
I Heard It Through the Grapevine: Rumor in African-American Culture, Patricia Turner
Plagues, Poisons, and Potions: Plague Spreading Conspiracies in the Western Alps c. 1530-1640, William Naphy
The Global Grapevine: Why Rumors of Terrorism, Immigration, and Trade Matter, Gary Alan Fine and Bill Ellis
The Kiss of Death: Contagion, Contamination, and Folklore, Andrea Kitta
A different version of this article originally appeared in my blog for the Center for Inquiry; you can find it HERE.
Did you catch our recent bonus episode of Squaring the Strange? I gather some myths and misinformation going round about Wuhan Coronavirus, aka Novel Coronavirus, aka “we’re all gonna die,” aka COVID-19. Then special guest Doc Dan breaks down some virus-busting science for us and talks about the public health measures in place. Check it out HERE!
I’m quoted in a new article about real estate omens and superstitions at Realtor.com!
“An outdated kitchen and a lack of curb appeal aren’t the only things that can keep buyers from biting. When it seems like there’s just no explanation for a perfectly good home sitting on the market, you might consider other possible causes. Certain items, colors, and symbols have been thought to attract malicious forces to an otherwise peaceful abode. And while some people scoff at such beliefs, others take them seriously—and not just around Halloween.
“There are countless folkloric beliefs, and savvy homeowners are smart to acknowledge and respect such beliefs, whether they share them or not,” says Benjamin Radford, deputy editor of Skeptical Inquirer science magazine and co-host of the “Squaring the Strange” podcast.”
You can see the rest HERE!
Last month Neil Peart, the drummer and main lyricist for the rock band Rush, died. He’d been living in California and privately battled brain cancer for several years. The Canadian trio (Alex Lifeson on guitar, Geddy Lee on vocals, bass, and keyboards, and Neil Peart on drums) announced they’d stopped touring in 2015, after 40 years.
As a Rush fan and a skeptic I thought it would be a good opportunity to reflect on Peart’s passing and his skepticism-infused lyrics. There are over 150 Rush songs written or co-written by Neil, and many themes can be found among them, including alienation, skepticism, libertarianism, fantasy, and humanism. The discussion here is not comprehensive; my interest here is to briefly highlight some of the more potent lyrics and songs expressing doubt, skepticism, the frailty of perception, the fallibility of knowledge, and the dangers of certainty. Peart was likely one of the most widely-read lyricists in rock and roll, on topics ranging from philosophy to humanism to science. He was, as described in The New Yorker, “wildly literate.” George Hrab is among the many skeptics who offered a memorial to Peart (as well as Geo’s initial skepticism about the news of Peart’s death, and why Peart and the band seemed relatable), on his Geologic Podcast (episode 646).
As has been written elsewhere, Rush was a polarizing band that you either “got” or you didn’t. I’ve met people who have barely heard of them, but few who were ambivalent about them. At the risk of employing the “I liked them when they weren’t cool” trope, I’ll note that my love of the band dates back to hearing “Tom Sawyer” on the radio for the first time in 1981 and being blown away. I joined the nascent Rush Backstage Club. This was back in a day when Rush fans such as myself connected via letters; a Pen Pals section offered a dozen or so addresses for Rush fans to meet each other and share their enthusiasm, at the comfortable pace of postal delivery.
I proceeded to buy all their albums and saw them live a dozen times over the years. Most of the albums were great, a few were good, and some of the later albums (Vapor Trails, Counterparts, and Test For Echo, for example) left me a bit cold. But Rush had earned my loyalty and I’d buy anything they put out, just on principle. The most mediocre Rush song—and there are many—was usually head and shoulders above most of the other rock music I was hearing.
For much of Rush’s history Peart was the shy, retiring member. He rarely did interviews or fan meet-and-greets after concerts; that was a role that Geddy and Alex happily—or, surely, sometimes dutifully—fulfilled. It wasn’t that he didn’t appreciate fans or thought it was beneath him, he just didn’t enjoy it and would rather be alone, read, or plan his solo motorcycle trip to the next venue (something he often did).
But that wasn’t always the case; as a member of the Rush Backstage Club I got their newsletter in which Neil would respond to questions from fans. This was the mid-1980s, of course, long before the internet; that’s how things were done in those days. I never wrote in, partly because I didn’t know what I’d ask him if he actually responded.
The quality of the questions varied widely, ranging from the insightful to the banal. Neil typically responded in earnest, though occasionally his replies revealed a latent and understandable irritation. One got the impression that Neil didn’t suffer fools lightly, but he also recognized that Rush fans were a broad lot that included (and was perhaps dominated by) nerdy, misfit teenagers and young adults, mostly male, perhaps not unlike himself as a teen in St. Catharines, Ontario. (Peart wrote about this inevitable gap between performer and audience, expert and layman, in the song Limelight.)
The three performers, lifelong friends, often made better music than bands with two or three times the number of members. Watching other, larger, bands I was often confused: What the hell are those other musicians doing? Why are there three guitarists, two keyboardists, a singer, a drummer, and some woman on a tambourine? And the backup singers? Is this a flash mob or a rock band? The answer, of course, is that none of them were Geddy, Alex, or Neil.
Peart was widely known as “The Professor” because of his intellectualism, his analytical approach to percussion, and the fact that he taught and influenced a generation of musicians. I’m not a musician, and didn’t learn drumming from him (though I did learn about some of the history and techniques from him). I’m not a lyricist and didn’t learn songwriting from him either. But we had some shared interests including the 1960s British television show The Prisoner, as evinced by some of his lyrics and his wearing of the distinctive Number Six pennyfarthing badge used in the series. The Prisoner is widely regarded as one of the most innovative and cerebral series of the 1960s—or, really, ever. Had I gotten the chance to meet him, I’d have avoided talking about drumming—or even music in general—and instead steered the conversation to shared interests such as Africa, travel, writing, belief, skepticism, and so on.
To be clear: Geddy and Alex are no slouches in the intellectual and reading departments either, the latter having been photographed reading the Christopher Hitchens classic God Is Not Great. Lee and Lifeson are enormously accomplished outside of music as well, but here I focus on Peart’s contribution as a lyricist (I hear he’s regarded as a passable drummer as well).
I’m not going to engage in extensive dives on the various meanings, allegories, and interpretations of the lyrics. I believe that most of the lyrics speak for themselves; one of the qualities of Peart’s writing is that it’s (usually) accessible. In a 1992 interview with Roger Catlin, Peart noted that “For a lot of people, lyrics just aren’t that important. I can enjoy a band when the lyrics are shallow. But I can enjoy it more if the lyrics are good.” Here are some lyrics I find especially resonant.
Tom Sawyer / Moving Pictures (1981)
No, his mind is not for rent
To any god or government
Always hopeful, yet discontent
He knows changes aren’t permanent
But change is
Freewill / Permanent Waves (1980)
You can choose a ready guide
In some celestial voice
If you choose not to decide
You still have made a choice
You can choose from phantom fears
And kindness that can kill
I will choose a path that’s clear
I will choose free will
The “Fear” Series
Rush released four songs related to the topic of fear: Witch Hunt (Moving Pictures), The Enemy Within (Grace Under Pressure), The Weapon (Signals), and, much later, Freeze (Vapor Trails). I want to focus on Peart’s plea for reason and rationality in Witch Hunt:
Witch Hunt / Moving Pictures (1981)
The night is black
Without a moon
The air is thick and still
The vigilantes gather on
The lonely torchlit hill
Features distorted in the flickering light
The faces are twisted and grotesque
Silent and stern in the sweltering night
The mob moves like demons possessed
Quiet in conscience, calm in their right
Confident their ways are best
The righteous rise
With burning eyes
Of hatred and ill-will
Madmen fed on fear and lies
To beat, and burn, and kill
The lyrics reference xenophobia, moral guardians, moral panics, and censorship in the second half of the song:
They say there are strangers, who threaten us
In our immigrants and infidels
They say there is strangeness, too dangerous
In our theatres and bookstore shelves
That those who know what’s best for us –
Must rise and save us from ourselves
Quick to judge,
Quick to anger
Slow to understand
Ignorance and prejudice
Walk hand in hand
Totem / Test for Echo (1996)
I believe in what I see
I believe in what I hear
I believe that what I’m feeling
Changes how the world appears
In his book Ghost Rider: Travels on the Healing Road, Peart wrote, “At the time of writing those lines [before the death of his daughter Selena], I had in mind the contradiction between a skeptic’s dismissal of anything not tangible (true agnosticism) and the entirely subjective way many people tend to view and judge the world, through the filters of ever-changing emotions and moods” (p. 79).
Angels and demons dancing in my head
Lunatics and monsters underneath my bed
Media messiahs preying on my fears
Pop culture prophets playing in my ears
Roll the Bones / Roll the Bones (1991)
Faith is cold as ice
Why are little ones born only to suffer
For the want of immunity
Or a bowl of rice?
Well, who would hold a price
On the heads of the innocent children
If there’s some immortal power
To control the dice?
We come into the world and take our chances
Fate is just the weight of circumstances
That’s the way that lady luck dances
Roll the bones
Get busy with the facts
No zodiacs or almanacs
No maniacs in polyester slacks
Just the facts
Brought Up To Believe (BU2B) / Clockwork Angels (2012)
I was brought up to believe
The universe has a plan
We are only human
It’s not ours to understand
The universe has a plan
All is for the best
Some will be rewarded
And the devil take the rest
All is for the best
Believe in what we’re told
Blind men in the market
Buying what we’re sold
Believe in what we’re told
Until our final breath
While our loving Watchmaker
Loves us all to death
In a world of cut and thrust
I was always taught to trust
In a world where all must fail
Heaven’s justice will prevail
There’s one final song I’d like to mention because it captures the mission of an inquisitive, Enlightenment-fueled mind:
Available Light / Presto (1989)
All four winds together
Can’t bring the world to me
Shadows hide the play of light
So much I want to see
Chase the light around the world
I want to look at life—In the available light
The “light” Peart is talking about is the same light of reason that Carl Sagan mentioned in his (borrowed) aphorism, “It’s better to light a candle than curse the darkness.” Peart was open about his agnosticism (some would consider it atheism) and wrote eloquently about the dangers of religion.
As an avid Rush fan I collected several tourbooks, and one thing that stood out to me was how often Peart was photographed reading books. He could have been photographed drinking and partying, living the rock star life (see the accompanying artwork for pretty much any Guns N Roses album, for example). Peart was thoughtful and literate. In one album photo he poses with Aristotle’s classic Poetics, and it’s clear that it’s not done ironically. Peart didn’t grab a book to read when photographers were around; he just didn’t bother to put it down when they were. He was who he was, and he didn’t care whether he looked the part of a rock star. The band seemed down to earth, taking their music, but not themselves, seriously (see their speech at Rush’s induction into the Rock and Roll Hall of Fame in 2013, for example).
Neil Peart isn’t resting in peace or anywhere else; he’s gone but remains with us. As he said during the Hall of Fame induction, quoting Bob Dylan: “The highest purpose of art is to inspire. What else can you do for anyone but inspire them?” He and his band have inspired tens of millions of people in ways large and small. As Neil wrote, “A spirit with a vision is a dream with a mission.”
As advertised, the Oscar-nominated World War I film 1917 takes place in April 1917, when two British soldiers, William Schofield (George MacKay) and Tom Blake (Dean-Charles Chapman), are rousted from a weary daytime slumber. They’re ordered to cross enemy territory (a no man’s land littered with death and decay) and deliver an urgent message to another brigade to call off an attack. It seems that the other soldiers—including the brother of one of the men—are falling into a trap set by their German enemies who have cut the lines of communication.
1917 is reminiscent of other war films such as Saving Private Ryan and Gallipoli, but I was also reminded of a Roger Waters song from his album Amused to Death titled "The Ballad of Bill Hubbard," a spoken account in which World War I veteran Alfred Razzell describes finding a mortally injured soldier, Hubbard, on the battlefield and being forced to abandon him.
1917 is about many things, and like most films can be viewed through many prisms. It’s a war movie, of course, but it’s also about friendship, loyalty, sacrifice, and so on. But the theme I saw most clearly in 1917 was information: what it is, how it’s used, and the inherent difficulties in its transmission.
Too often in fictional entertainment, information is treated as certain, easily accessed, and easily transferred. Countless films, and especially spy thrillers such as the Jason Bourne series, have scenes in which the hero needs some vital piece of information, which is instantly produced with a few keyboard taps, in dramatic infographic fashion, usually on giant, easy-to-understand wall screens. The Star Trek Enterprise computers are notorious for this: they predict (seemingly with unerring accuracy) when, for example, a planet or ship will explode. It's always annoyed me as a deus ex machina cheat.
I understand why screenwriters do that; they want to get the exposition and premises out of the way so we can move on with the plot. No need to question the accuracy or validity of the information; the characters, and by extension the audience, just need to accept it at face value and move on. (Imagine a dramatic countdown scene in which the hero fails to defuse a bomb at the last second, but it still doesn't go off and everyone is saved simply because a wire got loose or a battery died. Such scenes, though realistic, are dramatically unsatisfying, and thus rarely if ever depicted. They're certain to raise the ire of audiences as much as an "it was all a dream" conclusion, and for the same reasons.)
Whether it’s a character in a fantasy or horror film being told exactly what words to say or what to do when confronting some great evil at the film’s climax, as a natural skeptic, I’m often left wondering, “How exactly do you know that? Where did your information come from? Who told you that, and how do you know it’s true? What if they’re lying or just made a mistake?” (Or, in a Shakespearean context, “Um, so, MacBeth: How do you know those are prophetic witches, not just three crazy old ladies putting you on?”). Army of Darkness (1992) and The Woman in Black (2012) are two of the few movies that actually take this issue seriously.
1917 takes the matter deadly seriously, depicting the decidedly unglamorous horrors of warfare. Though the events depicted happened a century ago, the basics of war have not changed in millennia; the goal is still to defeat, maim, and kill the other bastards, often while acting on wrong or incomplete information. It's been said that truth is the first casualty of war, though that's not always by design. Sometimes truth (or, more broadly, true information) can't get from those who know to those who need to know in time to save lives. Sometimes that's by design, such as when enemies cut off communications (as in this case); other times the truth is hidden with encryption, as in The Imitation Game (2014). Often it's merely the result of chaos and miscommunication. Isolation (including isolation from information) is an effective tool for building dramatic tension; that's why many horror films are set in remote areas out of cell phone service. When dialing 911 or just asking Siri or Alexa could presumably save the day, screenwriters need to find ways to keep the heroes vulnerable.
The film, co-written by director Sam Mendes and dedicated to his grandfather, a veteran "who told us the stories," also doesn't give much depth to the two soldiers. Blake and Schofield are given the barest of backstories, and the actors do what they can to flesh them out. The acting is good overall, but the real reason to see 1917 is the immersive and compelling filmmaking.
A longer version of this piece was published on my CFI blog.
Below is an excerpt:
I really did not want to read the book this article is about. I know that will likely give away the tone of this overall piece, but it’s just my honest reaction. When I saw the first announcements on social media that semi-celebrity Zak Bagans was releasing a new book titled Ghost Hunting for Dummies, I immediately groaned, deciding I’d pass on reviewing it. I’ve amassed quite a collection of “How to Ghost Hunt” type books since the 1990s, and I didn’t see any possibility of Bagans offering anything new—especially given his spotless track record of completely failing to find good evidence of ghosts during his decade-plus on television. At the time, I had no idea how right I’d be about that.
A close friend and colleague, Mellanie Ramsey, mentioned she was going to review the book on a podcast. After a brief conversation, she urged me to read it and participate in the podcast. I reluctantly agreed, placing an Amazon order and receiving my copy of Ghost Hunting for Dummies two days later. The book is over 400 pages and published by John Wiley & Sons, Inc., under the For Dummies brand, which boasts over 2,400 titles (Wiley 2020a). The “Dummies” books are meant to “transform the hard-to-understand into easy-to-use,” according to the company’s website (Wiley 2020b).
My first impression came from the front cover, which I found to be a poor design overall compared to the Dummies format I was used to seeing: a slanted title, a pronounced and stylized Dummies logo, and either a character with a triangle-shaped head or a photo representing the content of the book. The cover of Ghost Hunting features the title printed straight across with a much smaller and less stylized version of the Dummies logo. The word for is so small that when I showed the book to my wife, she asked, "Why did you buy a book called 'Ghost Hunting Dummies'?" The cover also features a photograph of a basement stairway and door, along with an odd photograph of Bagans with his right hand extended toward the camera, like he's reaching out to take your money. Overall, it's just not an attractive cover.
Inside the book, the first thing I noticed was a lack of references; there are no citations or references listed anywhere and no bibliography at the end of the book. For me, this is a red flag; references tell us where the author obtained their information, quotes, study results, and so on. When a book is supposed to be educating you on a specific topic (or in this case, multiple topics), I expect to know the source material from which the information came. However, because this is the first book from the Dummies brand that I’ve purchased, I wasn’t sure if the lack of a bibliography was the standard format. I headed over to my local Barnes & Noble store and flipped through more than forty different Dummies titles, none of which contained references. I also noticed that all of the titles I checked, from Medical Terminology to 3D Printing, were copyrighted by Wiley Publishing/John Wiley & Sons. Ghost Hunting for Dummies is instead copyrighted by Zak Bagans.
There are several indications this book was rushed into publication for the 2019 holiday season. Chief among them is the extensive number of errors: typos, misspellings, repeated words, and missing words are littered throughout the pages. Another indication of premature release is the near absence of the classic Dummies icons. On page 2, it's explained that "Throughout the margins of this book are small images, known as icons. These icons mark important tidbits of information" (Bagans 2020). We are presented with four icons: the Tip (a lightbulb), the Remember (a hand with string tied around one finger), the Warning (a triangle with an exclamation point inside), and the "Zak Says" (Zak's face), which "Highlights my [Zak's] words of wisdom or personal experiences" (Bagans 2020, 3). Over the book's 426 pages, there are only thirteen icons to be found: five Tips, four Remembers, three Warnings, and one "Zak Says." I guess Bagans didn't have much wisdom to impart to his readers.
Throughout much of the book, Bagans displays a strong bias against skeptics and scientists, even going so far as to claim to understand scientific concepts better than actual scientists. For example, while relating why he believes human consciousness can exist outside of the body, Bagans cites a quote commonly attributed to Albert Einstein: "Energy cannot be created or destroyed; it can only be changed from one form to another." Bagans follows this with, "it's baffling why this concept is so easy to understand for a paranormal investigator but not for a mainstream scientist" (Bagans 2020, 108). It's actually mainstream scientists who understand this and Bagans who's confused. The answer is very simple. Ben Radford addressed this common mistake in his March/April 2012 Skeptical Inquirer column "Do Einstein's Laws Endorse Ghosts?":
This is the second of a two-part piece.
The recent Clint Eastwood film Richard Jewell holds interesting lessons about skepticism, media literacy, and both the obligations and difficulties of translating real events into fictional entertainment.
Reel vs. Real
The film garnered some offscreen controversy when the Atlanta Journal-Constitution issued a statement complaining about how the newspaper and its journalism were portrayed. The paper and other critics complained particularly that the film unfairly maligns reporter Kathy Scruggs, who (in real life) co-wrote the infamous AJC article that wrongly implicated Jewell in the public's mind based on unnamed insider information. Scruggs, who isn't alive to respond, is depicted as sleeping with FBI agent Shaw, with whom she had a previous relationship (at least according to Wilde), in return for information about Jewell.
The AJC letter to Warner Bros. threatened legal action and read in part, “Significantly, there is no claim in Ms. Brenner’s Vanity Fair piece on which the film is based that the AJC’s reporter unethically traded sex for information. It is clear that the film’s depiction of an AJC reporter trading sex for stories is a malicious fabrication contrived to sell movie tickets.” Such a depiction, the newspaper asserts, “makes it appear that the AJC sexually exploited its staff and/or that it facilitated or condoned” such behavior.
The newspaper's response was widely seen by the public (and by many journalists) as a full-throated defense of Scruggs, framing her depiction in the film as baseless and as a sexist trope fabricated by Clint Eastwood and screenwriter Billy Ray to bolster the screenplay.
Richard Brody of The New Yorker writes that “It’s implied that she has sex with a source in exchange for a scoop; those who knew the real-life Scruggs deny that she did any such thing. It’s an ignominious allegation, and one that Eastwood has no business making, particularly in a movie about ignominious allegations.”
Becca Andrews, assistant news editor at Mother Jones, had a similar take: “Wilde plays Kathy Scruggs, who was, by all accounts, a hell-on-wheels shoe-leather reporter who does not appear to have any history of, say, sleeping with sources…. Despite Scruggs’ standing as a respected reporter who, to be clear, does not seem to have screwed anyone for a scoop over the course of her career, the fictional version of her in the film follows the shopworn trope.”
It all seems pretty clear cut and outrageous: the filmmakers recklessly and falsely depicted a female reporter (based on a real person, using her real name) behaving unethically, in a way that had no basis in fact.
A Closer Look
But a closer look reveals a somewhat different situation. It is true, as the AJC letter to Warner Bros. states several times, that the film was based on Brenner's Vanity Fair article. However, the letter conspicuously fails to mention that the film was not based only on Brenner's article: there was a second source credited in the film, one that does in fact suggest that Scruggs had (or may have had) sex with her sources.
Screenwriter Ray didn't make that detail up; one of the sources the film credits, The Suspect, by two respected journalists, Kent Alexander and Kevin Salwen, specifically refers to Scruggs's "reputation" for sleeping with sources (though not necessarily in the Jewell case), according to The New York Times. Ray fictionalized and dramatized that part of the story, in the same way that all the events and characters are fictionalized to some degree. This explains why Scruggs was depicted as she was: that's what the source material suggested.
The defense that she may merely have been thought by colleagues to have had affairs with some of her sources, just not necessarily in that specific case, is pretty weak. It's not as if there was no basis whatsoever for her depiction in the film, with Eastwood and Ray carelessly and maliciously manufacturing a sexist trope out of thin air. Ironically, this book (the one that refers to Scruggs's reputation for sleeping with her sources) was described by the Atlanta Journal-Constitution itself as "exhaustively researched" and "unsparing but not unfair." It's not clear why mentioning her reputation for sleeping with sources was "not unfair" when Alexander and Salwen did it in their (nonfiction) book about Richard Jewell but "false" and "extraordinarily reckless" when Ray and Eastwood did it in their (fictional) screenplay based in part on that very book.
True Stories in Fiction
The issues surrounding the portrayal of Scruggs in Richard Jewell—just like the portrayal of Jewell himself in the film—are more nuanced and complex than they first appear. Eastwood and Ray were not accused of tarnishing a dead reporter’s image by including a true-but-unseemly aspect of her real life in her fictional depiction. Nor were they accused of failing to confirm that information contained in one of their sources. Instead they were accused of completely fabricating that aspect of Scruggs’s life to sensationalize their film—which is demonstrably not true.
More fundamentally, complaints that the film isn’t the “real story” miss the point. It is not—and was never claimed to be—the “real story.” The film is not a documentary, it’s a scripted fictional narrative film (as it says on posters) “based on the true story.” (The full statement that appears in the film reads, “The film is based on actual historical events. Dialogue and certain events and characters contained in the film were created for the purposes of dramatization.”) That is, the film is based on some things that actually happened; that doesn’t mean that everything that really happened is in the film, and it doesn’t mean that everything in the film really happened. It means exactly what it says: the movie is “based on actual historical events.” Complaints about historical inaccuracy are of course very common in movies about real-life people and events.
Similar complaints were raised about the historical accuracy of Eastwood's drama American Sniper as it relates to the true story of the real-life Chris Kyle; these pedantic protests rather miss the point. Much of the "controversy" over whether it's a 100 percent historically accurate account of Kyle's life is a manufactured one, born of a misunderstanding: a straw-man challenge to a strict historicity that no one ever claimed.
In an interview with The New York Times, "Kelly McBride, a onetime police reporter who is the senior vice president of the Poynter Institute, a nonprofit organization that supports journalism, said the portrayal of Ms. Scruggs did not reflect reality" (emphasis added). It's not clear why McBride or anyone else would believe or assume that a scripted film would "reflect reality." There is of course no reason why fictional entertainment should necessarily accurately reflect real life, whether in dialogue, plot, or any other way. Television and film are escapist entertainment, and the vast majority of characters in scripted shows and films lead far more interesting, dramatic, and glamorous lives than the audiences who watch them. While fictional cops on television shows regularly engage in gunfire and shootouts, in reality over 90 percent of police officers in the United States never fire their weapons at another person during the course of their careers. TV doctors seem to leap from one dramatic, life-saving situation to another, while most real doctors spend their careers diagnosing the flu and filling out paperwork. I wrote about this a few years ago.
Richard Jewell is one of many “based on a true story” films currently out, including Bombshell, Ford v. Ferrari, A Beautiful Day in the Neighborhood, Seberg, Dark Waters, Midway, Honey Boy, Harriet, and others. Every one of these has scenes, dialogue, and events that never really happened, and characters that either never existed or existed but never did some of the specific things they’re depicted as having done on the screen.
It’s understandable for audiences to wonder what parts of the film are historically accurate and which parts aren’t, but making that distinction and parsing out exactly which characters are real and which are made up, and which incidents really happened and precisely when and how, is not the responsibility of the film or the filmmakers. The source material is clearly and fully credited and so anyone can easily see for themselves what the true story is. There are many books (such as Based on a True Story—But With More Car Chases: Fact and Fiction in 100 Favorite Movies, by Jonathan Vankin and John Whalen) and websites devoted specifically to parsing out what’s fact and what’s fiction in movies. There are also a handful of online articles comparing the true story of Richard Jewell with the fictional one.
There's no deception going on, no effort to "trick" audiences into mistaking the film for a documentary. It is a scripted drama, with events carefully chosen for dramatic effect and dialogue written by a screenwriter and performed by actors. It's similar in some ways to the complaint that a film adaptation of a book doesn't follow the same story. That's because books and films are very different media that have very different storytelling structures and demands. It's not that one is "right" and the other "wrong"; they're different ways of telling roughly the same story.
Similarly, asking "how accurate" a film is doesn't make sense. A fictional film is judged not on how "accurate" it is (whatever that would mean) but on how well the story is told. Taking dramatic license with bits and pieces of something that happened in real life in order to tell an effective story is precisely the screenwriter's job. Writers can add characters, combine several real-life people into a single character, play with the chronology of events, and so on.
Ray certainly could—and arguably should—have changed the name of the character, but since in real life it was Scruggs specifically who broke the news about Jewell, and it was Scruggs specifically who in real life was rumored to have been romantically involved with sources, the decision not to do so is understandable. It’s likely, of course, that complaints would still have arisen even if her name had been changed, since Scruggs’s name is so closely connected to the real story.
The question of fictional representation is a valid and thorny one. Films and screenplays based (however loosely) on real events and people are, by definition, fictionalized and dramatized (this seems obvious, but may be more clear to me, as I have attended several screenwriting workshops taught by Hollywood screenwriters). Plots need conflict, and in stories based on things that actually happened, there will be heroes (who really existed in some form) and there will be villains (who also really existed in some form). The villains in any story will, by definition—and rightly or wrongly—typically not be happy with their depiction; villains are heroes in their own story.
The question is instead what obligations a screenwriter has to the real-life people cast in that villain role—keeping in mind of course that interesting fictional characters are a blend of hero and villain, good and bad. Heroes will have flaws and villains will have positive attributes, and may even turn out to be heroes in some cases.
You can argue that if Ray was going to suggest that Scruggs's character slept with an FBI agent (as The Suspect suggested), he should have confirmed it. But screenwriters, like nonfiction writers, typically don't fact-check the sources of their sources. In other words, they assume that the information in a seemingly reputable source (such as a Vanity Fair article, or a well-reviewed book by the U.S. Attorney for the Northern District of Georgia and a former Wall Street Journal columnist) is accurate as written. If those sources report that Scruggs had a reputation for sleeping with sources, or hid in the back of Jewell's lawyer's car hoping for an interview, or met with FBI agents in a bar, or any number of other things, then a screenwriter's believing that she did so (or may have done so) is neither unreasonable nor malicious.
In the end, the dispute revolves around a minor plot point in a single scene, and the sexual quid pro quo is implied, not explicit. Reasonable people can disagree about whether or not Scruggs was portrayed fairly in the film (and if not, where the blame lies) as well as the ethical limits of dramatic license in portraying real historical events and figures in fictional films, but the question here is more complex than has been portrayed—about, ironically, a film with themes of rushing to judgment and binary thinking—and should not detract from what is overall a very good film.
For those interested in the real, true story of how Richard Jewell was railroaded, bullied, and misjudged—instead of the obviously fictionalized version portrayed in the film—people can consult Marie Brenner’s book Richard Jewell: And Other Tales of Heroes, Scoundrels, and Renegades, based on her 1997 Vanity Fair article; and The Suspect, mentioned above.
The Social Threat of Richard Jewell
In addition to the potential harm to Scruggs's memory, several critics have expressed concern about presumed social consequences of the film, suggesting, for example, that Richard Jewell could change the way Americans think about journalism (and female journalists in particular), as well as undermine public confidence in investigative institutions such as the FBI.
There is of course a long history of fears about the consequences of fictional entertainment on society. I’ve previously written about many examples, such as the concern that the 50 Shades of Grey book and film franchise would lead to harmful real-world effects, and that the horror film Orphan, about a murderous dwarf posing as a young girl, would literally lead to a decline in international adoptions. Do heavy metal music, role-playing games, and “occult” Halloween costumes lead to Satanism and drug use? Does exposure to pornography lead to increased sexual assault? Does seeing Richard Jewell decrease trust in journalism and the FBI? All these are (or were) plausible claims to many.
The public need not turn to a fictional film (depicting events that happened nearly 25 years ago) to find reasons to be concerned with the conduct of today's Federal Bureau of Investigation. Earlier this month, a story on the front page of The New York Times reported that "The Justice Department's inspector general… painted a bleak portrait of the F.B.I. as a dysfunctional agency that severely mishandled its surveillance powers in the Russia investigation, but told lawmakers he had no evidence that the mistakes were intentional or undertaken out of political bias rather than 'gross incompetence and negligence.'"
No one would suggest that fictional entertainment has no effect at all on society, of course (there are clear examples of copycat acts, for instance), and the topic of media effects is far beyond the scope here. I'll just note that the claim that Richard Jewell (or any other film) affects public opinion about its subjects is a testable hypothesis and could be measured using pre- and post-exposure instruments such as questionnaires. This would be an interesting topic to explore, but of course it's much easier to simply assume that a film has a specific effect than to go to the considerable time, trouble, and expense of actually testing it. Who needs all the hassle of creating and implementing a scientific research design (and tackling thorny causation issues) when you can just baldly assert that it does?
There are certainly valid reasons to criticize the film, including its treatment of Scruggs, the FBI, and Jewell himself (who is also not alive to comment or defend himself). Good films provoke conversation, and those conversations should be informed by facts and thoughtful analysis instead of knee-jerk reactions and unsupported assumptions. Richard Jewell is a moving, important, and powerful film about a rush to judge and an otherwise ordinary guy—flawed and imperfect, just like the rest of us—who was demonized by institutional indifference and a slew of well-meaning but self-serving people in power.
A longer version of this piece appeared on my CFI blog; you can read it HERE.
It's no secret that non-police security officers get little or no respect. They're universally mocked and ignored in malls, security checkpoints, and airports. The stereotype is the self-important, dim, chubby guard, typified by Kevin James in Paul Blart: Mall Cop and (shudder) its sequel. Of course the stereotype extends to sworn officers as well, from rotund doughnut aficionado Chief Wiggum in The Simpsons to Laverne Hooks in the Police Academy franchise. They're usually played for laughs, but there's nothing funny about what happened to Richard Jewell.
Richard Jewell tells the story of just such a security guard who found a bomb at the 1996 Atlanta Olympics celebration. He spots a suspicious bag underneath a bench and alerts authorities, helping to clear the area shortly before the bomb goes off. The unassuming Jewell (played by a perfectly-cast Paul Walter Hauser) is soon seen as a hero and asked to make the media rounds of TV talk shows and possible book deals. There’s no evidence connecting Jewell to the crime, but the FBI, without leads and under increasing public pressure to make an arrest, turns its attention to Jewell. Things take a turn when Jewell is named in the press as being the FBI’s main suspect, a tip leaked by agent Tom Shaw (Jon Hamm) to hard-driving Atlanta Journal-Constitution reporter Kathy Scruggs (Olivia Wilde). But when he becomes the target of unrelenting attacks as an unstable and murderous “wannabe cop” he seeks out a lawyer named Watson Bryant (Sam Rockwell) to defend him.
What's the case against him? FBI "experts" assured themselves (and the public) that the bomber fit a specific profile, one that Jewell himself fit as well (a loner with delusions of grandeur and a checkered past; the fact that he was single and living with his mother didn't help). Psychological profiling is inherently more art than science, and to the degree it can be called a science, it's an inexact one. At best it can provide potentially useful (if general and somewhat obvious) guidance about whom investigators should focus on, but it cannot be used to include or exclude anyone from a list of suspects.
Bob Carroll, in his Skeptic's Dictionary, notes that "FBI profiles are bound to be inaccurate. I noted some of these in a newsletter five years ago. Even if the profilers got a representative sample of, say, serial rapists, they can never interview the ones they don't catch nor the ones they catch but don't convict. Also, it would be naive to believe that serial rapists or killers are going to be forthright and totally truthful in any interview." For more on this see "Myth #44: Criminal Profiling is Helpful in Solving Cases," in 50 Great Myths of Popular Psychology, by Scott Lilienfeld, Steven Jay Lynn, John Ruscio, and Barry Beyerstein; and Malcolm Gladwell's New Yorker article "Dangerous Minds: Criminal Profiling Made Easy."
Psychologists will readily acknowledge these caveats, and their assessments are typically heavily qualified—much in the way that a good science journal report about an experiment will be candid about its limitations.
Journalists, however, are less interested in important nuances and caveats, and their readers are exponentially less so. The public wants binary certainty: Is this the bomber, or not? If not, why is the FBI investigating him, and why wouldn’t they explicitly announce that he wasn’t a suspect? Complicating matters, the public often misunderstands criminal justice issues and procedures. They widely assume, for example, that lie detectors actually detect lies (they don’t); or that an innocent person would never confess to a crime he or she didn’t really commit (they do). (In the film Jewell passes a polygraph, though little is made of it.)
When agent Shaw is confronted with evidence suggesting that Jewell does not, in fact, fit the profile and is likely innocent, instead of questioning his assumptions he doubles down, rationalizing away inconsistencies and stating that no one is going to fit the profile perfectly.
Jewell, a by-the-books type, is especially heartbroken to realize that his faith in the FBI’s integrity was sorely misplaced. All his life he’d looked up to federal law enforcement, until they turned on him. He isn’t angry or upset that he’s being investigated; he’s familiar enough with law enforcement procedures to understand that those closest to a murder victim (or a bomb) will be investigated first. But his initial openness and cooperation wanes as he sees FBI agents attempting to deceive and entrap him.
As Bryant tells Jewell, every comment he makes, no matter how innocuous or innocent, can be twisted into something nefarious that will put him in a bad light and provide dots for others to (mis)connect. The fact that a friend, as a teenager, built homemade pipe bombs to throw down gopher holes (long before he met Jewell) could be characterized either as a piece of evidence pointing to his guilt or as completely irrelevant. The fact that he has an impressive stash of weapons in his home could similarly be seen (if not by a jury, then certainly by a story-hungry news media) as evidence of an obsession with guns—or, as he says with a shrug to Bryant, "This is Georgia."
The film doesn't paint the villains with too broad a brush; before an interview with the FBI, Bryant reminds Jewell that the handful of agents harassing and persecuting him don't represent the FBI in general; the entire U.S. government isn't out to get him, no matter what it feels like. The news media is portrayed as a pack of vultures, camping out in front of his house, robbing him and his mother of privacy and dignity. You can probably guess what would have happened to Jewell in today's age of internet-driven social media outrage; if not, see Jon Ronson's book So You've Been Publicly Shamed. Shaw and the other FBI agents, as well as (presumably) Scruggs, sincerely believed they'd named the right man—at least until a more thorough investigation reveals otherwise. The film is not anti-FBI, anti-government, or anti-press; it is pro-due process and sympathetic to those who are denied it.
Ironically but predictably, even not talking to the police can be seen as incriminating. Those ignorant of the criminal justice system may ask, “What do you have to hide?” or even “Why do you need a lawyer if you’re innocent?” These are the sorts of misguided souls who would presumably be happy to let police search their property without a warrant because, well, a person should be fine with it if they have nothing to hide.
The result is a curious and paradoxical situation in which a completely innocent person is (rightfully) afraid to speak openly and honestly. Not out of fear of self-incrimination but out of fear that those with agendas will take anything they say out of context. This is not an idle fear; it happens on a daily basis to politicians, movie stars, and anyone else in the spotlight (however tangentially and temporarily). Newspaper and gossip reporters salivate, waiting for an unguarded moment when—god forbid—someone of note expresses an opinion. A casual, honest, and less-than-charitable but otherwise mild remark about a film co-star can easily be twisted and turned into fodder for a Twitter war. For example, Reese Witherspoon laughing and reminiscing casually in an interview that, years ago, at a dinner party Jennifer Aniston’s steak was “tasty but a bit overcooked” can easily spawn headlines such as “Reese Witherspoon Hates Jennifer Aniston’s Cooking.” A flustered Oscar winner who forgets to thank certain people (such as a mentor or spouse) can set tongues wagging about disrespect or even infidelity—which is one reason why nominees write out an acceptance speech ahead of time, even if they don’t expect to win. The fewer things you say, the fewer bits of information you provide, the less fodder you give those who would do you harm. As Richard Jewell demonstrates, this is, ironically, a system that prevents people from being totally open and forthcoming.
Eastwood’s past half-dozen or so films have been based on real events and actual historical people: American Sniper (about Navy SEAL sniper Chris Kyle); The 15:17 to Paris (Spencer Stone, Anthony Sadler, and Alek Skarlatos, who stopped a 2015 terrorist attack on a train); The Mule (Leo Sharp, a World War II veteran turned drug mule); Jersey Boys (the musical group The Four Seasons); and J. Edgar (as in FBI director Hoover). The complex, sometimes ambiguous nature and myriad facets of heroism clearly interest Eastwood, arguably dating back over a half century to his spaghetti Westerns (and, later, Unforgiven) where he played a reluctant gunslinger.
This is not the first biographical film that Eastwood has done about a falsely accused hero. His 2016 film Sully, for example, was about Chesley “Sully” Sullenberger (played by Tom Hanks), who became a hero after landing his damaged plane on the Hudson River and saving lives. Where Jewell was lauded—and then demonized—in public, Sullenberger was a hero in public but behind closed doors was suspected of having made poor decisions. National Transportation Safety Board (NTSB) officials second-guessed his actions based, as it turned out, in part on flawed flight simulator data, and Sullenberger was eventually cleared. (In another parallel, just as the Atlanta Journal-Constitution complained about its portrayal in Richard Jewell—more on that later—the NTSB complained about its portrayal in Sully.)
Just as we have imperfect victims, we have imperfect heroes. Bryant eventually realizes that Jewell has an admittedly spotty past, including impersonating an officer and being overzealous in enforcing rules on a campus. Jewell, like many social heroes, humbly denies he’s a hero; he was just doing his job. And he is exactly correct: Jewell didn’t do anything particularly heroic. He didn’t use his body to shield anyone from the bomb; he didn’t bravely charge at an armed gunman, or risk his life rushing to pull a stranded motorist from an oncoming train (as happened recently in Utah).
He’s not a chiseled and battle-hardened Navy SEAL; he’s an ordinary guy who did what he was trained and encouraged to do in all those oft-ignored public security PSAs: he saw something, and he said something. This is not to take anything away from him but instead to note that mundane actions can be heroic. Any number of other security guards and police officers could have been the first to spot the suspicious package; he just happened to be the right guy at the right (or wrong) time. One theme of the film is rule following; Jewell saved many people by following the rules and insisting that the backpack be treated as a suspicious package instead of another false alarm. But the FBI did not follow the rules in either its pursuit of Jewell or its leaking information to a reporter.
Jewell’s life was turned upside down, and if not destroyed, at least severely damaged. The damage didn’t end some three months later, when he was finally formally cleared. The news media had spent many weeks saturating the country with his name and face, strongly suggesting—though not explicitly saying, for legal reasons—that he was a domestic terrorist bomber.
Who’s responsible for an innocent man being falsely accused, bullied, and harassed? In the real case, apparently no one: though an FBI agent was briefly disciplined for misconduct in connection with the case, the agency insisted that it had done nothing wrong; after all, Jewell was a suspect, and the investigation did eventually clear him. The Atlanta Journal-Constitution also got off scot-free, with a judge later determining (in dismissing a defamation suit filed by Jewell) that its reporting, though ultimately flawed, was “substantially true” given the information known at the time it was published. Richard Jewell is having none of it, and points fingers at misconduct in both law enforcement and news media (though the film depicts no consequences for anyone responsible).
A longer version of this article first appeared on my CFI blog; you can read it HERE.
Part 2 will follow soon…
So this is cool: I’m quoted, and my book “Bad Clowns” mentioned, in a recent article on clowns in The Guardian (U.K.)!:
“In his 2016 book Bad Clowns, Benjamin Radford writes: “It’s misleading to ask when clowns turned bad, for they were never really good.” Radford asserts that clowns and jesters have been ambiguous characters for hundreds of years. But he adds that the clowns of our nightmares have “flourished and found fame” in the past few decades. Clowns, as with everything else in modern life, have become polarised, leaving audiences unsure how to react to a performer in white face paint. As Radford writes: “You can no more separate a good clown from a bad clown than a clown from his shadow.” So where does this leave our well-intended red-nosed comedians?”
If you missed the 2019 documentary I’m in, “Wrinkles the Clown,” it’s now available on DVD and streaming. It’s a fascinating look at a real-life evil clown hired by parents to scare kids–or maybe something else entirely…
More info is HERE!
For those who missed it: The recent episode of Squaring the Strange is out! In this episode we talk about caricature and mysterious crystal skulls. Can we trust what Dan Aykroyd tells us on his fancy vodka bottle? Are there really thirteen of these ancient and powerful relics? What is the Skull of Doom, and does it have strange properties that baffle scientists? We even look at an ill-conceived lawsuit against Steven Spielberg involving the crystal skulls featured in an Indiana Jones movie. Check it out; you can listen HERE!
I have a new book out! Or at least some contributions in a new book: Imagining the End: The Apocalypse in American Pop Culture. I wrote several sections, including ones on the Antichrist, the Mark of the Beast, the Rapture, Latter-Day Saints prophecy, and more.
You can see more about it HERE.
It began when the Mall of America hired a jolly bearded man named Larry Jefferson as one of its Santas. Jefferson, a retired Army veteran, is black–a fact that most kids and their parents neither noticed nor cared about. The crucial issue for kids was whether a PlayStation might be on its way or some Plants vs. Zombies merchandise was in the cards given the particular child’s status on Santa’s naughty-or-nice list. The important thing for parents was whether their kids were delighted by the Santa, and all evidence suggests that the answer was an enthusiastic Yes. “What [the children] see most of the time is this red suit and candy,” Jefferson said in an interview. “[Santa represents] a good spirit. I’m just a messenger to bring hope, love, and peace to girls and boys.”
The fact that Santa could be African-American seemed self-evident (and either an encouraging sign or a non-issue) for all who encountered him. Few if any people at the Mall of America made any negative or racist comments. It was, after all, a self-selected group; any parents who might harbor reservations about Jefferson simply wouldn’t wait in line with their kids to see him; they would instead go somewhere else or wait for another Santa. Like anything that involves personal choice, people who don’t like something (a news outlet, brand of coffee, or anything else) will simply go somewhere else–not erupt in protest that it’s available to those who want it.
However a black Santa was a first for that particular mall, and understandably made the news. On December 1 the local newspaper, the Minneapolis Star Tribune, carried a story by Liz Sawyer titled “Mall of America Welcomes Its First Black Santa.”
Scott Gillespie, the editorial page editor for the Tribune, tweeted later that night (at 9:47 PM): “Looks like we had to turn comments off on story about Mall of America’s first black Santa. Merry Christmas everyone!” The tweet’s meaning seemed both clear and disappointing: On a story that the Star Tribune posted about an African-American Santa, the racial hostility got so pervasive in the comments section that they had to put an end to it, out of respect for Jefferson and/or Star Tribune readers. He ended with a sad and sarcastic, “Merry Christmas” and sent the tweet into cyberspace.
Overnight and the next morning his tweet went viral and served as the basis for countless news stories with titles such as “Paper Forced to Close Comments On Mall Of America’s First Black Santa Thanks to Racism” (Jezebel); “‘Santa is WHITE. BOYCOTT Mall of America’: Online Racists Are Having a Meltdown over Mall’s Black Santa” (RawStory); “Racists Freak Out Over Black Santa At Mall Of America” (Huffington Post); “Mall of America Hires Its First Black Santa, Racists of the Internet Lose It” (Mic.com), and so on. If you spend any time on social media you get the idea. It was just another confirmation of America’s abysmal race relations.
There’s only one problem: It didn’t happen.
At 1:25 PM the following day Gillespie, after seeing the stories about the scope and nature of the racist backlash the Tribune faced, reversed himself in a follow-up tweet. Instead of “we had to turn off comments,” Gillespie stated that the commenting was never opened for that article in the first place: “Comments were not allowed based on past practice w/stories w/racial elements. Great comments on FB & Instagram, though.”
This raised some questions for me: If the comments had never been opened on the story, then how could there have been a flood of racist comments? Where did that information come from? How many racist comments did the paper actually get? Fewer than a dozen? Hundreds? Thousands? Something didn’t add up about the story, and as a media literacy educator and journalist I felt it was important to understand the genesis of this story.
It can serve as an object lesson and help the public understand the role of confirmation bias, unwarranted assumptions, and failure to apply skepticism. In this era of attacks on “fake news” it’s important to distinguish intentional misinformation from what might be simply a series of mistakes and assumptions.
While I have no doubt that the Tribune story on Jefferson would likely have been the target of some racist comments at some point, the fact remains that the main point of Gillespie’s tweet was false: the Tribune had not in fact been forced to shut down the comments on its piece about the Mall of America’s black Santa because of a deluge of racist comments. That false information was the centerpiece of the subsequent stories about the incident.
The idea that some might be upset about the topic is plausible; after all, the question of a black Santa had come up a few times in the news and social media (perhaps most notably Fox News host Megyn Kelly’s infamous incredulity at the notion three years earlier–which she later described as an offhand jest). Racist, sexist, and otherwise obnoxious comments are common in the comments section of many articles online on any number of subjects, and are not generally newsworthy. There were of course some racists and trolls commenting on the secondary stories about the Star Tribune‘s shutting down its comment section due to racist outrage (RawStory collected about a dozen drawn from social media), but the fact remains that the incident at the center of the controversy that spawned outrage across social media simply did not happen.
A few journalists added clarifications and corrections to the story after reading Gillespie’s second tweet or being contacted by him. The Huffington Post, for example, added at the bottom of its story: “CLARIFICATION: This story has been updated to reflect that the Minneapolis Star Tribune‘s comment section was turned off when the story was published, not in response to negative comments.” But most journalists didn’t, and as of this writing nearly two million news articles still give a misleading take on the incident.
The secondary news reports could not, of course, quote from the original non-existent rage-filled comments section in the Star Tribune, so they began quoting from their own comments sections and those of other news media. This became a self-fulfilling prophecy, wherein the worst comments from hundreds of blogs and websites were then selected and quoted, generating another round of comments. Many people saw racist comments about the story and assumed that they had been taken from the Star Tribune page at the center of the story, and couldn’t be sure if they were responding to the original outrage or the secondary outrage generated by the first outrage. As with those drawn to see and celebrate Jefferson as the mall’s first black Santa, this was also a self-selected group of people–namely those who were attracted to a racially charged headline and had some emotional stake in the controversy, enough to read about it and comment on it.
I contacted Gillespie and he kindly clarified what happened and how his tweet inadvertently caused some of the world’s most prominent news organizations to report on an ugly racial incident that never occurred.
Gillespie–whose beat is the opinion and editorial page–was at home on the evening of December 1 and decided to peruse his newspaper’s website. He saw the story about Larry Jefferson and clicked on it to see if the black Santa story was getting any comments. He noticed that there were no comments at all and assumed that the Star Tribune‘s web moderators had shut them off due to inflammatory posts, as had happened occasionally on previous stories.
Understandably irritated and dismayed, he tweeted about it and went to bed, thinking no more of it. The next day he went into work and a colleague noticed that his tweet had been widely shared (his most shared post on social media ever) and asked him about it. Gillespie then spoke with the newspaper’s web moderators, who informed him that the comments had never been turned on for that particular post–a practice the newspaper follows for articles on potentially sensitive subjects such as race and politics, but one also applied to many other topics that a moderator, for whatever reason, thinks might generate counterproductive comments.
“I didn’t know why the comments were off,” he told me. “In this case I assumed we followed past practices” about removing inflammatory comments. It was a not-unreasonable assumption that in this case just happened to be wrong. Gillespie noted during our conversation that a then-breaking Star Tribune story about the death of a 2-year-old girl at a St. Paul foster home also had its commenting section disabled–presumably not in anticipation of a deluge of racist or hateful comments.
“People thought–and I can see why, since I have the title of editorial page editor–that I must know what I’m talking about [in terms of web moderation],” Gillespie said. He was commenting on a topic about his newspaper but outside his purview, and to many his tweet was interpreted as an official statement and explanation of why comments did not appear on the black Santa story.
When Gillespie realized that many news stories (at that time dozens and, ultimately, millions) were wrongly reporting that the Star Tribune‘s comments section had been shut down in response to racist comments, based solely on his (admittedly premature and poorly phrased) December 1 tweet, he tried to get in touch with some of the journalists to correct the record (hence the Huffington Post clarification). But by that time the story had gone viral and the ship of fools had sailed. The best he could do was issue a second tweet clarifying the situation, which he did.
“I can see why people would jump to the conclusion they did,” he told me. Gillespie is apologetic and accepts responsibility for his role in creating the black Santa outrage story, and it seems clear that his tweet was not intended as an attempt at race-baiting for clicks.
In the spirit of Christmas maybe one lesson to take from this case is charity. Instead of assuming the worst about someone or their intentions, give them the benefit of the doubt. Assuming the worst about other people runs all through this story. Gillespie assumed that racists deluged his newspaper with racist hate, as did the public. The web moderator(s) at the Star Tribune who chose not to open the comments on the Santa story may (or may not) have assumed that they were pre-empting a deluge of racism (which may or may not have in fact followed). I myself was assumed to have unsavory and ulterior motives for even asking journalistic questions about this incident (a topic I’ll cover next week).
In the end there are no villains here (except for the relative handful of racists and trolls who predictably commented on the secondary stories). What happened was the product of a series of understandable misunderstandings and mistakes, fueled in part by confirmation bias and amplified by the digital age.
Gillespie and I agreed that this is, when fact and fiction are separated, a good news story. As noted, Gillespie initially assumed that the newspaper’s moderators had been inundated with hostile and racist comments, and finally turned the comments off after having to wade through the flood of hateful garbage comments to find and approve the positive ones. He need not have feared, because exactly the opposite occurred: Gillespie said that the Star Tribune was instead flooded with positive comments applauding Jefferson as the Mall of America’s first black Santa (he referenced this in his Dec. 2 tweet). The tiny minority of nasty comments were drowned out by holiday cheer and goodwill toward men–of any color. He echoed Jefferson, who in a December 9 NPR interview said that the racist comments he heard were “only a small percentage” of the reaction, and he was overwhelmed by support from the community.
The fact that Jefferson was bombarded by love and support from the general public (and most whites) should offer hope and comfort. Gillespie said that he had expected people to attack and criticize the Mall of America for succumbing to political correctness, but the imagined hordes of white nationalists never appeared. A few anonymous cranks and racists complained on social media posts from the safety of their keyboards, but there was very little backlash–and certainly nothing resembling what the sensational headlines originally suggested.
The real tragedy is what was done to Larry Jefferson, whose role as the Mall of America’s first black Santa has been tainted by this social media-created controversy. Instead of being remembered for, as he said, bringing “hope, love, and peace to girls and boys,” he will forever be known for enduring a (fictional) deluge of bilious racist hatred. The true story of Jefferson’s stint as Santa is diametrically the opposite of what most people believe: He was greeted warmly and embraced by people of all colors and faiths as the Mall of America’s first black Santa.
Some may try to justify their coverage of the story by saying that even though in this particular case Jefferson was not in fact inundated with racist hate, it still symbolizes a very real problem and was therefore worthy of reporting if it raised awareness of the issue. The Trump administration adopted this tactic earlier this week when the President promoted discredited anti-Muslim videos via social media; his spokeswoman Sarah Huckabee Sanders acknowledged that at least some of the hateful videos Trump shared were bogus (and did not happen as portrayed and described), but insisted that their truth or falsity was irrelevant because they supported a “larger truth”–that Islam is a threat to the country’s security: “I’m not talking about the nature of the video,” she told reporters. “I think you’re focusing on the wrong thing. The threat is real, and that’s what the President is talking about.”
This disregard for truth has been a prominent theme in the Trump administration. Yes, some tiny minority of Muslims are terrorists; no one denies that, but that does not legitimize the sharing of bogus information as examples supposedly illustrating the problem. Similarly, yes, some tiny minority of Americans took exception to Jefferson as a black Santa, but that does not legitimize sharing false information about how a newspaper had to shut down its comments because of racist rage. There are enough real-life examples of hatred and intolerance that we need not invent new ones.
In this Grinchian and cynical ends-justifies-the-means worldview, there is no such thing as good news and the import of every event is determined by how it can be used to promote a given narrative or social agenda–truth be damned.
I understand that “Black Santa Warmly Welcomed by Virtually Everyone” isn’t a headline that any news organization is going to see as newsworthy or eagerly promote, nor would it go viral. But it’s the truth.
This piece originally appeared on my Center for Inquiry blog in 2017; you can see it HERE!
Better late than never: I was interviewed recently by Ty Bannerman on KUNM’s program “Let’s Talk New Mexico” about NM ghost stories and folklore. I discussed my KiMo theater ghost investigation, and a bit about the St. James hotel… check it out HERE!
Let’s Talk New Mexico: We’ll be discussing the paranormal side of New Mexico, from modern visitations to traditional legends, as well as taking a look at why we are so fascinated by these supernatural tales. And we want to hear from you! Have you ever experienced a ghost sighting? What happened? Or do you just love ghost stories and want to share a few of your favorite? Why do you think people find these tales so compelling?
In a recent episode of Squaring the Strange we look back at pop culture aspects of the Satanic Panic of the 1980s and 1990s, including Dungeons & Dragons, Geraldo Rivera, heavy metal, “Satanic Yoda,” and how technology influenced the panic…You can find it HERE.
I’ll be giving a talk at the La Farge library in Santa Fe on “Ghosts of New Mexico,” so if you’re free stop by and learn about some Land of Enchantment folklore and spookiness!
You can find more information HERE!
So this is cool: I appear in a new documentary film titled “Wrinkles the Clown,” about a creepy clown in Florida who scares kids (often at their parents’ request). It’s a fascinating, weird story, and you can hear my voice in the official trailer (link in story below). The film will be released Oct. 4 in theaters and streaming, so look for it this weekend!
Here’s what Nerdist has to say:
Between It Chapter Two and the upcoming Joker, it is safe to say creepy clowns are having a moment again. Thanks to Deadline, we’ve learned about a new documentary about a real life terrifying clown that has been haunting the nightmares of kids for years. Wrinkles The Clown is all about a Florida clown who found a whole new career being hired by parents to scare the crap out of their misbehaving kids. Well, we hope the kids were misbehaving, or else this is just plain mean.
You can see the first trailer for Wrinkles The Clown down below:
Wrinkles first rose to internet fame several years back. It all started when a grainy low-resolution video of a terrifying clown slowly coming out from underneath a child’s bed was posted on YouTube. It quickly went viral, and suddenly the legend of Wrinkles the Clown was born. There were Wrinkles sightings across the state of Florida, freaking locals out. And kids calling what they believed to be Wrinkles’ phone number and seeing if he’d pick up became a rite of passage, much like saying “Bloody Mary” five times in front of a mirror was for previous generations. Only in this case, Wrinkles actually was a real guy.
This new documentary from filmmaker Michael Beach Nichols explores the man behind the terrifying mask, a man who inspired a wave of copycat “creepy clown sightings” all across America not long after. It explores how quickly urban legends can take hold in the age of YouTube and social media, even as such things become easier than ever to debunk as hoaxes…
Check it out!