Jul 12, 2020
 

The latest episode of Squaring the Strange is out! This time around we examine the legend of snuff films–movies in which one or more of the actors are (really) killed!

We are joined by filmmaker and encyclopedia of weird film knowledge Erik Kristopher Myers. The notion of a “snuff film” is a strange convergence of conspiracy thinking, urban legend, moral panic, and actual film trivia, and we tour the genre–or, rather, the things that have been assumed to be part of this elusive genre–from the Manson family to Faces of Death to an early found-footage gore fest called Cannibal Holocaust. Have any real snuff films ever been uncovered, or any black market snuff rings investigated? What are the factors that play into our belief in, and fear of, these monstrous commodifications of our mortality? And how have moviemakers and underground video producers capitalized on the idea?

Check it out HERE! 

Jul 08, 2020
 

Spree killer and ex-police officer Christopher Dorner was recently mentioned in a stand-up comedy piece by Dave Chappelle. If the name sounds vaguely familiar, it’s because Dorner killed four people and wounded three others in February 2013; his victims included police officers and civilians in the Los Angeles area.

The manhunt for Dorner over the course of several days provided real-world insight into eyewitness reliability. As police tracked him down, they received eyewitness reports of Dorner in dozens of places around southern California. Heavily armed police officers descended on a Kentucky Fried Chicken outlet in Tarzana after a tipster said Dorner might be inside; an innocent African-American man who barely resembled Dorner was arrested and soon (thankfully) released.

Dorner was soon reportedly spotted at a Lowe’s home improvement store in Northridge, causing the store to be evacuated and a SWAT team to be dispatched, but he wasn’t there either; he was also reported at Men’s Central Jail in downtown Los Angeles, and so on. The fact that a $1 million reward was offered for information about Dorner’s location understandably increased the public’s incentive to report any and all potential sightings.

The public can often be instrumental in helping find missing persons—it is how Elizabeth Smart was eventually recovered after being abducted from her Salt Lake City home in 2002. But it also generates many false leads. In 2002 two snipers attacked the Washington, D.C., area and terrorized much of America. Based upon eyewitness descriptions, law enforcement agencies alerted the public to be on the lookout for suspects in a white van. Thousands of vehicles were stopped and searched, jamming highways for miles. The focus on a white van intensified after a shooting outside a store in suburban Virginia, where a man claimed to have seen the shooter standing next to a white van. The man later admitted that he lied to police, likely seeking media attention.

While some false sightings are reported as pranks or for attention, most come from sincere eyewitnesses trying to help out. Elvis Presley sighting jokes aside, false sightings are a real problem for police. It is not uncommon, especially in high-profile hunts for fugitives or missing persons, for dozens or hundreds of sightings to be reported to law enforcement. Police, of course, must treat all sightings and reports as potential leads; ignoring a valid tip might cost lives.

Furthermore, because false accusations often target minorities, they are especially dangerous. You may recall Susan Smith, the mother who in 1994 blamed an African-American man for kidnapping her children when she had in fact drowned them in a lake. Or Jennifer Wilbanks, the so-called “Runaway Bride” who claimed to have been kidnapped and assaulted by a Hispanic man, but who had in fact voluntarily left her groom at the altar. Or the infamous Central Park Five case, in which five Black and Latino teenagers were arrested in 1989 for the brutal rape and assault of a white jogger in New York’s Central Park. Many people—including Donald Trump and African-American poet Sapphire (author of Push, from which the Oscar-winning film Precious was adapted)—jumped on the bandwagon falsely accusing the young men of the crime. More recently in Central Park there was Amy Cooper, a white woman who called 911 on Black bird-watcher Christian Cooper, stating falsely that “there’s an African American threatening me and my dog.”

The many false sightings of Dorner were not unusual. One of the most famous cases of false sightings was the disappearance of a three-year-old British girl named Madeleine McCann, last seen at a resort in Portugal in May 2007. Her presumed abduction made international news, and photos of McCann circulated widely as police and the family hoped for tips from the public. This led to the girl being “sighted” in dozens of different places in Europe and around the world, from Belgium to Brazil, Australia to Africa, by eyewitnesses who reported seeing her. The case remains unsolved, though in recent weeks it’s been reported that a German man is the main suspect.

All this has implications for psychology and eyewitness reliability; if you tell people what to look for, any face or physique that is even remotely similar (large Black male, small blonde girl) can become a (false) positive identification. By some estimates, as many as one-third of eyewitness identifications in criminal cases are wrong, and nearly 200 people who were convicted of crimes based on positive eyewitness identifications were later exonerated through DNA evidence.

Dorner died in a shootout in the San Bernardino Mountains on February 12, 2013. Though he died before he could stand trial, Dorner left an extensive, rambling manifesto complaining about racism, politics, and his perceived scapegoating when he reported another officer’s misconduct toward a mentally ill man. He quotes Mia Farrow and D.H. Lawrence; praises a long list of celebrities including Dave Chappelle, Bill Cosby, Tavis Smiley, and others (Charlie Sheen is “effin awesome”); he lists “THE MOST beautiful women on this planet, period” (including Jennifer Beals, Natalie Portman, Kelly Clarkson, Margaret Cho, and Queen Latifah); gives musical shout-outs (Eric Clapton, Bob Marley, Metallica, etc.); and so on. Recognizing that his mass murder spree would likely end in his death, he also lamented the fact that he would not live to see The Hangover 3.

He also addresses those he plans to kill and explains his motives: “Terminating officers because they expose a culture of lying, racism (from the academy), and excessive use of force will immediately change. The blue line will forever be severed and a cultural change will be implanted. You have awoken a sleeping giant. I am here to change and make policy. The culture of LAPD versus the community and honest/good officers needs to and will change. I am here to correct and calibrate your morale compasses to true north… I never had the opportunity to have a family of my own, I’m terminating yours. Look your wives/husbands and surviving children directly in the face and tell them the truth as to why your children are dead.”

The fact that police treatment of a mentally ill man was one of the triggers for the murder spree was not lost on many. Though it’s often claimed that the news media highlight mental illness primarily with white mass shooters and suspects, Dorner was widely described by officials and news media as mentally ill, with L.A. Mayor Antonio Villaraigosa stating that “Whatever problem [Dorner] has is mental” and a February 9 Associated Press news article describing Dorner as “severely emotionally and mentally disturbed.” In fact, the characterization of Dorner as mentally ill was so prominent that some even complained about it; one writer, Thandisizwe Chimurenga, complained in the L.A. Watts Times (February 21, 2013) that “The Media Tried to Assassinate Chris Dorner [with descriptions] of ‘Mental Illness’.”

It is of course possible, even likely, that Dorner was both mentally ill and subjected to racial harassment in the LAPD. Given the rarity of African-American serial killers or mass shooters—let alone ones who are also police officers and leave a manifesto—it’s no wonder that Dorner is still discussed today.

 

A longer version of this piece appeared on my Center for Inquiry blog; you can read it HERE. 

Jun 20, 2020
 

We’ve all seen it on social media, especially Facebook. Some friend, or “friend,” or friend of a “friend,” posts a news story. Because it’s social media, the story is often selected (by human nature and algorithms) for its outrage factor. Amid the kitten videos and funny or cute memes, the news stories most likely to be shared are those that push people’s buttons—sometimes good news but more often bad news, tragedies, disasters, and the obligatory political outrage du jour. 

You read the headline and may Like or Share, but in the back of your head the news story may seem vaguely familiar … didn’t that happen years ago? In a world of twenty-four-hour news, it’s hard to remember, and on some level a lot of the stories sound (or are) basically the same: Someone killed someone in a gruesome way or because of some toxic motive. Trump said something that provoked (real or feigned) outrage. Some country implemented some new law affecting minorities. And so on. Even if it happened before, it must have happened again. 

Not long ago you could be reasonably certain that news was in fact news—that is, it happened recently and was “new.” But one of the consequences of getting news filtered via social media (as more and more people do) is that news organizations are further and further removed from their audiences. On television, in newspapers, or on news websites, the information is direct; you’re reading what a journalist (who presumably has some credibility to maintain) has to say about some given topic. News editors as a rule value breaking news, not old news. Unless it’s a special case (such as an anniversary of some significant event) or a retrospective, old news very rarely appears on broadcasts or on reputable news sites except in clearly designated archives.

On social media, of course, news is filtered through our peers and friends. Often it’s legitimate “new news,” but increasingly it’s old news misrepresented, mistaken for, or disguised as new news. This is a media literacy challenge, because old news presented as new is effectively fake news, and it’s often shared by well-meaning people. News sharing on social media is less about the content of that story than it is about symbolic endorsement, or what’s been called virtue signaling. Liking or Sharing a news story doesn’t necessarily mean you’ve read it—much less that you understand it or can intelligently discuss it—but instead it’s often used as a visual badge representing your social and political views. If you’re concerned about environmentalism, social justice, immigration, politics, or anything else, you can remind everyone where you stand on the issue. It’s sort of like bumper stickers on the information superhighway.

The Epistemology of Fake News

To understand why old news is often fake news, let’s take a brief look at epistemology, the study of knowledge. All of science is subject to revision and further information; new studies and research may always throw “facts” into the “former facts” category.

Science does not deal in absolute certainties, and it is possible—despite overwhelming evidence to the contrary—that smoking does not cause lung cancer, for example, and that humans are not contributing to global warming. Decades of research have established a clear causative link between these variables (smoking and lung cancer, human activity and global warming), but they are not 100 percent definitive; nothing in science ever is. 

Facts are only true at a certain time and under a certain set of circumstances. But the world is constantly changing, in ways both minuscule and dramatic, so a fact about the world is accurate only as of a given time. It was once a fact that there were forty-eight states in the United States, but that is no longer a fact; there are now fifty (four of which are officially commonwealths). It was once a fact that the capital of the African state of Rhodesia was Salisbury; but Rhodesia no longer exists, and therefore that fact is a former fact, or more accurately the fact has been slightly changed to maintain its accuracy: “The capital of Rhodesia was Salisbury” remains a true fact.

The point is not to revel in pedantry—though I’ve been accused of doing as much—but instead to note that many facts that we have incorporated into our knowledge base have changed and may no longer be true. That Texas is south of Canada has been true my entire life, but that my friend Amy is unmarried has not (she got married a few years ago). There are countless other examples, and they show why “is” and “was” are important distinctions, especially when it comes to news stories. Rehashing old news as new blurs the line between the two, sowing unnecessary confusion about what is true and what was true at one point (but may no longer be). 

This does not at all suggest that facts are subjective, of course, or that each person (or political party) is entitled to their own facts. But keeping in mind the important caveat that many people don’t read past the headline of a given news story, we see that recycling headlines makes misleading people likely. People don’t constantly update their knowledge about the world unless they have to, and thus typically rely on old (often outdated) information. 

Samuel Arbesman discusses this issue at length in his 2012 book The Half-Life of Facts: Why Everything We Know Has an Expiration Date. He notes that “Ultimately the reason errors spread is because it’s a lot easier to spread the first thing you find, or the fact that sounds correct, than to delve deeply into the literature in search of the correct fact … . Bad information can spread fast. And a first-mover advantage in information often has a pernicious effect. Whatever fact first appears … whether true or not, is very difficult to dislodge … . It’s like trying to gather dandelion seeds once they have been blown to the wind.” The best way to stop the spread of misinformation is Skepticism 101. “There is a simple remedy: Be critical before spreading information and examine it to see what is true. Too often not knowing where one’s facts came from and whether it is well-founded at all is the source of an error. We often just take things on faith.”

We all know that recycling is good in the context of natural resources, for example. Good ideas can be recycled, because, as they say, there’s nothing new under the sun, and what works (or doesn’t) at one point in time, in a specific set of circumstances, may work (or fail) at another time under a different set of circumstances. The electric car, for example, was prematurely proclaimed dead (as seen in the 2006 documentary Who Killed the Electric Car?), but today electric vehicles are a growing business. News stories are a different beast.

Recycling Bad News

The news media go out of their way to emphasize bad or alarming news (“if it bleeds, it leads”), but social media compounds the problem. For the past year or two, I’ve noticed news articles from reputable sources shared on Facebook and other social media as if they were recent. Articles from 2015 and 2016 have been revived and given a new life, often shared and spread by people who didn’t know (or care) they were recycling old news. 

This is misleading because the posts rarely if ever include the date, instead showing merely the headline and perhaps a photo and the first sentence. So when unflattering events about Hillary Clinton, Donald Trump, or anyone else circulate, they are likely to take on a second or third life. Sometimes the events themselves are clearly dated (tied, for example, to election results), but it’s often political stories putting a prominent person in a bad light that tend to get recycled. A news story about a natural disaster is unlikely to be intentionally recirculated, because no one benefits from fooling others into thinking that another devastating earthquake recently hit Mexico, for example.

But a news story about a single specific incident of, for example, a Muslim group killing innocent Christians, or vice-versa, may be revived multiple times over the years, giving the illusion that the events keep occurring when in fact it may have been a one-time event. News organizations would not intentionally present past events as recent news, precisely because people assume that what they’re seeing in news feeds is both timely and important. Social media users, on the other hand, have no qualms about sharing old or misleading content if it promotes some pet social or political agenda. To conservatives, old news stories that make Obama or Clinton look bad are just as relevant and useful today as they were nearly a decade ago. To liberals, old news stories that highlight Trump’s corruption or incompetence are equally useful. (The Russians, for their part, are just happy to stir up divisiveness.)

Information can always be weaponized, but old news is by its nature often weaponized; it’s recirculated for a reason. It’s not information shared for the sake of knowledge; it’s information that misleads for a purpose, shared by people who believe they are serving a greater good.

Whichever President’s Nazi U.N. Vote

To offer one example, a CBS News article titled “U.S. Votes against Anti-Nazi Resolution at U.N.” has circulated many times in recent months on Facebook, invariably accompanied by commentary about how it’s (another) example of Trump refusing to condemn Nazis and white supremacists. 

 

The news story is accurate—it’s just several years old and in fact occurred under Barack Obama. 

This opens liberals up to accusations of hypocrisy by conservatives: If voting against anti-Nazi resolutions is patently wrong, racist, and un-American, then where was the outrage when Obama did it? (This is of course a bit of a false equivalence fallacy, since Obama and Trump do not have similar histories regarding race relations—and to be fair, there are many perfectly valid reasons a country would refuse to vote for a measure that is otherwise worthy but may have unwanted attachments or obligations.) 

The point here is not to set up any false equivalence between the administrations on this issue—nor, certainly, to defend Trump—but instead to illustrate how the psychological tendency toward confirmation bias can affect us. A “no” vote on an otherwise not-particularly-notable U.N. resolution takes on special significance on social media in a context and under an administration that has come under fire for similar accusations of racial insensitivity or even outright racism. In one context, it’s a non-event; in another, it’s a (in this case, false) data point in a constellation of incidents suggesting Trump’s support of white supremacy.

Racist attacks reported in the news were often attributed to Trump’s influence, for example when a ninety-two-year-old Mexican man, Rodolfo Rodriguez, was attacked with a brick in Los Angeles and told to “go back to his own country.” His jaw was shattered and he suffered multiple broken ribs. I saw the story circulate in 2019 as recent news, with commentary that this was obviously a result of Trump’s latest racist comments—except that the attack had occurred over a year earlier, and the attacker was an African-American woman, Laquisha Jones.

While it’s possible the attack was influenced by Trump, it’s unlikely and we should avoid the post hoc ergo propter hoc (“after this, therefore because of this”) fallacy. Some ugly racial incidents clearly are, some may be, and some are not; lumping them together serves no purpose. One can certainly argue that Trump’s words and actions have encouraged racial divisiveness in America, but using that specific news article or incident as an example is simply false and misleading. 

Some may try to justify sharing bogus information by saying that even though in this particular case the facts were wrong, it still symbolizes a very real problem and was therefore worthy of sharing if it raised awareness of the issue. This is an ends-justifies-the-means tactic often employed by those caught reporting a false story. The Trump administration adopted this position earlier, in November 2017, when the President promoted discredited anti-Muslim videos via social media; his spokeswoman, Sarah Huckabee Sanders, acknowledged that at least some of the hateful videos Trump shared were bogus and represented events that did not happen as portrayed, but she insisted that their truth or falsity was irrelevant because they supported a “larger truth”—that Islam is a threat to the country’s security: “I’m not talking about the nature of the video,” she told reporters. “I think you’re focusing on the wrong thing. The threat is real, and that’s what the President is talking about.” There are plenty of other factually accurate news stories that could have made the same point.

The same applies to all recycled stories; the process, not the content, is the problem, and you can be sure that were a Democrat in the White House, the use of old, fake news would be just as robust. Facebook is aware of the problem and has recently introduced a small red “Breaking News” icon that appears along with (actually) new news stories, to help users distinguish new from old. 

But the ultimate responsibility lies with each social media user, who is after all the curator of their own newsfeed. People need to take responsibility for what they share and (explicitly or tacitly) promote on social media. Every hoax or misleading meme can be stopped from going further by diligent—or even half-assed—efforts not to mislead others. It could be as simple as adding a caveat to the post such as “From 2015, and still relevant.”

But this of course requires the person to spend a few seconds verifying the date—which is easy enough; it can be done simply by clicking on the link and noting the date, or by hovering the cursor over an active URL, which often reveals the publication date (a quick script can even automate the check, as sketched below).
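For readers who want to automate that quick check, here is a minimal Python sketch (my illustration, not a tool mentioned in this post). It assumes the article’s page declares the common Open Graph “article:published_time” meta tag, which many news sites do but none are required to, and the URL shown is purely hypothetical:

```python
# A minimal sketch for checking an article's publication date before resharing.
# Assumes the page declares an Open Graph meta tag such as:
#   <meta property="article:published_time" content="2016-12-21T18:05:00Z">
# Attribute order varies in the wild; a real tool would use an HTML parser.
import re
import requests

def published_date(url):
    """Return the article's declared publication date, or None if not found."""
    html = requests.get(url, timeout=10).text
    match = re.search(
        r'property=["\']article:published_time["\'][^>]*content=["\']([^"\']+)',
        html,
    )
    return match.group(1) if match else None

if __name__ == "__main__":
    # Hypothetical URL, for illustration only.
    print(published_date("https://example.com/old-news-story"))
```

If the date that comes back is years old, the caveat suggested above (“From 2015, and still relevant”) takes only a moment to add.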


Very few people generate new content on social media (and a significant portion of those who do are part of organized misinformation campaigns); most simply and blindly pass along whatever information they receive. In today’s misinformation-marinated world, skeptics and critical thinkers must be vigilant if they want to avoid becoming part of the problem. Otherwise, even socially and media literate people end up routinely admonishing others to check their facts and demand evidence while actively sharing misinformation themselves. People can complain about misinformation and disinformation from Russia and biased news media all they like, but change begins at home.

 

This article first appeared at my CFI blog in 2019 and remains sadly relevant; you can find it HERE.

 
Jun 16, 2020
 

The twin plagues of COVID-19 and racism have come to the fore globally over the past few months, and as with any such afflictions there’s a social desire to scapegoat, finding someone (or some group) to blame. Parallels between the pandemic and racism are not hard to find. Earlier this month George Clooney referred to racism as America’s “pandemic,” for which we must find a “vaccine.” Street protesters as well can be seen holding signs encouraging people to “Treat Racism Like COVID-19.” 

The two are analogous in some ways, prompting some anti-vaccination crusaders to compare themselves to pioneering Civil Rights leaders, seeing themselves as victims of social injustice who will no longer be silent. As MacKenzie Mays noted in a September 2019 piece for Politico: “A chorus of mostly white women sang the gospel song ‘We Shall Overcome’ in the California State Capitol, an anthem of the civil rights movement. Mothers rallied outside the governor’s office and marched through Capitol corridors chanting ‘No segregation, no discrimination, yes on education for all!’ Some wore T-shirts that read ‘Freedom Keepers.’ But this wasn’t about racial equality. In the nation’s most diverse state, protesters opposed to childhood vaccine mandates — many from affluent coastal areas — had co-opted the civil rights mantle from the 1960s, insisting that their plight is comparable to what African Americans have suffered from segregationist policies. Assemblywoman Sydney Kamlager-Dove said, ‘The whole conversation around vaccinations is actually one about privilege and opportunity. It’s a personal choice. It’s a luxury to be able to have a conversation about medical exemptions and about whether or not you think your child should be vaccinated.’” However passionate Jenny McCarthy is, she’s no Rosa Parks.

Racism and Anti-Vaccination

René F. Najera, editor of the College of Physicians of Philadelphia’s History of Vaccines website, recently examined the cross-pollination of racism and anti-vaccination efforts, highlighting an incident that came to the attention of the California Asian Pacific Islander Legislative Caucus (APILC), which “denounced racist online postings from anti-vaccine people. One of those people is Rob Schneider, an actor and comedian who at one time had a television series on Netflix. This is not Mr. Schneider’s first foray into the cultural discussion on vaccination … The posts denounced by APILC includes Mr. Schneider’s comparison of Richard Pan, MD, to Mao Zedong, founder of the People’s Republic of China and author of several atrocities within China and the expansion of Communism around China’s sphere of influence in the post-World War 2 era. Dr. Pan is a child of Taiwanese immigrants to the United States…. In another post denounced by APILC, ‘Christine Lee’ posted a photoshopped poster of members of the California Legislature who have Asian heritage. In the text of the posting, she asks several leading questions, such as ‘Notice anything else about them?’ after pointing out that they are ‘all doctors-turned-politicians.’ (The implication being that they are all of Asian descent?) The final posting being denounced is that of ‘Cathy S-R,’ a self-described ‘Doctor of Chiropractic, medical freedom supporter, informed consent, dog/cat lover.’ In her posting to Twitter, she asks Dr. Pan if he is an American citizen [and] then contradicts her initial insinuation about Dr. Pan’s citizenship by stating that Dr. Pan ‘[m]ake [his] country proud.’” 

It’s not just Asians, of course—though prejudice toward them has increased with their association with COVID-19 and its origin in Wuhan, China. University of Wisconsin-Eau Claire professor David Shih notes that “People of color have been long associated with disease and public health pandemics. In the United States alone, the history of racialization cannot be separated from the discourse of non-white bodily or mental illness … I would like to focus on black Americans, and the influential story told about them by a single man, Frederick L. Hoffman. Hoffman was an actuary for the Prudential Life Insurance Company when he published Race Traits and Tendencies of the American Negro (1896). The 330-page document argued that black people should not be insured because they were a greater risk for mortality compared to other racial groups. Their lower life expectancies were directly related, Hoffman explained, to inferior, inherited racial traits which promised their eventual extinction as a people. Flawed as it was and critiqued by no less than W.E.B. DuBois in its day, Hoffman’s diagnosis was widely adopted by the insurance industry and went on to shape public debate over the ‘Negro question’ … Blackness was, quite simply, a public health problem. One of the reasons why we are not talking about the anti-vaccination movement as white is because we talk about geography and social class instead. These demographic characteristics often stand in as proxies for race, which is more controversial.” 

Nevertheless, race does occasionally come to the fore. In The Kiss of Death: Contagion, Contamination, and Folklore, professor Andrea Kitta examines the characteristics of well-known “patient zeros and superspreaders” of various diseases, including Mary Mallon (“Typhoid Mary”), Amber Vinson (the Texas nurse who contracted Ebola in 2014), and Chong Pei Ling (SARS victim in 2003). Notably, “of the thirteen cases listed, only four are ‘white’” (p. 34). The perceived link between nonwhite skin and contagion is clear and helps form the basis for initiatives to close America’s borders. The fear of foreigners and immigrants bringing disease to the country was of course raised a few years ago when a Fox News contributor suggested without evidence that a migrant caravan from Honduras and Guatemala coming through Mexico carried leprosy, smallpox, and other dreaded diseases. This claim was quickly debunked. For more on COVID-19 racist conspiracies, see my previous article in this series. 

New Age, Holistic Healers, and Conspiracies

Conspiracy theories are common among alternative medicine proponents—who often portray themselves as marginalized medical professionals denied the imprimatur of mainstream medicine—and some bleed over into racism. One prominent proponent is Kelly Brogan, a “holistic psychiatrist” who has gathered a huge following online for her dangerous theories about COVID-19, made in interviews and a series of videos. 

Brogan invokes Jewish history and the Holocaust in her arguments against vaccination, “suggesting the possibility that the US government is planning to ‘link our passports with our vaccination records’ as a method of gaining ‘totalitarian governmental control not unlike the divide-and-conquer dehumanization agendas that preceded the Holocaust.’” Brogan, associated with Gwyneth Paltrow’s New Age company Goop, was found to have misstated her credentials. On her website, she claimed that she was board certified in psychiatry and psychosomatic medicine/consultation psychiatry, but a search of records found that she was not; after an investigation by The Daily Beast, Brogan quietly deleted the references to her certifications. 

On social media, Brogan has shared videos with titles such as “Vaccine Conspiracy or Racist Population Control Campaign,” a 2014 video from anti-vaccination activist Celesta McGovern reprising longstanding rumors about attempts to sterilize Africans. The claims were soon debunked on the Science-Based Medicine website but have continued to circulate widely. There are many examples of racism in medicine, but the campaign Brogan highlights is, ironically, not among them. 

Many other alternative medicine and holistic websites also promote anti-vaccination conspiracies. NaturalNews, Mike “The Health Ranger” Adams, and others, for example, have widely shared bogus “news” stories attempting to discredit mainstream science, with headlines such as “Tetanus vaccines found spiked with sterilization chemical to carry out race-based genocide against Africans.” It’s all thrown into a toxic stew of misinformation about the dangers of vaccines, GMOs, cell phones, you name it. 

Like all conspiracy theories, these rumors and stories have a superficial plausibility, and gain traction by tapping into deep-seated—and often legitimate—concerns and fears. There is of course a long and well-documented history of racism in medicine, from the Tuskegee Experiments beginning in the 1930s to disparate healthcare treatment. When two French doctors recently suggested that a tuberculosis vaccine should be tested on Africans to see if it could be effective against COVID-19, the comments were denounced as racist and relics of a colonial past by the head of the World Health Organization (WHO). “Shouldn’t we do this study in Africa, where there are no masks, no treatment, no resuscitation, a bit like some studies on AIDS, where among prostitutes, we try things, because they are exposed, and they don’t protect themselves?” asked physician Jean-Paul Mira. The WHO called the comments “appalling” and said that any WHO-led vaccine testing will follow the same standards regardless of where it’s done. 

Folklorist Patricia Turner, in her book I Heard It Through the Grapevine: Rumor in African-American Culture, observes that “African-American mistrust of governmental agencies is not without merit … Official disrespect for the bodies of African-Americans has a long history in this country” (p. 112). Medicalized racism is real, harmful, and a serious problem, but that doesn’t mean that any given wild conspiracy theory is true.

Brogan’s attempt to paint the medical establishment as racist is ironic given her own history of promoting conspiracy theorist David Icke—who claims among many other things that Barack Obama is a Reptilian (when not spewing racist tropes). As The New York Times noted, “Mr. Icke draws on ideas from the anti-Semitic pamphlet The Protocols of the Elders of Zion, argues that Holocaust denial should be taught in schools and that Jews are responsible for organizing anti-Semitic attacks, and calls the Talmud a racist document. In other writings, he has posited that a cabal of a child-sacrificing, bloodthirsty lizard people, many of whom are Jewish, are secretly running the world.” 

In a March 20, 2020, post, Brogan encouraged her followers to “listen through to the end [of an interview with Icke] to learn how to remain calm and manifest the impossible.” Regarding COVID-19, she states that in fact “there is potentially no such thing as the coronavirus.” Brogan seems to decry racist conspiracies when doing so serves her anti-vaccination purposes, and to promote them—or at least those who spread them—when it suits her.

Anti-vaccination wellness influencers such as Brogan are also actively sharing conspiracy theories from far-right groups such as QAnon about COVID-19. A recent Mother Jones article found that “Some have fused wellness hoaxes and pseudoscientific homeopathic treatments with QAnon and other far-right conspiracies. One such notable influencer is Joseph Arena, a chiropractor who uses the title ‘Dr.’ and has more than 40,000 followers. Arena has pushed explicit QAnon theories about massive pedophile rings run by the deep state on his Instagram account and has directed his followers to pro-QAnon pages to find ‘the truth.’… Dr. Shiva Ayyadurai, a biology PhD [with] nearly 100,000 followers, pushes QAnon-styled conspiracies about “deep state” [including] that the coronavirus is a tool for the ‘deep state’ in ‘consolidating its Power using its protected class of Hollywood & Academic whores.’”

Plandemic 

The recent Plandemic video is laden with conspiracies and hints darkly at motivations in its attacks on Dr. Anthony Fauci and Bill Gates. For example, as to the claims made about Fauci in Plandemic, former New York police officer Mitch Danzig notes in an article for The Jewish Journal that “The NIAID, under Fauci’s leadership since 1984, provides dozens of grants to labs researching infectious diseases. These grants weren’t awarded to work on COVID-19. Many were, however, awarded to perform work on SARS, which spread across the world in 2003. The NIAID also didn’t give the funds directly to the Wuhan Institute. The grants were given instead to the EcoHealth Alliance, which invests in health research globally that led to at least 20 research papers on pre-COVID-19 coronaviruses published over the past six years. The grant referenced in these breathless, innuendo-filled stories about Fauci also wasn’t the first awarded by the NIAID to the EcoHealth Alliance. The NIAID has been providing grants to EcoHealth Alliance to fund infectious disease research projects all over the world, including in Chinese institutes, since 2005. This ‘smoking gun’ that Fauci conspiracy theorists keep touting is about as big a ‘Nothing Burger’ as one can imagine. But it is about as demonstrative of the claim that Fauci is responsible for COVID-19 as pointing to a specific Jew being the president of CBS as ‘proof’ that the ‘Jews control the media.’ To say that these conspiracy theories about Gates and Fauci, which often are promoted by a cohort of anti-vaxxers as well as anti-Semites, are specious and baseless, is to be kind.”

Anti-vaccination advocates are of course not alone in spreading medical misinformation for social and political purposes; anti-abortion groups have been known to spread false rumors about contraception being secretly given instead of tetanus vaccines to women in developing countries. 

The protests about race relations and reopening the country are also being shared and eagerly amplified for political purposes by America’s enemies. In a Washington Post piece, Ishaan Tharoor noted that along with American citizens watching the racial rioting and protests, “America’s putative foreign adversaries also are watching. ‘This incident is far from the first in a series of lawless conduct and unjustified violence from U.S. law enforcement,’ the Russian Foreign Ministry said in a statement, adding to the Kremlin’s long history of pointing to human rights abuses in the United States. ‘American police commit such high-profile crimes all too often.’ Officials in Iran did the same, calling out racial injustice in America. ‘If you’re dark-skinned walking in the US, you can’t be sure you’ll be alive in the next few minutes,’ read a tweet from an account associated with Ayatollah Ali Khamenei, Iran’s supreme leader, which was accompanied by a video that detailed the horrific history of slavery in the United States. And then there was China. Already locked in a spiraling geopolitical confrontation with Washington, officials in Beijing seized on the protests to push back against the Trump administration’s assertive messaging on Hong Kong, a city whose unique autonomy is being dramatically curtailed by China.” China in particular is especially sensitive to the widespread criticism of its early handling of the COVID-19 outbreak, and its leaders may feel a sense of schadenfreude in America’s troubles. 

Who’s spurring the racial protests? Conspiracies point to any number of people, including rich Jewish businessmen such as George Soros who are allegedly hiring fake protesters. (In fact, this has been debunked.) Who’s spreading COVID-19? Rich liberals such as Bill Gates, hoping to become even richer. (In fact, this also has been debunked.)

Not all alternative medicine proponents are anti-vaccine, of course, just as not all anti-vaccination activists are conspiracy theorists, right-wing, racist, or all three. However, it’s not surprising that a Venn diagram reveals considerable overlap among the worldviews. Conspiracy is inherent in anti-vaccination belief, because Big Pharma has allegedly invested untold fortunes in keeping the “truth” about vaccines from public knowledge—despite, of course, widespread knowledge of precisely such anti-vaccination claims. 

People across the political spectrum believe conspiracy theories, and they all share a common worldview, one which is fundamentally distrustful of authority and anti-establishment. All pride themselves on being independent thinkers, a special breed of “woke” folk who are smart enough to separate themselves from the sheeple and not be swayed by what “They” want you to think. Theirs is a world in which world events are part of a Master Plan orchestrated by a Jewish cabal, the Illuminati, Bill Gates, Big Pharma, or whoever else. 

Racism, conspiracy thinking, and the rejection of science are all toxic problems, made worse when combined with the chaos and uncertainty of a pandemic. Fortunately, these are all learned behaviors that can be conquered. The best inoculations against misinformation are critical thinking, media literacy, and skepticism.

 

This is the sixth in a series of original articles on the COVID-19 pandemic by the Center for Inquiry as part of its Coronavirus Resource Center, created to help the public address the crisis with evidence-based information. A different version of this article appeared there. 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

 

Jun 06, 2020
 

Last month, a YouTube video for an (apparently) upcoming documentary titled Plandemic was released by Mikki Willis (credited onscreen as “father/filmaker” [sic]). The video features a lengthy interview with virologist Judy Mikovits, who offers scattershot conspiracy-laden assertions about the “truth” behind the COVID-19 pandemic, prefaced by claims of having been framed for a crime (she was charged with theft in 2011) and accusations of government coverups going back decades involving various medical authorities, including Dr. Anthony Fauci. Willis’s voiceover gravely warns that “for exposing their deadly secrets, the minions of Big Pharma have waged war on Dr. Mikovits,” who in the film (and in her new best-selling book the video promotes) bravely reveals “the plague of corruption that places all human life in danger.”

Dozens of claims are made in the twenty-six-minute video, some of which are unverifiable—as conspiracy theories tend to be. But many statements made by Mikovits have been investigated and proven to be misleading or simply false.

Among its claims, the video suggests that a vaccine for the virus (which of course hasn’t been developed) will be mandatory; however, no one is forced to get medical treatment. If and when a vaccine is available, federal agents armed with automatic weapons in one hand and a syringe in the other aren’t going to be bursting through doors to forcibly vaccinate anyone—paranoid conspiracy fantasies to the contrary.

It’s now been several weeks since the video was widely shared on social media, and questions have been raised by reputable journalists for publications including The Washington Post and The Atlantic, as well as Politifact. For an expert and filmmaker who claim to have been censored and silenced (with social media platforms such as Facebook and YouTube removing the video for containing dangerous misinformation), Mikovits and Willis have been strangely silent about answering legitimate questions raised about their claims.

In an effort to clarify the matter, the Center for Inquiry reviewed the video and, in collaboration with researcher Dr. Paul Offit, composed a list of eight simple questions about claims made in the video. CFI contacted Mr. Willis, who agreed in writing to respond to our questions. The next day he was provided the questions below, thanked for his cooperation, and asked to reply.

1) The Plandemic video claims that face masks “activate” coronaviruses, including SARS-CoV-2; what scientific evidence do you have that the virus is more infectious for individuals wearing masks than for those not wearing masks?

2) The video promotes hydroxychloroquine as effective against the virus (despite elevated cardiac risks and several placebo-controlled studies finding no efficacy at all). Instead of being ignored or suppressed by the medical establishment, controlled clinical trials of the drug have been performed. What is the “thousands of pages of data” already demonstrating the drug’s safety and efficacy referred to in the video?

3) The video claims that vaccines increase the odds of getting the virus by 36 percent, referencing a study by Dr. Greg Wolff published in the journal Vaccine. But the study did not examine SARS-CoV-2, was found to have been flawed, and in any event didn’t find that vaccines increased the risk by 36 percent. In fact, that statistic doesn’t appear anywhere in the Wolff study. Can you explain this?

4) The video claims that during the COVID-19 outbreak, beaches should be opened to the public because “You’ve got … healing microbes in the ocean and the salt water.” However, considering that bacteria don’t kill viruses, how would “healing microbes” reduce or treat coronavirus infection?

5) The video claims that COVID-19 deaths are being inflated due to medical profiteering (supposed payments of $13,000 per diagnosed patient)—yet hospitals across the country are losing money (and support staff are being laid off) because lucrative elective procedures are being cancelled or delayed due to the pandemic. How do you explain this discrepancy?

6) The video claims that the plan is “to prevent the therapies until everyone is infected, then push the vaccines.” Yet no vaccines are available, and if everyone is infected then a vaccine wouldn’t be needed. If the pandemic were part of a scheme to sell a vaccine (or force it on the public), why wouldn’t it have been developed before the virus was released and before hundreds of thousands of potential customers (sure to pay anything to stay alive) had already died? Can you clarify your logic?

7) The video refers to censorship by news media and corporate scientists, claiming that “there is [sic] no dissenting voices allowed.” If that’s true, then how did Mikovits’s books get published? And, for example, how did Dr. Andrew Wakefield publish an article in the prestigious journal Lancet in 1998 claiming a (since-discredited) link between childhood vaccines and autism? After other researchers failed to replicate the findings, the study was retracted, but how could it have been published in the first place if the medical establishment effectively silences “dissenting voices” who challenge the “agreed-upon narrative”?

8) Plandemic repeatedly emphasizes the importance of independent thinking and considering different perspectives. Did you interview anyone who challenged Mikovits’s claims, and what research did you do as a filmmaker to independently verify her claims?

The Center for Inquiry waited several days for a response and then followed up with a query asking Willis to confirm he received the questions and would be offering answers as agreed to. It’s now been nearly a week, and no response has been forthcoming from anyone featured in (or representing) the video. This article will be updated when and if substantive answers are received.

If the claims made by Mikovits and Willis in Plandemic are based in truth and facts, you’d think they would be eager to offer evidence supporting their claims. What better way to turn the tables on scientists, skeptics, and journalists than to offer a referenced, fact-based, point-by-point rebuttal to critics who offer them a platform?

The video repeatedly emphasizes the importance of “considering different points of view” and asking questions, yet offers no other points of view that contradict or undermine Mikovits. Plandemic claims the medical community has a set narrative that refuses to answer opposing voices—and instead offers its own set narrative that refuses to answer opposing voices. Plandemic made many claims, most of which have been widely debunked. We have to wonder: Where are their responses? Why are they suddenly so quiet? Why are they afraid to answer questions? What do they have to hide?

May 16, 2020
 

In the latest episode, we discuss a few pandemic-related things that set off some skeptical alarms on social media this past week. Then we are joined by Southern California-based comedian and film editor Emery Emery to talk about his soon-to-be-released project with Brian Dunning. With the help of many science communicators and experts (me among them), Emery and Dunning have crafted a documentary called Science Friction, revealing the myriad ways experts have been manipulated, maligned, and misrepresented by producers of questionable documentaries.

You can listen HERE. 

May 14, 2020
 

So this is cool… I’m quoted in a recent article in Rolling Stone about rumors of “coronavirus parties.” 

On the surface, this sounds fairly straightforward: parties where people are intentionally trying to get a potentially deadly illness? Scary! They even used a Trump-esque exclamation point to drive the point home, so you know they mean business. And to be fair, the concept of “coronavirus parties” had previously gotten ink in none other than the New York Times, in an op-ed by epidemiologist Greta Bauer referring to “rumblings” about people hosting events “where noninfected people mingle with an infected person in an effort to catch the virus.” The piece enumerates the many reasons why such parties are a bad idea, including the fact that researchers know very little about coronavirus immunity, without citing direct evidence of the existence of these parties to begin with.

There’s good reason for this, says urban folklorist Benjamin Radford: “coronavirus parties” are probably BS. “They’re a variation of older disease urban legends such as the ‘bug chaser’ stories about people trying to get AIDS,” he tells Rolling Stone, referring to a brief spate in the early-aughts when so-called “bug-chasing” parties were subject to extensive media coverage (including a controversial story by this magazine). Such stories fed into a general sense of “moral panic” over the disease, resulting in it sticking around in the public imagination regardless of the lack of supporting evidence.

You can read the piece HERE. 

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

May 05, 2020
 

For anyone who has been craving an episode with far less research, facts, or formality, this is the episode for you! Ben, Pascual, and Celestia reflect on their various circumstances during the coronavirus national emergency, and then we talk about this, our third year in podcasting. We dish on past guests and future guests we have in the works, answer a couple of listener questions, and Ben quizzes Pascual about the finer points of air guitar. Enjoy the podcast–it’s almost as fun as other people!

 

Check it out HERE!

Apr 29, 2020
 

There’s a natural—almost Pavlovian—tendency to follow the news closely, especially during times of emergency such as wars, terrorism, and natural disasters. People are understandably desperate for information to keep their friends and family safe, and part of that is being informed about what’s going on. 

News and social media are awash with information about the COVID-19 pandemic. But not all the information is equally valid, useful, or important. Much of what’s shared on social media about COVID-19 is false, misleading, or speculative. That’s why it’s important to get information from reputable sources such as the Center for Inquiry (CFI), not random YouTube videos, health bloggers, conspiracy theorists, and so on.

It’s easy to become overwhelmed, and science-informed laypeople are likely suffering this information overload keenly, as we absorb the firehose of information from a wide variety of sources: from the White House to the CDC, and from conspiracy cranks to Goop contributors. It’s a never-ending stream—often a flood—of information, and those charged with trying to sort it out are quickly inundated. As important as news is, there is such a thing as medical TMI.

We have a Goldilocks situation when it comes to COVID-19 material. There’s too little, too much, and just the right amount of information about the COVID-19 virus in the news and social media. This sounds paradoxical until we break down each type of information. 

Types of COVID-19 Information

In thinking about the COVID-19 outbreak and the deluge of opinion, rumor, and news out there, it’s helpful to parse out the different types of information. 

1) Information that’s true

This includes the most important, practical information—how to avoid infection: wash your hands, avoid crowds, don’t touch your face, sanitize surfaces, and so on. This type of information has been proven accurate and consistent since the outbreak began. This is of course the smallest category of information: mundane but vital.

2) Information that’s false 

Information that’s false includes a wide variety of rumors, miracle cures, misinformation, and so on. The Center for Inquiry’s COVID-19 Resource Center has been set up precisely to help journalists and the public debunk this false information. The problem is made worse by the fact that Russian disinformation organizations—which have a long and proven history of sowing false and misleading information in social media around the world, and particularly in the United States—have seized on the COVID-19 pandemic.

As CNN reported recently, “Russian state media and pro-Kremlin outlets are waging a disinformation campaign about the coronavirus pandemic to sow ‘panic and fear’ in the West, EU officials have warned. … The European Union’s External Action Service, which researches and combats disinformation online, said in an internal report that since January 22 it had recorded nearly 80 cases of disinformation about the COVID-19 outbreak linked to pro-Kremlin media. ‘The overarching aim of Kremlin disinformation is to aggravate the public health crisis in Western countries, specifically by undermining public trust in national healthcare systems—thus preventing an effective response to the outbreak,’ according to the report. … The disinformation has targeted international audiences in English, Italian, Spanish, Arabic as well as Russian and other languages, the report states. European Commission spokesperson Peter Stano said the center has seen a ‘flurry’ of disinformation about the spread of novel coronavirus over the past weeks.” 

3) Speculation, opinion, and conjecture

In times of uncertainty, prediction and speculation are rampant. Dueling projections about the outbreak vary by orders of magnitude as experts and social media pundits alike share their speculation. Of course, epidemiological models are only as good as the data that goes into them and are based on many premises, variables, and numerous unknowns. 

Wanting to accurately know the future is of course a venerable tradition. But as a recent post on Medium written by an epidemiologist noted: “Here is a simple fact: every prediction you’ve read on the numbers of COVID-19 cases or deaths is almost certainly wrong. All models are wrong. Some models are useful. It is very easy to draw a graph using an exponential curve and tell everyone that there will be 10 million cases by next Friday. It is far harder to model infectious disease epidemics with any accuracy. Stop making graphs and putting them online. Stop reading the articles by well-meaning people who have no idea what they are doing. The real experts aren’t posting random Excel graphs on twitter, because they are working flat-out to try and get a handle on the epidemic.” 
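To make the quoted point concrete, here is a rough, illustrative Python sketch (mine, not the epidemiologist’s, with arbitrary made-up parameters) comparing a naive “cases grow 20 percent a day forever” extrapolation with a toy SIR model that at least accounts for the pool of susceptible people running out:

```python
# Illustrative toy comparison only; parameter values are arbitrary assumptions.
def toy_sir(population=1_000_000, beta=0.3, gamma=0.1, days=120):
    """Toy SIR epidemic model, integrated with a crude one-day time step."""
    s, i, r = population - 1.0, 1.0, 0.0
    cumulative = []
    for _ in range(days):
        new_infections = beta * s * i / population
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        cumulative.append(i + r)  # everyone infected so far
    return cumulative

def naive_exponential(start=1.0, daily_growth=0.20, days=120):
    """'Cases grow 20 percent a day, forever' extrapolation."""
    return [start * (1 + daily_growth) ** d for d in range(days)]

sir = toy_sir()
naive = naive_exponential()
print(f"Toy SIR, cumulative cases at day 120:  {sir[-1]:,.0f}")
print(f"Naive exponential forecast at day 120: {naive[-1]:,.0f}")
# The naive curve blows past the entire population, because it ignores the
# fact that epidemics slow down as the number of susceptible people shrinks.
```

Even this toy model leaves out nearly everything that matters (testing rates, behavior change, age structure, reporting lags), which is exactly the quoted author’s point: drawing a curve is easy; modeling an epidemic is not.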

4) Information that’s true but not helpful

Finally, there’s another, less-recognized category: information that is true but not helpful on an individual level, or what might be called “trivially true.” We usually think of false information being shared as harmful—and it certainly is—but trivially true information can also be harmful to public health. Even when it’s not directly harmful, it adds to the background of noise.

News media and social media are flooded with information and speculation that—even if accurate—is of little practical use to the average person. Much of the information is not helpful, useful, actionable, or applicable to daily life. It’s akin to what in medicine and psychology is called “clinical significance”: the practical importance of a treatment effect—whether it has a real, genuine, palpable, and noticeable effect on daily life. A finding may be true and may be statistically significant yet be insignificant in the real world. A new medicine may reduce pain by 5 percent, but nobody would create or market it because it’s not clinically significant; a 5 percent reduction in pain isn’t useful compared to other pain relievers with better efficacy.

One example might include photos of empty store shelves widely shared on social media, depicting the run on supplies such as sanitizer and toilet paper. The information is both true and accurate; it’s not being faked or staged. But it’s not helpful, because it leads to panic buying, social contagion, and hoarding as people perceive a threat to their welfare and turn an artificial scarcity into a real one. 

Another example is Trump’s recent reference to the COVID-19 virus as “the China virus.” Setting aside the fact that new diseases are no longer supposed to be named for the places where they emerge, we can acknowledge that it’s technically accurate that, as Trump claimed, COVID-19 was first detected in China—and also that it’s not a relevant or useful detail. It doesn’t add to the discussion or help anyone’s understanding of what the disease is or how to deal with it. If anything, referring to it by other terms such as “the China virus” or “Wuhan flu” is likely to cause confusion and even foment racism.

Before believing or sharing information on social media, ask yourself questions such as: Is it true? Is it from a reliable source? But there are other questions to ask: Even if it's factually true, is it helpful or useful? Does it promote unity or encourage divisiveness? Are you sharing it because it contains practical information important to people's health, or just to have something to talk about, a vehicle for your opinions? The signal-to-noise ratio is already skewed: useful information is drowned out by false information, speculation, opinion, and trivially true information.  

Social Media Distancing

While self-isolating from the disease (and those who might carry it) is vital to public health, there's a less-discussed aspect: self-distancing from social media information about the virus, a form of social media hygiene. Six feet is enough distance in physical space, but it doesn't apply to cyberspace, where viral misinformation spreads unchecked (until it hits this site).

The analogy between disease and misinformation is apt. Just as you can be a vector for a virus if you get and spread it, you can be a vector for misinformation and fear. But you can stop it by removing yourself from it. You don’t need hourly updates on most aspects of the pandemic. Most of what you see and read isn’t relevant to you. The idea is not to ignore important and useful information about the coronavirus; in fact, it’s exactly the opposite: to better distinguish the news from the noise, the relevant from the irrelevant. 

Doctors around the world have been photographed sharing signs that say “We’re at work for you. Please stay home for us.” That’s excellent advice, but we can take it further. While at home not becoming a vector for disease, also take steps not to become a vector for misinformation. After all, doing so can have just as much of an impact on public health. 

During a time when people are isolated, it’s cathartic to vent on social media. Humans are social creatures, and we find ways to connect even when we can’t physically. Especially during a time of international crisis, it’s easy to become outraged about one or another aspect of the pandemic. Everyone has opinions about what is (or isn’t) being done, what should (or shouldn’t) be done. Everyone’s entitled to those opinions, but they should be aware that those opinions expressed on social media have consequences and may well harm others, albeit unintentionally. Just as it feels good to physically hang out with other people (but may in fact be dangerous to them), it feels good to let off steam to others in your social circles (but may be dangerous to them). Your steam makes others in your feed get steamed too, and so on. Again, it’s the disease vector analogy. 

You don’t know who will end up seeing your posts and comments (such is the nature of “viral” posts and memes), and while you may think little of it, others may be more vulnerable. Just as people take steps to protect those with compromised immune systems, it may be wise to take similar steps to protect those with compromised psychological defenses on social media—those suffering from anxiety, depression, or other issues who are especially vulnerable at this time. 

This isn’t about self-censorship; there are many ways to reach out to others and share concerns and feelings in a careful and less public way through email, direct messaging, video calls, and even—gasp—good old fashioned letters. Like anything else, people can express feelings and concerns in measured, productive ways, ways that are more (or less) likely to harm others (referring to it as “COVID-19” instead of “the Chinese virus” is one example). 

Though the public loves to blame the news media for misinformation—and deservedly so—we are less keen to see the culprit in the mirror. Many people, especially on social media, fail to recognize that they have become de facto news outlets through the stories and posts they share. The news media helps spread myriad "fake news" stories—gleefully aided by ordinary people like us. We cannot control what news organizations (or anyone else) publish or put online. But we can—and indeed we have an obligation to—help stop the spread of misinformation in all its forms. 

It’s overwhelming; it’s too much. In psychology there’s what’s called the Locus of Control. It basically means the things which a person has control over: themselves, their immediate family, their pets, most aspects of their lives, and so on. It’s psychologically healthy to focus on those things you can do something about. You can’t do anything about how many deaths there are in China or Italy. You can’t do anything about whether or not medical masks are being manufactured and shipped quickly enough. But you can do something about bad information online. 

It can be as simple as not forwarding, liking, or sharing that dubious news story before checking the facts, especially if that story seems crafted to encourage social outrage. The Center for Inquiry can act as a clearinghouse for accurate information about the pandemic, but it’s up to each person to heed that advice. We can help separate the truth from the myths, but we can’t force people to believe the truth. Be safe, practice social and cyber distancing, and wash your hands. 

 

This is the first in a series of original articles on the COVID-19 pandemic by the Center for Inquiry as part of its COVID-19 Resource Center, created to help the public address the crisis with evidence-based information. Please check back periodically for updates and new information. 

Apr 262020
 

There have been many pandemics throughout history, but none have taken place during such a connected time—both geographically and via social media. There’s a tendency to follow the news closely during times of emergency; especially when separated during isolation and quarantines, people are understandably desperate for information to keep their friends and family safe.

 

Overreacting and Underreacting

While scientists, doctors, nurses, epidemiologists, and others struggle to contain the disease, many of the rest of us are spending our self-isolating time on social media, sharing everything from useful information to dangerous misinformation to idle speculation. One thing most people can agree on is that other people and institutions aren't handling the crisis correctly.

There’s much debate about whether Americans and governments are underreacting or overreacting to the pandemic threat. This is of course a logical fallacy, because there are some 330 million Americans, and the answer is that some Americans are doing one or the other; most Americans, however, are doing neither.

As The New York Times noted, “contrarian political leaders, ethicists and ordinary Americans have bridled at what they saw as a tendency to dismiss the complex trade-offs that the measures collectively known as ‘social distancing’ entail. Besides the financial ramifications of such policies, their concerns touch on how society’s most marginalized groups may fare and on the effect of government-enforced curfews on democratic ideals. Their questions about the current approach are distinct from those raised by some conservative activists who have suggested the virus is a politically inspired hoax, or no worse than the flu. Even in the face of a mounting coronavirus death toll, and the widespread adoption of the social distancing approach, these critics say it is important to acknowledge all the consequences of decisions intended to mitigate the virus’s spread.”

Recently the governor of Georgia, Brian Kemp, joined much of the country in finally ordering citizens to stay at home to slow the spread of the disease, after suggesting that other states were unnecessarily overreacting to the threat. Kemp inexplicably claimed to have only recently learned that the virus can be spread by asymptomatic carriers—something widely known and reported by health officials for well over a month.

On social media, the issue of how and whether the threat is being exaggerated often breaks along political party lines, with conservatives seeing the danger as exaggerated or an outright hoax. There are countless examples of divisive rhetoric, and many are framing the pandemic in terms of class warfare (for example pitting the rich against the poor) or spinning the outbreak to suit other social and political agendas. It's understandable, but not helpful. Pointing out that the wealthy universally have better access to health care than the poor is merely stating the obvious—like much pandemic information, true but unhelpful. It's not going to prevent someone's family member from catching the virus, and it's not going to open schools or businesses any faster. This isn't a time for what-about-ism or "they're doing it too" replies; this is a time for unity and cooperation. Liberals, conservatives, independents, and everyone else would benefit from putting aside the blame-casting, demonizing rhetoric and uniting against the real enemy: the COVID-19 virus that's sickening and killing people across races and social strata.

At the same time, it’s important to recognize that the measures taken to slow the spread of the coronavirus in America and around the world—while necessary and effective—have taken a disproportionate toll on minorities. As Charles Blow wrote in The New York Times, “social distancing is a privilege …  this virus behaves like others, screeching like a heat-seeking missile toward the most vulnerable in society. And this happens not because it prefers them, but because they are more exposed, more fragile and more ill. What the vulnerable portion of society looks like varies from country to country, but in America, that vulnerability is highly intersected with race and poverty … . It is happening with poor people around the world, from New Delhi to Mexico City. If they go to work, they must often use crowded mass transportation, because low-wage workers can’t necessarily afford to own a car or call a cab.”

While each side likes to paint the other in extreme terms as under- or overreacting, there's plenty of common ground between these straw man positions. Most people are neither blithely and flagrantly ignoring medical advice (and the exceptions—such as widely maligned Spring Breakers on Florida beaches, some of whom have since been diagnosed with COVID-19—are newsworthy precisely because of their rarity) nor spending their days in masks and containment suits, terrified to go anywhere near others.

Idiots and Maniacs, Cassandras and Chicken Littles

People can take prudent precautions and still reasonably think or suspect that at least some of what’s going on in the world is an overreaction or underreaction. Policing other people’s opinions or shaming them because they’re taking the situation more (or less) seriously than we are is unhelpful. It’s like the classic George Carlin joke: “Anybody driving slower than you is an idiot, and anyone going faster than you is a maniac.”

Instead of seeing others as idiots and maniacs, panicky ninnies and oblivious fools, perhaps we can recognize that everyone is different. Some people are in poorer health than others; some people listen to misinformation more than others; and so on. People who were mocked online for wearing masks in public may be following their doctor's orders; they may be sick or immunocompromised or have some other health issue that's not apparent in the milliseconds we spend judging the situation before commenting. Or they may simply be ahead of the curve as medical advice changes. Why not give them the benefit of the doubt and treat them as we'd like to be treated?

Whether people are underreacting or overreacting is a matter of opinion, not fact. The truth is that we don't know what will happen or how bad it will get. In many cases, we simply don't have enough information to make accurate predictions, and when it comes to life-and-death topics such as disease outbreaks, the medical community wisely adopts a better-safe-than-sorry approach.

Both positions argue from a false certainty, a smugness that they know better than others do, that the Cassandras and Chicken Littles will get their comeuppance. Humans crave certainty, but science can’t offer it. Certainty is why psychic predictions such as Sylvia Browne’s (supposedly foretelling the outbreak, which I recently debunked) have such popular appeal. The same is true for conspiracy theories and religion: All offer certainty—the idea that whatever happens is being directed by hidden powers and all part of God’s plan (or the Illuminati’s schemes, take your pick).

Instead of bickering over how stupid or silly others are for however they’re reacting, it may be best to let them do their thing as long as it’s not hurting others. Polarization is a form of intolerance. Maybe this is a time to come together instead of mocking those who don’t share your opinions and fears. We all have different backgrounds and different tolerances for uncertainty.

This doesn’t mean that governments should be given license to do whatever they want, of course. Citizens differ on their opinions about everything the government does; there’s never universal agreement on anything (from gun control to education funding), and there’s no reason to assume that responses to COVID-19 would be any different. If you don’t like what measures state and federal governments are taking to stop the virus, welcome to the club. Some are of the opinion that too much is being done, while others think too little is being done. While the public noisily argue and finger point on social media, scientists around the world are working hard to develop better treatments and vaccines.

Before believing or sharing information on social media, ask yourself questions such as: Is it true? Is it from a reliable source? But also, is it helpful or useful? Does it promote unity or encourage divisiveness? Are you sharing it because it contains practical information important to people's health, or just to have something to talk about, a vehicle for your opinions? Be safe, practice social and cyber distancing, and wash your hands.

 

This article originally appeared as part of a series of original articles on the COVID-19 pandemic by the Center for Inquiry as part of its Coronavirus Resource Center, created to help the public address the crisis with evidence-based information. You can find it HERE. 

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

Apr 232020
 

My article examines uncertainties in COVID-19 data, from infection to death rates. While some complain that pandemic predictions have been exaggerated for social or political gain, that's not necessarily true; journalism always exaggerates dangers, highlighting dire predictions. But models are only as good as the data that goes into them, and collecting valid data on disease is inherently difficult. People act as if they have solid data underlying their opinions, but fail to recognize that we don't have enough information to reach valid conclusions…

You can read Part 1 Here.

 

Certainty and the Unknown Knowns

The fact that our knowledge is incomplete doesn’t mean that we don’t know anything about the virus; quite the contrary, we have a pretty good handle on the basics including how it spreads, what it does to the body, and how the average person can minimize their risk. 

Humans crave certainty and binary answers, but science can't offer them. The truth is that we simply don't know what will happen or how bad it will get. For many aspects of COVID-19, we don't have enough information to make accurate predictions. In a New York Times interview, one victim of the disease reflected on the measures being taken to stop its spread: "We could look back at this time in four months and say, 'We did the right thing'—or we could say, 'That was silly … or we might never know.'" 

There are simply too many variables, too many factors involved. Even hindsight won't be 20/20; instead it will be seen by many through a partisan prism. We can never know alternative history or what would have happened; it's like the concern over the "Y2K bug" two decades ago. Was it all over nothing? We don't know, because steps were taken to address the problem. 

But uncertainty has been largely ignored by pundits and social media “experts” alike who routinely discuss and debate statistics while glossing over—or entirely ignoring—the fact that much of it is speculation and guesswork, unanchored by any hard data. It’s like hotly arguing over what exact time a great-aunt’s birthday party should be on July 4, when all she knows is that she was born sometime during the summer. 

So, if we don’t know, why do people think they know or act as if they know? 

Part of this is explained by what in psychology is known as the Dunning-Kruger effect: “in many areas of life, incompetent people do not recognize—scratch that, cannot recognize—just how incompetent they are … . Logic itself almost demands this lack of self-insight: For poor performers to recognize their ineptitude would require them to possess the very expertise they lack. To know how skilled or unskilled you are at using the rules of grammar, for instance, you must have a good working knowledge of those rules, an impossibility among the incompetent. Poor performers—and we are all poor performers at some things—fail to see the flaws in their thinking or the answers they lack.” 

Most people don’t know enough about epidemiology, statistics, or research design to have a good idea of how valid disease data and projections are. And of course, there’s no reason they would have any expertise in those fields, any more than the average person would be expected to have expertise in dentistry or theater. But the difference is that many people feel confident enough in their grasp of the data—or, often, confident enough in someone else’s grasp of the data, as reported via their preferred news source—to comment on it and endorse it (and often argue about it).  

Psychology of Uncertainty

Another factor is that people are uncomfortable admitting when they don’t know something or don’t have enough information to make a decision. If you’ve taken any standardized multiple-choice tests, you probably remember that some of the questions offered a tricky option, usually after three or four possibly correct specific answers. This is some version of “The answer cannot be determined from the information given.” This response (usually Option D) is designed in part to thwart guessing and to see when test-takers recognize that the question is insoluble or the premise incomplete. 

The principle applies widely in the real world. It’s difficult for many people—and especially experts, skeptics, and scientists—to admit they don’t know the answer to a question. Even if it’s outside our expertise, we often feel as if not knowing (or even not having a defensible opinion) is a sign of ignorance or failure. Real experts freely admit uncertainty about the data; Dr. Anthony Fauci has been candid about what he knows and what he doesn’t, responding for example when asked how many people could be carriers, “It’s somewhere between 25 and 50%. And trust me, that is an estimate. I don’t have any scientific data yet to say that. You know when we’ll get the scientific data? When we get those antibody tests out there.” 

Yet there are many examples in our everyday lives when we simply don’t have enough information to reach a logical or valid conclusion about a given question, and often we don’t recognize that fact. We routinely make decisions based on incomplete information, and unlike on standardized tests, in the real world of messy complexities there are not always clear-cut objectively verifiable answers to settle the matter. 

This is especially true online and in the context of a pandemic. Few people bother to chime in on social media discussions or threads to say that there's not enough information given in the original post to reach a valid conclusion. People blithely share information and opinions without having the slightest clue as to whether they're true or not. But recognizing that we don't have enough information to reach a valid conclusion demonstrates a deeper, more nuanced understanding of the issue. Noting that a premise needs more evidence or information to complete a logical argument and reach a valid conclusion is a form of critical thinking.

One element of conspiracy thinking is that those who disagree are either stupid (that is, gullible “sheeple” who believe and parrot everything they see in the news—usually specifically the “mainstream media” or “MSM”) or simply lying (experts and journalists across various media platforms who know the truth but are intentionally misleading the public for political or economic gain). This “If You Disagree with Me, Are You Stupid or Dishonest?” worldview has little room for uncertainty or charity and misunderstands the situation. 

The appropriate position to take on most coronavirus predictions is one of agnosticism. It’s not that epidemiologists and other health officials have all the data they need to make good decisions and projections about public health and are instead carefully considering ways to fake data to deceive the public and journalists. It’s that they don’t have all the data they need to make better predictions, and as more information comes in, the projections will get more accurate. The solution is not to vilify or demonize doctors and epidemiologists but instead to understand the limitations of science and the biases of news and social media.

 

This article first appeared at the Center for Inquiry Coronavirus Resource Page; please check it out for additional information. 

 

 

Apr 202020
 

My new article examines uncertainties in COVID-19 data, from infection to death rates. While some complain that pandemic predictions have been exaggerated for social or political gain, that's not necessarily true; journalism always exaggerates dangers, highlighting dire predictions. But models are only as good as the data that goes into them, and collecting valid data on disease is inherently difficult. People act as if they have solid data underlying their opinions, but fail to recognize that we don't have enough information to reach valid conclusions…

 

There’s nothing quite like an international emergency—say, a global pandemic—to lay bare the gap between scientific models and the real world, between projections and speculations and what’s really going on in cities and hospitals around the world. 

A previous article discussed varieties of information about COVID-19, including information that’s true; information that’s false; information that’s trivially true (true but unhelpful); and speculation, opinion, and conjecture. Here we take a closer look at the role of uncertainty in uncertain times. 

Dueling Projections and Predictions

The record of wrong predictions about the coronavirus is long and grows by the hour. Around Valentine’s Day, the director of policy and emergency preparedness for the New Orleans health department, Sarah Babcock, said that Mardi Gras celebrations two weeks later should proceed, predicting that “The chance of us getting someone with coronavirus is low.” That projection was wrong, dead wrong: a month later the city would have one of the worst outbreaks of COVID-19 in the country, with correspondingly high death rates. Other projections have overestimated the scale of infections, hospitalizations, and/or deaths. 

It’s certainly true that many, if not most, news headlines about the virus are scary and alarmist; and that many, if not most, projections and predictions about COVID-19 are wrong to a greater or lesser degree. There’s a plague of binary thinking, and it’s circulating in many forms. One was addressed in the previous article: that of whether people are underreacting or overreacting to the virus threat. A related claim involves a quasi-conspiracy that news media and public health officials are deliberately inflating COVID-19 statistics. Some say it’s being done to make President Trump look incompetent at handling the pandemic; others say it’s being done on Trump’s behalf to justify coming draconian measures including Big Brother tracking. 

Many have suggested that media manipulation is to blame, claiming that numbers are being skewed by those with social or political agendas. There’s undoubtedly a grain of truth to that—after all, information has been weaponized for millennia—but there are more parsimonious (and less partisan) explanations for much of it, rooted in critical thinking and media literacy.

The Media Factors

In many cases, it’s not experts and researchers who skew information but instead news media who report on them. News and social media, by their nature, highlight the aberrant extremes. Propelled by human nature and algorithms, they selectively show the worst in society—the mass murders, the dangers, the cruelty, the outrages, and the disasters—and rarely profile the good. This is understandable, as bad things are inherently more newsworthy than good things.

To take one example, social media was recently flooded with photos of empty store shelves due to hoarding, and newscasts depicted long lines at supermarkets. They're real enough—but are they representative? Photos of fully stocked markets and calm shopping aren't newsworthy or share-worthy, so they're rarely seen (until recently, when they in turn became unusual). The same happens when news media covers natural disasters; journalists (understandably) photograph and film the dozens of homes that were flooded or wrenched apart by a tornado, not the tens or hundreds of thousands of neighboring homes that were unscathed. This isn't some conspiracy by the news media to emphasize the bad; it's just the nature of journalism. But it often leads to a public that overestimates how terrible the world—and the people in it—really is, and it fuels fear and panic. 

Another problem is news stories (whether about dire predictions or promising new drugs or trends) that are reported and shared without sufficient context. An article in Health News Review discussed the problem of journalists stripping out important caveats: "Steven Woloshin, MD, co-director of the Center for Medicine and Media at The Dartmouth Institute, said journalists should view preprints [rough drafts of journal studies that have not been published nor peer-reviewed] as 'a big red flag' about the quality of evidence, similar to an animal study that doesn't apply to humans or a clinical trial that lacks a control group. 'I'm not saying the public doesn't have the right to know this stuff,' Woloshin said. 'But these things are by definition preliminary. The bar should be really high' for reporting them. In some cases, preprints have shown to be completely bogus … . Readers might not heed caveats about 'early' or 'preliminary' evidence, Woloshin said. 'The problem is, once it gets out into the public it's dangerous because people will assume it's true or reliable.'"

One notable example of an unvetted COVID-19 news story circulating widely “sprung from a study that ran in a journal. The malaria medicine hydroxychloroquine, touted by President Trump as a potential ‘cure,’ gained traction based in part on a shaky study of just 42 patients in France. The study’s authors concluded that the drug, when used in combination with an antibiotic, decreased patients’ levels of the virus. However, the findings were deemed unreliable due to numerous methodological flaws. Patients were not randomized, and six who received the treatment were inappropriately dropped from the study.” Recently, a Brazilian study of the drug was stopped when some patients developed heart problems. 

Uncertainties in Models and Testing

In addition to media biases toward sensationalism and simplicity, experts and researchers often have limited information to work with, especially in predictions. There are many sources of error in the epidemiological data about COVID-19. Models are only as good as the information that goes into them; as they say: Garbage In, Garbage Out. This is not to suggest that all the data is garbage, of course, so it’s more a case of Incomplete Data In, Incomplete Data Out. As a recent article noted, “Models aren’t perfect. They can generate inaccurate predictions. They can generate highly uncertain predictions when the science is uncertain. And some models can be genuinely bad, producing useless and poorly supported predictions … .” But as to the complaint that the outbreak hasn’t been as bad as some earlier models predicted, “earlier projections showed what would happen if we didn’t adopt a strong response, while new projections show where our current path sends us. The downward revision doesn’t mean the models were bad; it means we did something.”

One example of the uncertainty of data is the number of COVID-19 deaths in New York City, one of the hardest-hit places. According to The New York Times, “the official death count numbers presented each day by the state are based on hospital data. Our most conservative understanding right now is that patients who have tested positive for the virus and die in hospitals are reflected in the state’s official death count.” 

All well and good, but “The city has a different measure: Any patient who has had a positive coronavirus test and then later dies—whether at home or in a hospital—is being counted as a coronavirus death, said Dr. Oxiris Barbot, the commissioner of the city’s Department of Health. A staggering number of people are dying at home with presumed cases of coronavirus, and it does not appear that the state has a clear mechanism for factoring those victims into official death tallies. Paramedics are not performing coronavirus tests on those they pronounce dead. Recent Fire Department policy says that death determinations on emergency calls should be made on scene rather than having paramedics take patients to nearby hospitals, where, in theory, health care workers could conduct post-mortem testing. We also don’t really know how each of the city’s dozens of hospitals and medical facilities are counting their dead. For example, if a patient who is presumed to have coronavirus is admitted to the hospital, but dies there before they can be tested, it is unclear how they might factor into the formal death tally. There aren’t really any mechanisms in place for having an immediate, efficient method to calculate the death toll during a pandemic. Normal procedures are usually abandoned quickly in such a crisis.”

People who die at home without having been tested of course won’t show up in the official numbers: “Counting the dead after most disasters—a plane crash, a hurricane, a gas explosion, a terror attack or a mass shooting, for example—is not complex. A virus raises a whole host of more complicated issues, according to Michael A.L. Balboni, who about a decade ago served as the head of the state’s public safety office. ‘A virus presents a unique set of circumstances for a cause of death, especially if the target is the elderly, because of the presence of comorbidities,’ he said—multiple conditions. For example, a person with COVID-19 may end up dying of a heart attack. ‘As the number of decedents increase,’ Mr. Balboni said, ‘so does the inaccuracy of determining a cause of death.’”

So while it might seem inconceivably Dickensian (or suspicious) to some that in 2020 quantifying something as seemingly straightforward as death is complicated, this is not evidence of deception or anyone “fudging the numbers” but instead an ordinary and predictable lack of uniform criteria and reporting standards. The international situation is even more uncertain; different countries have different guidelines, making comparisons difficult. Not all countries have the same criterion for who should be tested, for example, or even have adequate numbers of tests available. 

In fact, there’s evidence suggesting that if anything the official numbers are likely undercounting the true infections. Analysis of sewage in one metropolitan area in Massachusetts that officially has fewer than 500 confirmed cases revealed that there may be exponentially more undetected cases. 

Incomplete Testing

Some people have complained that everyone should be tested, suggesting that only the rich are being tested for the virus. There's a national shortage of tests, and in fact many in the public are being tested (about 1 percent of the public so far), but such complaints rather miss a larger point: Testing is of limited value to individuals.  

Testing should be done in a coordinated way, starting not with the general public but instead with the most seriously ill. Those patients should be quarantined until the tests come back, and if the result is positive, further measures should be taken including tracking down people who that patient may have come in contact with; in Wuhan, for example, contacts were asked to check their temperature twice a day and stay at home for two weeks. 

But testing people who may be perfectly healthy is a waste of very limited resources and testing kits; most of the world is asymptomatic for COVID-19. Screening the asymptomatic public is neither practical nor possible. Furthermore, though scientists are working on creating tests that yield faster and more accurate results, the ones so far have taken days. Because many people who carry the virus show no symptoms (or mild symptoms that mimic colds or even seasonal allergies), it's entirely possible that a person could have been infected between the time they took the test and the time they got a negative result back. So it may have been true that a few days or a week earlier they hadn't been infected, but they are now and don't know it because they are asymptomatic or presymptomatic. The point is not that the tests are flawed or that people should be afraid, but instead that testing, by itself, is of little value to the patient because of these uncertainties. If anything, it could provide a false sense of security and put others at risk. 

As Dr. Paul Offit noted in a recent interview, testing for the virus is mainly of use to epidemiologists. "From the individual level, it doesn't matter that much. If I have a respiratory infection, stay home. I don't need to find out whether I have COVID-19 or not. Stay home. If somebody gets their test and they find out they have influenza, they'll be relieved, as compared to if they have COVID-19, where they're going to assume they're going to die no matter how old they are." 

If you’re ill, on a practical level—unless you’re very sick or at increased risk, as mentioned above—it doesn’t really matter whether you have COVID-19 or not because a) there’s nothing you can do about it except wait it out, like any cold or flu; and b) you should take steps to protect others anyway. People should assume that they are infected and act as they would for any communicable disease: isolate, get rest, avoid unnecessary contact with others, wash hands, don’t touch your face, and so on. 

 

A version of this article appeared on the CFI Coronavirus Response Page, here.

Part 2 will be posted in a few days.

Apr 182020
 

The new episode of Squaring the Strange is now out!

We chat about various Covid-related topics and then dive into a few examples of bad or misleading polls. First we go over a couple that don’t really set off alarm bells, like whether beards are sexy or what determines people’s beer-buying habits.

Then I dissect some bad reporting on polls and surveys that relate to much more important topics like Native American discrimination or the Holocaust, and we see how a bit of media literacy on how polls can be twisted around is a vital part of anyone’s skeptical toolbox.

 

Listen HERE!

 

 

Apr 122020
 

In February a headline widely shared on social media decried poor reviews of the new film Birds of Prey and blamed them on male film critics who hated the film for its real or perceived feminist messages (and/or their own skewed expectations; it's not clear). The article, by Sergio Pereira, was headlined "Birds of Prey: Most of the Negative Reviews Are from Men."

The idea that the film was getting bad reviews because hordes of trolls or misogynists hated it was certainly plausible, and it was widely discussed, for example, in the case of the all-female Ghostbusters reboot a few years ago. As a media literacy educator and a film buff, I was curious to read more, and when I saw it on a friend's Facebook wall I duly did what the writer wanted me (and everyone else) to do: I clicked on the link.

I half expected the article to contradict its own headline (a frustratingly common occurrence, even in mainstream news media stories), but in this case Pereira’s text accurately reflected its headline: “Director Cathy Yan’s Birds of Prey (and the Fantabulous Emancipation of One Harley Quinn), starring Margot Robbie as the Clown Princess of Crime, debuted to a Fresh rating on Rotten Tomatoes. While the score has dropped as more reviews pour in, the most noticeable thing is that the bulk of the negative reviews come from male reviewers. Naturally, just because the film is a first for superhero movies—because it’s written by a woman, directed by a woman and starring a mostly all-female cast—doesn’t absolve it from criticism. It deserves to be judged for both its strengths and weaknesses like any other piece of art. What is concerning, though, is how less than 10% of the negative reviews are from women.” In the article and later on Twitter Pereira attributed the negative reviews to an alleged disparity between what male film reviewers expected from the film and what they actually saw, describing it as “literally… where a bunch of fools got upset about the movie they THOUGHT it was, instead of what it ACTUALLY was.”

I was reminded of the important skeptical dictum that before trying to explain why something is the case, be sure that it is the case; in other words question your assumptions. This is a common error on social media, in journalism, and of course in everyday life. We shouldn’t just believe what people tell us—especially online. To be fair, the website was CBR.com (formerly known as Comic Book Resources) and not, for example, BBC News or The New York Times. It’s pop culture news, but news nonetheless.

Curious to see what Pereira was describing, I clicked the link to the Rotten Tomatoes listing and immediately knew that something wasn't right. The film had an 80% Fresh rating—meaning that most of the reviews were positive. In fact according to MSNBC, "The film charmed critics [and is] the third-highest rating for any movie in the DCEU, just behind Wonder Woman and Shazam." Birds of Prey may not have lived up to expectations, but the film was doing fairly well, and hardly bombing—because of male film reviewers or for any other reason.

I know something about film reviewing; I've been a film reviewer since 1994, and I've attended dozens of film festivals, both as an attendee and as a journalist. I've also written and directed two short films and taken screenwriting courses. One thing I've noticed is that for whatever reason most film critics are male (a fact I double-checked, learning that the field is about 80% male). So, doing some very basic math in my head, I knew there was something very wrong with the headline—and not just the headline, but the entire premise of Pereira's article.

Here’s a quick calculation: Say there are 100 reviewers. 80 of them are men; 20 are not. If half of each gender give it a positive review, that’s 40 positive reviews from men and 10 from women, for a total of 50% approval (or “Fresh”) rating. If half of men (40) and 100% of the women (20) gave it a positive review, that’s a 60% Fresh rating. If three-quarters of men (60) and 100% of the women (20) gave it a positive review, that’s an 80% Fresh rating.

Birds of Prey had an 80% Fresh rating. So even if every single female reviewer gave the film a positive review—which we know didn't happen just from a glance at the reviews on RottenTomatoes—at least three out of four men must have given it a positive review. Therefore it might be statistically true that "most of the negative reviews are from men," but the complementary claim is equally true: most of the positive reviews are also from men, simply because there are more male reviewers. The article's claim that "the bulk of the negative reviews come from male reviewers" is misleading at best and factually wrong at worst.
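Since this is just arithmetic, here is a minimal sketch in Python of the same back-of-the-envelope check. The inputs (100 reviewers, 80 percent of them men, an 80 percent Fresh score) are the illustrative assumptions used above, not data scraped from Rotten Tomatoes.

```python
# Back-of-the-envelope check: if most reviewers are men and the overall
# score is high, how many of the male reviews must have been positive?
# These inputs are illustrative assumptions, not Rotten Tomatoes data.
total_reviewers = 100
male_reviewers = 80                     # roughly 80% of film critics are men
female_reviewers = total_reviewers - male_reviewers
fresh_score = 0.80                      # overall "Fresh" (positive) rating

positive_reviews = round(total_reviewers * fresh_score)  # 80 positive reviews

# Even if every female reviewer were positive, the remaining positive
# reviews must have come from men.
min_positive_from_men = positive_reviews - female_reviewers
share = min_positive_from_men / male_reviewers
print(f"At least {min_positive_from_men} of {male_reviewers} male reviews "
      f"({share:.0%}) must have been positive.")
```

Run with those assumptions, it prints that at least 60 of the 80 male reviews (75%) must have been positive for the overall score to reach 80 percent, which is consistent with the conclusion above.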

I also thought it strange that Pereira didn't specify which male reviewers he was talking about. By "fools" did he mean that professional film critics such as Richard Roeper and Richard Brody were overwhelmingly writing scathing reviews of the film? Or did he mean reviews from random male film fans? And if the latter, how did he determine the gender of the anonymous reviewers? It was possible that his statistic was correct, but readers would need much more information about where he got his numbers. Did he take the time to gather data, create a spreadsheet, and do some calculations? Did he skim the reviews for a minute and do a rough estimate? Did he just make it up?

I was wary of assuming the burden of proof regarding this claim. After all, Sergio Pereira was the one who claimed that most of the negative reviews were by men. The burden of proof is always on the person making the claim; it’s not up to me to show he’s wrong, but up to him to show he’s right. Presumably he got that number from somewhere—but where?

I contacted Pereira via Twitter and asked him how he arrived at the calculations. He did not reply, so I later contacted CBR directly, emailing the editors with a concise, polite note saying that the headline and article seemed to be false, and asking them for clarification: "He offers no information at all about how he determined that, nor that less than 10% of the negative reviews are from women. The RottenTomatoes website doesn't break reviewers down by gender (though names and photos offer a clue), so Pereira would have to go through one by one to verify each reviewer's gender. It's also not clear whether he's referring to Top Reviewers or All Reviewers, which are of course different datasets. I spent about 20 minutes skimming the Birds of Prey reviews and didn't see the large gender imbalance reported in your article (and didn't have hours to spend verifying Pereira's numbers, which I couldn't do anyway without knowing what criterion he used). Any clarification about Pereira's methodology would be appreciated, thank you."  They also did not respond.

Since neither the writer nor the editors would respond, I resignedly took a stab at trying to figure out where Pereira got his numbers. I looked at the Top Critics and did a quick analysis. I found 41 of them whose reviews appeared at the time: 26 men and 15 women. As I suspected, men had indeed written the statistical majority of both the positive and negative reviews.

Reactions

On my friend’s Facebook page where I first saw the story being shared I posted a comment noting what seemed to be an error, and offering anyone an easy way to assess whether the headline was plausible: “A quick-and-dirty way to assess whether the headline is plausible is to note that 1) 80% of film critics are male, and that 2) Birds of Prey has a 80% Fresh rating, with 230 Fresh (positive) and 59 Rotten (negative). So just glancing at it, with 80% of reviewers male, how could the film possibly have such a high rating if most of the men gave it negative reviews?”

The reactions from women on the post were interesting—and gendered: one wrote, “did you read the actual article or just the headline? #wellactually,” and “haaaaard fucking eyeroll* oh look y’all. The dude who thinks he’s smarter than the author admits its maybe a little perhaps possible that women know what they’re talking about.”

The latter comment was puzzling, since Sergio Pereira is a man. It wasn’t a man questioning whether women knew what they were talking about; it was a man questioning whether another man’s harmful stereotypes about women highlighted in his online article were true. I was reminded of the quote attributed to Mark Twain: “It’s easier to fool people than to convince them they’ve been fooled.” Much of the reaction I got was critical of me for questioning the headline and the article. I got the odd impression that some thought I was somehow defending the supposed majority male film critics who didn’t like the film, which was absurd. I hadn’t (and haven’t) seen the film and have no opinion about it, and couldn’t care less whether most of the male critics liked or didn’t like the film. My interest is as a media literacy educator and someone who’s researched misleading statistics.

To scientists, journalists, and skeptics, asking for evidence is an integral part of the process of parsing fact from fiction, true claims from false ones. If you want me to believe a claim—any claim, from advertising claims to psychic powers, conspiracy theories to the validity of repressed memories—I’m going to ask for evidence. It doesn’t mean I think (or assume) you’re wrong or lying, it just means I want a reason to believe what you tell me. This is especially true for memes and factoids shared on social media and designed to elicit outrage or scorn.

The problem is that when such people do occasionally encounter someone who is sincerely trying to understand an issue or get to the bottom of a question, their knee-jerk reaction is often to assume the worst about them. They are blinded by their own biases, and they project those biases onto others. This is especially true when the subject is controversial, such as race, gender, or politics. To them, the only reason a person would question a claim is to discredit that claim, or a larger narrative it's being offered in support of.

Of course that’s not true; people should question all claims, and especially claims that conform to their pre-existing beliefs and assumptions; those are precisely the ones most likely to slip under the critical thinking radar and become incorporated into your beliefs and opinions. I question claims from across the spectrum, including those from sources I agree with. To my mind the other approach has it backwards: How do you know whether to believe a claim if you don’t question it?

If the reviews are attributable to sexism or misogyny due to feminist themes in the script—instead of, for example, lackluster acting, clunky dialogue, lack of star power, an unpopular title (the film was renamed during its release, a very unusual marketing move), or any number of other factors unrelated to its content—then presumably that same effect would be clear in other similar films.

Ironically, another article on CBR by Nicole Sobon published a day earlier—and linked to in Sergio Pereira’s piece—offers several reasons why Birds of Prey wasn’t doing as well at the box office, and misogyny was conspicuously not among them: “Despite its critical success, the film is struggling to take flight at the box office…. One of the biggest problems with Birds of Prey was its marketing. The trailers did a great job of reintroducing Robbie’s Quinn following 2017’s Suicide Squad, but they failed to highlight the actual Birds of Prey. Also working against it was the late review embargo. It’s widely believed that when a studio is confident in its product, it will hold critics screenings about two weeks before the film’s release, with reviews following shortly afterward to buoy audience anticipation and drive ticket sales. However, Birds of Prey reviews didn’t arrive until three days before the film’s release. And, while the marketing finally fully kicked into gear by that point, for the general audience, it might’ve been too late to care. Especially when Harley Quinn’s previous big-screen appearance was in a poorly received film.”

The Cinematic Gender Divide

A gender divide in positive versus negative reviews of ostensibly feminist films (however you want to measure that, whether by the Bechdel Test or some other way—such as an all-female cast or female writer/directors) is eminently provable. It's not a subject that I've personally researched and quantified, but since Pereira didn't reference any data in his article, I did some research on it.

For example Salon did a piece on gender divisions in film criticism, though not necessarily finding that it was rooted in sexism or a reaction to feminist messages: “The recent Ghostbusters reboot, directed by Paul Feig, received significantly higher scores from female critics than their male counterparts. While 79.3 percent of women who reviewed the film gave it a positive review, just 70.8 percent of male critics agreed with them. That’s a difference of 8.5 percent… In total, 84 percent of the films surveyed received more positive reviews from female reviewers than from men. The movies that showed the greatest divide included A Walk to Remember, the Nicholas Sparks adaptation; Twilight, the 2008 vampire romance; P.S. I Love You, a melodrama about a woman (Hilary Swank) grieving the loss of her partner; Divergent, the teen dystopia; and Divine Secrets of the Ya-Ya Sisterhood… Men tended to dislike Young Adult literary adaptations and most films marketed to teenage girls. Pitch Perfect, which was liked by 93.8 percent of female critics, was rated much lower by men—just 76.9 percent of male reviewers liked it.” (There was nothing supporting Pereira’s assertion that critics, male or female, didn’t like films marketed to the opposite gender because of a perceived gap between what the reviewers expected from a film versus what the film delivered.)

The phrasing "just 76.9 percent of male reviewers liked Pitch Perfect" of course invites an ambiguous comparison (how many should have liked the film? 90%? 95%? 100%?). More to the point, if over three-quarters of men liked the obviously female-driven film Pitch Perfect, that rather contradicts Pereira's thesis. In fact Pitch Perfect has an 80% Fresh rating on RottenTomatoes—exactly the same score as Birds of Prey. We have two female-driven films with the majority of male reviewers giving each a positive review—yet Pereira suggests that male reviewers pilloried Birds of Prey.

Perpetuating Harmful Stereotypes

Journalists making errors and writing clickbait headlines based on those errors is nothing new, of course. I’ve written dozens of media literacy articles about this sort of thing. As I’ve discussed before, the danger is that these articles mislead people, and reinforce harmful beliefs and stereotypes. In some cases I’ve researched, misleading polls and surveys create the false impression that most Americans are Holocaust deniers—a flatly false and highly toxic belief that can only fuel fears of anti-Semitism (and possibly comfort racists). In other cases these sorts of headlines exaggerate fear and hatred of the transgender community.

As noted, Pereira’s piece could have been titled, “Birds of Prey: Most of the Positive Reviews Are from Men.” That would have empowered and encouraged women—but gotten fewer outrage clicks.

In many cases what people think other people think about the world is just as important as what they personally think. This is due to what's called the third-person effect, or pluralistic ignorance. We are of course intimately familiar with our own likes and desires—but where do we get our information about the 99.99% of the world we don't and can't directly experience or evaluate? When it comes to our understanding and assumptions about the rest of the world, our sources of information quickly dwindle. Outside of a small circle of friends and family, much of the information about what others in the world think and believe comes from the media, specifically social and news media. These sources don't faithfully represent the outside world; instead they distort it in predictable and systematic ways, always highlighting the bad and the outrageous. The media magnifies tragedy, exploitation, sensationalism, and bad news, and thus we assume that others embody and endorse those traits.

We’re seeing this at the moment with shortages of toilet paper and bottled water in response to Covid-19 fears. Neither are key to keeping safe or preventing the spread of the virus, yet people are reacting because other people are reacting. There’s a shortage because people believe there’s a shortage—much in the way that the Kardashians are famous for being famous. In much the same way, when news and social media exaggerate (or in some cases fabricate) examples of toxic behavior, it creates the false perception that such behavior is more pervasive (and widely accepted) than it actually is. Whether or not male film reviewers mostly hated Birds of Prey as Pereira suggestedand they didn’t—the perception that they did can itself cause harm.

I don’t think it was done intentionally or with malice. But I hope Pereira’s piece doesn’t deter an aspiring female filmmaker who may read his widely-shared column and assume that no matter how great her work is, the male-dominated film critic field will just look for ways to shut her out and keep her down merely because of her gender.

There certainly are significant and well-documented gender disparities in the film industry, on both sides of the camera, from actor pay disparity to crew hiring. But misogynist men hating on Birds of Prey simply because it’s a female-led film isn’t an example of that. I note with some irony that Pereira’s article concludes by saying that “Birds of Prey was meant to be a celebration, but it sadly experienced the same thing as every other female-driven film: a host of negativity about nothing.” That “host of negativity” is not reflected in male film reviews but instead in Sergio Pereira’s piece. His CBR article is itself perpetuating harmful stereotypes about female-driven films, which is unfortunate given the marginalization of women and minorities in comic book and gaming circles.

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

A longer version of this article appeared on my Center for Inquiry blog; you can read it here. 

Apr 102020
 

In recent months there’s been plenty of rumors, myths, and misinformation about the newest coronavirus pandemic, Covid-19. I’ve written several pieces on the topic, tackling both intentional and accidental bogus information. Some of the most pernicious, of course, involves misinformation about healthcare decisions (such as fake cures), but there are others.

One of the most curious is the recent resurrection of a prediction by Sylvia Browne. In her 2008 book End of Days, Browne (who died in 2013) predicted that “In around [sic] 2020 a severe pneumonia-like illness will spread throughout the globe, attacking the lungs and the bronchial tubes and resisting all known treatments. Almost more baffling than the illness itself will be the fact that it will suddenly vanish as quickly as it arrived, attack again ten years later, and then disappear completely.”

This led to many on social media assuming that Browne had accurately predicted the Covid-19 outbreak, and no less a respected authority than Kim Kardashian shared such posts. One news writer asked, “Doesn’t it sound very similar to this novel coronavirus and the disease, Covid-19? Be it the nature of the illness, the year mentioned or the part about the resistance to treatments—the similarity with coronavirus is uncanny… Netizens are absolutely stumped with the reference of coronavirus outbreak in the book.”

While most of the commentary seems to take the proclamations about Browne's prediction at face value, there were a few skeptics. The website Snopes did a short piece explaining the topic, giving it a "Mixture" rating (part truth, part fiction)—which is rather generous, as I'll explain.

A Closer Look at Browne’s ‘Prediction’

Skeptics such as myself, Joe Nickell, Susan Gerbic, Massimo Polidoro, James Randi, and others have a long history of taking a closer look at psychic claims. Let’s revisit the passage in question: “In around 2020 [sic] a severe pneumonia-like illness will spread throughout the globe, attacking the lungs and the bronchial tubes and resisting all known treatments. Almost more baffling than the illness itself will be the fact that it will suddenly vanish as quickly as it arrived, attack again ten years later, and then disappear completely.”

There’s a lot packed into these two sentences, so let’s parse this out. First, we have an indefinite date range (“in around 2020”), which depends on how loosely you interpret the word “around.” Browne doesn’t write “In 2020,” which would narrow it down to one calendar year; she writes “in around,” a grammatically awkward construction that suggests to the editor in me that she (or her editor) added the word “around” in a late draft to make it more general—a typical psychic technique. What “around 2020” means depends on subjective criteria, and could plausibly include a range of plus or minus three or more years: most people would probably agree that 2017, 2018, 2019, 2021, 2022, and 2023 are “around” 2020. Using this range, Browne’s prediction spans seven (or more) years—well over half a decade.

So what did Browne predict would happen sometime during those years? “A severe pneumonia-like illness.” Covid-19 is not “a severe pneumonia-like illness,” though it can in some cases lead to pneumonia. Most of those infected (about 80%) have mild symptoms and recover just fine, and the disease has a mortality rate of between 2% and 4%. There are two other coronavirus diseases—Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS)—that “can cause severe respiratory infections,” but Covid-19 is not in their league; both SARS and MERS are far more deadly.

Where will it go, according to Browne? It “will spread throughout the globe, attacking the lungs and the bronchial tubes.” Covid-19 has now indeed spread throughout the globe, though the phrase “attacking the lungs and the bronchial tubes” isn’t a prediction but merely describes what any “pneumonia-like illness” does.

But Browne also offers another specific characteristic of this disease: that of “resisting all known treatments.” This also does not describe Covid-19, which doesn’t “resist all known treatments”; in fact doctors know how to treat the disease (vaccination and quarantine are very different measures), and the treatment is essentially the same as for influenza and other similar respiratory infections. There’s nothing unique about Covid-19’s resistance to treatment.

In the second sentence she further describes the illness: “Almost more baffling than the illness itself will be the fact that it will suddenly vanish as quickly as it arrived, attack again ten years later, and then disappear completely.” This is false, at least as of now. Covid-19 has not “suddenly vanished as quickly as it arrived,” and even if it eventually does, its emergence pattern would have to be compared with typical epidemiological data to know whether it’s “baffling.” Infectious diseases (especially respiratory illnesses) follow predictable patterns, and modeling outbreaks is a whole branch of public health. Given a normal distribution (bell curve) of cases, it would not necessarily be “baffling” if the disease subsided as quickly as it arose. What would be astonishing is if it departed from that pattern entirely: if, over the course of a week or two, infection rates plummeted inexplicably and no new infections were reported at all. That would be an amazing psychic prediction. Furthermore, note that the prediction couldn’t even be mostly validated until 2030, since it references a recurrence of the disease ten years later—a neat trick for a prediction made (or at least made public) more than two decades earlier. And as to whether it would “then disappear completely,” I suppose that could be determined true or false at some point around the end of time, so expect a follow-up piece from me then.
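To illustrate why a subsiding outbreak isn’t inherently baffling, here is a minimal sketch (my own illustration, not anything from Browne’s book or from real outbreak data) of the textbook SIR outbreak model; the population size, transmission rate, and recovery rate below are arbitrary assumptions chosen only to draw the familiar epidemic curve.

```python
# Minimal SIR (Susceptible-Infected-Recovered) sketch -- illustrative only.
# All parameter values are invented assumptions, not real Covid-19 data.

def sir_daily_new_cases(population=1_000_000, beta=0.3, gamma=0.1, days=300):
    s, i, r = population - 1.0, 1.0, 0.0
    daily_new = []
    for _ in range(days):
        new_infections = beta * s * i / population  # people infected today
        recoveries = gamma * i                      # people recovering today
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        daily_new.append(new_infections)
    return daily_new

curve = sir_daily_new_cases()
peak_day = max(range(len(curve)), key=curve.__getitem__)
print(f"New infections peak on day {peak_day}, then decline toward zero")
# The curve rises, peaks, and falls -- the familiar bell-shaped epidemic
# curve. A fade-out after the peak is the expected pattern, not a mystery.
```

Nothing in this sketch predicts the future, of course; the point is simply that rise-and-fall is what standard epidemic models produce by default.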

So we have a two-sentence prediction written in 2008 by a convicted felon with a long track record of failures. Half of the prediction (the second sentence) has demonstrably not happened. The other half of the prophecy describes an infectious respiratory illness that does not resemble Covid-19 in its particulars and that would happen within a few years of 2020. At best, maybe one-sixth of what she said is accurate, depending again on how much latitude you’re willing to give her in terms of dates and vague descriptions. Anyone who finds this prediction to be astonishingly accurate should contact me for information on a bridge I happen to have for sale. Keep in mind that in her books, television appearances, interviews, and elsewhere over the course of her career, Browne made many thousands of predictions; the fact that this one happened to possibly, maybe, be partly right is meaningless. People love a mystery, and retrofitting vague predictions (whether from Browne, Nostradamus, or anyone else) to current events is an easy way to manufacture one.
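As a back-of-the-envelope illustration of that last point, here is a small sketch of my own with made-up numbers (not Browne’s actual output or hit rate): if a psychic issues thousands of vague predictions over a career, a handful of apparent “hits” are expected by chance alone.

```python
# Illustrative only: invented numbers, not Browne's actual record.
import random

random.seed(1)

N_PREDICTIONS = 5000   # assumed number of vague predictions over a career
P_CHANCE_FIT = 0.002   # assumed chance any one loosely "fits" a later event

hits = sum(random.random() < P_CHANCE_FIT for _ in range(N_PREDICTIONS))
print(f"Apparent hits by chance alone: {hits} of {N_PREDICTIONS}")
# Expected hits = 5000 * 0.002 = 10. One loose match to a real outbreak
# is exactly what chance predicts, so it carries no evidential weight.
```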

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

 

Apr 082020
 

As the world enters another month dealing with the deadly coronavirus that has dominated headlines, killed hundreds, and sickened thousands, misinformation is running rampant. For many, the medical and epidemiological aspects of the outbreak are the most important and salient elements, but there are other prisms through which we can examine this public health menace. 

There are many facets to this outbreak, including economic damage, cultural changes, and so on. However, my interest and background is in media literacy, psychology, and folklore (including rumor, legend, and conspiracy), and my focus here is a brief overview of some of the lore surrounding the current outbreak. Before I get into the folkloric aspects of the disease, let’s review the basics of what we know so far. 

First, the name is a bit misleading; it’s a coronavirus, not the coronavirus. Coronavirus is a category of viruses; this one is dubbed “Covid-19.” Two of the best known and most deadly other coronaviruses are SARS (Severe Acute Respiratory Syndrome, first identified in 2003) and MERS (Middle East Respiratory Syndrome, identified in 2012). 

The symptoms of Covid-19 are typical of influenza and include a cough, sometimes with a fever, shortness of breath, nausea, vomiting, and/or diarrhea. Most (about 80 percent) of infected patients recover within a week or two, like patients with a bad cold. The other 20 percent contract severe infections such as pneumonia, sometimes leading to death. The virus Covid-19 is spreading faster than either MERS or SARS, but it’s much less deadly than either of those. The death rate for Covid-19 is 2 percent, compared to 10 percent for SARS and 35 percent for MERS. There’s no vaccine, and because it’s not bacterial, antibiotics won’t help. 

The first case was reported in late December 2019 in Wuhan, China. About a month later the Health and Human Services Department declared a U.S. public health emergency. The average person is at very low risk, and Americans are at far greater risk of getting the flu—about 10 percent of the public gets it each year.

The information issues can be roughly broken down into three (at times overlapping) categories: 1) Lack of information; 2) Misinformation; and 3) Disinformation. 

Lack of Information

The lack of information stems from the fact that scientists are still learning about this specific virus. Much is known about it from information gathered so far (summarized above), but much remains to be learned. 

The lack of information has been complicated by a lack of transparency by the Chinese government, which has sought to stifle early alarms about it raised by doctors, including Li Wenliang, who recently died. As The New York Times reported:

On Friday, the doctor, Li Wenliang, died after contracting the very illness he had told medical school classmates about in an online chat room, the coronavirus. He joined the more than 600 other Chinese who have died in an outbreak that has now spread across the globe. Dr. Li “had the misfortune to be infected during the fight against the novel coronavirus pneumonia epidemic, and all-out efforts to save him failed,” the Wuhan City Central Hospital said on Weibo, the Chinese social media service. Even before his death, Dr. Li had become a hero to many Chinese after word of his treatment at the hands of the authorities emerged. In early January, he was called in by both medical officials and the police, and forced to sign a statement denouncing his warning as an unfounded and illegal rumor.

Chinese officials were slow to share information and admit the scope of the outbreak. This isn’t necessarily evidence of a conspiracy—governments are often loath to admit bad news or potentially embarrassing or damaging information (recall that it took nearly a week for Iran to admit it had unintentionally shot down a passenger airliner over its skies in January)—but rather part of the Chinese government’s long-standing policies of restricting news reporting and social media. Nonetheless, China’s actions have fueled anxiety and conspiracy theories; more on that presently.

Misinformation

There are various types of misinformation, revolving around a handful of central concerns typical of disease rumors. In his book An Epidemic of Rumors: How Stories Shape Our Perceptions of Disease, Jon D. Lee notes:

People use certain sets of narratives to discuss the presence of illness, mediate their fears of it, come to terms with it, and otherwise incorporate its presence into their daily routines … Some of these narratives express a harsher, more paranoid view of reality than others, some are openly racist and xenophobic, and some are more concerned with issues of treatment and prevention than blame—but all revolve around a single emotion in all its many forms: fear. (169) 

As Lee mentions, one common aspect is xenophobia and contamination fears. Many reports, in news media but on social media especially, focus on the “other,” the dirty aberrant outsiders who “created” or spread the menace. Racism is a common theme in rumors and urban legends—what gross things “they” eat or do. As Prof. Andrea Kitta notes in her book The Kiss of Death: Contagion, Contamination, and Folklore:

The intriguing part of disease legends is that, in addition to fear of illness, they express primarily a fear of outsiders … Patient zero [the assumed origin of the “new” disease] not only provides a scapegoat but also serves as an example to others: as long as people do not act in the same way as patient zero, they are safe. (27–28)

In the case of Covid-19, rumors have suggested that the seemingly bizarre (to Americans, anyway) eating habits of Chinese people were to blame, specifically eating bats. One video circulated that allegedly showed bat soup being prepared, suggesting this was the cause of the outbreak, though the video was later revealed to have been filmed in Palau, in Micronesia.

The idea of disease and death coming from “unclean” practices has a long history. One well-known myth is that AIDS originated when someone (presumably an African man) had sex with a monkey or ape. This linked moralistic views of sexuality with the later spread of the disease, primarily among the homosexual community. More likely, however, chimps carrying simian immunodeficiency virus were killed and eaten as bushmeat (a documented practice), which transferred the virus to humans and gave rise to HIV (human immunodeficiency virus), the virus that causes AIDS.

The fear of foreigners and immigrants bringing disease to the country was of course raised a few years ago when a Fox News contributor suggested, without evidence, that a migrant caravan from Honduras and Guatemala coming through Mexico carried leprosy, smallpox, and other dreaded diseases. This claim was quickly debunked.

Disinformation and Conspiracies

Then there are the conspiracies, prominent among them the disease’s origin. Several are circulating, claiming for example that Covid-19 is in fact a bioweapon that has either been intentionally deployed or escaped/stolen from a secure top secret government lab. Some have claimed that it’s a plot (by the Bill and Melinda Gates Foundation or another NGO or Big Pharma) to sell vaccines—apparently unaware that there is no vaccine available at any price. 

This is a classic conspiracy trope, evoked to explain countless bad things, ranging from chupacabras to chemtrails and diseases. It is similar to urban legends and rumors in the African American community claiming that AIDS was created by the American government to kill blacks, or that soft drinks and foods (Tropical Fantasy soda and Church’s Fried Chicken, for example) contained ingredients that sterilized the black community (for more on this, see Patricia Turner’s book I Heard It Through the Grapevine: Rumor in African-American Culture). In Pakistan and India, public health workers have been attacked and even killed while trying to give polio vaccinations, which are rumored to be part of an American plot.

Of course such conspiracies go back centuries. As William Naphy notes in his book Plagues, Poisons, and Potions: Plague Spreading Conspiracies in the Western Alps c. 1530-1640, people were accused of intentionally spreading the bubonic plague. Most people believed that the plague was a sign of God’s wrath, a pustular and particularly punitive punishment for the sin of straying from Biblical teachings. “Early theories saw causes in: astral conjunctions, the passing of comets; unusual weather conditions … noxious exhalations from the corpses on battlefields” and so on (vii). Naphy notes that “In 1577, Claude de Rubys, one of the city’s premier orators and a rabid anti-Protestant, had openly accused the city’s Huguenots of conspiring to destroy Catholics by giving them the plague” (174). Confessions, often obtained under torture, implicated low-paid foreigners who had been hired to help plague victims and disinfect their homes. 

Other folkloric versions of intentional disease spreading include urban legends of AIDS-infected needles placed in payphone coin return slots. Indeed, that rumor was part of an older and larger tradition; as folklorist Gillian Bennett notes in her book Bodies: Sex, Violence, Disease, and Death in Contemporary Legend, in Europe and elsewhere “Stories proliferated about deliberately contaminated doorknobs, light switches, and sandboxes on playgrounds” (115).

How to Get, Prevent, or Cure It

Various theories have surfaced online suggesting ways to prevent the virus. They include avoiding spicy food (which doesn’t work); eating garlic (which also doesn’t work); and drinking bleach (which really, really doesn’t work). 

In addition, there’s also something called MMS, or “miracle mineral solution,” and the word miracle in the name should be a big red flag about its efficacy. The solution is 28 percent sodium chlorite mixed in distilled water, and there are reports that it’s being sold online for $900 per gallon (or if that’s a bit pricey, you can get a four-ounce bottle for about $30).

The FDA takes a dim view of this, noting that it 

has received many reports that these products, sold online as “treatments,” have made consumers sick. The FDA first warned consumers about the products in 2010. But they are still being promoted on social media and sold online by many independent distributors. The agency strongly urges consumers not to purchase or use these products. The products are known by various names, including Miracle or Master Mineral Solution, Miracle Mineral Supplement, MMS, Chlorine Dioxide Protocol, and Water Purification Solution. When mixed according to package directions, they become a strong chemical that is used as bleach. Some distributors are making false—and dangerous—claims that Miracle Mineral Supplement mixed with citric acid is an antimicrobial, antiviral, and antibacterial liquid that is a remedy for autism, cancer, HIV/AIDS, hepatitis, flu, and other conditions. 

It’s true that bleach can kill viruses—when used full strength on surfaces, not when diluted and ingested. They’re two very different things; confuse the two at your great peril. 

Folk remedies such as these are appealing because they are something that victims (and potential victims) can do—some tangible way they can take action and assume control over their own health and lives. Even if the treatment is unproven or may be just a rumor, at least they feel like they’re doing something.

There have been several false reports and rumors of outbreaks in local hospitals across the country, including in Los Angeles, Santa Clarita, and in Dallas County, Texas. In all those cases, false social media posts have needlessly alarmed the public—and in some cases spawned conspiracy theories. After all, some random, anonymous mom on Facebook shared a screen-captured Tweet from some other random person who had a friend of a friend with “insider information” about some anonymous person in a local hospital who’s dying with Covid-19—but there’s nothing in the news about it! Who are you going to believe? 

Then there’s Canadian rapper/YouTube cretin James Potok, who stood up near the end of his WestJet flight from Toronto to Jamaica and announced loudly to the 240 passengers that he had just come from Wuhan, China, and “I don’t feel too well.” He recorded it with a cell phone, planning to post it online as a funny publicity stunt. Flight attendants reseated him, and the plane returned to Toronto where police and medical professionals escorted him off the plane. Of course he tested negative and was promptly arrested.

When people are frightened by diseases, they cling to any information and often distrust official information. These fears are amplified by the fact that the virus is of course invisible to the eye, and the fears are fueled by ambiguity and uncertainty about who’s a threat. The incubation period for Covid-19 seems to be between two days and two weeks, during which time asymptomatic carriers could potentially infect others. The symptoms are common and indistinguishable from other viruses, except when confirmed with lab testing, which of course requires time, equipment, a doctor visit, and so on. Another factor is that people are very poor at assessing relative risk in general anyway (for example, fearing plane travel over statistically far more dangerous car travel). They often panic over alarmist media reports and underestimate their risk of more mundane threats.

The best medical advice for dealing with Covid-19: Thoroughly cook meat, wash your hands, and stay away from sick people … basically the same advice you get for avoiding any cold or airborne virus. Face masks don’t help much, unless you are putting them on people who are already sick and coughing. Most laypeople use the masks incorrectly anyway, and hoarding has led to a shortage for medical workers. 

Hoaxes, misinformation, and rumors can cause real harm during public health emergencies. When people are sick and desperately afraid of a scary disease, any information will be taken seriously by some people. False rumors can not only kill but can hinder public health efforts. The best advice is to keep threats in perspective, recognize the social functions of rumors, and heed advice from medical professionals instead of your friend’s friend on Twitter. 

Further Reading

An Epidemic of Rumors: How Stories Shape Our Perceptions of Disease, Jon D. Lee

Bodies: Sex, Violence, Disease, and Death in Contemporary Legend, Gillian Bennett

I Heard It Through the Grapevine: Rumor in African-American Culture, Patricia Turner

Plagues, Poisons, and Potions: Plague Spreading Conspiracies in the Western Alps c. 1530-1640, William Naphy

The Global Grapevine: Why Rumors of Terrorism, Immigration, and Trade Matter, Gary Alan Fine and Bill Ellis

The Kiss of Death: Contagion, Contamination, and Folklore, Andrea Kitta

 

 

A longer version of this article appeared in my CFI blog; you can find it here. 

Apr 012020
 

There’s a natural—almost Pavlovian—tendency to follow the news closely, especially during times of emergency such as wars, terrorism, and natural disasters. People are understandably desperate for information to keep their friends and family safe, and part of that is being informed about what’s going on. 

News and social media are awash with information about the covid-19 pandemic. But not all the information is equally valid, useful, or important. It’s easy to become overwhelmed, and science-informed laypeople are likely suffering this information overload keenly, as we absorb the firehose of information from a wide variety of sources: from the White House to the CDC, and from conspiracy cranks to Goop contributors. It’s a never-ending stream—often a flood—of information, and those charged with trying to sort it out are quickly inundated. As important as news is, there is such a thing as medical TMI.

We have a Goldilocks situation when it comes to covid-19 material. There’s too little, too much, and just the right amount of information about the covid-19 virus in the news and social media. This sounds paradoxical until we break down each type of information. 

Types of Covid-19 Information

In thinking about the covid-19 outbreak and the deluge of opinion, rumor, and news out there, it’s helpful to parse out the different types of information. 

1) Information that’s true

This includes the most important, practical information—how to avoid it: Wash your hands, avoid crowds, don’t touch your face, sanitize surfaces, and so on. This type of information has been proven accurate and consistent since the outbreak began. This is of course the smallest category of information: mundane but vital. 

2) Information that’s false 

Information that’s false includes a wide variety of rumors, miracle cures, misinformation, and so on. The Center for Inquiry’s Covid Resource Center has been set up precisely to help journalists and the public debunk this false information. The problem is made worse by the fact that Russian disinformation organizations—which have a long and proven history of sowing false and misleading information in social media around the world, and particularly in the United States—have seized on the covid-19 pandemic.

3) Speculation, opinion, and conjecture

In times of uncertainty, prediction and speculation are rampant. Dueling projections about the outbreak vary by orders of magnitude as experts and social media pundits alike share their speculation. Of course, epidemiological models are only as good as the data that goes into them and are based on many premises, variables, and numerous unknowns. 

Wanting to accurately know the future is of course a venerable tradition. But as a recent post on Medium written by an epidemiologist noted: “Here is a simple fact: every prediction you’ve read on the numbers of COVID-19 cases or deaths is almost certainly wrong. All models are wrong. Some models are useful. It is very easy to draw a graph using an exponential curve and tell everyone that there will be 10 million cases by next Friday. It is far harder to model infectious disease epidemics with any accuracy. Stop making graphs and putting them online. Stop reading the articles by well-meaning people who have no idea what they are doing. The real experts aren’t posting random Excel graphs on twitter, because they are working flat-out to try and get a handle on the epidemic.” 
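To make the epidemiologist’s point concrete, here is a minimal sketch of my own (purely illustrative; the growth rate, starting count, and ceiling are invented assumptions, not real case data) comparing a naive exponential extrapolation with a logistic curve that shares the same early growth but has a finite ceiling.

```python
# Illustrative only: why drawing an exponential curve tells you little.
# All numbers below are invented for demonstration, not real case counts.
import math

def exponential(day, start=100, growth=0.25):
    # Naive extrapolation: assumes unchecked compounding forever.
    return start * math.exp(growth * day)

def logistic(day, start=100, growth=0.25, ceiling=1_000_000):
    # Same early growth rate, but growth slows as the susceptible pool shrinks.
    return ceiling / (1 + ((ceiling - start) / start) * math.exp(-growth * day))

for day in (7, 30, 60):
    print(f"day {day:>2}: exponential ~ {exponential(day):,.0f}   "
          f"logistic ~ {logistic(day):,.0f}")
# The two curves track each other early on, then diverge enormously --
# which is why early-stage extrapolations can be off by orders of magnitude.
```

Both curves fit the same early data; they simply disagree wildly about what comes next, which is exactly the epidemiologist’s complaint about amateur graphs.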

4) Information that’s true but not helpful

Finally, there’s another, less-recognized category: information that is true but not helpful on an individual level, or what might be called “trivially true.” We usually think of false information being shared as harmful—and it certainly is—but trivially true information can also be harmful to public health. Even when it’s not directly harmful, it adds to the background of noise.

News media and social media are flooded with information and speculation that—even if accurate—is of little practical use to the average person. Much of the information is not helpful, useful, actionable, or applicable to daily life. It’s similar to what in medicine and psychology is called “clinical significance”: the practical importance of a treatment effect—whether it has a real, genuine, palpable, and noticeable effect on daily life. A finding may be true, and may even be statistically significant, yet be insignificant in the real world. A new medicine may reduce pain by 5 percent, but nobody would create or market it because it’s not clinically significant; a 5 percent reduction in pain isn’t useful compared to other pain relievers with better efficacy.

One example might include photos of empty store shelves widely shared on social media, depicting the run on supplies such as sanitizer and toilet paper. The information is both true and accurate; it’s not being faked or staged. But it’s not helpful, because it leads to panic buying, social contagion, and hoarding as people perceive a threat to their welfare and turn an artificial scarcity into a real one. 

Another example is Trump’s recent reference to the covid-19 virus as “the China virus.” Setting aside the fact that current guidelines discourage naming diseases for the places where they emerge, we can acknowledge that it’s technically accurate that, as Trump claimed, covid-19 was first detected in China—and also that it’s not a relevant or useful detail. It doesn’t add to the discussion or help anyone’s understanding of what the disease is or how to deal with it. If anything, referring to it by terms such as “the China virus” or “Wuhan flu” is likely to cause confusion and even foment racism.

Before believing or sharing information on social media, ask yourself questions such as: Is it true? Is it from a reliable source? But there are other questions to ask: Even if it may be factually true, is it helpful or useful? Does it promote unity or encourage divisiveness? Are you sharing it because it contains practical information important to people’s health? Or are you sharing it just to have something to talk about, some vehicle for sharing your opinions? The signal-to-noise ratio is already skewed against useful information, which is drowned out by false information, speculation, opinion, and trivially true information.

Social Media Distancing

While self-isolating from the disease (and those who might carry it) is vital to public health, there’s a less-discussed aspect: self-distancing from social media information on the virus, which is a form of social media hygiene. Six feet is enough distance in physical space, but doesn’t apply to cyberspace where viral misinformation spreads unchecked (until it hits this site).

The analogy between disease and misinformation is apt. Just as you can be a vector for a virus if you get and spread it, you can be a vector for misinformation and fear. But you can stop it by removing yourself from it. You don’t need hourly updates on most aspects of the pandemic. Most of what you see and read isn’t relevant to you. The idea is not to ignore important and useful information about the coronavirus; in fact, it’s exactly the opposite: to better distinguish the news from the noise, the relevant from the irrelevant. 

Doctors around the world have been photographed sharing signs that say “We’re at work for you. Please stay home for us.” That’s excellent advice, but we can take it further. While at home not becoming a vector for disease, also take steps not to become a vector for misinformation. After all, doing so can have just as much of an impact on public health. 

During a time when people are isolated, it’s cathartic to vent on social media. Humans are social creatures, and we find ways to connect even when we can’t physically. Especially during a time of international crisis, it’s easy to become outraged about one or another aspect of the pandemic. Everyone has opinions about what is (or isn’t) being done, what should (or shouldn’t) be done. Everyone’s entitled to those opinions, but they should be aware that those opinions expressed on social media have consequences and may well harm others, albeit unintentionally. Just as it feels good to physically hang out with other people (but may in fact be dangerous to them), it feels good to let off steam to others in your social circles (but may be dangerous to them). Your steam makes others in your feed get steamed too, and so on. Again, it’s the disease vector analogy. 

You don’t know who will end up seeing your posts and comments (such is the nature of “viral” posts and memes), and while you may think little of it, others may be more vulnerable. Just as people take steps to protect those with compromised immune systems, it may be wise to take similar steps to protect those with compromised psychological defenses on social media—those suffering from anxiety, depression, or other issues who are especially vulnerable at this time. 

This isn’t about self-censorship; there are many ways to reach out to others and share concerns and feelings in a careful and less public way, through email, direct messaging, video calls, and even—gasp—good old-fashioned letters. Like anything else, people can express feelings and concerns in measured, productive ways: ways that are less likely to harm others (referring to it as “covid-19” instead of “the Chinese virus” is one example).

Though the public loves to blame the news media for misinformation—and deservedly so—we are less keen to see the culprit in the mirror. Many people, especially on social media, fail to recognize that they have become de facto news outlets through the stories and posts they share. The news media helps spread myriad “fake news” stories—gleefully aided by ordinary people like us. We cannot control what news organizations (or anyone else) publishes or puts online. But we can—and indeed we have an obligation to—help stop the spread of misinformation in all its forms. 

It’s overwhelming; it’s too much. In psychology there’s a concept called the locus of control. It basically refers to the things a person has control over: themselves, their immediate family, their pets, most aspects of their own lives, and so on. It’s psychologically healthy to focus on those things you can do something about. You can’t do anything about how many deaths there are in China or Italy. You can’t do anything about whether or not medical masks are being manufactured and shipped quickly enough. But you can do something about bad information online.

It can be as simple as not forwarding, liking, or sharing that dubious news story before checking the facts, especially if that story seems crafted to encourage social outrage. We can help separate the truth from the myths, but we can’t force people to believe the truth. Be safe, practice social and cyber distancing, and wash your hands. 

 

A longer version of this appeared on the Center for Inquiry site; you can find it here. 

Mar 232020
 

Our recent episode of Squaring the Strange is about literary hoaxes!

I discuss some “misery memoirs,” stories of victims triumphing over incredible hardships (Spoiler: “Go Ask Alice” was fiction). Celestia discusses newspaper reports of horny bat-people on the moon, and we break down the cultural factors that contribute to the popularity and believability of hoaxes. We end with the heart-wrenching story of a literary version of Munchausen by proxy, one that moved both Oprah and Mr. Rogers. Check it out HERE! 

 

Mar 082020
 

So this is cool: I’m quoted in an article on Bigfoot in The Mountaineer: 

If you wear a size 14 shoe, chances are some of your high-school classmates called you “Bigfoot.” But that doesn’t mean you are an ape-like beast who may — or may not — just be a myth. A 1958 newspaper column began the whole thing. The Humboldt Times received a letter from a reader reporting loggers in California who had stumbled upon mysterious and excessively large footprints. The two journalists who reported the discovery treated it as a joke. But to their great surprise, the story caught on and soon spread far and wide. Bigfoot was born. Of course, reports of large beasts were not exactly new. The Tibetans had a Yeti, familiarly known as the “Abominable Snowman,” and an Indian Nation in Canada had its “Sasquatch.”

Guess what? Cochran found out the hair did not belong to Bigfoot. It was sent back to Byrne, with the conclusion it belonged to the deer family. Four decades later, the FBI declassified the “bigfoot file” about having done this analysis. “Byrne was one of the more prominent Bigfoot researchers,” said Benjamin Radford, deputy editor of the Skeptical Inquirer magazine. “In the 1970s, Bigfoot was very popular.”

You can read the rest of the article HERE! 

 

As my awesome podcast Squaring the Strange (co-hosted by Pascual Romero and Celestia Ward) nears its three-year anniversary, I will be posting episode summaries from the past year to remind people some of the diverse topics we’ve covered on the show, ranging from ghosts to folklore to mysteries and topical skepticism. If you haven’t heard it, please give a listen!

Mar 052020
 

As the world enters its third full month dealing with the deadly coronavirus, misinformation is running rampant. For many, the medical and epidemiological aspects of the outbreak are the most important and salient elements, but there are other prisms through which we can examine this public health menace.

There are many facets to this outbreak, including economic damage, cultural changes, and so on. However, my interest and background is in media literacy, psychology, and folklore (including rumor, legend, and conspiracy), and my focus here is a brief overview of some of the lore surrounding the current outbreak. Before I get into the folkloric aspects of the disease, let’s review the basics of what we know so far. 

First, the name is a bit misleading; it’s a coronavirus, not the coronavirus. Coronavirus is a category of viruses; this one is dubbed “Covid-19.” Two of the best known and most deadly other coronaviruses are SARS (Severe Acute Respiratory Syndrome, first identified in 2003) and MERS (Middle East Respiratory Syndrome, identified in 2012). 

The symptoms of Covid-19 are typical of influenza and include a cough, sometimes with a fever, shortness of breath, nausea, vomiting, and/or diarrhea. Most (about 80 percent) of infected patients recover within a week or two, like patients with a bad cold. The other 20 percent contract severe infections such as pneumonia, sometimes leading to death. The virus Covid-19 is spreading faster than either MERS or SARS, but it’s much less deadly than either of those. The death rate for Covid-19 is 2 percent, compared to 10 percent for SARS and 35 percent for MERS. There’s no vaccine, and because it’s not bacterial, antibiotics won’t help. 

The first case was reported in late December 2019 in Wuhan, China. About a month later the Health and Human Services Department declared a U.S. public health emergency. The average person is at very low risk, and Americans are at far greater risk of getting the flu—about 10 percent of the public gets it each year. Three cruise ships and several airplanes have been quarantined. There are about a dozen confirmed cases in the U.S., and most of the infected are in China or are people who visited there. Though the number of people infected in China sounds alarming, keep in mind the country’s population of 1.4 billion. 

The information issues can be roughly broken down into three (at times overlapping) categories: 1) Lack of information; 2) Misinformation; and 3) Disinformation. 

Lack of Information

The lack of information stems from the fact that scientists are still learning about this specific virus. Much is known about it from information gathered so far (summarized above), but much remains to be learned. 

The lack of information has been complicated by a lack of transparency by the Chinese government, which has sought to stifle early alarms about it raised by doctors, including Li Wenliang, who recently died. As The New York Times reported:

On Friday, the doctor, Li Wenliang, died after contracting the very illness he had told medical school classmates about in an online chat room, the coronavirus. He joined the more than 600 other Chinese who have died in an outbreak that has now spread across the globe. Dr. Li “had the misfortune to be infected during the fight against the novel coronavirus pneumonia epidemic, and all-out efforts to save him failed,” the Wuhan City Central Hospital said on Weibo, the Chinese social media service. Even before his death, Dr. Li had become a hero to many Chinese after word of his treatment at the hands of the authorities emerged. In early January, he was called in by both medical officials and the police, and forced to sign a statement denouncing his warning as an unfounded and illegal rumor.

Chinese officials were slow to share information and admit the scope of the outbreak. This isn’t necessarily evidence of a conspiracy—governments are often loath to admit bad news or potentially embarrassing or damaging information (recall that it took nearly a week for Iran to admit it had unintentionally shot down a passenger airliner over its skies in January)—but rather part of the Chinese government’s long-standing policies of restricting news reporting and social media. Nonetheless, China’s actions have fueled anxiety and conspiracy theories; more on that presently.

Misinformation

There are various types of misinformation, revolving around a handful of central concerns typical of disease rumors. In his book An Epidemic of Rumors: How Stories Shape Our Perceptions of Disease, Jon D. Lee notes:

People use certain sets of narratives to discuss the presence of illness, mediate their fears of it, come to terms with it, and otherwise incorporate its presence into their daily routines … Some of these narratives express a harsher, more paranoid view of reality than others, some are openly racist and xenophobic, and some are more concerned with issues of treatment and prevention than blame—but all revolve around a single emotion in all its many forms: fear. (169) 

As Lee mentions, one common aspect is xenophobia and contamination fears. Many reports, in news media but on social media especially, focus on the “other,” the dirty aberrant outsiders who “created” or spread the menace. Racism is a common theme in rumors and urban legends—what gross things “they” eat or do. As Prof. Andrea Kitta notes in her book The Kiss of Death: Contagion, Contamination, and Folklore:

The intriguing part of disease legends is that, in addition to fear of illness, they express primarily a fear of outsiders … Patient zero [the assumed origin of the “new” disease] not only provides a scapegoat but also serves as an example to others: as long as people do not act in the same way as patient zero, they are safe. (27–28)

In the case of Covid-19, rumors have suggested that the seemingly bizarre (to Americans, anyway) eating habits of Chinese people were to blame, specifically eating bats. One video circulated that allegedly showed bat soup being prepared, suggesting this was the cause of the outbreak, though the video was later revealed to have been filmed in Palau, in Micronesia.

The idea of disease and death coming from “unclean” practices has a long history. One well-known myth is that AIDS originated when someone (presumably an African man) had sex with a monkey or ape. This linked moralistic views of sexuality with the later spread of the disease, primarily among the homosexual community. More likely, however, chimps carrying simian immunodeficiency virus were killed and eaten as bushmeat (a documented practice), which transferred the virus to humans and gave rise to HIV (human immunodeficiency virus), the virus that causes AIDS.

The fear of foreigners and immigrants bringing disease to the country was of course raised a few years ago when a Fox News contributor suggested, without evidence, that a migrant caravan from Honduras and Guatemala coming through Mexico carried leprosy, smallpox, and other dreaded diseases. This claim was quickly debunked.

Disinformation and Conspiracies

Then there are the conspiracies, prominent among them the disease’s origin. Several are circulating, claiming for example that Covid-19 is in fact a bioweapon that has either been intentionally deployed or escaped/stolen from a secure top secret government lab. Some have claimed that it’s a plot (by the Bill and Melinda Gates Foundation or another NGO or Big Pharma) to sell vaccines—apparently unaware that there is no vaccine available at any price. 

This is a classic conspiracy trope, evoked to explain countless bad things, ranging from chupacabras to chemtrails and diseases. It is similar to urban legends and rumors in the African American community claiming that AIDS was created by the American government to kill blacks, or that soft drinks and foods (Tropical Fantasy soda and Church’s Fried Chicken, for example) contained ingredients that sterilized the black community (for more on this, see Patricia Turner’s book I Heard It Through the Grapevine: Rumor in African-American Culture). In Pakistan and India, public health workers have been attacked and even killed while trying to give polio vaccinations, which are rumored to be part of an American plot.

Of course such conspiracies go back centuries. As William Naphy notes in his book Plagues, Poisons, and Potions: Plague Spreading Conspiracies in the Western Alps c. 1530-1640, people were accused of intentionally spreading the bubonic plague. Most people believed that the plague was a sign of God’s wrath, a pustular and particularly punitive punishment for the sin of straying from Biblical teachings. “Early theories saw causes in: astral conjunctions, the passing of comets; unusual weather conditions … noxious exhalations from the corpses on battlefields” and so on (vii). Naphy notes that “In 1577, Claude de Rubys, one of the city’s premier orators and a rabid anti-Protestant, had openly accused the city’s Huguenots of conspiring to destroy Catholics by giving them the plague” (174). Confessions, often obtained under torture, implicated low-paid foreigners who had been hired to help plague victims and disinfect their homes. 

Other folkloric versions of intentional disease spreading include urban legends of AIDS-infected needles placed in payphone coin return slots. Indeed, that rumor was part of an older and larger tradition; as folklorist Gillian Bennett notes in her book Bodies: Sex, Violence, Disease, and Death in Contemporary Legend, in Europe and elsewhere “Stories proliferated about deliberately contaminated doorknobs, light switches, and sandboxes on playgrounds” (115).

How to Get, Prevent, or Cure It

Various theories have surfaced online suggesting ways to prevent the virus. They include avoiding spicy food (which doesn’t work); eating garlic (which also doesn’t work); and drinking bleach (which really, really doesn’t work). 

In addition, there’s also something called MMS, or “miracle mineral solution,” and the word miracle in the name should be a big red flag about its efficacy. The solution is 28 percent sodium chlorite mixed in distilled water, and there are reports that it’s being sold online for $900 per gallon (or if that’s a bit pricey, you can get a four-ounce bottle for about $30).

The FDA takes a dim view of this, noting that it 

has received many reports that these products, sold online as “treatments,” have made consumers sick. The FDA first warned consumers about the products in 2010. But they are still being promoted on social media and sold online by many independent distributors. The agency strongly urges consumers not to purchase or use these products. The products are known by various names, including Miracle or Master Mineral Solution, Miracle Mineral Supplement, MMS, Chlorine Dioxide Protocol, and Water Purification Solution. When mixed according to package directions, they become a strong chemical that is used as bleach. Some distributors are making false—and dangerous—claims that Miracle Mineral Supplement mixed with citric acid is an antimicrobial, antiviral, and antibacterial liquid that is a remedy for autism, cancer, HIV/AIDS, hepatitis, flu, and other conditions. 

It’s true that bleach can kill viruses—when used full strength on surfaces, not when diluted and ingested. They’re two very different things; confuse the two at your great peril. 

Folk remedies such as these are appealing because they are something that victims (and potential victims) can do—some tangible way they can take action and assume control over their own health and lives. Even if the treatment is unproven or may be just a rumor, at least they feel like they’re doing something.

There have been several false reports and rumors of outbreaks in local hospitals across the country, including in Los Angeles, Santa Clarita, and in Dallas County, Texas. In all those cases, false social media posts have needlessly alarmed the public—and in some cases spawned conspiracy theories. After all, some random, anonymous mom on Facebook shared a screen-captured Tweet from some other random person who had a friend of a friend with “insider information” about some anonymous person in a local hospital who’s dying with Covid-19—but there’s nothing in the news about it! Who are you going to believe? 

Then there’s Canadian rapper/YouTube cretin James Potok, who last week stood up near the end of his WestJet flight from Toronto to Jamaica and announced loudly to the 240 passengers that he had just come from Wuhan, China, and “I don’t feel too well.” He recorded it with a cell phone, planning to post it online as a funny publicity stunt. Flight attendants reseated him, and the plane returned to Toronto where police and medical professionals escorted him off the plane. Of course he tested negative and was promptly arrested.

When people are frightened by diseases, they cling to any information and often distrust official information. These fears are amplified by the fact that the virus is of course invisible to the eye, and the fears are fueled by ambiguity and uncertainty about who’s a threat. The incubation period for Covid-19 seems to be between two days and two weeks, during which time asymptomatic carriers could potentially infect others. The symptoms are common and indistinguishable from other viruses, except when confirmed with lab testing, which of course requires time, equipment, a doctor visit, and so on. Another factor is that people are very poor at assessing relative risk in general anyway (for example, fearing plane travel over statistically far more dangerous car travel). They often panic over alarmist media reports and underestimate their risk of more mundane threats.

The best medical advice for dealing with Covid-19: Thoroughly cook meat, wash your hands, and stay away from sick people … basically the same advice you get for avoiding any cold or airborne virus. Face masks don’t help much, unless you are putting them on people who are already sick and coughing. Most laypeople use the masks incorrectly anyway, and hoarding has led to a shortage for medical workers. 

Hoaxes, misinformation, and rumors can cause real harm during public health emergencies. When people are sick and desperately afraid of a scary disease, any information will be taken seriously by some people. False rumors can not only kill but can hinder public health efforts. The best advice is to keep threats in perspective, recognize the social functions of rumors, and heed advice from medical professionals instead of your friend’s friend on Twitter. 

Further Reading

An Epidemic of Rumors: How Stories Shape Our Perceptions of Disease, Jon D. Lee

Bodies: Sex, Violence, Disease, and Death in Contemporary Legend, Gillian Bennett

I Heard It Through the Grapevine: Rumor in African-American Culture, Patricia Turner

Plagues, Poisons, and Potions: Plague Spreading Conspiracies in the Western Alps c. 1530-1640, William Naphy

The Global Grapevine: Why Rumors of Terrorism, Immigration, and Trade Matter, Gary Alan Fine and Bill Ellis

The Kiss of Death: Contagion, Contamination, and Folklore, Andrea Kitta

 

A different version of this article originally appeared in my blog for the Center for Inquiry; you can find it HERE. 

 

As my awesome podcast Squaring the Strange (co-hosted by Pascual Romero and Celestia Ward) nears its three-year anniversary, I will be posting episode summaries from the past year to remind people some of the diverse topics we’ve covered on the show, ranging from ghosts to folklore to mysteries and topical skepticism. If you haven’t heard it, please give a listen!

Mar 032020
 

Did you catch our recent bonus episode of Squaring the Strange? I gather some myths and misinformation going around about the Wuhan Coronavirus, aka Novel Coronavirus, aka “we’re all gonna die,” aka COVID-19. Then special guest Doc Dan breaks down some virus-busting science for us and talks about the public health measures in place. Check it out HERE! 

 

Mar 022020
 

I’m quoted in a new article about real estate omens and superstitions at Realtor.com! 

“An outdated kitchen and a lack of curb appeal aren’t the only things that can keep buyers from biting. When it seems like there’s just no explanation for a perfectly good home sitting on the market, you might consider other possible causes. Certain items, colors, and symbols have been thought to attract malicious forces to an otherwise peaceful abode. And while some people scoff at such beliefs, others take them seriously—and not just around Halloween.

“There are countless folkloric beliefs, and savvy homeowners are smart to acknowledge and respect such beliefs, whether they share them or not,” says Benjamin Radford, deputy editor of Skeptical Inquirer science magazine and co-host of the “Squaring the Strange” podcast.”

 

You can see the rest HERE! 

 

As my awesome podcast Squaring the Strange (co-hosted by Pascual Romero and Celestia Ward) nears its three-year anniversary, I will be posting episode summaries from the past year to remind people some of the diverse topics we’ve covered on the show, ranging from ghosts to folklore to mysteries and topical skepticism. If you haven’t heard it, please give a listen!

Feb 282020
 

Last month Neil Peart, the drummer and main lyricist for the rock band Rush, died. He’d been living in California and had privately battled brain cancer for several years. The Canadian trio (Alex Lifeson on guitar, Geddy Lee on vocals, bass, and keyboards, and Neil Peart on drums) announced they’d stopped touring in 2015, after 40 years.

As a Rush fan and a skeptic, I thought it would be a good opportunity to reflect on Peart’s passing and his skepticism-infused lyrics. There are over 150 Rush songs written or co-written by Neil, and many themes can be found among them, including alienation, skepticism, libertarianism, fantasy, and humanism. The discussion here is not comprehensive; my interest is to briefly highlight some of the more potent lyrics and songs expressing doubt, skepticism, the frailty of perception, the fallibility of knowledge, and the dangers of certainty. Peart was likely one of the most widely read lyricists in rock and roll, on topics ranging from philosophy to humanism to science. He was, as described in The New Yorker, “wildly literate.” George Hrab is among the many skeptics who offered a memorial to Peart on his Geologic Podcast (episode 646), where he also discussed his initial skepticism about the news of Peart’s death and why Peart and the band seemed so relatable.

As has been written elsewhere, Rush was a polarizing band that you either “got” or you didn’t. I’ve met people who have barely heard of them, but few who were ambivalent about them. At the risk of employing the “I liked them when they weren’t cool” trope, I’ll note that my love of the band dates back to hearing “Tom Sawyer” on the radio for the first time in 1981 and being blown away. I joined the nascent Rush Backstage Club. This was back in the day when Rush fans such as myself connected via letters; a Pen Pals section offered a dozen or so addresses for Rush fans to meet each other and share their enthusiasm, at the comfortable pace of postal delivery.

I proceeded to buy all their albums and saw them live a dozen times over the years. Most of the albums were great, a few were good, and some of the later albums (Vapor Trails, Counterparts, and Test For Echo, for example) left me a bit cold. But Rush had earned my loyalty and I’d buy anything they put out, just on principle. The most mediocre Rush song—and there are many—was usually head and shoulders above most of the other rock music I was hearing.

For much of Rush’s history Peart was the shy, retiring member. He rarely did interviews or fan meet-and-greets after concerts; that was a role that Geddy and Alex happily—or, surely, sometimes dutifully—fulfilled. It wasn’t that he didn’t appreciate fans or thought it was beneath him; he just didn’t enjoy it and would rather be alone, read, or plan his solo motorcycle trip to the next venue (something he often did).

But that wasn’t always the case; as a member of the Rush Backstage Club I got their newsletter in which Neil would respond to questions from fans. This was the mid-1980s, of course, long before the internet; that’s how things were done in those days. I never wrote in, partly because I didn’t know what I’d ask him if he actually responded.

The quality of the questions varied widely, ranging from the insightful to the banal. Neil typically responded in earnest, though occasionally his replies revealed a latent and understandable irritation. One got the impression that Neil didn’t suffer fools gladly, but he also recognized that Rush fans were a broad lot that included (and was perhaps dominated by) nerdy, misfit teenagers and young adults, mostly male, perhaps not unlike himself as a teen in St. Catharines, Ontario. (Peart wrote about this inevitable gap between performer and audience, expert and layman, in the song Limelight.)

The three performers, lifelong friends, often made better music than bands with two or three times the number of members. Watching other, larger, bands I was often confused: What the hell are those other musicians doing? Why are there three guitarists, two keyboardists, a singer, a drummer, and some woman on a tambourine? And the backup singers? Is this a flash mob or a rock band? The answer, of course, is that none of them were Geddy, Alex, or Neil.

Peart was widely known as “The Professor” because of his intellectualism, his analytical approach to percussion, and the fact that he taught and influenced a generation of musicians. I’m not a musician, and didn’t learn drumming from him (though I did learn about some of the history and techniques from him). I’m not a lyricist and didn’t learn songwriting from him either. But we had some shared interests including the 1960s British television show The Prisoner, as evinced by some of his lyrics and his wearing of the distinctive Number Six pennyfarthing badge used in the series. The Prisoner is widely regarded as one of the most innovative and cerebral series of the 1960s—or, really, ever. Had I gotten the chance to meet him, I’d have avoided talking about drumming—or even music in general—and instead steered the conversation to shared interests such as Africa, travel, writing, belief, skepticism, and so on.

To be clear: Geddy and Alex are no slouches in the intellectual and reading departments either, the latter having been photographed reading the Christopher Hitchens classic God Is Not Great. Lee and Lifeson are enormously accomplished outside of music as well, but here I focus on Peart’s contribution as a lyricist (I hear he’s regarded as a passable drummer as well).

I’m not going to engage in extensive dives into the various meanings, allegories, and interpretations of the lyrics. I believe that most of the lyrics speak for themselves; one of the qualities of Peart’s writing is that it’s (usually) accessible. In a 1992 interview with Roger Catlin, Peart noted that “For a lot of people, lyrics just aren’t that important. I can enjoy a band when the lyrics are shallow. But I can enjoy it more if the lyrics are good.” Here are some lyrics I find especially resonant.

Tom Sawyer / Moving Pictures (1981)

No, his mind is not for rent
To any god or government
Always hopeful, yet discontent
He knows changes aren’t permanent

But change is

Freewill / Permanent Waves (1980)

You can choose a ready guide
In some celestial voice
If you choose not to decide
You still have made a choice

You can choose from phantom fears
And kindness that can kill
I will choose a path that’s clear
I will choose free will

The “Fear” Series

Rush released four songs related to the topic of fear: Witch Hunt (Moving Pictures); The Enemy Within (Grace Under Pressure); The Weapon (Signals), and, much later, Freeze (Vapor Trails). I want to focus on Peart’s plea for reason and rationality in Witch Hunt:

Witch Hunt / Moving Pictures (1981)

The night is black
Without a moon
The air is thick and still

The vigilantes gather on
The lonely torchlit hill

Features distorted in the flickering light
The faces are twisted and grotesque
Silent and stern in the sweltering night
The mob moves like demons possessed
Quiet in conscience, calm in their right
Confident their ways are best

The righteous rise
With burning eyes
Of hatred and ill-will

Madmen fed on fear and lies
To beat, and burn, and kill

The lyrics reference xenophobia, moral guardians, moral panics, and censorship in the second half of the song:

They say there are strangers, who threaten us
In our immigrants and infidels
They say there is strangeness, too dangerous
In our theatres and bookstore shelves
That those who know what’s best for us –
Must rise and save us from ourselves

Quick to judge,
Quick to anger
Slow to understand

Ignorance and prejudice
And fear

Walk hand in hand

Totem / Test for Echo (1996)

I believe in what I see
I believe in what I hear
I believe that what I’m feeling
Changes how the world appears

In his book Ghost Rider: Travels on the Healing Road, Peart wrote, “At the time of writing those lines [before the death of his daughter Selena], I had in mind the contradiction between a skeptic’s dismissal of anything not tangible (true agnosticism) and the entirely subjective way many people tend to view and judge the world, through the filters of ever-changing emotions and moods” (p. 79).

Angels and demons dancing in my head
Lunatics and monsters underneath my bed
Media messiahs preying on my fears
Pop culture prophets playing in my ears

Roll the Bones / Roll the Bones (1991)

Faith is cold as ice
Why are little ones born only to suffer
For the want of immunity
Or a bowl of rice?
Well, who would hold a price
On the heads of the innocent children
If there’s some immortal power
To control the dice?

We come into the world and take our chances
Fate is just the weight of circumstances

That’s the way that lady luck dances
Roll the bones

Jack, relax
Get busy with the facts
No zodiacs or almanacs
No maniacs in polyester slacks
Just the facts

Brought Up To Believe (BU2B) / Clockwork Angels (2012)

I was brought up to believe
The universe has a plan
We are only human
It’s not ours to understand

The universe has a plan
All is for the best
Some will be rewarded
And the devil take the rest

All is for the best

Believe in what we’re told
Blind men in the market
Buying what we’re sold
Believe in what we’re told
Until our final breath
While our loving Watchmaker
Loves us all to death

In a world of cut and thrust
I was always taught to trust
In a world where all must fail
Heaven’s justice will prevail

There’s one final song I’d like to mention because it captures the mission of an inquisitive, Enlightenment-fueled mind:

Available Light / Presto (1989)

All four winds together
Can’t bring the world to me
Shadows hide the play of light
So much I want to see
Chase the light around the world
I want to look at life—In the available light

The “light” Peart is talking about is the same light of reason that Carl Sagan mentioned in his (borrowed) aphorism, “It’s better to light a candle than curse the darkness.” Peart was open about his agnosticism (some would consider it atheism) and wrote eloquently about the dangers of religion.

As an avid Rush fan I collected several tourbooks, and one thing that stood out to me was how often Peart was photographed reading books. He could have been photographed drinking and partying, living the rock star life (see the accompanying artwork for pretty much any Guns N’ Roses album, for example). Peart was thoughtful and literate. In one album photo he poses with Aristotle’s classic Poetics, and it’s clear that it’s not done ironically. Peart didn’t grab a book to read when photographers were around; he just didn’t bother to put it down when they were. He was who he was, and he didn’t care whether he looked the part of a rock star. The band seemed down to earth, taking their music, but not themselves, seriously (see their speech at Rush’s induction into the Rock and Roll Hall of Fame in 2013, for example).

Neil Peart isn’t resting in peace or anywhere else; he’s gone but remains with us. As he said during the Hall of Fame induction, quoting Bob Dylan: “The highest purpose of art is to inspire. What else can you do for anyone but inspire them?” He and his band have inspired tens of millions of people in ways large and small. As Neil wrote, “A spirit with a vision is a dream with a mission.”

 

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

Feb 182020
 

I’ve investigated hundreds—probably thousands—of things in my career as a skeptic and researcher, from misleading polls to chupacabra vampire legends. Some investigations take hours or days; others take weeks or months, and a rare few take years. It all depends on the scope of the investigation and how much information you have to analyze. In some cases a mystery can’t be solved until some other information is released or revealed, such as a medical or forensics test.

However, there are some mysteries that can be solved in less than a minute. This short piece offers one quick example.

I came across a “news” headline on several Facebook friends’ walls stating that “85% of People Hate Their Jobs, Gallup Poll Finds.”

It’s a false story. In this case, three clicks of a mouse, each on a different link seeking a source, led me to the origin of the myth. If you’re a slow reader, of course, it will take longer than if you can quickly skim the page or report, but with practice this could be done in just a few minutes.

The first step is a sort of skeptical sense that there’s maybe something to investigate, some claim that is false or exaggerated. After all, we see countless news stories and social media posts online daily, and the average person rarely if ever fact-checks them. One red flag is the source: where did the information come from? Is it a reputable, known news organization or is it some random website you’ve never heard of? To be clear: reputable news organizations sometimes get information wrong, and perfectly valid and accurate information often appears on obscure sites and blogs.

But in this case the information was clearly presented as a news story. It is formatted as a news headline and offers a surprising or alarming statistic from a reputable polling organization, Gallup.

When I clicked on the link I went to some site called Return to Now. The red flags popped up when the “news” article was revealed to be three years old. As I’ve written about before, old news is often fake news. But this “news” story was also uncredited. Someone wrote it, obviously, but who? A respected journalist or someone cranking out clickbait copy?

There’s no name attached to the piece, and the About section of the site isn’t any more helpful: “Return to Now is dedicated to helping humans live fully in the present, while gleaning tips on how to do so from our distant past. It’s a new kind of ‘news’ website, whose contributors are not as concerned with current events as we are with the whole of the human experience. Topics will include rewilding, primitivism, modern ‘tribal’ living, tips for getting off the grid, sustainability, natural health, peaceful parenting, and sexual and spiritual awakening.” It’s not clear what that means, though the fact that they put the word news in quote marks is revealing; I want—and readers deserve—news, not “news.”

In this case the piece offered a link to the Gallup poll it referenced. Many people would likely stop there and assume that the existence of the link meant that whatever the link contains had been accurately and fairly characterized. After all, someone must have at least looked at the content at that link in order to write the sentence referencing it, and the headline. Unless, of course, the person is lying to you, intentionally misleading their readers (or perhaps has reading comprehension issues and badly misunderstood what it said).

If the article had not provided a link at all, that too would have been a red flag. Not everyone embeds links in their writing, but those who don’t should at least provide a source or reference for their information, such as a book or published journal article. Otherwise it looks an awful lot like you’re just making it up.

In this case the link was accurate and did work, leading me to the promised Gallup poll referenced in the headline and article. It’s a one-page blog post, and I skimmed it for the alleged statistic: that 85% of workers hate their jobs. It didn’t appear anywhere. Nor, for that matter, did any reference to “hate” or “hating” a job. Always be skeptical of polls and survey results reported in news pieces, and when possible consult the original reports; they often contradict the headlines they generate.

But I quickly realized that there was another prominent statistic that seemed to be the other half of the 85% figure: 15% (since 85% and 15% together account for 100% of those polled). The Gallup poll found that 15% of the world’s one billion full-time workers “are engaged at work.”

But not being “engaged at work” is not at all the same thing as “hating your job.” You can love or hate your job, and be engaged or not engaged at it. The two measures have little or nothing to do with each other, and it’s clear that someone saw the poll and decided to mischaracterize its results and spin them into a clickbait article, recognizing that few would read a piece headlined, “15% of People Are Engaged At Work, Gallup Poll Says.”

The problem of misinformation and fake news on social media is of course made worse when people share the information without checking it. Those who share bogus stories like this are both victims of manipulation and promoters of misinformation. It’s a good reminder to think before you share. You don’t need to invest hours fact-checking information; as this case shows, sometimes it takes just a minute or two. Or better yet, avoid the problem entirely by only sharing news stories from reputable news organizations.

 

Note: This piece, originally appearing in a different form on my CFI blog, was inspired in part by a Facebook friend named Rich, one of those whose post caught my eye. After the quick search described above I diplomatically pointed this out to him, and Rich not only thanked me for doing the research but quickly corrected the headline and vowed to do better. Be like Rich.

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

Feb 152020
 

My buddy Kenny Biddle recently did a great investigation into a new book by TV ghost hunter Zak Bagans. You can read it HERE. 

Below is an excerpt: 

I really did not want to read the book this article is about. I know that will likely give away the tone of this overall piece, but it’s just my honest reaction. When I saw the first announcements on social media that semi-celebrity Zak Bagans was releasing a new book titled Ghost Hunting for Dummies, I immediately groaned, deciding I’d pass on reviewing it. I’ve amassed quite a collection of “How to Ghost Hunt” type books since the 1990s, and I didn’t see any possibility of Bagans offering anything new—especially given his spotless track record of completely failing to find good evidence of ghosts during his decade-plus on television. At the time, I had no idea how right I’d be about that.

A close friend and colleague, Mellanie Ramsey, mentioned she was going to review the book on a podcast. After a brief conversation, she urged me to read it and participate in the podcast. I reluctantly agreed, placing an Amazon order and receiving my copy of Ghost Hunting for Dummies two days later. The book is over 400 pages and published by John Wiley & Sons, Inc., under the For Dummies brand, which boasts over 2,400 titles (Wiley 2020a). The “Dummies” books are meant to “transform the hard-to-understand into easy-to-use,” according to the company’s website (Wiley 2020b).

My first impression comes from the front cover, which I found to be an overall poor design compared to the Dummies format I was used to seeing: the slanted title, a pronounced and stylized Dummies logo, and either a character with a triangle-shaped head or a photo representing the content of the book. The cover of Ghost Hunting features the title printed straight across with a much smaller and less stylized version of the Dummies logo. The word for is so small that when I showed the book to my wife, she asked “Why did you buy a book called ‘Ghost Hunting Dummies’?” The cover also features a photograph of a basement stairway and door, along with an odd photograph of Bagans with his right hand extended toward the camera, like he’s reaching out to take your money. Overall, it’s just not an attractive cover.

Inside the book, the first thing I noticed was a lack of references; there are no citations or references listed anywhere and no bibliography at the end of the book. For me, this is a red flag; references tell us where the author obtained their information, quotes, study results, and so on. When a book is supposed to be educating you on a specific topic (or in this case, multiple topics), I expect to know the source material from which the information came. However, because this is the first book from the Dummies brand that I’ve purchased, I wasn’t sure if the lack of a bibliography was the standard format. I headed over to my local Barnes & Noble store and flipped through more than forty different Dummies titles, none of which contained references. I also noticed that all of the titles I checked, from Medical Terminology to 3D Printing, were copyrighted by Wiley Publishing/John Wiley & Sons. Ghost Hunting for Dummies is instead copyrighted by Zak Bagans.

There are several indications this book was rushed into publication for the 2019 holiday season. Chief among them are the extensive number of errors: typos, misspellings, repeated words, and missing words are littered throughout the pages. Another indication of premature release comes from the lack of the classic Dummies icons. On page 2, it’s explained that “Throughout the margins of this book are small images, known as icons. These icons mark important tidbits of information” (Bagans 2020). We are presented with four icons: the Tip (a lightbulb), the Remember (hand with string tied around one finger), the Warning (triangle with exclamation point inside), and the “Zak Says” (Zak’s face), which “Highlights my [Zak’s] words of wisdom or personal experiences” (Bagans 2020, 3). Over the 426 pages, there are only thirteen icons to be found throughout: five Tips, four Remembers, three Warnings, and one “Zak Says.” I guess Bagans didn’t have much wisdom to impart upon his readers.

Throughout much of the book, Bagans displays a strong bias against skeptics and scientists, even going as far as to claim to understand scientific concepts better than actual scientists. For example, while relating why he believes human consciousness can exist outside of the body, Bagans mentions Albert Einstein’s well-known quote, “Energy cannot be created or destroyed; it can only be changed from one form to another.” Bagans follows this with, “it’s baffling why this concept is so easy to understand for a paranormal investigator but not for a mainstream scientist” (Bagans 2020, 108). It’s actually mainstream scientists who understand this and Bagans who’s confused. The answer is very simple. Ben Radford addressed this common mistake in his March/April 2012 Skeptical Inquirer column “Do Einstein’s Laws Endorse Ghosts?”:

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

Feb 052020
 

This week we were joined by Erik Kristopher Myers to discuss a short history of a particular sort of Easter egg: the dreaded “hidden subversive element” stuck into a kids’ show or game, either by a perverse animator or a much more sinister coalition bent on corrupting the youth of America. Disney has made a cottage industry of hiding adult content in cartoons–whether real or simply rumored. And the rumors of subversive dangers in D&D both plagued and popularized the once-obscure RPG. From pareidolia to pranks to the people who wring their hands over such dangers, we break down a long list of memorable legends.

You can listen to it HERE. 

Feb 032020
 

This is the second of a two-part piece. 

The recent Clint Eastwood film Richard Jewell holds interesting lessons about skepticism, media literacy, and both the obligations and difficulties of translating real events into fictional entertainment.

Reel vs. Real

The film garnered some offscreen controversy when the Atlanta Journal-Constitution issued a statement complaining about the film, specifically how the paper and its journalism were portrayed. They and other critics complained particularly that the film unfairly maligns reporter Kathy Scruggs, who (in real life) co-wrote the infamous AJC newspaper article that wrongly implicated Jewell in the public’s mind based on unnamed insider information. Scruggs, who isn’t alive to respond, is depicted as sleeping with FBI agent Shaw—with whom she had a previous relationship, at least according to Olivia Wilde, the actress who plays her—in return for information about Jewell.

The AJC letter to Warner Bros. threatened legal action and read in part, “Significantly, there is no claim in Ms. Brenner’s Vanity Fair piece on which the film is based that the AJC’s reporter unethically traded sex for information. It is clear that the film’s depiction of an AJC reporter trading sex for stories is a malicious fabrication contrived to sell movie tickets.” Such a depiction, the newspaper asserts, “makes it appear that the AJC sexually exploited its staff and/or that it facilitated or condoned” such behavior.

The newspaper’s response was widely seen by the public (and by many journalists) as a full-throated assertion that Scruggs’s depiction in the film was baseless, a sexist trope fabricated by Clint Eastwood and screenwriter Billy Ray to bolster the screenplay.

Richard Brody of The New Yorker writes that “It’s implied that she has sex with a source in exchange for a scoop; those who knew the real-life Scruggs deny that she did any such thing. It’s an ignominious allegation, and one that Eastwood has no business making, particularly in a movie about ignominious allegations.”

Becca Andrews, assistant news editor at Mother Jones, had a similar take: “Wilde plays Kathy Scruggs, who was, by all accounts, a hell-on-wheels shoe-leather reporter who does not appear to have any history of, say, sleeping with sources…. Despite Scruggs’ standing as a respected reporter who, to be clear, does not seem to have screwed anyone for a scoop over the course of her career, the fictional version of her in the film follows the shopworn trope.”

It all seems pretty clear-cut and outrageous: the filmmakers recklessly and falsely depicted a female reporter (based on a real person, using her real name) behaving unethically, in a way that had no basis in fact.

A Closer Look

But a closer look reveals a somewhat different situation. It is true, as the AJC letter to Warner Bros. states several times, that the film was based on Brenner’s Vanity Fair article. However, the letter conspicuously fails to mention that the film was not based only on Brenner’s article: there was a second source credited in the film—one which does in fact suggest that Scruggs had (or may have had) sex with her sources.

Screenwriter Ray didn’t make that detail up; one of the sources the film credits, The Suspect, by two respected authors, Kent Alexander and Kevin Salwen, specifically refers to Scruggs’s “reputation” for sleeping with sources (though not necessarily in the Jewell case specifically), according to The New York Times. Ray fictionalized and dramatized that part of the story, in the same way that all the events and characters are fictionalized to some degree. This explains why Scruggs was depicted as she was: that’s what the source material suggested.

The defense—that, well, while it may be true that she was thought by colleagues to have had affairs with some of her sources, it wasn’t necessarily in that specific case—is pretty weak. It’s not as if there was no basis whatsoever for her depiction in the film, with Eastwood and Ray carelessly and maliciously manufacturing a sexist trope out of thin air. Ironically this book—the one that refers to Scruggs’s reputation for sleeping with her sources—was described by the Atlanta Journal-Constitution itself as “exhaustively researched” and “unsparing but not unfair.” It’s not clear why mentioning her reputation for sleeping with sources was “not unfair” when Alexander and Salwen did it in their (nonfiction) book about Richard Jewell, but is “false” and “extraordinarily reckless” when Ray and Eastwood did it in their (fictional) screenplay based in part on that very book.

True Stories in Fiction

The issues surrounding the portrayal of Scruggs in Richard Jewell—just like the portrayal of Jewell himself in the film—are more nuanced and complex than they first appear. Eastwood and Ray were not accused of tarnishing a dead reporter’s image by including a true-but-unseemly aspect of her real life in her fictional depiction. Nor were they accused of failing to confirm information contained in one of their sources. Instead they were accused of completely fabricating that aspect of Scruggs’s life to sensationalize their film—which is demonstrably not true.

More fundamentally, complaints that the film isn’t the “real story” miss the point. It is not—and was never claimed to be—the “real story.” The film is not a documentary; it’s a scripted fictional narrative film (as it says on posters) “based on the true story.” (The full statement that appears in the film reads, “The film is based on actual historical events. Dialogue and certain events and characters contained in the film were created for the purposes of dramatization.”) That is, the film is based on some things that actually happened; that doesn’t mean that everything that really happened is in the film, and it doesn’t mean that everything in the film really happened. It means exactly what it says: the movie is “based on actual historical events.” Complaints about historical inaccuracy are of course very common in movies about real-life people and events.

Similar complaints about historical accuracy were raised about Eastwood’s drama American Sniper as it relates to the true story of the real-life Chris Kyle; these pedantic protests rather miss the point. Much of the “controversy” over whether it’s a 100% historically accurate account of Kyle’s life is a manufactured controversy born of a misunderstanding, a straw man challenge to a strict historical accuracy that no one ever claimed.

In an interview with The New York Times, “Kelly McBride, a onetime police reporter who is the senior vice president of the Poynter Institute, a nonprofit organization that supports journalism, said the portrayal of Ms. Scruggs did not reflect reality” (emphasis added). It’s not clear why McBride or anyone else would believe or assume that a scripted film would “reflect reality.” There is of course no reason why fictional entertainment should necessarily accurately reflect real life–in dialogue, plot, or in any other way. Television and film are escapist entertainment, and the vast majority of characters in scripted shows and films lead far more interesting, dramatic, and glamorous lives than the audiences who watch them. While fictional cops on television shows regularly engage in gunfire and shootouts, in reality over 90% of police officers in the United States never fire their weapons at another person during the course of their career. TV doctors seem to leap from one dramatic, life-saving situation to another, while most real doctors spend their careers diagnosing the flu and filling out paperwork. I wrote about this a few years ago.

Richard Jewell is one of many “based on a true story” films currently out, including Bombshell, Ford v. Ferrari, A Beautiful Day in the Neighborhood, Seberg, Dark Waters, Midway, Honey Boy, Harriet, and others. Every one of these has scenes, dialogue, and events that never really happened, and characters that either never existed or existed but never did some of the specific things they’re depicted as having done on the screen.

It’s understandable for audiences to wonder what parts of the film are historically accurate and which parts aren’t, but making that distinction and parsing out exactly which characters are real and which are made up, and which incidents really happened and precisely when and how, is not the responsibility of the film or the filmmakers. The source material is clearly and fully credited and so anyone can easily see for themselves what the true story is. There are many books (such as Based on a True Story—But With More Car Chases: Fact and Fiction in 100 Favorite Movies, by Jonathan Vankin and John Whalen) and websites devoted specifically to parsing out what’s fact and what’s fiction in movies. There are also a handful of online articles comparing the true story of Richard Jewell with the fictional one.

There’s no deception going on, no effort to “trick” audiences into mistaking the film for a documentary. It is a scripted drama, with events carefully chosen for dramatic effect and dialogue written by a screenwriter and performed by actors. It’s similar in some ways to the complaint that a film adaptation of a book doesn’t follow the same story. That’s because books and films are very different media that have very different storytelling structures and demands. It’s not that one is “right” and the other is “wrong;” they’re different ways of telling roughly the same story.

Similarly, to ask “how accurate” a film is doesn’t make sense. A fictional film is not judged based on how “accurate” it is (whatever that would mean) but instead how well the story is told. Screenwriters taking dramatic license with bits and pieces of something that happened in real life in order to tell an effective story is their job. Writers can add characters, combine several real-life people into a single character, play with the chronology of events, and so on.

Ray certainly could—and arguably should—have changed the name of the character, but since in real life it was Scruggs specifically who broke the news about Jewell, and it was Scruggs specifically who in real life was rumored to have been romantically involved with sources, the decision not to do so is understandable. It’s likely, of course, that complaints would still have arisen even if her name had been changed, since Scruggs’s name is so closely connected to the real story.

The question of fictional representation is a valid and thorny one. Films and screenplays based (however loosely) on real events and people are, by definition, fictionalized and dramatized (this seems obvious, but may be clearer to me because I have attended several screenwriting workshops taught by Hollywood screenwriters). Plots need conflict, and in stories based on things that actually happened, there will be heroes (who really existed in some form) and there will be villains (who also really existed in some form). The villains in any story will, by definition—and rightly or wrongly—typically not be happy with their depiction; villains are heroes in their own story.

The question is instead what obligations a screenwriter has to the real-life people cast in that villain role—keeping in mind of course that interesting fictional characters are a blend of hero and villain, good and bad. Heroes will have flaws and villains will have positive attributes, and may even turn out to be heroes in some cases.

You can argue that if Ray was going to suggest that Scruggs’s character slept with an FBI agent (as The Suspect suggested), he should have confirmed it. But screenwriters, like nonfiction writers, typically don’t fact-check the sources of their sources. In other words, they assume that the information in a seemingly reputable source (such as a Vanity Fair article, or a well-reviewed book by the U.S. Attorney for the Northern District of Georgia and a former Wall Street Journal columnist) is accurate as written. If they report that Scruggs had a reputation for sleeping with sources, or hid in the back of Jewell’s lawyer’s car hoping for an interview, or met with FBI agents in a bar, or any number of other things, then the screenwriter believing that she did so—or may have done so—is neither unreasonable nor malicious.

In the end, the dispute revolves around a minor plot point in a single scene, and the sexual quid pro quo is implied, not explicit. Reasonable people can disagree about whether or not Scruggs was portrayed fairly in the film (and if not, where the blame lies), as well as about the ethical limits of dramatic license in portraying real historical events and figures in fictional films. But the question here is more complex than has been portrayed—about, ironically, a film with themes of rushing to judgment and binary thinking—and should not detract from what is overall a very good film.

Those interested in the real, true story of how Richard Jewell was railroaded, bullied, and misjudged—instead of the obviously fictionalized version portrayed in the film—can consult Marie Brenner’s book Richard Jewell: And Other Tales of Heroes, Scoundrels, and Renegades, based on her 1997 Vanity Fair article, as well as The Suspect, mentioned above.

The Social Threat of Richard Jewell

In addition to the potential harm to Scruggs’s memory, several critics have expressed concern about presumed social consequences of the film, suggesting, for example, that Richard Jewell could potentially change the way Americans think about journalism (and female journalists in particular), as well as undermine public confidence in investigative institutions such as the FBI.

There is of course a long history of fears about the consequences of fictional entertainment on society. I’ve previously written about many examples, such as the concern that the 50 Shades of Grey book and film franchise would lead to harmful real-world effects, and that the horror film Orphan, about a murderous adult posing as a young girl, would literally lead to a decline in international adoptions. Do heavy metal music, role-playing games, and “occult” Halloween costumes lead to Satanism and drug use? Does exposure to pornography lead to increased sexual assault? Does seeing Richard Jewell decrease trust in journalism and the FBI? All these are (or were) plausible claims to many.

The public need not turn to a fictional film—depicting events that happened nearly 25 years ago—to find reasons to be concerned with the conduct of (today’s) Federal Bureau of Investigation. Earlier this month, a story on the front page of The New York Times reported that “The Justice Department’s inspector general… painted a bleak portrait of the F.B.I. as a dysfunctional agency that severely mishandled its surveillance powers in the Russia investigation, but told lawmakers he had no evidence that the mistakes were intentional or undertaken out of political bias rather than ‘gross incompetence and negligence.’”

No one would suggest that fictional entertainment has no effect at all on society, of course—there are clear examples of copycat acts, for example—and the topic of media effects is far beyond the scope here. I’ll just note that the claim that Richard Jewell (or any other film) affects public opinions about its subjects is a testable hypothesis, and could be measured using pre- and post-exposure measures such as questionnaires. This would be an interesting topic to explore, but of course it’s much easier to simply assume that a film has specific effects than to go to the considerable time, trouble, and expense of actually testing it. Who needs all the hassle of creating and implementing a scientific research design (and tackling thorny causation issues) when you can just baldly assert and assume that it does?

There are certainly valid reasons to criticize the film, including its treatment of Scruggs, the FBI, and Jewell himself (who is also not alive to comment or defend himself). Good films provoke conversation, and those conversations should be informed by facts and thoughtful analysis instead of knee-jerk reactions and unsupported assumptions. Richard Jewell is a moving, important, and powerful film about a rush to judge and an otherwise ordinary guy—flawed and imperfect, just like the rest of us—who was demonized by institutional indifference and a slew of well-meaning but self-serving people in power.

 

A longer version of this piece appeared on my CFI blog; you can read it HERE. 

 

Jan 222020
 

A June 6, 2018, article from ChurchandState.org titled “Propaganda Works – 58 Percent of Republicans Believe Education Is Bad” was shared on social media by liberals and Democrats, gleeful that their assumptions about conservative anti-intellectualism had been borne out in objective, quantifiable data from a respected polling organization.

The widely-shared article states that “Fox News, like Republicans and Trump, could never succeed as an ultra-conservative propaganda outlet without an ignorant population. The only way to continue having success at propagandizing is convincing Americans that being educated and informed is detrimental to the nation and its citizens; something Fox and Republicans have been very successful at over the past two years. The idea that education is bad for the country is contrary to the belief of the Declaration of Independence’s author, Founding Father and third American President Thomas Jefferson…. Obviously, the current administration and the Republican Party it represents wholly disagrees with the concept of a well-educated citizenry, or that it is beneficial to America if the populace is educated and informed. Likely because the less educated the people are, the more electoral support Republicans enjoy and the more success Trump has as the ultimate purveyor of ‘fake news.’”

There are a few things to unpack here. The first is that, like UFO coverups, this blanket anti-education effort allegedly spans many administrations and generations. Democrat and Republican alike are allegedly participants in this institutional conspiracy. The effects of Republican education cuts would not be seen in general education levels for years—if at all—and thus any “benefit” to keeping the public ignorant would of course boost the goals of future Democratic presidents and administrations as well. If true, it’s heartening to see this common ground and shared agenda between otherwise deeply polarized political parties.

Americans are in fact better educated than at any other time in history. In July 2018, the U.S. Census Bureau revealed that 90% of Americans twenty-five and older completed at least high school—an all-time high and a remarkable increase from 24% in 1940. Education has risen dramatically in the population as a whole, across genders, races, and economic statuses. In 1940, 3.8% of women and 5.5% of men had completed four years or more of college; by 2018 it was 35.3% for women and 34.6% for men. The United States spends $706 billion on education, according to the U.S. Department of Education (2019), which comes to about $13,850 per public school student. Not only does the government provide free, mandatory grade school education, but it also offers low-interest student loans for those who wish to pursue higher education. All this is puzzling behavior for a government that wants to keep its citizens ignorant, but perhaps someone didn’t get the memo. I discussed—and debunked—this idea in my recent column in Skeptical Inquirer magazine (see “Is America a Sheeple Factory?” in the January/February 2020 issue).

The article states, “Seriously, it is beyond the pale that anyone in this country thinks education is bad for America, but nearly 60 percent of Republicans is a mind boggling number. It is not entirely unexpected, but it is stunning that they have no issue telling a pollster that they believe education is a negative for the country.”

It seems damning indeed… but did 60% of Republicans really tell pollsters that? As always, it’s best to consult original sources, and in this case we can easily look at the Pew poll to find out. The question the 2017 Pew poll asked was not “Do you believe education is bad?,” but instead “Do you believe that _____ (colleges and universities / churches / labor unions / banks / news media) have a positive or negative effect on the country?”

These are, obviously, very different measures; you can’t generalize “colleges and universities” to “education,” or “having a negative effect on the country” to “bad.” If you’re going to report on how people respond to certain questions in a poll or survey, you can’t retroactively change the question—even slightly—and keep the same results. That is both misleading and dishonest. Curiously, the fact that one in five Democrats (and left-leaning independents) believed that higher education has a negative effect on the country was completely ignored. That’s a third the number of Republicans who responded that way, but still a surprisingly high rate for self-identified social progressives (depending, of course, on how they interpreted the question).

I have often written about the perils of bad journalism in reporting poll and survey results, having researched and written about misleading polls and news articles on many topics, including hatred of transgender people (see, for example, my article “Do 60% Of People Misgender Trans People To Insult Them?”) and Holocaust denial (see my article “Holocaust Denial Headlines: Hatred, Ignorance, Or Innumeracy?”). I also recently debunked an article by Global News reporter Josh K. Elliott falsely claiming that “Nearly 50% of Canadians Think Racist Thoughts Are Normal.”

A closer look at this poll finds that the most likely reason for this response is not that Republicans think education is inherently bad (as the headline suggests) but instead that they view colleges and universities (which is what the question actually asked about) as bastions of liberal politics. (The original piece eventually and briefly acknowledges that—contrary to its clickbait headline—Republicans don’t actually think education is inherently bad.)

One recent example is Tennessee state senator Kerry Roberts, who on a conservative radio talk show stated that higher education should be eliminated because it would cut off the “breeding ground” of ideas, a proposal that he said would “save America.” When asked about his comments, Roberts later stated that “his listeners understood it was hyperbole” and that he was “clearly joking” in his comments that eliminating higher education would save the country from ruin. Sen. Roberts has neither proposed nor supported any legislation that would in fact eliminate higher education. He said, however, that he absolutely stood by his criticism of American liberal arts education “one hundred percent.” So it’s pretty clear both that a) he was indeed joking about wanting to abolish higher education in America (and that his words were mischaracterized for political gain, a routine occurrence), and b) that he nevertheless does believe that colleges and universities spawn liberal beliefs anathema to his own.

There is a very real and demonstrable anti-education and anti-science streak among many conservatives, on topics ranging from creationism to medical research (see, for example, Chris Mooney’s book The Republican War on Science). But this headline is not clear evidence of that.

While it may be fun for liberals and progressives to paint conservatives as inherently opposed to education, doing so creates an intellectually dishonest straw man fallacy. It is also ultimately counterproductive, obscuring the real forces behind those beliefs. If you approach the subject wrongly believing that many Republicans fundamentally disapprove of the idea of education, then you will badly misunderstand the problem. You can’t hope to meaningfully address a problem if you’re operating on flawed assumptions.

The marketplace of ideas is served when data and polls are presented honestly, and opposing views are portrayed fairly. Skepticism and media literacy are more important than ever, so check your sources—especially when you agree with the message—to avoid sharing misinformation.

 

A version of this article first appeared on my Center for Inquiry blog; you can see it HERE. 

Jan 182020
 

The new episode of Squaring the Strange is out: it’s “The Head Show!” It’s all about heads–too many and too few. Folklore about multi-headed or headless monsters, multi-headed people throughout history, beheadings, experiments on heads, and shrunken heads. Oh, and the frozen head of Walt Disney, of course. Heads up!

 

You can find it HERE!

 

Jan 122020
 

Human rights advocate Dr. Leo Igwe joins us to discuss the dangers posed by so-called “witch hunters” in his home nation of Nigeria and other parts of Africa today. He discusses the entrenched nature of magical beliefs in the region, as well as the complicated power structure that props up those who call out fellow citizens as witches. Religions brought from Europe now play into the mix, with Islam and Christianity working alongside traditional beliefs; witch hunters are often pastors or church leaders, solidifying their power further. Victims are often powerless–the elderly, disabled, or children–and once accused they must run for their lives, abandoned by family and often the state authorities as well. Dr. Igwe talks about the challenges of getting the message across to international agencies and the UN, whose members are sometimes hesitant to speak out against these atrocities for fear of seeming racist or Islamophobic, a trend Igwe decries as stifling critical debate and much-needed open dialogue.  

 

Please check out this important topic; you can listen to it HERE. 

Jan 012020
 

I have a new book out! Or at least some contributions in a new book: Imagining the End: The Apocalypse in American Pop Culture. I wrote several sections including on the Antichrist, the Mark of the Beast, the Rapture, Latter-Day Saints Prophecy, and more. 

 

You can see more about it HERE.

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

 

 

 

Dec 282019
 

In a previous blog I discussed my research into an ugly episode of racial hatred that tainted the 2016 holiday season. The Mall of America hired its first African-American Santa Claus, an Army veteran named Larry Jefferson. A local newspaper, the Minneapolis Star Tribune, carried a story about it on Dec. 1. Later that night an editorial page editor for the Tribune, Scott Gillespie, tweeted: “Looks like we had to turn comments off on story about Mall of America’s first black Santa. Merry Christmas everyone!” Overnight and the next morning his tweet went viral and served as the basis for countless news stories with headlines such as “Paper Forced to Close Comments On Mall Of America’s First Black Santa Thanks to Racism” (Jezebel) and “Racists Freak Out Over Black Santa At Mall Of America” (Huffington Post).

George Takei responded the next day via Twitter: “Watching people meltdown over a black Santa in the Mall of America. ‘Santa is white!’ Well, in our internment camp he was Asian. So there.” It was also mocked by Trevor Noah on Comedy Central, and elsewhere.

Yet every major news outlet missed the real story. They failed to check facts. My research (including an interview with Gillespie) eventually revealed that the racial incident never actually occurred, and that–despite public opinion and nearly two million news articles to the contrary–the Star Tribune did not receive a single hate-filled message in the comments section of its story on Jefferson. What happened was the product of a series of misunderstandings and a lack of fact-checking, fueled in part by confirmation bias and amplified by the digital age (for a detailed look at the case see my CFI blog “The True, Heartwarming Story of the Mall of America’s Black Santa.”)

I’ve been writing about journalism errors and media literacy for two decades (including in my book Media Mythmakers: How Journalists, Activists, and Advertisers Mislead Us), and usually there’s relatively little pushback (except, perhaps, from journalists reluctant to acknowledge errors). However, a curious part of this story was the criticism I received on social media for even researching it. Perhaps the best example was when I responded to a post about the initial story on a fellow skeptic’s Facebook page. She and all of her friends on the thread took the erroneous news story at face value (which didn’t surprise me, as virtually everyone did), but what did surprise me was the suggestion that trying to uncover the truth was unseemly or even “a distraction tactic.”

One person wrote, “I actually can’t believe that a self proclaimed skeptic is even having this argument in a country that just elected Donald Trump. It’s not skepticism when it disregards the proven fact that a great deal of the country, enough to elect a president, are straight up racist.” Of course I never questioned whether many or most Americans were racist. My question was very specific, clear, and about the factual basis for this one specific incident. Neither Trump’s election nor the existence of racism in America is relevant to whether or not the Tribune had to shut down its comments section in response to a deluge of hatred against a black Santa.

The ‘Distraction’ Tactic

One person wrote that my asking how many people objected to the black Santa was “a distraction tactic–now we can talk about how most people are not racist and change the subject from racism.” I was stunned. I had no idea that asking whether anyone knew how many people complained would or could be construed as somehow trying to distract people (from what to what?). I replied, “Trying to quantify and understand an issue is not a ‘distraction tactic.’ I have no interest in distracting anyone from anything.” No one–and certainly not me–was suggesting that a certain number of racists upset over a black Santa was okay or acceptable. I never suggested or implied that if it was “only” ten or twenty or a hundred, that everyone should be fine with it.

But knowing the scope of the issue does help us understand the problem: Is it really irrelevant whether there were zero, ten, or ten thousand racist commenters? If Trump can be widely (and rightly) criticized for exaggerating the crowd at his inauguration as “the largest audience to ever witness an inauguration–period” when in fact it was far smaller, why is asking how many people complained about a mall Santa so beyond the pale?

Usually when I have encountered the claim that investigating something is a “distraction,” the claim itself was the distraction tactic, an attempt to head off inquiry that might debunk a claim or show that some assumption or conclusion was made in error–not unlike the Wizard of Oz pleading for Dorothy and her gang not to look behind the curtain. (“Why are you asking questions about where I suddenly got this important UFO-related document?” or “Asking for evidence of my faith healer’s miracle healings is just a distraction from his holy mission” doesn’t deter any journalist or skeptic worth his or her salt.) If a claim is valid and factual, there’s no reason why anyone would object to confirming that; as Thomas Paine noted, “It is error only, and not truth, that shrinks from inquiry.”

I tried to remember where else I’d heard the phrase used, where someone who was asked about something dismissed the questions as a “distraction.” Finally I realized where that tactic had become common: in the Trump administration. When Donald Trump was asked about a leaked Access Hollywood recording of him bragging about groping women sexually, he dismissed the questions–and indeed the entire issue–as “nothing more than a distraction from the important issues we’re facing today.”

Similarly, when Vice-President Pence was asked in January 2017 about whether the Trump campaign had any contacts with Russia during the campaign, he replied, “This is all a distraction, and it’s all part of a narrative to delegitimize the election.” Others in the Trump administration (including White House spokespeople) have repeatedly waved off journalists’ questions as distractions as well.

This is not particularly surprising, but it was odd to see some of my most virulent anti-Trump friends (and Facebook Friends) using and embracing exactly the same tactics Trump does to discourage questions.

There is one important difference: In my judgment Trump and his surrogates use the tactic cynically (knowing full well that the issues and questions being asked are legitimate), while those who criticized me were using the tactic sincerely; being charitable, I have no reason to think that they realized that the black Santa story and reportage had been widely (if not universally) misunderstood. But the intention and effect were the same: an attempt to discourage someone from looking beyond the surface to see what’s really going on, and from trying to separate fact from fiction.

Importance of Due Diligence

A recent news story highlights the value and importance of bringing at least some skepticism to claims. A woman approached reporters at The Washington Post with a potentially explosive story: that embattled Republican Senate candidate Roy Moore had impregnated her as a teenager and forced her to have an abortion. This would of course be a potentially devastating revelation for the conservative Moore, already under fire for dating (and allegedly sexually assaulting) teenagers.

According to the Post, “In a series of interviews over two weeks, the woman [Jaime T. Phillips] shared a dramatic story about an alleged sexual relationship with Moore in 1992 that led to an abortion when she was 15. During the interviews, she repeatedly pressed Post reporters to give their opinions on the effects that her claims could have on Moore’s candidacy if she went public. The Post did not publish an article based on her unsubstantiated account. When Post reporters confronted her with inconsistencies in her story and an Internet posting that raised doubts about her motivations, she insisted that she was not working with any organization that targets journalists. Monday morning, Post reporters saw her walking into the New York offices of Project Veritas, an organization that targets the mainstream news media and left-leaning groups. The organization sets up undercover ‘stings’ that involve using false cover stories and covert video recordings meant to expose what the group says is media bias.”

The Post reporter, Beth Reinhard, “explained to Phillips that her claims would have to be fact-checked. Additionally, Reinhard asked her for documents that would corroborate or support her story.” Reinhard and the Washington Post asked for evidence to establish the truth of Phillips’s account not because they doubted that sexual assaults occur, or that Phillips may indeed have been sexually assaulted by Moore–in fact quite the opposite, since the Post was the first to break the story and publish accusations by Moore’s accusers–but because they were doing their due diligence as journalists. Investigative journalists and skeptics don’t question claims and ask for evidence because they necessarily doubt what they’re being told; they do it because they want to be sure they understand the facts.

Had The Washington Post not questioned the story–or been deterred by accusations that trying to establish the truth of Phillips’s claims was some sort of “distraction” tactic–the paper’s credibility would have been damaged when Phillips’s false accusation was inevitably revealed, and the Post’s failure to do basic research would have been used to cast doubt on the previous women’s accusations against Moore. Martin Baron, the Post‘s executive editor, said that the false accusations were “the essence of a scheme to deceive and embarrass us. The intent by Project Veritas clearly was to publicize the conversation if we fell for the trap. Because of our customary journalistic rigor, we weren’t fooled.”

What Happened?

There are several critical thinking and media literacy failures here. Perhaps the most basic is where the burden of proof lies: with the person making the claim. In fact I wasn’t making a claim at all; I was merely asking for evidence of a widely-reported claim. I honestly had no idea how many or how few Tribune readers had complained about Jefferson, and I wouldn’t have even thought to question it if Gillespie hadn’t issued a tweet that contradicted the thesis of the then-viral news story.

The black Santa outrage story is full of assumptions, mostly about the bad intentions of other people. To the best of my knowledge I’m the only person who dug deeper into the story to uncover what really happened–and for that I was told that I was causing a “distraction,” with hints that I had some unspecified unseemly motive.

It’s also important to understand why a person’s questions are being challenged in the first place. It’s often due to tribalism and a lack of charity. CSICOP cofounder Ray Hyman, in his influential short piece titled “Proper Criticism,” discusses eight principles, including the principle of charity: “The principle of charity implies that, whenever there is doubt or ambiguity about a paranormal claim, we should try to resolve the ambiguity in favor of the claimant until we acquire strong reasons for not doing so. In this respect, we should carefully distinguish between being wrong and being dishonest. We often can challenge the accuracy or validity of a given paranormal claim. But rarely are we in a position to know if the claimant is deliberately lying or is self-deceived. Furthermore, we often have a choice in how to interpret or represent an opponent’s arguments. The principle tells us to convey the opponent’s position in a fair, objective, and non-emotional manner.”

To scientists, journalists, and skeptics, asking for evidence is an integral part of the process of parsing fact from fiction, true claims from false ones. If you want me to believe a claim–any claim, from advertising claims to psychic powers, conspiracy theories to the validity of repressed memories–I’m going to ask for evidence. It doesn’t mean I think (or assume) you’re wrong or lying, it just means I want a reason to believe what you tell me. This is especially true for memes and factoids shared on social media and designed to elicit outrage or scorn.

But to most people who don’t have a background in critical thinking, journalism, skepticism, or media literacy, asking for evidence is akin to a challenge to their honesty. Theirs is a world in which personal experience and anecdote are self-evidently more reliable than facts and evidence. And it’s also a world in which much of the time when claims are questioned, it’s in the context of confrontation. To a person invested in the truth of a given narrative, any information that seems to confirm that idea is much more easily seen and remembered than information contradicting the idea; that’s the principle of confirmation bias. Similarly, when a person shares information on social media it’s often because they endorse the larger message or narrative, and they get upset if that narrative is questioned or challenged. From a psychological point of view, this heuristic is often accurate: Much or most of the time when a person’s statement or claim is challenged (in informal settings or social media for example), the person asking the question does indeed have a vested interest.

The problem is that when such a person does encounter someone who is sincerely trying to understand an issue or get to the bottom of a question, their knee-jerk reaction is often to assume the worst about them. They are blinded by their own biases and they project those biases onto others. This is especially true when the subject is controversial, such as with race, gender, or politics. To them, the only reason a person would question a claim is if they are trying to discredit that claim, or a larger narrative it’s being offered in support of.

Of course that’s not true; people should question all claims, and especially claims that conform to their pre-existing beliefs and assumptions; those are precisely the ones most likely to slip under the critical thinking radar and become incorporated into your beliefs and opinions. I question claims from across the spectrum, including those from sources I agree with. To my mind the other approach has it backwards: How do you know whether to believe a claim if you don’t question it?

My efforts to research and understand this story were born not of any doubt that racism exists, nor that Jefferson was subjected to it, but of a background in media literacy and a desire to reconcile two contradictory accounts of what happened. Outrage-provoking stories on social media–especially viral ones based on a single, unconfirmed informal tweet–should concern all of us in this age of misinformation and “fake news.”

The real tragedy in this case is what was done to Larry Jefferson, whose role as the Mall of America’s first black Santa has been tainted by this social media-created controversy. Instead of being remembered for bringing hope, love, and peace to girls and boys, he will forever be known for enduring a (fictional) deluge of bilious racist hatred.

The fact that Jefferson was bombarded by love and support from the general public (and most whites) should offer hope and comfort this holiday season. A few anonymous cranks, trolls, and racists complained on social media posts from the safety of their keyboards, but there was very little backlash–and certainly nothing resembling what the sensational headlines originally suggested.

The true story of Jefferson’s stint as Santa is diametrically the opposite of what most people believe: He was greeted warmly and embraced by people of all colors and faiths as the Mall of America’s first black Santa. I understand that “Black Santa Warmly Welcomed by Virtually Everyone” isn’t a headline that any news organization is going to see as newsworthy or eagerly promote, nor would it go viral. But it’s the truth–and the truth matters.

This piece appeared in a slightly different form in my Center for Inquiry blog. 

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

Dec 242019
 

Amid the encroaching commercialization of Christmas, Black Friday sales, and annual social media grumblings about the manufactured controversy over whether “Merry Christmas” or “Happy Holidays” is appropriate, an ugly episode of racial hatred tainted the beginning of the 2016 holiday season.

[Image: tweet about the black Santa]

It began when the Mall of America hired a jolly bearded man named Larry Jefferson as one of its Santas. Jefferson, a retired Army veteran, is black–a fact that most kids and their parents neither noticed nor cared about. The crucial issue for kids was whether a Playstation might be on its way or some Plants vs. Zombies merchandise was in the cards given the particular child’s status on Santa’s naughty-or-nice list. The important thing for parents was whether their kids were delighted by the Santa, and all evidence suggests that the answer was an enthusiastic Yes. “What [the children] see most of the time is this red suit and candy,” Jefferson said in an interview. “[Santa represents] a good spirit. I’m just a messenger to bring hope, love, and peace to girls and boys.”

The fact that Santa could be African-American seemed self-evident (and either an encouraging sign or a non-issue) for all who encountered him. Few if any people at the Mall of America made any negative or racist comments. It was, after all, a self-selected group; any parents who might harbor reservations about Jefferson simply wouldn’t wait in line with their kids to see him–they would go somewhere else or wait for another Santa. Like anything that involves personal choice, people who don’t like something (a news outlet, brand of coffee, or anything else) will simply go somewhere else–not erupt in protest that it’s available to those who want it.

However, a black Santa was a first for that particular mall, and understandably made the news. On December 1 the local newspaper, the Minneapolis Star Tribune, carried a story by Liz Sawyer titled “Mall of America Welcomes Its First Black Santa.”

Scott Gillespie, the editorial page editor for the Tribune, tweeted later that night (at 9:47 PM): “Looks like we had to turn comments off on story about Mall of America’s first black Santa. Merry Christmas everyone!” The tweet’s meaning seemed both clear and disappointing: On a story that the Star Tribune posted about an African-American Santa, the racial hostility got so pervasive in the comments section that they had to put an end to it, out of respect for Jefferson and/or Star Tribune readers. He ended with a sad and sarcastic, “Merry Christmas” and sent the tweet into cyberspace.

Overnight and the next morning his tweet went viral and served as the basis for countless news stories with titles such as “Paper Forced to Close Comments On Mall Of America’s First Black Santa Thanks to Racism” (Jezebel); “‘Santa is WHITE. BOYCOTT Mall of America’: Online Racists Are Having a Meltdown over Mall’s Black Santa” (RawStory); “Racists Freak Out Over Black Santa At Mall Of America” (Huffington Post); “Mall of America Hires Its First Black Santa, Racists of the Internet Lose It” (Mic.com), and so on. If you spend any time on social media you get the idea. It was just another confirmation of America’s abysmal race relations.

There’s only one problem: It didn’t happen.

At 1:25 PM the following day Gillespie, after seeing the stories about the scope and nature of the racist backlash the Tribune faced, reversed himself in a follow-up tweet. Instead of “we had to turn off comments,” Gillespie stated that the commenting was never opened for that article in the first place: “Comments were not allowed based on past practice w/stories w/racial elements. Great comments on FB & Instagram, though.”

This raised some questions for me: If the comments had never been opened on the story, then how could there have been a flood of racist comments? Where did that information come from? How many racist comments did the paper actually get? Fewer than a dozen? Hundreds? Thousands? Something didn’t add up about the story, and as a media literacy educator and journalist I felt it was important to understand the genesis of this story.

It can serve as an object lesson and help the public understand the role of confirmation bias, unwarranted assumptions, and failure to apply skepticism. In this era of attacks on “fake news” it’s important to distinguish intentional misinformation from what might be simply a series of mistakes and assumptions.

While I have no doubt that the Tribune story on Jefferson would likely have been the target of some racist comments at some point, the fact remains that the main point of Gillespie’s tweet was false: the Tribune had not in fact been forced to shut down the comments on its piece about the Mall of America’s black Santa because of a deluge of racist comments. That false information was the centerpiece of the subsequent stories about the incident.

The idea that some might be upset about the topic is plausible; after all, the question of a black Santa had come up a few times in the news and social media (perhaps most notably Fox News host Megyn Kelly’s infamous incredulity at the notion three years earlier–which she later described as an offhand jest). Racist, sexist, and otherwise obnoxious comments are common in the comments sections of many articles online on any number of subjects, and are not generally newsworthy. There were of course some racists and trolls commenting on the secondary stories about the Star Tribune’s shutting down its comment section due to racist outrage (RawStory collected about a dozen drawn from social media), but the fact remains that the incident at the center of the controversy that spawned outrage across social media simply did not happen.

A few journalists added clarifications and corrections to the story after reading Gillespie’s second tweet or being contacted by him. The Huffington Post, for example, added at the bottom of its story: “CLARIFICATION: This story has been updated to reflect that the Minneapolis Star Tribune‘s comment section was turned off when the story was published, not in response to negative comments.” But most journalists didn’t, and as of this writing nearly two million news articles still give a misleading take on the incident.

The secondary news reports could not, of course, quote from the original non-existent rage-filled comments section in the Star Tribune, so they began quoting from their own comments sections and those of other news media. This became a self-fulfilling prophecy, wherein the worst comments from hundreds of blogs and websites were then selected and quoted, generating another round of comments. Many people saw racist comments about the story and assumed that they had been taken from the Star Tribune page at the center of the story, and couldn’t be sure if they were responding to the original outrage or the secondary outrage generated by the first outrage. As with those drawn to see and celebrate Jefferson as the mall’s first black Santa, this was also a self-selected group of people–namely those who were attracted to a racially charged headline and had some emotional stake in the controversy, enough to read about it and comment on it.

Unpacking the Reporting

I contacted Gillespie and he kindly clarified what happened and how his tweet inadvertently caused some of the world’s most prominent news organizations to report on an ugly racial incident that never occurred.

Gillespie–whose beat is the opinion and editorial page–was at home on the evening of December 1 and decided to peruse his newspaper’s website. He saw the story about Larry Jefferson and clicked on it to see if the black Santa story was getting any comments. He noticed that there were no comments at all and assumed that the Star Tribune‘s web moderators had shut them off due to inflammatory posts, as had happened occasionally on previous stories.

Understandably irritated and dismayed, he tweeted about it and went to bed, thinking no more of it. The next day he went into work and a colleague noticed that his tweet had been widely shared (his most shared post on social media ever) and asked him about it. Gillespie then spoke with the newspaper’s web moderators, who informed him that the comments had never been turned on for that particular post–a standing practice at the newspaper for articles on potentially sensitive subjects such as race and politics, but one also applied to any other topic that a moderator thinks, for whatever reason, might generate counterproductive comments.

“I didn’t know why the comments were off,” he told me. “In this case I assumed we followed past practices” about removing inflammatory comments. It was a not-unreasonable assumption that in this case just happened to be wrong. Gillespie noted during our conversation that a then-breaking Star Tribune story about the death of a 2-year-old girl at a St. Paul foster home also had its commenting section disabled–presumably not in anticipation of a deluge of racist or hateful comments.

“People thought–and I can see why, since I have the title of editorial page editor–that I must know what I’m talking about [in terms of web moderation],” Gillespie said. He was commenting on a topic about his newspaper but outside his purview, and to many his tweet was interpreted as an official statement and explanation of why comments did not appear on the black Santa story.

When Gillespie realized that many (at that time dozens and, ultimately, millions) of news stories were (wrongly) reporting that the Star Tribune‘s comments section had been shut down in response to racist comments based solely on his (admittedly premature and poorly phrased) Dec. 1 tweet, he tried to get in touch with some of the journalists to correct the record (hence the Huffington Post clarification), but by that time the story had gone viral and the ship of fools had sailed. The best he could do was issue a second tweet trying to clarify the situation, which he did.

“I can see why people would jump to the conclusion they did,” he told me. Gillespie is apologetic and accepts responsibility for his role in creating the black Santa outrage story, and it seems clear that his tweet was not intended as an attempt at race-baiting for clicks.

In the spirit of Christmas maybe one lesson to take from this case is charity. Instead of assuming the worst about someone or their intentions, give them the benefit of the doubt. Assuming the worst about other people runs all through this story. Gillespie assumed that racists had deluged his newspaper with hate, and so did the public. The web moderator(s) at the Star Tribune who chose not to open the comments on the Santa story may (or may not) have assumed that they were pre-empting a deluge of racism (which may or may not have in fact followed). I myself was assumed to have unsavory and ulterior motives for even asking journalistic questions about this incident (a topic I’ll cover next week).

In the end there are no villains here (except for the relative handful of racists and trolls who predictably commented on the secondary stories). What happened was the product of a series of understandable misunderstandings and mistakes, fueled in part by confirmation bias and amplified by the digital age.

The Good News

Gillespie and I agreed that this is, when fact and fiction are separated, a good news story. As noted, Gillespie initially assumed that the newspaper’s moderators had been inundated with hostile and racist comments, and finally turned the comments off after having to wade through the flood of hateful garbage comments to find and approve the positive ones. He need not have feared, because exactly the opposite occurred: Gillespie said that the Star Tribune was instead flooded with positive comments applauding Jefferson as the Mall of America’s first black Santa (he referenced this in his Dec. 2 tweet). The tiny minority of nasty comments were drowned out by holiday cheer and goodwill toward men–of any color. He echoed Jefferson, who in a December 9 NPR interview said that the racist comments he heard were “only a small percentage” of the reaction, and he was overwhelmed by support from the community.

The fact that Jefferson was bombarded by love and support from the general public (and most whites) should offer hope and comfort. Gillespie said that he had expected people to attack and criticize the Mall of America for succumbing to political correctness, but the imagined hordes of white nationalists never appeared. A few anonymous cranks and racists complained on social media posts from the safety of their keyboards, but there was very little backlash–and certainly nothing resembling what the sensational headlines originally suggested.

The real tragedy is what was done to Larry Jefferson, whose role as the Mall of America’s first black Santa has been tainted by this social media-created controversy. Instead of being remembered for, as he said, bringing “hope, love, and peace to girls and boys,” he will forever be known for enduring a (fictional) deluge of bilious racist hatred. The true story of Jefferson’s stint as Santa is diametrically the opposite of what most people believe: He was greeted warmly and embraced by people of all colors and faiths as the Mall of America’s first black Santa.

Some may try to justify their coverage of the story by saying that even though in this particular case Jefferson was not in fact inundated with racist hate, it still symbolizes a very real problem and was therefore worthy of reporting if it raised awareness of the issue. The Trump administration adopted this tactic earlier this week when the President promoted discredited anti-Muslim videos via social media; his spokeswoman Sarah Huckabee Sanders acknowledged that at least some of the hateful videos Trump shared were bogus (and did not happen as portrayed and described), but insisted that their truth or falsity was irrelevant because they supported a “larger truth”–that Islam is a threat to the country’s security: “I’m not talking about the nature of the video,” she told reporters. “I think you’re focusing on the wrong thing. The threat is real, and that’s what the President is talking about.”

This disregard for truth has been a prominent theme in the Trump administration. Yes, some tiny minority of Muslims are terrorists; no one denies that, but that does not legitimize the sharing of bogus information as examples supposedly illustrating the problem. Similarly, yes, some tiny minority of Americans took exception to Jefferson as a black Santa, but that does not legitimize sharing false information about how a newspaper had to shut down its comments because of racist rage. There are enough real-life examples of hatred and intolerance that we need not invent new ones.

In this Grinchian and cynical ends-justifies-the-means worldview, there is no such thing as good news and the import of every event is determined by how it can be used to promote a given narrative or social agenda–truth be damned.

I understand that “Black Santa Warmly Welcomed by Virtually Everyone” isn’t a headline that any news organization is going to see as newsworthy or eagerly promote, nor would it go viral. But it’s the truth.

Merry Christmas.

 

This piece originally appeared on my Center for Inquiry blog in 2017; you can see it HERE! 

 

 

Dec 122019
 

Celestia and I are especially pleased with a recent episode of Squaring the Strange, in which we spoke to Leo Igwe, the tireless skeptic, humanist, and human rights advocate in Nigeria. His work on behalf of people persecuted as witches in sub-Saharan Africa is both daunting and vitally important. Skepticism and critical thinking can sometimes mean the difference between life and death.

Human rights advocate Dr. Leo Igwe joins us to discuss the dangers posed by so-called “witch hunters” in his home nation of Nigeria and other parts of Africa today. He discusses the entrenched nature of magical beliefs in the region, as well as the complicated power structure that props up those who call out fellow citizens as witches. Religions brought from Europe now play into the mix, with Islam and Christianity working alongside traditional beliefs; witch hunters are often pastors or church leaders, solidifying their power further. Victims are often powerless–the elderly, disabled, or children–and once accused they must run for their lives, abandoned by family and often the state authorities as well. Dr. Igwe talks about the challenges of getting the message across to international agencies and the UN, whose members are sometimes hesitant to speak out against these atrocities for fear of seeming racist or Islamophobic, a trend Igwe decries as stifling critical debate and much-needed open dialogue.  

Please check it out; you can listen HERE. 

Dec 072019
 

The issue of racism in Canada was recently brought into sharp focus when, shortly before the Canadian election, photos and videos of Prime Minister Justin Trudeau in blackface and brownface emerged. They had been taken on at least three occasions in the 1990s and early 2000s. Trudeau—widely praised for his socially progressive agendas—quickly apologized and promised to do better. 

Trudeau’s repeated use of blackface (and his subsequent re-election despite public knowledge of it) angered many and left Canadians wondering just how common racism is in their country. Veteran hockey commentator Don Cherry was recently fired by Sportsnet following contentious comments about immigrants. The broadcaster issued a statement that “Following further discussions with Don Cherry after Saturday night’s broadcast, it has been decided that it is the right time for him to immediately step down. During the broadcast, he made divisive remarks that do not represent our values or what we stand for.” 

Americans—and the Trump administration specifically—are often characterized as inherently racist; New York Times writer Brent Staples, for example, wrote on Twitter (on January 12, 2018) that “Racism and xenophobia are as American as apple pie.” Whether racism and xenophobia are as Canadian as poutine is of course another question. Earlier this year, on May 21, 2019, Canadian news organization Global News reported on a survey that seemed to shed light on that question. The article was titled “Nearly 50% of Canadians Think Racist Thoughts Are Normal: Ipsos poll.” 

The article began, “Almost half of Canadians will admit to having racist thoughts, and more feel comfortable expressing them today than in years past, a new Ipsos poll reveals … The poll, conducted exclusively for Global News, found that 47 per cent of respondents thought racism was a serious problem in the country, down from 69 per cent in 1992. More than three-quarters of respondents said they were not racist, but many acknowledged having racist thoughts they did not share with others. (All of the Ipsos poll data is available online.) ‘We found that (almost) 50 per cent of Canadians believe it’s OK and actually normal to have racist thoughts,’ said Sean Simpson, vice-president of Ipsos Public Affairs.” 

Having researched and written about misleading polls and news articles on many topics, including hatred of transgendered people (see, for example, my article  “Do 60% Of People Misgender Trans People To Insult Them?”); Holocaust denial (see, for example, my article “Holocaust Denial Headlines: Hatred, Ignorance, Or Innumeracy?”); and even whether or not the public believes that Native Americans exist, something about that headline struck me as off. I didn’t necessarily doubt the statistic—racism is a serious problem in Canada, America, and elsewhere—but my journalistic skeptical sense urged a closer look. The poll was conducted between April 8 and 10, 2019, sampling 1,002 Canadian adults and had a margin of error of ±3.5 percent. 
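
A brief aside on that margin of error: for a simple random sample of 1,002 people, the standard textbook calculation gives roughly plus or minus 3.1 points at 95 percent confidence. Ipsos typically reports a “credibility interval” for its online panels, which is computed somewhat differently, and that is likely why its published figure of plus or minus 3.5 is a bit wider than this naive estimate. Here is a minimal sketch of the conventional formula, in Python, for readers who like to sanity-check such numbers themselves:

    import math

    # Conventional margin-of-error estimate for a simple random sample.
    # Assumes p = 0.5 (the worst case) and a 95 percent confidence level;
    # online panels are not simple random samples, so treat this only as
    # a rough sanity check, not as Ipsos's actual method.
    n = 1002   # respondents in the Ipsos poll
    p = 0.5    # proportion that maximizes the margin of error
    z = 1.96   # z-score for 95 percent confidence

    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"Approximate margin of error: +/- {100 * moe:.1f} points")  # about 3.1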

I clicked through the link to the original poll by the Ipsos organization. Their About Us page explains that “In our world of rapid change, the need for reliable information to make confident decisions has never been greater. At Ipsos we believe our clients need more than a data supplier, they need a partner who can produce accurate and relevant information and turn it into actionable truth.” 

The Ipsos page referencing the poll displayed a large headline “Nearly half (47%) of Canadians think racism is a serious problem in Canada today, down 22 points since 1992 (69%).” Just below this, in much smaller size, was the line “Even so, almost half (49%) admit to having racist thoughts.” 

That seemed to provide a clue, as of course 49 percent may be the “nearly half” referred to in the Global News headline, but I noticed that the wording had changed: The headline stated that about half of “Canadians think racist thoughts are normal”—not that half of Canadians say they have racist thoughts. Just because you acknowledge having a racist thought does not logically mean that you think it’s “normal” or acceptable to do so; plenty of surveys and polls ask about socially and morally unacceptable behavior, ranging from infidelity to murder (a 2018 survey in Japan found that more than one in four Japanese workers admitted that the thought of killing their boss had crossed their mind on at least one occasion). 

But I know that sometimes headlines are misleading, and I assumed that the statistic was contained in the poll. Many people of course don’t read past the headline; of those who do read the full article, very few will bother to click on the link to the polling organization’s data page; even fewer will actually open the original report; of those who do, most will read only the executive summary or highlights section. Vanishingly few people—if anyone—will read the full report. 

This is understandable, as audiences naturally assume that a journalist, news organization, or pollster is accurately reporting the results of a poll or survey. If a news headline says that 40 percent of hockey fans drink beer during games or 85 percent of airplane pilots have college degrees, we assume that’s what the survey or research found. As I discuss in my media literacy book Media Mythmakers: How Journalists, Activists, and Advertisers Mislead Us, that’s not always the case. 

Like a game of Telephone, each step away from the original findings may change (usually toward simplifying and/or sensationalizing) that information. Whether intentionally or accidentally, errors can creep in every time the data are explained, summarized, or “clarified.” Usually these changes are minor and go unnoticed, because of course a person would have to check the original report to catch any discrepancies. But now and then another journalist, pedant, or researcher will take the time to check and see that something’s amiss.

Because the poll is available online, I read through it. There were many questions about many facets of racism among the Canadian respondents, but I found no reference whatsoever to the statistic mentioned in the headline. I checked again and still found nothing. 

I reached out to the author of the piece, Global News Senior National Online Journalist Josh K. Elliott, and the author of the report, Sean Simpson, the Ipsos vice-president of public affairs, asking for clarifications, including which specific question item was referred to in the article. 

I wrote, in part:

I read through the original Ipsos report but was unable to find the poll results you referenced in the headline, and that Sean Simpson references in your quote. I did a document search for the specific term used, “normal,” assuming that it would appear in the survey question. I found three matches, on pages 3, 19, and 20, but in none of the cases was I able to find results suggesting that “nearly 50% of Canadians think racist thoughts are normal.”

I have been unable to find that data anywhere in the Ipsos report. The closest I could find was the statistic that half of Canadians say they sometimes have racist thoughts (Question 7). But of course just because you acknowledge having racist thoughts does not logically mean that you think it’s “normal” or acceptable to do so; plenty of surveys and polls ask about socially and morally unacceptable behavior, ranging from infidelity to murder. Question wording is of course critically important in interpreting polls and surveys, and I’m concerned that “having racist thought” was mistakenly mistranslated to “think it’s normal to have racist thought” in your piece. If that statistic appears in the Ipsos report cited, please direct me to it, either by question or page number. If that statistic does not appear in the report, please clarify where it came from. Thank you.  

After repeated inquiries, I was informed that Mr. Elliott no longer worked at that desk, but I got a response from Drew Hasselback, a copy editor at Global News (and, eventually, a cursory and seemingly reluctant reply from Mr. Simpson).

I was directed to four questions that they said were used as the basis for the headline. I looked again at each of them.

• The first, Question 7.6, asks “To what extent do you agree or disagree that racism is a terrible thing?” In response, nearly nine in ten Canadians (88 percent) agree that racism is terrible. It didn’t speak to whether Canadians think racist thoughts are normal, but if anything seemed to contradict the claim. 

• The second, Question 7.5, asked “To what extent do you agree or disagree with the following: I can confidently say that I am not racist.” Of those polled, over three quarters (78 percent) agree that they can confidently say they’re not racist. Again, this hardly suggests that racism is considered normal among the respondents, and it contradicts the reporting and the headline associated with it.

Frankly, I’m surprised the number is that high. Why might a minority of otherwise non-racist Canadians not be able to “confidently” say that they are not racist? In part because there is a presumption that everyone is racist, whether they realize it or not. This is a widely held view among many, especially progressives and liberals (it’s so common in fact that it serves as the basis for Question 7 in the poll). In other words, even if they sincerely and truly don’t consider themselves racist and have no racist thoughts ever, they would be reluctant to go so far as to state categorically and confidently to others that they are not at all racist. (You see the same issue with polls asking women if they would use the word beautiful to describe themselves; very few do, though they will call themselves pretty, attractive, etc. Doing so is seen as vain, just as stating “I’m confident I’m not racist” would be considered by many to be boasting or virtue signaling.)

• The third was Question 7.3, which asks to what extent people agree or disagree with the statement, “While I sometimes think racist thoughts, I wouldn’t talk about them in public.” This, once again, does not support the news headline. It is vitally important when interpreting polls and surveys to parse out the precise question asked. Note that it is a compound question framed in a very specific way (asking about whether one would express a thought in public); the question was not “Do you sometimes think racist thoughts?” But even if it were, you cannot generalize “people sometimes do X” to “it’s normal for people to do X.” Merriam-Webster, for example, defines normal as “average” or “a widespread or usual practice.” Thus, a poll or survey question trying to capture the incidence of a normal behavior or event would use the word usually instead of sometimes.

• Finally, we came to Question 7.1, the only question that specifically uses the word normal and asks if Canadians agree that “It’s perfectly normal to be prejudiced against people of other races.” 

As I noted, this question and its response do not accurately capture the question of whether or not “X% of Canadians think racist thoughts are normal” (as the Global News headline reads), but even if they did, we find that the headline is still wrong. From this statistic alone, the correct headline would be “22% of Canadians think racist thoughts are normal”—which is less than half the number reported in the headline. About one in five whites and one in three minorities said that it’s normal to be prejudiced against people of other races, as did one in four men and one in five women. Instead of nearly half of Canadians thinking racism is normal, nearly 70 percent of Canadians disagreed that racial prejudice is normal.

The Ipsos poll itself seems well researched and sound, and it contains important information. Unfortunately, its conclusions got mangled along the way. The question is not whether specific Canadians (such as Trudeau or Cherry) are racist but instead whether or not those views are widely held; it’s the difference between anecdote and data. Polls and surveys can provide important information about the public’s beliefs. But to be valid, they must be based on sound methodologies, and media-literate news consumers should always look for information about the sample size, representativeness of the population, whether the participants were random or self-selected, and so on. And, when possible, read the original research data. News reports, such as the one I’ve focused on here, leave the false impression that racism is more widespread and socially acceptable than it really is. Racism is a serious issue, and understanding its nature is vital to stemming it; indeed, as Ipsos notes, “In our world of rapid change, the need for reliable information to make confident decisions has never been greater.” 

 

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange! 

This article has been adapted from my Center for Inquiry blog, available HERE. 

Dec 042019
 

Last month a Maine man made national news for supposedly finding tampered Halloween candy. He posted on social media that he had found a needle in candy his son had bitten into. Police investigated and determined that he had lied, hoaxing the whole thing (probably for attention).

He’s now been charged, according to news reports.

Nov 282019
 

I’m quoted in a new article by Rob Lea on Medium about why people see faces in everything from ghost photos to clouds to photos of the galaxy…

A collision of two galaxies of equal size 704-million-light-years from Earth has created what appears to be a ghostly visage staring through the cosmos. But, why do humans see images such as this in random data?

The haunting image of the collision that created the Arp-Madore system was captured on 19th June 2019 by the NASA/ESO operated Hubble Space Telescope.

Galactic collisions are quite common throughout the Universe — but the collision that formed Arp-Madore and its skull-like appearance is somewhat more unique. The collision in question here was a head-on impact — if you’ll excuse the pun.

It was this violent collision that gave the system in question its striking face-like ring structure. The impact between the two galaxies also stretched the galaxies’ respective discs of gas, dust and stars outwards forming an area of intense star-formation that gives our phantom face its ‘nose,’ ‘jaws’ and other ‘facial features.’

Our phantom’s ‘eyes’ are also evidence of a rare occurrence. This glowing and penetrating stare is formed by the central bulges of the respective galaxies. The fact that they are of roughly the same size implies to astronomers that the two colliding galaxies were also of similar sizes.

You can read the rest HERE!

 

You can find more on me and my work with a search for “Benjamin Radford” (not “Ben Radford”) on Vimeo, and please check out my podcast Squaring the Strange!