Does Correcting Misinformation Really Work?
In an era of “fake news” and alternative facts, is correcting misinformation an effective tactic for disseminating truth? A new analysis published in NCA’s journal Communication Monographs explores this question. The article, written by Nathan Walter of Northwestern University and Sheila T. Murphy of the University of Southern California, reports a meta-analysis of attempts to correct misinformation – defined as “cases in which people’s beliefs about factual matters are not supported by clear evidence and expert opinion” – across the contexts of science, health, politics, marketing, and crime.
Misinformation and Why People Believe It
The authors’ meta-analysis covered studies published from 1994 to 2015 that focused on misinformation and correction, located in databases such as Google Scholar, JSTOR, and PubMed; the studies included conference papers, book chapters, and journal publications. Search terms included effect, persuasion, climate, vaccine, health, and more. Each study was coded for three key components of the corrective message: the generation of explanations in line with the misinformation, the generation of counterarguments to the misinformation, and the level of detail of the debunking message. The authors also examined the following strategies employed to debias misinformation: appeals to consensus, coherence, source credibility, fact-checking, and providing general warnings. In collecting the data, Walter and Murphy acknowledge the impact that educational attainment, geographic location, and culture all have on people’s susceptibility to believing misinformation. They note that “preexisting beliefs and ideology seem to override correction attempts…Thus, it is hard to predict the interplay between sample type and adoption of misinformation.”
The authors also note that the specific topic affects both the likelihood that people will believe misinformation and the effectiveness of correction attempts. “Generally speaking, misconceptions regarding climate change, evolution, and healthcare reform could be harder to correct, as people’s religious beliefs and political identities are deeply implicated,” they write. Likewise, “exposure to real-world misinformation tends to pose more challenging tests for the power of debiasing, compared to constructed misinformation (e.g., fictional plane crash).”
So how do people assess the truth? Walter and Murphy outline several questions to consider:
- Is the information compatible with what I believe?
- Is the information internally coherent?
- Does it come from a credible source?
- Do other people believe it?
The authors walk through common assumptions about the effectiveness of each of these message characteristics – for example, that fact-checking plays an increasingly important role in media coverage, and that credible sources generally lead people to trust information more. They also note that “The placement of the corrective message with respect to the statement of misinformation can, presumably, also affect debiasing.”
Study Selection and Coding
For their analysis, the authors used specific screening criteria to select 45 research reports that documented the results of 65 separate studies. Each study was coded by topic – politics, crime, health, science, marketing, or other – and by the type of correction attempt, as described above.
After analyzing the 65 individual studies, the authors found that the mean effect of correcting misinformation was “moderate, positive, and significant.” They also found that corrective messages were more effective for student populations compared with nonstudent samples. Further, the data showed that “beliefs in constructed misinformation (fictional events or studies) were easier to debunk, whereas beliefs in real-world misinformation tended to be more resilient to change.”
Which of the debiasing techniques were the most effective? Coherence (providing alternative explanations to misleading information) beat out fact-checking and appeals to credibility. “When debiasing strategies rely solely on retractions (e.g., fact-checking), they run the risk of painting an incoherent image of the events,” the authors note. “Once people are exposed to a coherent message that can explain the chain of events, they will be more likely to substitute the false information with the retraction.” Additionally, rebuttals were significantly more effective than forewarnings.
Walter and Murphy conclude that “corrective attempts can reduce misinformation across diverse domains, audiences, and designs.” They also specifically note that corrections appear to be more successful for health-related topics than for politics. It’s perhaps not surprising that “people are more resistant to change when it comes to their political identity,” they write. Educational attainment is a big factor in that difference, too – higher levels of education tend to make people more accepting of health and scientific authority, but not more accepting when it comes to politics.
The authors believe their results “offer an optimistic perspective on debiasing of misinformation,” and they offer some practical implications for debunking falsehoods, with the hope that the study will spur additional research on this topic.