Three years ago, Edgar Welch sent a text message to a friend announcing he was “Raiding a pedo ring, possibly sacraficing [sic] the lives of a few for the lives of many.” Two days later, he drove 350 miles to a Washington, D.C., pizza parlor called Comet Ping Pong and entered with a .38 revolver and an AR-15 semiautomatic rifle. He fired shots inside, sending restaurant patrons and staff fleeing in fear, as he attempted to investigate what he believed was a child sex ring with ties to top Democratic Party leaders. The sex ring was fake news. The consequences, however, were real. Welch left the premises under arrest and later pled guilty to local and federal weapons charges.
At the time of Welch’s disinformation-driven rampage, “post-truth” had just recently entered the public imagination. A few weeks before Welch’s arrest, Oxford Dictionaries declared it the word of the year. Many people still struggled to understand how a polite, soft-spoken person like Welch could be led so far from reality. But as the disinformation age has continued to develop over the past three years, science has not stood still. It has given us a more detailed picture than ever of the ways that disinformation hacks our truth judgments.
If the picture is detailed, it is also disconcerting. It suggests that you and I are probably not so different from Edgar Welch as we might like to think. Take, for example, what happens when we are subjected to repeated false claims. In a recent study, a research team led by Jonas De keersmaecker found that even those of us who are intelligent, analytical and comfortable with ambiguity find statements more believable simply because we have heard them repeated.
This phenomenon, known as the “illusory truth effect,” was first documented in the 1970s, but it is more relevant than ever in the era of fake news. One might immediately think of Donald Trump, who is a prolific peddler of this type of untruth. The Washington Post recently reported that there are “more than 350 instances in which [Trump] has repeated a variation of the same claim at least three times.” In fact, Trump has repeated some false claims more than 200 times—for example, his claim that his border wall is being built. Of course, there’s nothing new about this type of huckster’s grift. But online environments supercharge it. They give repeated false claims instant global distribution. More importantly, they allow the person making false claims to go on doing so while dodging the pressure (and potential legal repercussions) that accompany similar claims in public or in traditional news sources.
Psychologists say that what makes repeated claims seem truer is their “fluency,” the cognitive ease with which we process them. Repeated claims are easier to represent and comprehend. For that reason, they just feel good. Our minds take this feeling as a cue that the claim is true.
In a recent review of the research, Nadia M. Brashier and Elizabeth J. Marsh identify two additional ways disinformation hacks our truth judgments. One that is closely related to fluency and the good feelings it generates is memory. The information and experiences stored in our memory are powerful weapons in the fight for truth. But, as with fluency, we take our memories as cues, not as the raw materials for forming well-considered judgments. We tend, in other words, to go with “good enough.” We often accept claims as true when they only partially fit with what we know or remember.
Additionally, we can fall prey to the “illusion of explanatory depth,” a tendency to overestimate our knowledge and understanding of the issues we care about. Research shows that when we do, we are more likely to hold extreme beliefs and to accept fake news as true.
Unfortunately, digital tools may be making our memories even weaker and less effective for judging truth. As Brashier and Marsh point out, “search algorithms return content based on keywords, not truth. If you search ‘flat Earth,’ for example, Google dutifully returns photoshopped pictures of a 150-ft. wall of ice that keeps us from slipping off the planet.” For this reason, relying on the internet as truth-on-demand rather than looking to our memories and acquired knowledge can backfire in serious ways.
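To see how truth-blind keyword matching can be, consider a toy ranking function in Python. This is a purely illustrative sketch, not how any real search engine works (real engines use far more sophisticated signals, and the pages and query here are invented): nothing in the scoring asks whether a page is accurate, only whether it repeats the searcher’s words.

```python
# A toy keyword ranker: score pages only by their word overlap with the query.
# The pages and query are invented for illustration; no truth signal exists here.
def keyword_score(query, page):
    """Count how many distinct query words appear in the page."""
    return len(set(query.lower().split()) & set(page.lower().split()))

pages = [
    "flat earth photos of the great ice wall",             # false but keyword-rich
    "satellite measurements show the planet is a sphere",  # accurate, fewer matches
]

query = "flat earth"
ranked = sorted(pages, key=lambda p: keyword_score(query, p), reverse=True)
print(ranked[0])  # the false, keyword-rich page ranks first
```

A ranker like this will happily surface the flat-earth page for a flat-earth query; accuracy never enters the calculation.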
Brashier and Marsh also point out a more basic mismatch between our brains and the digital environment: We tend to make truth our default judgment. This is especially true for visual information. As with the other cues we use to form truth judgments, this default is a useful adaptation in other contexts. After all, humans lived for millennia in an environment where we could trust most of our senses most of the time. Now, however, we find ourselves in a new information ecosystem, one in which, according to some sources, we will soon consume more false media than true media. When it comes to coping with misinformation on that scale, our brains are simply not well equipped.
Is there anything we can do to keep our guard up in the post-truth era? We know that simply fact-checking claims is not enough. After all, Edgar Welch’s “pedo ring” conspiracy theory had been debunked long before he showed up armed at Comet Ping Pong’s door.
There are, however, causes for hope. Once we recognize our vulnerabilities, we can find many ways to design our information consumption with them in mind. Along with Emmaline Drew Eliseev, Brashier and Marsh found they could wipe out the illusory truth effect simply by prompting study participants to behave like fact checkers.
One of the most interesting solutions may be a collaborative one. Ziv Epstein, Gordon Pennycook, and David G. Rand have found that crowdsourced judgments about the trustworthiness of news sources can be surprisingly accurate. They suggest that letting social media users train algorithms to spot fake news could be a scalable, decentralized solution; a minimal sketch of the idea follows this paragraph. After ignoring warnings from friends and trying unsuccessfully to recruit them, Edgar Welch went it alone. Perhaps if we come together to protect against the vulnerabilities we all share, no one else will make the same mistake.
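To make the crowdsourcing idea concrete, here is a minimal sketch in Python. Everything in it, the users, the sources, the ratings and the simple averaging, is a hypothetical illustration of pooling independent trust judgments, not the researchers’ actual system.

```python
# A minimal sketch of crowdsourced source-trust scoring, loosely inspired by
# Epstein, Pennycook, and Rand's proposal. The users, sources, and ratings
# below are hypothetical; this is not the researchers' actual method.
from collections import defaultdict
from statistics import mean

# Hypothetical ratings: (user_id, news_source, trust score from 0.0 to 1.0).
ratings = [
    ("u1", "example-news.com", 0.9),
    ("u2", "example-news.com", 0.8),
    ("u3", "example-news.com", 0.7),
    ("u1", "example-hoax.net", 0.2),
    ("u2", "example-hoax.net", 0.1),
]

def crowd_trust(ratings):
    """Average each source's trust ratings across many independent users.

    A platform could use these crowd averages as ranking weights,
    down-ranking stories from sources the crowd collectively distrusts.
    """
    by_source = defaultdict(list)
    for _user, source, score in ratings:
        by_source[source].append(score)
    return {source: mean(scores) for source, scores in by_source.items()}

print(crowd_trust(ratings))
# e.g. {'example-news.com': 0.8, 'example-hoax.net': 0.15}
```

The appeal of such a design is its decentralization: no single authority decides which sources are trustworthy. The signal emerges from many independent judgments, which also makes it harder for any one bad actor to game.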
"how" - Google News
December 27, 2019 at 12:00AM
https://ift.tt/2ER4CDs
How Disinformation Hacks Your Brain - Scientific American
"how" - Google News
https://ift.tt/2MfXd3I
Bagikan Berita Ini
0 Response to "How Disinformation Hacks Your Brain - Scientific American"
Post a Comment