As I was walking back from university one day, a respectable-looking middle-aged man accosted me. He spun a good story: he was a doctor working in the local hospital, he had to rush to some urgent doctorly thing, but he'd lost his wallet, and he had no money for a cab ride. He was in dire need of twenty euros. He gave me his business card, told me I could call the number and his secretary would wire the money back to me shortly.
After some more cajoling I gave him twenty euros.
There was no doctor of this name, and no secretary at the end of the line.
How stupid was I?
And how ironic that, twenty years later, I would be writing a book arguing that people aren't gullible.
The Case for Gullibility
If you think I'm gullible, wait until you meet, in the pages that follow, people who believe that the earth is a flat disk surrounded by a two-hundred-foot wall of ice, "Game of Thrones–style,"1 that witches poison their cattle with magical darts, that the local Jews kill young boys to drink their blood as a Passover ritual, that high-up Democratic operatives oversee a pedophile ring out of a pizza joint, that former North Korean leader Kim Jong-il could teleport and control the weather, or that former U.S. president Barack Obama is a devout Muslim.
Look at all the gibberish transmitted through TV, books, radio, pamphlets, and social media that ends up being accepted by large swaths of the population. How could I possibly be claiming that we aren't gullible, that we don't accept whatever we read or hear?
Arguing against widespread credulity puts me in the minority. A long line of scholarship—from ancient Greece to twenty-first-century America, from the most progressive to the most reactionary—portrays the mass of people as hopelessly gullible. For most of history, thinkers have based their grim conclusions on what they thought they observed: voters submissively following demagogues, crowds worked up into rampages by bloodthirsty leaders, masses cowed by charismatic personalities. In the mid-twentieth century, psychological experiments brought more grist to this mill, showing participants blindly obeying authority, or believing a group over the clear evidence of their own eyes. In the past few decades, a series of sophisticated models has emerged to explain human gullibility. Here is the core of their argument: we have so much to learn from others, and the task of figuring out who to learn from is so difficult, that we rely on simple heuristics such as "follow the majority" or "follow prestigious individuals." On this view, humans owe their success as a species to their capacity to absorb their local culture, even if that means accepting some maladaptive practices or mistaken beliefs along the way.
The goal of this book is to show this is all wrong. We don't credulously accept whatever we're told—even if those views are supported by the majority of the population, or by prestigious, charismatic individuals. On the contrary, we are skilled at figuring out who to trust and what to believe, and, if anything, we're too hard rather than too easy to influence.
The Case against Gullibility
Even if suggestibility has some advantages in helping us acquire skills and beliefs from our cultural environment, it is simply too costly to be a stable, persistent state of affairs, as I will argue in chapter 2. Accepting whatever others communicate only pays off if their interests are aligned with ours—think cells in a body, or bees in a beehive. As far as communication between humans is concerned, such commonality of interests is rarely achieved; even a pregnant mother has reasons to mistrust the chemical signals sent by her fetus. Fortunately, there are ways of making communication work even in the most adversarial of relationships. A prey animal can convince a predator not to give chase. But for such communication to occur, there must be strong guarantees that those who receive the signal will be better off believing it. The messages have to be kept, on the whole, honest. In the case of humans, honesty is maintained by a set of cognitive mechanisms that evaluate communicated information. These mechanisms allow us to accept most beneficial messages—to be open—while rejecting most harmful messages—to be vigilant. Accordingly, I call them open vigilance mechanisms, and they are at the heart of this book.2
What about the "observations" used by so many scholars to make the case for gullibility? Most are merely popular misconceptions. As the research reviewed in chapters 8 and 9 shows, those who attempt to persuade the masses—from demagogues to advertisers, from preachers to campaign operatives—nearly always fail miserably. Medieval peasants in Europe drove many a priest to despair with their stubborn resistance to Christian precepts. The net effect of flyers, robocalls, and other campaign tricks on presidential elections is close to zero. The supposedly all-powerful Nazi propaganda machine barely affected its audience—it couldn't even get the Germans to like the Nazis.
The gullibility view predicts that influence is easy. It is not. Still, people undeniably sometimes end up professing the most absurd views. What we must explain are the patterns: why some ideas, including good ones, are so hard to get across, while others, including bad ones, are so popular.
Mechanisms of Open Vigilance
Understanding our mechanisms of open vigilance is the key to making sense of the successes and failures of communication. These mechanisms process a variety of cues to tell us how much we should believe what we're told. Some mechanisms examine whether a message is compatible with what we already believe to be true, and whether it is supported by good arguments. Other mechanisms pay attention to the source of the message: Is the speaker likely to have reliable information? Does she have my interests at heart? Can I hold her accountable if she proves mistaken?
I review a wealth of evidence from experimental psychology showing how well our mechanisms of open vigilance function, including in small children and babies. It is thanks to these mechanisms that we reject most harmful claims. But these mechanisms also explain why we accept a few mistaken ideas.
For all their sophistication, and their capacity to learn and incorporate novel information, our mechanisms of open vigilance are not infinitely malleable. You, dear reader, are in an information environment that differs in myriad ways from the one your ancestors evolved in. You are interested in people you'll never meet (politicians, celebrities), events that don't affect you (a disaster in a distant country, the latest scientific breakthrough), and places you'll never visit (the bottom of the ocean, galaxies far, far away). You receive much information with no idea of where it came from: Who started the rumor that Elvis wasn't dead? What is the source of your parents' religious beliefs? You are asked to pass judgment on views that had no practical relevance whatsoever for our ancestors: What is the shape of the earth? How did life evolve? What is the best way to organize a large economic system? It would be surprising indeed if our mechanisms of open vigilance functioned impeccably in this brave new, and decidedly bizarre, world.
Our current informational environment pushes open vigilance mechanisms outside of their comfort zone, leading to mistakes. On the whole, we are more likely to reject valuable messages—from the reality of climate change to the efficacy of vaccination—than to accept inaccurate ones. The main exceptions to this pattern stem not so much from a failure of open vigilance itself as from issues with the material it draws on. People sensibly use their own knowledge, beliefs, and intuitions to evaluate what they're told. Unfortunately, in some domains our intuitions appear to be quite systematically mistaken. If you had nothing else to go on, and someone told you that you were standing on a flat surface (rather than, say, a globe), you would spontaneously believe them. If you had nothing else to go on, and someone told you all your ancestors had always looked pretty much like you (and not like, say, fish), you would spontaneously believe them. Many popular yet mistaken beliefs spread not because they are pushed by masters of persuasion but because they are fundamentally intuitive.
If the flatness of the earth is intuitive, a two-hundred-foot-high, thousands-of-miles-long wall of ice is not. Nor is, say, Kim Jong-il's ability to teleport. Reassuringly, the most out-there beliefs out there are accepted only nominally. I bet a flat-earther would be shocked to actually run into that two-hundred-foot wall of ice at the end of the ocean. Seeing Kim Jong-il being beamed Star Trek–style would have confused the hell out of the dictator's most groveling sycophant. The critical question for understanding why such beliefs spread is not why people accept them, but why people profess them. Besides wanting to share what we take to be accurate views, there are many reasons for professing beliefs: to impress, annoy, please, seduce, manipulate, reassure. These goals are sometimes best served by making statements whose relation to reality is less than straightforward—or even, in some cases, statements diametrically opposed to the truth. In the face of such motivations, open vigilance mechanisms come to be used, perversely, to identify not the most plausible but the most implausible views.
If we want to understand why mistaken views, from the most intuitive to the most preposterous, catch on, we must understand how open vigilance works.
This essay is an excerpt from Not Born Yesterday: The Science of Who We Trust and What We Believe by Hugo Mercier.
About the Author
Hugo Mercier is a cognitive scientist at the Jean Nicod Institute in Paris and the coauthor of The Enigma of Reason. He lives in Nantes, France. Twitter: @hugoreasoning
Notes
1. Mark Sargent, prominent flat-earther, in the documentary Behind the Curve.
2. These mechanisms are elsewhere called epistemic vigilance; see Sperber et al., 2010.