
Placing reasoning research in evolutionary context, a new book probes the haphazard ways we form opinions

The Enigma of Reason

Hugo Mercier and Dan Sperber
Harvard University Press
404 pp.

Readers of Science are unlikely to be surprised that decision-makers often disregard the best, most empirically informed conclusions. A string of recent editorials and articles has highlighted the devastating consequences and unique challenges of entering what some have called a “postfactual era.” Turning evidence into policy demands an artful negotiation of often competing social, political, and religious ideologies, and the path through such considerations is a winding one.

Because our day-to-day decisions are not subjected to the scrutiny of strangers with competing motives, it is tempting to hope that we fare better in our personal lives. And in some respects, we do.

Evidence suggests that when asked to evaluate others’ arguments, people often do a reasonably good job of separating cogent positions from nonsense. However, we do dramatically less well at evaluating our own arguments, and this asymmetry is at the heart of Hugo Mercier and Dan Sperber’s The Enigma of Reason.

By surveying many of the most compelling experimental findings from the past 40 years of reasoning research, Mercier and Sperber argue that cognitive biases are likely evolutionary consequences of what our reasoning was optimized to do. They insist that our earliest ancestors reasoned in argumentative social contexts with real and urgent consequences. For these ancestors, losing an argument could mean forgoing meals or being relegated to risky duties such as hunting or standing guard. In other words, there was adaptive pressure to critically assess others’ viewpoints while safeguarding our own.


Highlighting the role of unconscious inference in perception, Ptolemy observed that sailing can elicit the sensation that the shoreline, not the boat, is moving.

Understanding human reasoning, the authors claim, means understanding that it evolved in this context, not to apply sound scientific conclusions to policy proposals or in many of the other abstract situations that now typify human reasoning. This assertion unites a plausible evolutionary perspective with the now famous experimental results showing that we generally overestimate our competence and attend too much to distracting or irrelevant features of reasoning tasks (1, 2).

But surely these tendencies are restricted to exceptional circumstances when we are led to err by devious experimenters. When we really try, our capacity to draw sound inferences must be reasonably objective. Otherwise, how could we account for the many scientific and technological advances humans have made?

Here the authors double down. Not only do we rarely, if ever, reason impartially, they argue, but the general process of forming conclusions happens effortlessly and intuitively.

We do not track and chase our reasons. We do not coolly and objectively size them up from a stoic perch. The reasons we give for choosing one thing over another tend to be postdecision justifications of those impressions that came to us, more or less fully formed, without any effort at all. Here, too, there is ample evidence.

In 2012, a team of Swedish researchers surveyed people on the street, asking them to rate their agreement with several statements, such as “It is more important for a society to protect the personal integrity of its citizens than to promote their welfare.” By surreptitiously inverting the scale on which some pedestrians had recorded their ratings before reviewing the answers together, the experimenters committed those participants to the very opposite of what they had originally stated. Most reversals went unnoticed, and many people offered impassioned justifications of the opposite position (3).

Were we not accustomed to quickly justifying already-formed conclusions on the fly, such results would be hard to explain, Mercier and Sperber suggest. They offer countless other examples, beginning with low-level inferences about our perceptions of common objects such as chairs and checkerboards. Even here, the authors remind us, we are not quite as objective as we might like to think.

Given the project’s ambition, Mercier and Sperber’s book will be met with no shortage of opposition. Much of the force of their argument relies on an evolutionary perspective that seems, at points, rather speculative. Although the mechanisms of adaptation and selection are well understood in many contexts, applying these insights to reasoning remains tricky business. Some readers will likely remain unconvinced.

Nevertheless, The Enigma of Reason is a comprehensive and well-motivated overview of the contemporary scientific and philosophical literature on reasoning. This is especially timely as we struggle to make sense of how it is that individuals and communities persist in holding beliefs that have been thoroughly discredited.


  1. R. M. Byrne, Cognition 31, 61 (1989)

  2. D. Dunning et al., Curr. Dir. Psychol. Sci. 12, 83 (2003)

  3. L. Hall, P. Johansson, T. Strandberg, PLOS ONE 7, e45457 (2012)

About the author

The reviewer is at the Laboratory for the Psychology of Child Development and Education (LaPsyDÉ), Paris-Descartes University and Caen University, Paris, France.
