
3. Why Are We Irrational? The Evolutionary Origin of Irrationality


The peculiar architect of our thinking

In the previous chapter you experienced how your thinking is predictably irrational in certain contexts. In this chapter you will discover why this is the case. To answer this question, we must first turn to the architect of our thinking, just as we would turn to the engineers of, say, a calculator if it turned out that the calculator performs certain calculations incorrectly. Since Darwin, the architect of our thinking apparatus has been known: evolution by natural selection. This remarkable architect is blind (i.e. it works without foresight or plan), has no say in the materials with which it works, and has only one goal: reproduction. This has a series of important consequences for our thinking. Before outlining these consequences, it is useful to briefly describe the process of evolution by natural selection.

Evolution by natural selection

Evolution - the fact that species change over time and that all life forms (on our planet at least) share a common ancestor - is mainly driven by natural selection. Other factors influencing evolution are sexual selection, and, to a lesser extent, genetic drift and epigenetics. For the purpose of this book, we can limit ourselves to evolution by natural selection. That process consists of three steps. Firstly, there are random genetic mutations (copying errors in the DNA of an organism). This creates genetic variation (genetic differences between the individual organisms in a population). Secondly, those genetic mutations are passed on to offspring. Thirdly, mutations that help the organism survive and reproduce in its environment are ‘selected’. Because organisms with those mutations have a greater chance of surviving and reproducing, more organisms with these genetic mutations will be present in future generations (Dawkins, 1976; Dawkins, 1986).

Take, for example, the long neck of a giraffe. That neck has grown over time because giraffes with a genetic mutation for a slightly longer neck reproduced more than giraffes with a shorter neck, since the animals with a longer neck were better able to feed on tall trees. Over the generations, that neck continued to grow, because in every generation the animals that happened to have the longest necks got the most food and therefore had the greatest chance to breed and to pass on the genetic material that coded for a long neck. Mind you, natural selection is blind: it has no plan (such as making a long neck for giraffes). Every generation, organisms with different traits have different success in reproducing, and so, over time, ‘adaptations’ to the environment gradually emerge (the entire population coming to possess the beneficial traits). Moreover, the blind architect (natural selection) has no control over the material it works with, since the mutations that arise are random (the majority of these mutations are neutral or detrimental to the organism and are therefore not selected).
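To make the three steps concrete, here is a minimal toy simulation in Python of selection on neck length. It is purely illustrative: the population size, the size of the copying errors, and the rule that a longer neck yields more offspring are arbitrary assumptions, not measurements.

```python
import random

POP_SIZE = 100
GENERATIONS = 200

# Start with a population in which every 'giraffe' has a 1-metre neck.
population = [1.0 for _ in range(POP_SIZE)]

def fitness(neck_length):
    # Toy assumption: a longer neck reaches more food and therefore
    # yields more offspring on average.
    return max(0.0, neck_length)

for _ in range(GENERATIONS):
    # Selection: parents are drawn with probability proportional to fitness.
    weights = [fitness(n) for n in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    # Variation + replication: each trait is copied with a small random error.
    population = [p + random.gauss(0.0, 0.01) for p in parents]

print(f"Average neck length after {GENERATIONS} generations: "
      f"{sum(population) / POP_SIZE:.2f} m")
```

Each individual mutation is blind and tiny, yet over many generations the average neck length drifts upward, simply because longer-necked individuals leave slightly more offspring in every round.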

What does this entail for our thinking?

Undeniably, our brains are also the product of natural selection. Because natural selection cannot anticipate the future and does not always have the optimal mutations to select from, the process often results in suboptimal designs. Take our eyes, for example. They evolved from light-detecting cells under the skin, in which the nerve fibres came together on the outward-facing side. When those cells gradually evolved into the complex eye with pupil and retina, the nerves remained at the front of the eye, which means they have to pass back through the retina to connect to the brain. That is why we have a blind spot in the visual field of each eye (a problem that is solved by combining the fields of vision of the two eyes). Squid eyes evolved separately (our common ancestor with squids had no eyes) and are better designed: in squids the nerves come together behind the eye, so they have no blind spot.

So, with regard to our brains, we can assume that more optimal designs are possible, at least in principle. But more importantly, the reason for our thinking errors is the ‘goal’ of natural selection. Natural selection is only ‘interested’ in reproduction. It drives the evolution of a species through the differential success with which genes (genetic variants) spread in the population. In a way, genes use organisms as vehicles to make copies of themselves (by enabling these organisms to reproduce). They are successful to the extent that they provide the organism with characteristics that increase its chances of reproducing, for example adaptations that make it better able to survive (such as camouflage, sharp teeth or long necks) or that make it more attractive to the opposite sex (such as the colorful tail of the peacock - that is sexual selection).

Truth is an expensive means to an end

Each characteristic of an organism, therefore, is selected only insofar as it yields a reproductive advantage. The same goes for our brains. They have not evolved to provide us with true representations of the world, but with representations that increase our chances of survival and reproduction. Truth (representing the environment correctly), however, is usually the best strategy for increasing an organism's chances of survival and reproduction. Take two hominids who see three tigers enter a cave and two tigers come out. The hominid who made the right calculation and deduced that there was still a tiger in the cave is more likely to be our ancestor (and to have passed on his mathematical genes) than the one who thought the coast was clear and moved in.

But truth comes at a cost. Representing the world in a complex and accurate way requires a lot of brain power. This, in turn, requires a lot of food. Brains are expensive organs. Our brains consume about 20% of the energy we get from food, while they make up only 1-2% of our body mass. More brain power (and brain mass to sustain it) can only evolve if the benefits it generates for the organism (in terms of survival and reproduction) are greater than the additional cost it requires (the extra food that must be found). So, natural selection is interested in truth only to the extent that it is relevant for survival and reproduction, and it wants that truth to be as inexpensive as possible. This has a series of important consequences.

System 1 and system 2

First and foremost, our thinking apparatus was developed to function rapidly and economically. Complex thinking processes require a lot of time and energy. The hominids in our example above did not have the luxury of thinking for a long time about whether there actually was a tiger in the cave. Nor did they have the luxury of engaging in overly complex forms of information processing, because that would require a brain that is (even) more costly, and they would need (even) more food to sustain it.

As a result, we are equipped with a thinking system that is both fast and frugal (economical). It works automatically, quickly and intuitively. Cognitive psychologist Daniel Kahneman (2011) calls this cognitive mechanism ‘system 1’. We do, however, also have a second system at our disposal; a system that can check the output of system 1 and overwrite it, if necessary. This ‘system 2’ is slow, conscious, and requires effort. In general, system 1 is in control. When system 1 is in control, our thinking operates on ‘automatic pilot’. To have you experience the difference between system 1 and system 2 thinking (and discover which system is in control), let me present you with two additional riddles.

The first riddle is commonly known as the Moses illusion. It goes like this: “How many animals of each kind did Moses take in the Ark?” (Erickson & Mattson, 1981).

The answer, of course, is that Noah, and not Moses, took animals on his ark. But system 1 is inclined to answer “two” immediately (it works fast and automatically), because Moses fits in the biblical context.

The second riddle yields a similar result. “A baseball bat and a ball cost $1.10 together. The baseball bat costs $1 more than the ball. How much does the ball cost?” (Kahneman & Frederick, 2002).

Here, too, system 1 is inclined to answer ten cents almost immediately. If we think it through (with system 2), however, we see that the correct answer is five cents: the bat costs one dollar and five cents and the ball five cents, which adds up to one dollar and ten cents.
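For readers who like to see system 2's work spelled out, a short brute-force check in Python (a sketch of mine, not part of the original riddle) makes the arithmetic explicit:

```python
# Brute-force check of the bat-and-ball riddle, working in cents.
for ball in range(0, 111):       # try every ball price from 0 to 110 cents
    bat = ball + 100             # the bat costs exactly one dollar more
    if bat + ball == 110:        # together they must cost $1.10
        print(f"ball = {ball} cents, bat = {bat} cents")  # -> ball = 5, bat = 105
```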

The fallibility of system 1

Heuristics: simplicity trumps complexity

So, system 1 regularly leads us astray (remember the riddles in the previous chapter!). The reason for this is that it makes use of heuristics. Heuristics are simple rules of thumb that generally produce good results but can sometimes be misleading. They are natural selection's solution for finding as much relevant truth about the environment as possible in the most ‘cost-effective’ manner. System 1 is a system of approximation: it applies simple rules to get as much relevant truth as possible, as quickly and cheaply as possible. Although reasonably effective, it is therefore also fallible.

The application of heuristics gives rise, for example, to the availability bias (e.g. the fact that we are inclined to think that deaths from shark attacks are more common than deaths caused by dislodged aircraft components, since the former is easier to imagine or recall than the latter – see previous chapter – problem 7). In this case, the heuristic we apply unconsciously and automatically is: ‘The easier it is to recall or imagine an event, the more likely that event is’. This is often true, but not always.

Error management

As such, system 1 can be misleading, because it replaces complex reasoning processes that require a lot of information (e.g. statistical calculations) with simple reasoning processes that can be carried out in a split second. It does so to enable us to come to conclusions in a quick and economical way. In some cases, however, system 1 has been manufactured by natural selection in a ‘deliberately’ misleading way. Remember that the ‘purpose’ of natural selection is not truth, but survival and reproduction of the organism. Since some mistakes are more costly than other mistakes (i.e. more threatening to survival and reproduction), natural selection will primarily attempt to avoid these costly mistakes. To this end, it is willing to make more mistakes.

Compare it to a fire alarm. The alarm can make two errors: it can go off and signal that there is a fire when there is no fire (a false positive) or fail to go off when there actually is a fire (a false negative). The second error obviously has more serious consequences than the first one. That is why fire alarms are designed in such a way that they go off too easily (e.g. when smoking a cigar in the room) to make absolutely sure that they will not generate a false negative (failing to go off when there is a fire). The goal of the maker of the fire alarm is not to keep the total number of errors (both false positives and negatives) as low as possible, but to keep the cost of errors as low as possible. So, to avoid the costly false negatives, it yields quite a few false positives (and more errors in total).
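The logic of the fire alarm can be made explicit with a small simulation. The sketch below is purely illustrative: the smoke-level distributions and the costs assigned to the two kinds of error are arbitrary assumptions. It shows that the threshold that minimizes the expected cost of errors is much lower (the alarm triggers far more easily) than the threshold that minimizes the sheer number of errors.

```python
import random

random.seed(1)

# Toy smoke readings (arbitrary units): 'no fire' situations are common and
# give low readings; 'fire' situations are rare and give high readings.
no_fire = [random.gauss(2.0, 1.0) for _ in range(9900)]
fire = [random.gauss(5.0, 1.0) for _ in range(100)]

COST_FALSE_POSITIVE = 1       # a needless alarm is merely annoying...
COST_FALSE_NEGATIVE = 1000    # ...but a missed fire is catastrophic (assumed costs)

def evaluate(threshold):
    false_pos = sum(1 for x in no_fire if x >= threshold)   # alarm, but no fire
    false_neg = sum(1 for x in fire if x < threshold)       # fire, but no alarm
    errors = false_pos + false_neg
    cost = false_pos * COST_FALSE_POSITIVE + false_neg * COST_FALSE_NEGATIVE
    return errors, cost

thresholds = [t / 10 for t in range(0, 81)]
fewest_errors = min(thresholds, key=lambda t: evaluate(t)[0])
lowest_cost = min(thresholds, key=lambda t: evaluate(t)[1])

print("Threshold that minimizes the number of errors:", fewest_errors)
print("Threshold that minimizes the cost of errors:  ", lowest_cost)
```

The cost-minimizing alarm goes off at a much lower smoke level: it produces many more false positives, and more errors in total, but it almost never misses a fire.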

The same goes for system 1. Natural selection has not designed it to be as accurate as possible, but to avoid costly mistakes. For that purpose, it is prepared to make a larger total number of mistakes. This phenomenon is known as ‘error management’ (Tversky & Kahneman, 1974; Haselton & Buss, 2000). A good example of error management is the tendency of men to overestimate the interest women have in them. Evolutionarily, this makes sense. Since the cost of a missed opportunity for reproduction is much higher than the cost of a fruitless attempt to seduce, natural selection ‘wants’ to avoid errors of the first type and is therefore prepared to make more errors of the second type (Haselton & Buss, 2000).

Another example is detecting causal connections. Much like a fire alarm, this detection mechanism is tweaked in such a way as to make sure that we will not miss any important causal relations (such as the relation between eating something toxic and getting sick), because in general missing causal relations is more costly than seeing causal relations that are not there. The result is that we often tend to see causal relations that are not there. This, in turn, is a breeding ground for superstition, pseudoscience, and conspiracy theories. We will discuss this in more detail in the next chapter.

Finally, a third example of ‘error management’ is something commonly referred to as ‘hyperactive agency detection’ (Barrett, 2000). We tend to discern the actions of an ‘agent’ (a living being with intentions) in certain events too quickly. The classic example goes as follows: a hominid sees a bush move and hears it rustle. It can either be the wind or a predator stalking in the bushes. Thinking that it is the wind while it is a predator (a false negative) is a lot more costly than making the opposite mistake (a false positive). So, again, we are inclined to make too many false positive mistakes to avoid the costly false negative mistakes. This ‘hyperactive agency detection’ plays a particularly important role in the emergence of supernatural (religious) beliefs – as we will see in the third chapter.

Evolutionary mismatch

In addition to the fact that system 1 is an ‘approximating’ system (because it is economical) and that it sometimes makes a larger total number of mistakes to avoid costly mistakes (error management), there is a third reason why system 1 sometimes deceives us. The heuristics that make up system 1 were designed to guide us through the environment in which we have spent most of our evolutionary history, not the environment in which we live today. Our thinking is adapted to a nomadic existence in the Stone Age (since we spent the vast majority of our evolutionary history as nomadic hunter-gatherers). Sometimes our ‘stone age minds’ yield bad results in the modern world (Tooby & Cosmides, 1992).

A good example of this is the ‘gambler’s fallacy’. When we toss a coin, for instance, we tend to think that the probability of getting ‘heads’ increases the more ‘tails’ have come up consecutively. In other words, we tend to expect a statistical correction. Of course, the chance is always 50/50, regardless of what came up in the past. The same applies to the roulette wheel. Gamblers tend to bet on red when the ball has ended up in a black pocket a few times in a row, ‘because five times black in a row would be really unlikely’.

In a casino, this is obviously irrational. But in the natural environment in which we have evolved, this way of thinking makes sense. After all, most natural events are cyclical. When predicting the weather, for example, the probability of rain does increase as the length of a dry period increases (in a climate and season in which it rains regularly). In other words, it is mostly in artificial, modern contexts (the casino) that this way of thinking is irrational (Pinker, 1997). The same can be said for the many statistical reasoning errors we tend to make (like the base rate fallacy – see examples in chapter 2 and in the appendix). We did not encounter these problems in our ancestral environment, so our intuitions did not evolve to solve them.
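A quick simulation makes the point about independence concrete. The sketch below is illustrative only: it generates a long series of fair coin flips (the number of flips and the run length of four tails are arbitrary choices) and checks how often heads follows a run of tails.

```python
import random

random.seed(42)
FLIPS = 1_000_000

flips = [random.choice("HT") for _ in range(FLIPS)]

# Collect every flip that immediately follows four tails in a row.
after_four_tails = [flips[i] for i in range(4, FLIPS)
                    if flips[i - 4:i] == ["T", "T", "T", "T"]]

share_heads = after_four_tails.count("H") / len(after_four_tails)
print(f"P(heads | preceded by four tails) ~ {share_heads:.3f}")   # roughly 0.5
```

The share of heads comes out at roughly 0.5, however long the preceding run of tails: the coin has no memory, and neither does the roulette wheel.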

Taking stock

So, let us take stock. Evolution has provided us with two cognitive systems. The first is fast and frugal; it leads to reasoning errors because of its approximating nature, error management, and a ‘mismatch’ between the problems it was designed to solve (stone age problems) and the problems we face today. System 2 can put our thinking back on track, but it requires effort, and it usually stays in the background. Kahneman (2011) calls system 2 ‘the lazy controller’. System 2 only intervenes when there is no response from system 1 (for example when you calculate 25 x 56, since we do not have an immediate, intuitive answer for this) or when we deliberately switch it on to check the output of system 1 (by consciously reflecting on a problem).

Now, this is precisely what the critical thinker does: they recognize that their automatic, intuitive thinking is fallible in certain contexts and call upon system 2 (our conscious, reflective thinking) to check the output of system 1 in those contexts. System 1 can never be switched off, so its answers keep coming automatically. Critical thinking is therefore a matter of checking that constant flow of system 1 output when called for, and not following it blindly.

Other sources of irrationality

The social environment

So far, we have only discussed the evolutionary need to navigate our physical or natural environment. But humans are also part of a social environment. As social primates, our survival and reproductive opportunities depend to a large extent on our relationship with other members of the group. Our cognitive apparatus is therefore not only designed to navigate the physical-natural environment, but also the social environment. This has some important consequences. Bear in mind here that our thinking has not evolved for the purpose of truth but for reproduction. Truth is a means to an end. But, as I have previously pointed out, to successfully reproduce, it is usually best to represent the natural environment truthfully. However, that only applies to the natural environment. In the social environment, truth is a lot less important. In fact, in this context we often benefit from deceiving others (as long as they do not detect it, that is).

In a conflict, for instance, I benefit from the fact that my opponent believes I am a greater threat than I actually am (e.g. by overestimating my physical strength or the number of people in a group who would take my side). This increases the chance that the opponent withdraws, so I get what I want without having to engage in a potentially costly fight. The same goes for my status in the group. It is better for me that my talents are overestimated than underestimated: that way I benefit from a higher social status and see my chances of survival increase, as well as my reproductive opportunities. To help us deceive others successfully, natural selection has equipped us with a resourceful ‘bias’. The best way to deceive others is to deceive yourself. The liar who does not know that they are lying is often the best liar. This explains the overestimation of one's own talents and prospects that we discussed in the previous chapter (problem 15). Here, too, our spontaneous, intuitive thinking system – system 1 – leads us astray.

The irrationality of system 2

But our irrationality is not solely attributable to system 1. Our slow and conscious thinking processes (system 2) also systematically deceive us in certain ways. This is due to the function of system 2 in the social environment. Reasoning is not only used to obtain relevant insights about our environment (i.e. to navigate the environment in a way that contributes to survival and reproduction); it is primarily (!) used to argue with others. When arguing with others, it is not as important to be right as it is to persuade others that you are right. As such, natural selection has equipped us with reasoning capabilities that are designed to persuade, since people who were able to persuade others and win arguments saw their social status rise, and with it, their chances of survival and reproduction. Cognitive scientists Hugo Mercier and Dan Sperber (2017) draw this very conclusion in their ‘argumentative theory of reasoning’: reasoning evolved for arguing.

As a result, we are by nature lawyers (great at arguing and persuading), rather than philosophers or scientists (not so great at finding truth by questioning our own opinions). The cognitive mechanism that helps us with persuading others that we are right, is the infamous ‘confirmation bias’. It helps us defend our opinions and convince others that we are right, by making us see and remember the evidence and arguments supporting our opinions (and filtering out the counterevidence and arguments). This often comes at the expense of being right.

If a prize were to be awarded to the bias that distorts our thinking most profoundly and regularly, it would undoubtedly be granted to the confirmation bias. The confirmation bias selectively suppresses everything that could contradict our beliefs and opinions. We are blind to counterarguments and see confirmation for our own convictions all around us. It predisposes us to seek, observe, remember, and interpret information in such a way that it reinforces our pre-established points of view. Because the confirmation bias makes us see and retain confirming information, whilst filtering out the counterevidence, it creates another bias: the overconfidence bias. This refers to our own tendency to (grossly) overestimate the odds that we are right (since we overlook and quickly forget the counterevidence).

Interesting research conducted in the 1970s clearly reveals the workings of the confirmation bias (Lord et al., 1979). Psychologists presented several studies on the relationship between the death penalty and crime rates to a group of people. Half of the group supported the death penalty, the other half was opposed to it. The studies – which were fictional, but the participants were not aware of this – contradicted each other. Some studies concluded that crime was decreasing with the introduction of the death penalty, whilst other studies indicated that there was no link between crime rates and the death penalty. One would expect that most of the participants would take a more nuanced stance regarding the issue (since no univocal conclusion emerges from the studies). However, the opposite turned out to be the case. Not only did all of the participants stick to their points of view, they did so with (even) more conviction!

Participants invariably judged the studies supporting their position (for proponents of the death penalty: a negative correlation between introducing the death penalty and crime; for opponents: the absence of such a correlation) to be better than the studies contradicting their views (they only saw flaws in the latter and interpreted ambiguous information in favor of their opinion – this is known as the ‘belief bias’, described in the appendix). More strikingly, they seemed to recall the studies supporting their point of view much better than the studies contradicting it. This kind of research also shows that the confirmation bias gains force when one is emotionally involved in a debate.

Emotion

This brings us to another prominent distorter of truth: emotion. We are not dispassionate robots objectively analyzing the world, but hot-headed primates who see the world through a prism of emotions. The reason that natural selection has provided us with emotions in addition to information processing is simple: natural selection is only interested in actions (that are conducive to survival and reproduction). And in order to induce an organism to act you need two things: a belief (the result of information processing) and a desire (the result of emotions or feelings). For example, I am encouraged to open the refrigerator and get something out of it (action) because I know that there is food in there (information processing) and because I am hungry (feeling). The affective component is the driving factor of action, and the cognitive component (information processing) is the guiding factor. But the two components are not separate from each other. The affective, in particular, influences the cognitive. This is known as the ‘affect heuristic’.

The affect heuristic consists of making decisions (e.g. whether or not to make an investment or take out insurance) on the basis of emotional reactions rather than on the basis of the available information and an objective cost-benefit analysis (remember, from chapter 2, problem 9, the Americans who were willing to pay more for a life insurance policy covering death by terrorism than for a policy covering every cause of death). In weighing risks against potential returns, we often let ourselves be guided by our ‘gut feeling’, not by objective information and rational analysis.

The consequences of this can be dramatic. In addition to making us bad investors, it also makes us bad policy makers. With regard to climate change, for example, the effects of the affect heuristic turn out to be disastrous. Most of us are aware of climate change, and yet we are doing relatively little to control it. One important reason for that is that the affective response we have in relation to climate change is rather weak. Compare this with terrorism, for example, which provokes a much stronger affective response but, objectively, is a much less serious threat.

A particularly strong affective disposition with which we are all equipped is the so-called ‘ingroup - outgroup bias’. We have a positive disposition towards members of the group to which we ourselves belong (the ingroup) and a negative disposition toward people from other groups (the outgroup). This has profound negative consequences for human society (such as war and racism), and it also distorts our thinking. We tend to trust sources within our group too easily and therefore take over irrational beliefs from the ingroup, while we are typically extremely skeptical of the beliefs of members of the outgroup. This also plays an important role in the spreading of religious beliefs. Moreover, it explains why we tend to take on the irrational beliefs of our own group without a second thought, while the irrational beliefs of other groups often seem completely absurd (see chapter 1).

Together with the confirmation bias, the ingroup - outgroup bias is one of the biggest stumbling blocks to critical thinking. In chapter 5, we will look at how we can protect our thinking from these pervasive biases. In the next chapter, we will look at the various domains of irrationality that these biases lead to.

Summary

What are our two thinking systems?

System 1: fast, automatic, intuition-based system
System 2: slow, effortful, reflection-based system

Why is system 1 fallible?

  • It is frugal: it chooses simplicity over complexity

  • Error management

  • Evolutionary mismatch

Why is system 2 fallible?

Adapted to the social context: designed to convince others, win arguments

Which reasoning error helps with this?

The confirmation bias

What is the third source of irrationality?

Emotions

How does evolution by natural selection work?

1. Variation: random genetic mutations occur.

2. Replication: those genetic mutations are passed on to the offspring.

3. Selection: genetic material coding for traits that help the organism survive and reproduce will be more prevalent in subsequent generations, since organisms possessing those traits will on average be more successful at reproducing (and passing on the genes coding for these traits) than organisms not possessing those traits.

Further reading

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Mercier, H., & Sperber, D. (2017). The enigma of reason: A new theory of human understanding. Allen Lane & Harvard University Press.
