
2. Predictably Irrational: An Overview of Common Reasoning Errors

Published on Jun 30, 2022

What is (and what is not) critical thinking?

The term ‘critical thinking’ is often used, but it is not always clear what exactly it refers to. So, what is critical thinking? Critical thinking is rational thinking. It aims to generate justified beliefs (beliefs that we may assume to be true) by systematically analyzing the way in which beliefs have been formed. In other words, critical thinking means that we assess the reliability of our beliefs by reflecting on how these beliefs were formed. Moreover, critical thinking is also autonomous thinking. A critical thinker does not adopt beliefs simply because they are part of a cultural tradition or expressed by an authority figure. In short, critical thinking consists in forming beliefs in a rational way (not intuitively and/or emotionally) and an autonomous way (not relying on tradition and/or authority).

What is critical thinking not? Critical thinking is not ‘negative’ thinking. It does not aim to undermine every claim. Critical thinking does not mean that we question everything permanently. Nor does it necessarily lead to skepticism: the position in which one suspends all beliefs by ‘knowing that one does not know’. Critical thinking is not intelligent or creative thinking either. Sometimes intelligent thinking leads to very uncritical beliefs (think, for example, of ingenious conspiracy theories). Finally, critical thinking cannot simply be identified with well-informed thinking. Being well informed is a necessary condition for arriving at justified beliefs, since without good information we cannot reach a justified belief, but it is not a sufficient condition: even with good information we can still arrive at unjustified beliefs, because we can misinterpret correct information.

The goal of critical thinking

Critical thinking aims to distinguish sense from nonsense, good arguments from bad arguments, reliable thinking from unreliable thinking. To do so, we must focus on the source of our thinking: our thinking apparatus. By gaining insight into our own thinking, we are better able to estimate the reliability of the outcomes of such thinking. It is important to realize that we are not born with the ability to think critically. Critical thinking must be learned. In fact, critical thinking often goes against our spontaneous way of thinking. We must constantly beware of reasoning errors or fallacies.

No one, however intelligent they may be, is immune to irrational thinking. On the contrary, sometimes intelligent people are actually more susceptible to adopting irrational beliefs because they are better able to defend those views against counterarguments. Take Sir Arthur Conan Doyle, author of the Sherlock Holmes detective stories, for example. Doyle was led to believe that fairies existed by two young girls armed with their dad’s camera and fairy dolls. Doyle would defend his outlandish view against skeptics and give complex arguments for the existence of spiritual entities.

The usefulness of critical thinking

Before going into how and why our minds lead us astray, and how we can guard our thinking against reasoning errors, we have one more important question to address: Why should we think critically? What is the use of critical thinking? Critical thinking is not a purely intellectual exercise. It has a real and important impact on our daily lives. We make many decisions every day. These decisions range from trivial ones, such as what to have for dinner or whether to buy a new phone, to more important ones, such as what to study at university and which professional career to pursue. We make these decisions based on information: information about the nutritional value, price, and taste of food products, about the price and quality of that new phone, or about the study curriculum and the profession we are considering.

This is relatively new in the history of humankind. Never before did or could we make as many decisions as today. Deciding which professional career to pursue, whom to marry, how many children to have, where to live and what to consume is a fairly recent phenomenon. In the Middle Ages all these things were a given: almost everybody did what their father or mother did, was given away in marriage, did not do family planning, lived in their native village, and consumed what was available (given their social class). Life was already settled before it even began. Today, in modern societies at least, that is not the case.

Consequently, we have never been so dependent on information. And information is anything but scarce. Indeed, we are constantly flooded with it. The internet and other media bombard us daily with an endless stream of information. The problem, however, is that not all information is reliable, and that the fragments of information that reach us typically do not come with an assessment of their reliability. We must find that out for ourselves. By now, most of us are aware that an email from an obscure billionaire who promises us a huge sum if only we help him unlock his inheritance is not exactly reliable. But much misinformation still goes ‘viral’. The internet is overflowing with unfounded health warnings against, for example, the use of microwave ovens or cellphones. We are also fed a constant stream of health and other advice that is far from reliable: from the next detox cure that will make us look ten years younger, to ‘superfoods’ of which we cannot eat enough, to (and this has much more severe consequences) completely unfounded and alarmist claims about Covid-19 and other vaccines. These are big claims, without (sufficient) evidence to back them up. Nonsense has always been around, but the amount of nonsense we are served today is greater than ever.

Moreover, nonsense breeds more nonsense. Beliefs do not emerge in isolation. Our worldview consists of a complex web of interwoven beliefs. This means that illusions or irrational and erroneous views tend to branch out in our thinking (Boudry, 2016). Anyone who believes in the predictive power of astrology will probably be more susceptible to other illusions such as believing in the existence of people with paranormal gifts, psychics, and the efficacy of treatments such as ‘energy healing’.

The tenacity of nonsense

Nonsense, as I have previously pointed out, is a historical constant. All eras and cultures have their irrational views. Interestingly, however, whereas blatantly irrational views generally seem completely absurd to an outsider, people within groups that hold these views are often not aware of the bizarre nature of their convictions. We do not have to go back far into the past to find seemingly absurd beliefs. In the 18th century, a large part of the population believed in witchcraft, believed that an English woman, Mary Toft, had given birth to rabbits, and believed that there were recipes to produce not only gold (alchemy) but also live animals (such as a scorpion, by placing basil leaves between two stones and letting them heat in the sun).

From the outside, these views seem absurd, and it is hard to imagine that a large part of the population held such beliefs. But our contemporary illusions are not so different. We have rid ourselves of many misbeliefs since the 18th century, mainly thanks to the development of modern sciences, but we certainly did not rid ourselves of all illusions. How would someone from the 23rd century look at our widespread superstitions (touching wood, being apprehensive on Friday the 13th, etc.)? And what would she think of the popular belief that surviving for a week on so-called ‘detox’ juices and tea clears our body of toxins (which toxins is usually not specified), and that an ethereal, supernatural being was incarnated in a human body some 2000 years ago?

The fact that illusions are part of a coherent worldview and do not look so strange from the inside, only makes it more difficult to expose them. The problem is also that with our intuition or common sense, we can perhaps expose the most outrageous claims, but certainly not all illusions. On the contrary, irrationality often stems from our intuitive and spontaneous thinking. In other words, normal thinking leads us astray. We tumble from one cognitive trap into another. It makes us ‘predictably irrational’, as the behavioral economist Dan Ariely (2008) describes it.

Three rules of thumb

Yet we are not powerless. In most cases we can set our thinking straight by using three rules of thumb (Braeckman, 2017). The first rule of thumb is not to accept a claim merely because it sounds plausible. The fact that a claim sounds plausible is absolutely no guarantee that it is true (our intuition can be misleading, as we will find out). So, having the feeling of understanding or knowing something is by no means a guarantee that you actually understand or know something (De Regt & Dooremalen, 2015). This is why we need to rely on external (non-psychological) support to assess a claim’s reliability. Is the claim substantiated by facts? Does it emanate from a reliable source?

The amount of external support we should require before accepting a claim depends, of course, on the claim itself. Extraordinary claims must be supported by extraordinarily strong evidence: a photograph of elves, of the Loch Ness monster, of the ‘yeti’ or of ‘bigfoot’ is not adequate evidence for accepting the existence of these creatures. Related to this is the question of the burden of proof. Anyone who comes up with claims about paranormal activities must provide evidence for this, not the other way around. Note in this context that despite the large cash prizes promised by skeptics to those who can prove paranormal gifts unambiguously, no one has yet succeeded. Such beliefs are not ‘innocent until proven guilty’. The same applies to alternative forms of medicine, conspiracy theories, and other theories that go against the scientific consensus (De Regt & Dooremalen, 2008), because that consensus is supported by a large amount of evidence and has come about through a reliable process. Anyone who wants to reject the consensus must come up with strong counterevidence.

Secondly, we must use “Occam’s razor”. Occam (an English philosopher from the 14th century) taught us that the most economical or parsimonious explanation is often the best. Such an explanation does not raise many new questions which in turn require an explanation (and which would make the explanation less likely to be true). Take crop circles, for example. Some people believe that they are made by extraterrestrials. Another explanation is of course that they are ‘hoaxes’: that they are made by people to fool other people. Believing the first explanation raises a whole series of other questions that also require an explanation: how did these aliens get here unnoticed, why don’t they seek contact, why do they mainly make crop circles in Europe, etc.? The most economical explanation, of course, is that these circles are made by people with a humorous slant.

Finally, we must be aware of a series of ‘cognitive pitfalls’. Our thinking comes standardly equipped with these pitfalls, and everyone is susceptible to them. In this book we will identify those pitfalls or biases, explain their origin, and learn how we can avoid them. Cognitive illusions are similar to perceptual illusions. They are systematic, permanent, and universal. Systematic, because our thinking is always distorted in the same way. Human illusions, while they can vary considerably in precise content from culture to culture, are largely variations on the same themes. Cognitive reasoning errors and illusions are also permanent, just like perceptual illusions. Take the famous Müller-Lyer illusion, in which two lines of equal length are capped by inward- or outward-pointing arrowheads:

Even if we know that the two lines are the same length (after measuring them, for example) and we understand that we are dealing with an illusion, we cannot get rid of the impression that the bottom line is longer than the top line. The same goes for cognitive illusions. Even though we are aware of the cognitive pitfalls (the biases or reasoning errors) that lead to illusions, we still tend to make the same reasoning errors. Finally, illusions are universal. Every normal human brain is susceptible to the same kind of cognitive illusions (just as every human being with normal sensory perception is susceptible to the same perceptual illusions). The first step to critical thinking is therefore to expose the cognitive pitfalls or biases that lead to illusions or irrational beliefs.

Predictably irrational

To learn to think critically, first we need to become aware that our spontaneous thinking is deceiving us in predictable ways. The best way to do this is by exposing you to a series of riddles. They show us in what contexts and in what ways our spontaneous thinking is misleading. For each riddle, before you look at the answer, try to formulate the first answer that comes to mind and then think about why this answer might be wrong.

Problem 1: The ‘Linda’ problem

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

  1. Linda is a bank teller.

  2. Linda is a bank teller and is active in the feminist movement.

(Tversky & Kahneman, 1983)

The answer is 1. If we think logically, we realize that statement 2 cannot be more probable than statement 1, since the group described in 2 is a subset of the group described in 1. In a study by the psychologists Tversky & Kahneman (1983), 85% of the participants nevertheless chose statement 2. The reason is that 2 fits better with the description of Linda, but statistically 2 can never be more probable than 1. Intuitively, we are very bad at estimating probabilities. That is also evident from the next riddles.
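For readers who like to see the arithmetic, here is a minimal sketch in Python. The numbers are purely hypothetical assumptions; the point is that, whatever we plug in, the conjunction can never be the more probable option.

```python
# Sketch with assumed (hypothetical) numbers: whatever we plug in,
# P(bank teller AND feminist) can never exceed P(bank teller).
p_teller = 0.05                # assumed probability that Linda is a bank teller
p_feminist_given_teller = 0.9  # assumed, even if we make it very high

p_teller_and_feminist = p_teller * p_feminist_given_teller

print(p_teller, p_teller_and_feminist)    # 0.05 vs 0.045
assert p_teller_and_feminist <= p_teller  # the conjunction rule
```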

Problem 2: The base rate fallacy

Martin is a single man aged 45. He is an introvert and likes to read. What is more likely: Martin is a librarian (A) or Martin is a salesman (B)? (Raiffa, 2002)

The answer is B and the reason for this is that there are many more salesmen than librarians (approximately a hundred times more). That is why, despite the personality description of Martin, it is still much more likely that he is a salesman. Not taking into account the fact that there are many more salesmen than librarians is known as the ‘base rate fallacy’ or ‘base rate blindness’. The base rate refers to the prior odds or probability. In this example the base rate is the number of librarians in the world divided by the number of salespeople, so 1/100. This number should also be taken into account, not just Martin’s personality description.
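To see how the base rate can swamp a fitting personality description, here is a minimal sketch. The 1-to-100 ratio comes from the text; the two likelihoods are assumed purely for illustration.

```python
# Hypothetical illustration: suppose the description fits librarians ten times
# better than salesmen. The 100:1 base rate still dominates.
librarians_per_salesman = 1 / 100  # base rate from the text
p_description_if_librarian = 0.50  # assumed
p_description_if_salesman = 0.05   # assumed (ten times less likely)

odds_librarian_vs_salesman = (
    librarians_per_salesman * p_description_if_librarian / p_description_if_salesman
)
print(odds_librarian_vs_salesman)  # 0.1 -> still 10 to 1 in favour of "salesman"
```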

Problem 3: The base rate fallacy

1 in 10,000 people suffers from a rare, deadly disease. A doctor develops a test to detect the disease. The test has a false positive rate of 0.5%, meaning that 0.5% of the people who do not have the disease nevertheless test positive. Because the test is cheap and very accurate, the government decides to have everyone tested for free. Your test comes back positive. What are the chances that you suffer from the disease? (Kahneman & Tversky, 1985)

Most people answer 99.5%. That is wrong. The figure is much lower: only about 2%, because the base rate should also be factored in: only 1 in 10,000 people suffers from the disease. The chance that a random person gets a false positive result (0.5%) is much greater (50 times greater, hence the roughly 2%) than the chance that they actually belong to the group of people who have the disease (0.01%).
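For those who want to verify the 2%, here is a minimal Bayes’ rule sketch, assuming (as the riddle implicitly does) that the test detects every true case.

```python
# Bayes' rule with the numbers from the riddle
# (assumption: sensitivity = 100%, i.e. everyone with the disease tests positive).
prevalence = 1 / 10_000      # base rate: 0.01%
false_positive_rate = 0.005  # 0.5% of healthy people test positive

p_positive = prevalence * 1.0 + (1 - prevalence) * false_positive_rate
p_disease_given_positive = prevalence * 1.0 / p_positive

print(round(p_disease_given_positive, 3))  # ~0.02, i.e. about 2%
```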

Problem 4: The birthday problem

How many people should you gather so that the probability that two people have the same birthday is greater than the probability that no one shares a birthday? 1

The surprising answer is 23 people; with a group of 57 people, the chance has already increased to 99%! Intuitively, we think that the number must be much higher. (For the statistical calculation, see the source above.)
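For those who want to check the numbers, here is a minimal sketch of the standard calculation, assuming 365 equally likely birthdays and ignoring leap years.

```python
# Probability that at least two of n people share a birthday.
def p_shared_birthday(n: int) -> float:
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

print(round(p_shared_birthday(22), 3))  # ~0.476
print(round(p_shared_birthday(23), 3))  # ~0.507 -> first group size above 50%
print(round(p_shared_birthday(57), 3))  # ~0.990
```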

Problem 5: Exponential reasoning

Every day a lily pad doubles in size. If after 40 days it covers the entire pond, then at what point would it be covering half of the pond? 2

After 39 days (not 20, as we are sometimes inclined to answer immediately, because we reason linearly and not exponentially).

Problem 6: Exponential reasoning

Imagine you could endlessly fold a sheet of paper with a thickness of 0.1 millimeter. How many times would you have to fold it so that the thickness of the sheet reaches the moon (about 385,000 km)? (By folding it once you would get a thickness of 0.2 mm, folding it twice a thickness of 0.4 mm, etc.)

42 times! That figure seems absurdly low, and that is because we underestimate exponential growth.3

# Folds | Thickness (mm)
0 | 0.10
1 | 0.20
2 | 0.40
3 | 0.80
4 | 1.60
5 | 3.20
6 | 6.40
7 | 12.80
8 | 25.60
9 | 51.20
10 | 102.40
11 | 204.80
12 | 409.60
13 | 819.20
14 | 1,638.40
15 | 3,276.80
16 | 6,553.60
17 | 13,107.20
18 | 26,214.40
19 | 52,428.80
20 | 104,857.60
21 | 209,715.20
22 | 419,430.40
23 | 838,860.80
24 | 1,677,721.6
25 | 3,355,443.2
26 | 6,710,886.4
27 | 13,421,773
28 | 26,843,546
29 | 53,687,091
30 | 107,374,182
31 | 214,748,365
32 | 429,496,730
33 | 858,993,459
34 | 1,717,986,918
35 | 3,435,973,837
36 | 6,871,947,674
37 | 13,743,895,347
38 | 27,487,790,694
39 | 54,975,581,389
40 | 109,951,162,778
41 | 219,902,325,555
42 | 439,804,651,110
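A quick back-of-the-envelope check, as a minimal sketch (the distance to the moon is rounded to 385,000 km, as in the riddle):

```python
# Minimal sketch: count how many doublings a 0.1 mm sheet needs
# before its thickness reaches the moon (~385,000 km).
thickness_mm = 0.1
moon_distance_mm = 385_000 * 1_000_000  # 385,000 km expressed in mm

folds = 0
while thickness_mm < moon_distance_mm:
    thickness_mm *= 2
    folds += 1

print(folds)  # 42
```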

Such an exponential reasoning error was often made at the beginning of the Covid-19 pandemic. When the reproduction number was above 1 (meaning that each infected person infects on average more than one other person) but there were not yet that many infections, many people (including some policymakers) mistakenly thought that the pandemic was under control. But with a reproduction number above 1, the number of infected people grows exponentially: at regular intervals, it doubles. If that is the case, as we have experienced several times, the number of infected people suddenly increases very rapidly.

Problem 7: The availability bias

What is more likely: That you will die from a shark attack, or from a dislodged part of an airplane that crashes down? (Tversky & Kahneman, 1973)

In the U.S., the odds are 30 times greater that you will die from a broken-off part of an aircraft than from a shark attack, but since shark attacks get a lot more media attention, we tend to think that they are more likely to happen. This is known as the ‘availability bias’. We often overestimate the likelihood that something will occur when it is easy to recall or imagine (see appendix).

Problem 8: The availability bias

Of which are there more English words: Words that start with an R, or words that have an R as the third letter? (Tversky & Kahneman, 1973)

As it turns out, there are many more English words with R as the third letter than words starting with R. We tend to think the opposite because it is easier to call to mind words that start with an R than words that have R as the third letter. Here, too, the availability bias plays a role.

Problem 9: The availability bias

What is the probability that a startup will succeed?

It hovers around 10%. We tend to come up with a higher estimate because we hear a lot more about successful startups than about failed ones. This is known as the ‘survivorship bias’, and it is a type of availability bias.4

One way in which the availability bias expresses itself is in buying lottery tickets. Because winners regularly appear in the media, it feels like the chance of winning is real, while it is in fact negligibly small. The probability of winning is around 1 in 8 million. Given that roughly six million people live in Flanders (the northern part of Belgium), this means that you have a greater chance of finding one specific Flemish person whom you don’t know by knocking on a single randomly chosen door in Flanders. If people perceived their chances of winning in this way, the lottery would likely go out of business. Another example of the availability bias (and of a statistical reasoning error similar to the one described in riddle 1) comes from the U.S.: shortly after 9/11, Americans were willing to pay more for a life insurance policy against terrorism than for a life insurance policy that insures against any cause of death.

Problem 10: Anchoring

The following bias is known as ‘anchoring’. Psychologists (Kahneman, 2011) divided people into two groups. They asked the first group: Do you think the tallest trees in the world (redwoods) are more or less than 300 meters tall? Then they asked them: How tall do you think the tallest trees are? The second group was asked: Do you think that the tallest trees are more or less than 50 meters tall? How tall do you think the tallest trees are?

Interestingly, the answers to the second question varied widely. Group 1 estimated that the tallest trees were 255 meters tall on average; the average guess of group 2 was 85 meters. The reason for this is that the numbers given in the first question acted as an ‘anchor’ for the estimates in the second question. Anchoring is a much-discussed bias in behavioral economics and can lead to irrational consumer behavior. We will discover this in chapter 4 (‘Irrationality in Action’) and in the appendix (‘Detect the Reasoning Errors’).

Problem 11: The framing effect

The next bias is known as the ‘framing effect’. A disease breaks out; 600 people will die if nothing is done. Doctors develop two treatments to combat the disease. Which treatment would you prefer?

Frame A
Treatment 1: 200 people are saved.
Treatment 2: 1/3 chance that they will all be saved, 2/3 that they all die.

Frame B
Treatment 1: 400 people will die.
Treatment 2: 1/3 chance that they will all be saved, 2/3 that they all die.

Test subjects were presented with either frame A or frame B. In frame A, 72% opt for treatment 1; in frame B, only 22% do, even though, of course, both frames describe exactly the same outcomes (Kahneman & Tversky, 1979). The decisions we make do not only depend on the information itself, but also to a large extent on the way in which this information is framed (and marketing techniques eagerly play into this, as we will see in chapter 4 and in the appendix).
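A minimal sketch showing that the two frames describe identical expected outcomes:

```python
# Expected number of deaths (out of 600) under each option in each frame.
total = 600

frame_a_treatment_1 = total - 200                # 200 saved for sure -> 400 die
frame_a_treatment_2 = (1/3) * 0 + (2/3) * total  # expected deaths: 400
frame_b_treatment_1 = 400                        # 400 die, stated directly
frame_b_treatment_2 = (1/3) * 0 + (2/3) * total  # expected deaths: 400

print(frame_a_treatment_1, frame_a_treatment_2,
      frame_b_treatment_1, frame_b_treatment_2)  # all equal to 400
```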

Problem 12: The Allais paradox

Another type of irrationality studied in behavioral economics is known as the ‘Allais paradox’ (Allais, 1953). You have an 80% chance of winning 4000 euros (A) or a 100% chance of winning 3000 euros (B): What would you prefer? Most people choose B (80%). Now, suppose you have a 20% chance of winning 4000 euros (A) or a 25% chance of winning 3000 euros (B): What do you prefer? In this case, most people choose A (65%). However, if we think rationally, the consideration in both cases should be the same (in economics this is referred to as ‘expected utility theory’). So, we shouldn’t shift from B to A. After all, in both cases choosing B means giving up the same amount of potential winnings (1000 euros) in exchange for the same proportional increase in the probability of winning; the second pair of options is simply the first pair with every probability divided by four.

It becomes even more striking when we make the percentages smaller: suppose you have a 45% chance of winning 6000 euros or a 90% chance of winning 3000 euros, compared to a 0.1% chance of winning 6000 euros or a 0.2% chance of winning 3000 euros. Here, too, we should choose the same in both cases, but intuitively we do not.

With loss, we see the opposite effect. Most prefer an 80% chance of a loss of 4000 euros over a 100% chance of a loss of 3000 euros (92%). But if the percentages change (but not the proportions) we get a different choice. Between a 20% chance of a loss of 4000 and 25% chance of a loss of 3000, 58% opts for the latter.
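A minimal sketch of the expected monetary values behind these choices (a simplification: expected utility need not equal expected money, but the consistency point stands).

```python
# Expected values for the Allais choices. The second pair in each case is the
# first pair with every probability divided by four, so a consistent chooser
# should not switch between A and B.
ev_gain_a1, ev_gain_b1 = 0.80 * 4000, 1.00 * 3000    # 3200 vs 3000 -> most pick B
ev_gain_a2, ev_gain_b2 = 0.20 * 4000, 0.25 * 3000    # 800 vs 750   -> most switch to A

ev_loss_a1, ev_loss_b1 = 0.80 * -4000, 1.00 * -3000  # -3200 vs -3000
ev_loss_a2, ev_loss_b2 = 0.20 * -4000, 0.25 * -3000  # -800 vs -750

print(ev_gain_a1, ev_gain_b1, ev_gain_a2, ev_gain_b2)
print(ev_loss_a1, ev_loss_b1, ev_loss_a2, ev_loss_b2)
```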

Problem 13: The hindsight bias

Another important bias is the ‘hindsight bias’. What probability do you think you would have assigned to the outbreak of a global pandemic before Covid-19 happened? Or to a global financial crisis happening before 2007? Chances are that, now that these events have taken place, you overestimate the likelihood that you would have attributed to them beforehand. After the financial crisis, for instance, many investors and economists claimed to have anticipated its occurrence. And yet, most investors still lost quite a bit of money during the crisis, and articles warning about an imminent crisis in 2006 are hard to come by. This is due to the ‘hindsight bias’. With hindsight, it often seems quite probable that something would occur. Therefore, after events have taken place, we usually think that our earlier predictions were better than they actually were.

The American psychologist Baruch Fischhoff (1975) first identified this fallacy. Before then U.S. President Nixon traveled to China and the Soviet Union, Fischhoff had test subjects predict the probability of ten possible outcomes of this diplomatic mission. After the presidential visit, Fischhoff asked the participants to remember the probability that they had accorded to each of the ten possible outcomes. Most people, it turns out, overestimated the weight they had accorded to the scenario that unfolded. Most people thought they were better predictors than they really were. This kind of experiment has been repeated several times, for example before and after the trial of O.J. Simpson, and during the impeachment of President Bill Clinton, with similar results.

Problem 14: The confirmation bias

One of the most truth-distorting fallacies is the confirmation bias. We tend to register and look for information that confirms our beliefs (whilst remaining almost entirely blind to information that contradicts our beliefs).

Psychologist Peter Wason (1960) clearly demonstrated this phenomenon with his ‘hypothesis testing’ experiment. You are presented with a sequence of three numbers: 2, 4 and 8. A rule corresponds to this sequence. The aim is to discover that rule by proposing different series, each consisting of three numbers. Each time you propose a series, you get one of the following answers: ‘yes, that follows the rule’ or ‘no, that does not follow the rule’. When you think you know the rule, you can take a guess.

The rule, quite simply, is: a series of increasing numbers. Usually, people take a long time to find it. The reason we take so long to solve it is that we tend to propose sequences that confirm or conform to the rule in our head. We think, for example, that the rule involves doubling numbers, or doubling even numbers, and propose sequences that correspond to this rule, whereas, in order to solve the riddle, you must do the opposite. You should propose series that do not correspond to your hypothetical rule. That is the only way you can test the rule and, if need be, replace it with another rule, which you can then test again, and so on.
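A minimal sketch of the logic of the task (the hidden rule, as stated above, is simply an increasing series):

```python
# The hidden rule: the three numbers must be strictly increasing.
def fits_rule(seq):
    return all(a < b for a, b in zip(seq, seq[1:]))

# Probes that merely CONFIRM a "doubling" hypothesis all come back 'yes'...
print(fits_rule([3, 6, 12]), fits_rule([5, 10, 20]))  # True True
# ...but only probes designed to FALSIFY the hypothesis are informative:
print(fits_rule([1, 2, 3]))   # True  -> doubling is not required after all
print(fits_rule([12, 6, 3]))  # False -> increasing order is what matters
```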

Problem 15: Self-overestimation

Finally, another tenacious bias is self-overestimation. Most of us are susceptible to so-called positive illusions.

85 to 90% of people think that they are above-average drivers. 94% of university teachers think that they are above-average teachers. 25% of all people think they belong to the top 1% when it comes to social skills. And, ironically, the vast majority of us also think that we are less prone to overestimating our abilities than the average person.

Overestimation does not only apply to our sense of self. We also demonstrate a serious bias when estimating the intelligence and talent of our children. We will talk about the evolutionary explanation of these positive illusions in the next chapter. Interestingly, depressed people appear to have a more accurate view of their skills and talents than psychologically healthy people. This phenomenon is called ‘depressive realism’.

That concludes the overview of some of our most prominent biases. As Dan Ariely (2008) summarizes it: ‘Even the most analytical thinkers are predictably irrational; the really smart ones acknowledge and address their irrationalities’. That is the purpose of this book.

Summary

What is critical thinking?

Rational and autonomous thinking

What are the three rules of thumb of critical thinking?

  1. Demand external (not psychological) support for beliefs.

  2. Apply Occam’s razor: choose the most economical / parsimonious explanation.

  3. Beware of cognitive illusions.
