
5. Mastering Critical Thinking: How We Can Guard Our Thinking Against Reasoning Errors

Published on Jun 30, 2022

Three sources of reasoning errors

In order to guard our thinking against reasoning errors, we must first be aware of where these errors come from. In chapter 3 we saw how natural selection equipped us with different thinking systems (system 1 and system 2) and how both systems systematically give rise to reasoning errors. Moreover, we saw that emotions can distort our reasoning. In this chapter, you will learn how you can protect your thinking from the irrationality that arises from your intuitive thinking (system 1), from interference by your emotions, and from the confirmation bias and the overconfidence bias (the infamous biases of system 2, our conscious reasoning).

Intuitive reasoning errors

There is no off-switch for system 1

How can we guard our thinking against intuitive reasoning errors? First, we must realize that system 1 cannot be switched off. No matter how critical we are, our automatic, intuitive thinking continues to flood our minds with a constant stream of output. Our only protection against intuitive thinking errors is to check our intuitive output with our conscious and reflective thinking (system 2). This requires an effort. We are inclined to sit back and let system 1 do all the work. Remember that Kahneman (2011) calls system 2 the ‘lazy controller’. Also, remember Sperber and Mercier’s (2017) ‘argumentative theory of reasoning’: our reasoning abilities (system 2 thinking) evolved in order to convince others, not to check up on our own intuitive thinking. Indeed, we are not inclined to reflect upon the reliability of our intuitions.

Even in contexts where people have been trained not to think intuitively about a certain topic, such as in the sciences, it appears that system 1 still frequently distorts human thinking. A good example of this occurred in research on human evolutionary history. Like most other animal species, we have an (unconscious and intuitive) psychological mechanism that makes a strong distinction between members of our own species and other species. This evolutionarily old mechanism evolved for obvious reasons such as procreation, cooperation, and competition with conspecifics. It is still present in humans (notably, human languages typically group all other species under the single heading ‘animal’) and prompts us to regard human characteristics as fundamentally different from non-human characteristics.

Even in the context of research on the evolutionary history of humans, where this intuition has been violated by the discovery that we share a common ancestor (who lived about 6 million years ago) with chimpanzees and bonobos, the intuition that humans are radically different from other species still appears to have distorted the thinking of paleoanthropologists. Human evolution was initially presented as ‘unilineal’ and is still perceived by many lay people as such. A unilineal view of human evolution entails that there is a single line of hominid species between the common ancestor we share with the other apes and Homo sapiens (Australopithecus – Homo habilis – Homo erectus – Homo sapiens). This view stands in stark contrast to the branching pattern seen in the evolutionary history of other animal species, in which a given branch typically splits, some branches go extinct, and others, in turn, branch out again.

The reason invoked to explain this difference is that the hominids were able to inhabit very different ecological niches by possessing (primitive forms of) culture. As a result, paleoanthropologists argued, hominids did not split up into different branches – each with specific adaptations to a particular environment – but evolved, as a whole, into what we are now. The underlying assumption (or ‘intuition’) here is that humans are radically different from other animal species, and that this difference had an impact on our (recent) evolutionary history (De Cruz & De Smedt, 2007).

This unilineal vision of human evolution has, however, been debunked. A rather large variety of hominid species, it turns out, inhabited the world at roughly the same time. All these other hominid species have since gone extinct, and Homo sapiens remains the only hominid species alive today. Recently, the remains of a dwarfish upright hominid with a brain not much larger than that of a chimpanzee were found on the Indonesian island of Flores. These hominids are estimated to have lived about 18,000 years ago, at the same time as Homo sapiens, and were most likely driven to extinction by our ancestors.

We should, therefore, always be on our guard against the distorting influence of system 1 on our thinking. Even in contexts where we mainly rely on our conscious and reflective thinking processes (system 2), such as in the sciences, system 1 remains active behind the scenes. As mentioned before, our intuitive thinking cannot be switched off. All we can do is systematically check our thinking for reasoning errors that were automatically and unconsciously generated by system 1.

Can we never trust our intuition?

This central point of critical thinking (that we should check the output of our intuitive thinking) is at odds with the popular notion that we should ‘follow our intuition.’ We are advised to follow our gut feeling or inner voice and are often promised that this trust will guide us towards the right decision. In other words, we should let system 1 run the show, including the affect heuristic, whereby we make decisions based on emotional reactions rather than on a well-thought-out cost-benefit analysis. After what we have learned in the previous chapters, I hope you understand that this is not an optimal decision-making strategy. Does this mean that we can never rely on our intuition? No! The correct answer is: it depends.

Two kinds of intuitions

‘Intuition’ refers to two very different sources of beliefs. The first source consists of genetically anchored or innate thought processes. The second consists of automatic but acquired thought processes. With regard to the first source, evolution has equipped us, as explained in chapter 3, with fast and frugal reasoning abilities shaped to navigate our environment. Our ancestors did not have the time to think at length about problems they encountered (remember the hominids pondering whether there is still a tiger in the cave – chapter 3). Nor did they have the luxury of possessing even more complex cognitive abilities, because such abilities come with a price tag (there is a trade-off between the accuracy and cost of cognition). Therefore, as we saw in chapter 3, our intuitive thinking is fallible. Moreover, it can lead to reasoning errors because of ‘error management’ (the fire-alarm principle of tolerating more cheap mistakes in order to avoid costly ones – described in chapter 3) and because of a mismatch between the problems for which these intuitions evolved and the problems we encounter in a modern environment.

Ecological rationality

This does not mean, however, that our intuitive thinking is always misleading. As I pointed out in chapter 3, truth may not be an end in itself for natural selection, but it is usually the best way to ensure the survival and reproduction of an organism, at least when it comes to navigating the natural environment. Recently, there has been a reaction against Kahneman and his colleague Tversky (Tversky & Kahneman, 1974) as well as against other cognitive psychologists who were mainly focused on showing that our intuitive heuristics (the automatic thinking rules of system 1) lead to irrationality. According to the German psychologist Gigerenzer (2000), these heuristics are not ‘misleading because they are simple’, but rather well-adjusted tools evolved to deal successfully with important ‘ecologically relevant’ problems.

Heuristics are not the source of irrationality, Gigerenzer argues, but of ‘ecological rationality’. They enable us to solve ecologically relevant problems quickly and accurately. Take the ‘availability heuristic,’ for example. This heuristic, as Kahneman and Tversky discovered, leads to reasoning errors by making us assume, for instance, that deaths from shark attacks occur more frequently than deaths caused by dislodged aircraft parts (chapter 2). Gigerenzer and colleagues, however, rightly point out that the ‘availability heuristic’ usually produces accurate beliefs because events that are easier to imagine or recall are typically also more common.

Heuristics, Gigerenzer claims, produce true beliefs, at least when they are applied in ‘real world’ contexts. The reason that Kahneman and colleagues found that our intuitive thinking leads to irrationality is that they tested our intuitions in an artificial, experimental context designed to make us err! In everyday life, however, our heuristics are generally reliable. Incidentally, Gigerenzer and colleagues refer to Kahneman and colleagues as ‘the people are stupid school of thought’.

Does this mean that we can trust system 1 blindly? Of course not. These heuristics, as Gigerenzer knows, are only reliable insofar as they are applied in an ecologically valid context. Hence the term ‘ecological rationality’, which refers to the ancestral context in which most of human evolution took place. The danger of a mismatch between these heuristics and the context in which they are now put to work, however, only increases as we move away from our ancestral environment.

Think of assessing financial risks, developing theories in quantum physics, or conducting statistical analysis. Our intuitions are no good in these contexts. However, we can usually rely on them in an everyday context. For example, in finding out what is most prevalent in our environment (using the availability heuristic), or who can be trusted in our social environment. Evolutionary psychologists Cosmides and Tooby (1992), in this context, argue that we are endowed with a well-adjusted ’cheater detection module’.

So, we should start by becoming aware of the fact that our intuitions automatically generate beliefs. When we notice that we have formed a belief intuitively, we should check whether or not our intuitions are reliable in the context in which they are applied. Are we dealing with a context that is ‘ecologically valid’? Or, put differently, is the context in which we apply our intuitions not fundamentally different from our ancestral context? If that is the case, we can usually trust our intuitive thinking. If we apply them in a context that is far removed from this ecological context (e.g. modern sciences, financial markets and statistics), we should turn on system 2 (consciously think it through) and refrain from going with the first thought that comes to mind (our intuitive output).

We should also be aware of the cognitive pitfalls inherent to our intuitive thinking, mainly due to ‘error management’. When we see patterns and make causal connections, for instance, an alarm bell should go off. It may well be that there is a correlation, perhaps even a causal connection, but – as we saw in chapter 4 – we also know that we are prone to make such connections too quickly. It is therefore advisable to take a step back, engage our reflective thinking, and make sure that there is indeed a pattern or causal connection. Do not be fooled like the instructor at the Israeli aviation base (mentioned in chapter 4)!

The same applies, of course, to those other domains in which we are often irrational (chapter 4). By being aware of these domains we can develop the reflex to critically check our intuitions in these contexts. In short, system 1 cannot be switched off (and fortunately so: life would be unlivable if every action, belief, and decision were the product of laborious, conscious, and slow cognitive processes) and can in most cases be trusted. In some contexts, however, it makes us predictably irrational. We need to be aware of this and switch to system 2 in these contexts to avoid the cognitive pitfalls inherent to system 1.

Acquired intuitions

Intuitive thinking, however, also comes from a second source. Automatic and unconscious thinking processes are not only shaped by natural selection (and genetically anchored); they are also shaped by our experiences. Take the cognitive processes involved in driving a car, for example. Initially, when you first learn to drive, everything happens consciously (and slowly): turning the key, pressing the clutch, switching into first gear, looking in the mirror, and so on. Here, system 2 is clearly running the show. After enough practice, however, these actions occur automatically and unconsciously. In other words, system 1 has taken over. Something similar happens when people develop expertise. By repeatedly performing certain cognitive tasks, we can often learn to perform these tasks automatically, unconsciously, and accurately.

In his book ‘Blink’, in which he praises the power of intuitive thinking, Malcolm Gladwell (2005) discusses two salient examples of this process. The first is ‘chick sexing’: determining the sex of chicks. Distinguishing male from female chicks, it turns out, is difficult when the chicks are only a few days old. Because of the economic incentive to separate male from female chicks as soon as possible, courses exist that teach people to do this quickly and accurately. There is no single characteristic that allows one to determine a chick’s sex with complete certainty. Instead, there are various characteristics that are more often found in female chicks than in male chicks, and vice versa. Professional ‘chick sexers’ have so much experience with this task that they can determine the sex accurately at a single glance. Interestingly, they do this intuitively. They do not consciously make up their mind, which of course explains why they can do the job in a matter of seconds. With years of practice and experience under their belt, these ‘chick sexers’ have learned to distinguish male from female chicks intuitively.

In his second example, Gladwell (2005) recounts the story of an allegedly antique artwork which was confirmed to be authentic through a series of tests. When an expert saw it, however, he immediately knew that the artwork was a counterfeit. This expert, too, came to this decision not through conscious reasoning, but rather felt this intuitively. Yet again, extensive experience enabled the expert to come to intuitively reliable judgments, since the artwork was later confirmed to have been forged. Intuition from this source is therefore (generally) reliable, at least when these intuitive thought processes are the result of a reliable learning process.

A manual for intuitions

Circling back to the question of whether we can trust our intuition, we should proceed as follows. First, we must check the origin of our intuition: Is it an acquired intuition resulting from a learning process, or an innate intuition? If it is an innate intuition, we must ask ourselves whether the context in which we apply it is one in which our intuitions are generally reliable. Do we apply it in an ‘ecologically valid’ domain, or in a domain to which these intuitions are not attuned? In the latter case, an alarm bell should go off, warning us that following our intuition is not advisable.

Emotions

So much for the ‘manual’ for dealing with intuitions. Irrationality, however, does not only come from automatic and unconscious thinking processes. Another infamous source of irrationality is our emotions. Our thinking, as discussed in chapter 3, is not (always) isolated from our feelings. Emotions play an important role not only in the selection of beliefs we take on board (through our ingroup-outgroup bias), but also in the selection of beliefs that we refuse to throw overboard. We often develop emotional ties with our beliefs. This is evidently the case with religious beliefs, but also, for example, with our political or moral points of view and, more generally, with opinions that we have publicly defended in the past.

Irrational forms of ‘cognitive dissonance reduction’

When we are presented with strong counterevidence to our beliefs (strong enough to get through the filter of the confirmation bias) a state of ‘cognitive dissonance’ occurs. Our beliefs are not consistent with the information that comes from reality. We usually find this unpleasant. We prefer to see ourselves as rational beings (beings that represent the world accurately). As critical thinkers we should of course eliminate the dissonance by discarding our beliefs or at least adjusting them, but as emotional beings we often refuse to do so. Instead, we engage in a different kind of dissonance reduction. An irrational kind. We do not adapt our beliefs to the outside world, but our perception of the outside world to our beliefs. In other words, because we cherish these beliefs so much, we keep them on board and reduce the dissonance by adapting the interpretation of the facts.

A striking example of this comes (again) from the wondrous world of sects. A sectarian group in the U.S. believed that a flood would destroy the whole world on the morning of December 21st, 1954. The members of the sect believed that they would be saved right before the flood by aliens coming to their rescue in a flying saucer from the planet Clarion. The psychologist Leon Festinger (1957) went over to see what would happen when beliefs in which people are strongly emotionally invested are unmistakably refuted. He observed the sect members standing atop a hill in California on that fateful morning, ready to board the flying saucer. As you can imagine, nothing happened. No flying saucer and no flood. What would they do?

Some left the sect disillusioned. Most, however, stuck to their beliefs and came up with a special explanation for the facts. According to them, God had decided to save the world at the very last minute, because the small group of believers had ‘spread so much light’. A great example of irrational dissonance reduction: the belief is not adapted to the facts, but the facts are interpreted in such a way that the belief remains intact. And in a much more subtle way (thank goodness!), we are all susceptible to this kind of dissonance reduction.

Think, for example, of someone who wants to live in an environmentally conscious way but does not want to sell their polluting car and rationalizes this decision (e.g. ‘if I sell it, somebody else will just drive it’, ‘the greenhouse effect is mainly due to cattle’, ‘one car does not make a difference’, etc.). The same goes for the smoker who wants to live healthily and minimizes the health risks of smoking, or the athlete who takes doping but does not want to see himself as a cheater and tells himself that everyone does it.

The psychological mechanism of irrational dissonance reduction can have far-reaching negative consequences for society. Take, for example, the politician who has minimized or denied climate change throughout their life. They will be inclined to be very skeptical of new information about the impact of greenhouse gases on the climate. The same applies to other socio-economic issues where politicians have taken a stand. They tend to dig their heels in when presented with counterevidence.

How can we curb the confirmation bias?

Our intuitions and emotions are not the only ones to blame for our irrationality; our conscious and reflective thinking also leads us astray. The biases arising from our conscious thinking (system 2) are the confirmation bias and the overconfidence bias that follows from it. The success of modern science, as we saw in chapter 4 (and will further discuss in chapter 7), can be attributed to its built-in protection mechanisms against this universal human bias. In a similar way, we too can protect our thinking.

We can do so in two ways. Firstly, we can limit the confirmation bias by following the example of Darwin (see chapter 4): by being aware that we are affected by it (we all are!) and by making a conscious effort to look for and record evidence or arguments that would refute our beliefs. In other words, we can play devil’s advocate in our own thinking. When we form an opinion or belief, we should not – as we are inclined to do – (only) search for supporting evidence but also for counterevidence.

A second way to curtail our confirmation bias is to surround ourselves with (and listen to) people who think differently. We are not inclined to do that either. Engaging in discussions is not our favorite social activity (we prefer to talk to like-minded people) and yet we all need it to keep our thinking in check. Companies, for example, benefit from a board where disagreements occur and dissenting opinions are expressed.

The wisdom of the crowds

Other people, it turns out, are very good at uncovering the fallacies in our arguments, just as we are very good at exposing the fallacies that have crept into the reasoning of others. Only, we seem to lose most of this ability when it comes to our own beliefs and arguments (see the ‘bias blind spot’ explained in the appendix). As a result, groups generally come to more accurate beliefs than individuals. This phenomenon is often referred to as the ‘wisdom of the crowds’ (Surowiecki, 2004).

At the beginning of the 20th century, Francis Galton, Darwin’s half-cousin, discovered this phenomenon. A large group of people was asked to estimate the weight of an ox. It turned out that the median of their answers was extremely accurate (the median answer had an error margin of less than 1%). The larger the group, the greater its diversity, and the more independently the opinions of its members are formed, the more accurate the group becomes. The reason for this is that such a form of group thinking corrects for the individual errors and tunnel vision of each of the members of the group.

Large groups consisting of laymen often prove better at making predictions in economic and political contexts than the best experts! It is crucial, however, that the group does not behave as a group. For the wisdom of the crowds to materialize, members should not be allowed to communicate and consequently influence each other. Otherwise, social emotions such as conformism (ingroup bias) take over and the wisdom of the crowds often disappears.
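To make this concrete, here is a minimal sketch in Python, with made-up numbers in the spirit of Galton’s ox experiment, of how the median of many independent, noisy guesses tends to land very close to the true value:

```python
import random
from statistics import median

random.seed(1)

TRUE_WEIGHT = 540  # hypothetical true weight of the ox, in kilograms

# Simulate 800 independent guesses: each individual is quite noisy
# (up to 30% off in either direction), but the errors point in
# different directions because nobody copies anybody else's answer.
guesses = [TRUE_WEIGHT * random.uniform(0.7, 1.3) for _ in range(800)]

crowd_estimate = median(guesses)
error_pct = abs(crowd_estimate - TRUE_WEIGHT) / TRUE_WEIGHT * 100

print(f"Median of the crowd: {crowd_estimate:.0f} kg")
print(f"Error relative to the true weight: {error_pct:.2f}%")
```

If the guesses were allowed to influence one another (everyone anchoring on a loud neighbour, say), the errors would no longer cancel out and the median could drift far from the true value, which is exactly the independence condition stressed above.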

The overconfidence bias

By making our beliefs vulnerable in this way, we also get rid of that other bias which follows from the confirmation bias, namely the overconfidence bias. We are usually much more certain that we are right about something than is justified. Research shows that people who estimate the probability that they are wrong at 1 in 100 are correct only 73% of the time, and even those who are so certain that they estimate the probability of being wrong at between 1 in 1,000 and 1 in 1,000,000 are only correct 85% of the time (Fischhoff et al., 1977)!

Moreover, people who are generally very confident that they are right tend to make much worse predictions than people who are typically less certain. This is not surprising in light of the confirmation bias. The more certain we feel, the more we succumb to tunnel vision and the more oblivious we become to counterevidence. To remedy this, we must get out of the confines of our own thinking. We must expose our beliefs to the critical gaze of others. Only in this way can we rid ourselves of our unfounded certainties. Only in this way do we tap into the wisdom of the crowds.

The extended mind thesis

That brings me to a more general observation. The great intellectual achievements of Homo sapiens are not so much the product of our ‘naked’ intellect. We have had our modern-sized brains for some 200,000 years. For most of our history as biologically modern human beings, however, this did not get us much further than tending fires and making rudimentary tools. What made the great cultural leap forward possible is not so much our brain activity in isolation, but the use of external elements in our thinking. This insight is at the core of the influential ‘extended mind thesis’ (Clark & Chalmers, 1998). Our minds (or our thought processes) are said to extend beyond the boundaries of our brains. Think, for example, about the way we remember things by writing them down, the way we find our bearings by relying on signposts, or our use of calculators to solve complex calculations.

Three levers for our thinking

There are three types of mind-external ‘levers’ that can scaffold our thinking (bring it to a higher level). The first lever consists of other minds. All major scientific discoveries and technological breakthroughs are the product of a collaboration of minds (people): both a collaboration across time – scientists build on the work of previous generations of scientists – and a collaboration in the present (scientists work in teams or test their ideas by presenting them to others). The importance of this cooperative form of knowledge acquisition can hardly be overestimated. According to the influential primatologist and psychologist Michael Tomasello (2009), the most important cognitive ability of humans – and the one that underlies the difference in cultural complexity between humans and other animal species – is our ability to pool knowledge and build on the knowledge of others.

The second lever consists of the so-called ‘cognitive artifacts’ (such as logic, mathematics and language) that we have developed. They enable us to look at reality in a completely different way. Mathematics offers us a radically new way to interpret the data we gather from the world. Language not only enables us to communicate and share our knowledge (supporting the first lever) but also to reflect upon our thinking and thus to question the output of that thinking. To question a representation of the world, one must first be aware that one possesses that representation. Only by having linguistic representations (as opposed to unformulated intuitive representations) can we become aware of those representations and consequently change them.

The third lever, finally, consists of the instruments we use to support and enhance our thinking. These tools range from writing, with which we radically expand the capacity of our memory and support long and complex reasoning processes (think of the use of writing in long calculations), to technological tools with which we extend the reach of our senses (such as telescopes) or perform complex computational operations (such as calculators and computers).

Outsourcing our thinking

So, the true power of our thinking does not reside between our ears but outside of our heads. To think properly, we must involve the outside world. We must appeal to other minds and make use of the cognitive and technological artefacts we have at our disposal. In a certain sense, we must outsource our thinking. A good recent example of this is the success of statistical prediction rules. These are equations in which relevant factors are given a certain statistical weight to arrive at a prediction. For example, an equation was developed to predict the price of wine at an auction, based on the age of the vines and all kinds of climatic factors. Such formulas tend to produce more accurate predictions than the best experts in the world!
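As a purely illustrative sketch (the factor names and weights below are invented for the example; they are not the actual published wine equation), a statistical prediction rule boils down to a weighted sum of measurable factors:

```python
# Hypothetical statistical prediction rule for the auction price of a wine vintage.
# In practice the weights are estimated by regression on historical data;
# the factors and numbers here are made up purely for illustration.
WEIGHTS = {
    "intercept": -12.0,
    "vine_age_years": 0.24,          # older vines tend to fetch more
    "growing_season_temp_c": 0.61,   # warmer growing seasons help quality
    "harvest_rainfall_mm": -0.004,   # rain around harvest time hurts quality
}

def predict_price_index(vintage: dict) -> float:
    """Return a predicted price index as a simple weighted sum of the factors."""
    score = WEIGHTS["intercept"]
    for factor, weight in WEIGHTS.items():
        if factor != "intercept":
            score += weight * vintage[factor]
    return score

example_vintage = {
    "vine_age_years": 15,
    "growing_season_temp_c": 17.5,
    "harvest_rainfall_mm": 120,
}
print(f"Predicted price index: {predict_price_index(example_vintage):.2f}")
```

The point is not the particular numbers but the mechanism: once the weights are fixed, the rule applies them consistently to every case, which is precisely where it tends to beat expert judgment.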

Statistical prediction rules enable us to better predict outcomes in a wide range of contexts: for example, to assess the chances of success of a marriage or the chance of recidivism of criminals, but also when making medical diagnoses, assessing credit risk for banks, and even predicting the productivity of a job applicant. With regard to the latter, it turns out that it is better not to invite the applicant for a face-to-face interview, because such unstructured job interviews significantly reduce the chance of attracting the best candidate for the job. And as these statistical prediction rules are applied to ever larger data sets (big data), the accuracy of their predictions only increases.

The take-away message is that to think better (read: think more critically), we must acknowledge the limitations of our thinking. We are all susceptible to (the same) cognitive illusions, and we are all inclined to overestimate the probability that we are right. A critical thinker is someone who consciously makes an effort to get rid of tunnel vision, someone who is prepared to scrutinize their beliefs and always willing to revise them in light of new information. This is not something we do spontaneously. It goes against our nature. That is why critical thinking requires a conscious effort. It is a disciplined way of thinking.

Thinking about thinking

The essence of critical thinking is thinking about thinking. We must make a habit of asking ourselves whether we can trust our thinking. We need to consider what a belief is based on (intuition or reasoning?), whether there are possible cognitive pitfalls, whether we are emotionally invested in our beliefs and opinions, and whether we have used all available external levers. In short, we should keep questioning the output of our own thinking. Albert Einstein, one of the greatest thinkers in recent history, reportedly said: ‘It is not that I am so smart, but I stay with the questions much longer.’ That is critical thinking.

Summary

How to protect our thinking against reasoning errors coming from:

Intuitions

  • Check their origin

    • Innate intuitions: only reliable in an ecologically valid context

    • Acquired intuitions: usually reliable

Emotions

  • Be careful not to engage in irrational cognitive dissonance reduction!

Confirmation bias - Overconfidence bias (system 2 biases)

  • Be aware of these biases!

  • Play devil’s advocate in your own thinking

  • Surround yourself with people who think differently

What is the ‘wisdom of the crowds’?

When a large group of laypeople is asked to estimate something - and when the answers are formulated independently - the median of their answers is typically found to be very close to the correct answer.

What is the ‘extended mind thesis’?

Our minds (or our thought processes) extend beyond the confines of our brain.

Which three external levers do we use in our thinking?

  1. Other minds

  2. Cognitive artifacts

  3. Instruments
