
4. Irrationality in Action: How Reasoning Errors Lead to Domains of Irrationality


In the previous two chapters (and in the appendix), we saw how our spontaneous thinking deceives us as well as why that is the case. On the one hand, we have our automatic, intuitive thinking (system 1) that works well in most cases but can be deceptive because it is fast and frugal (economical). Moreover, we saw that system 1 sometimes ‘deliberately’ misleads us in order to avoid making costly mistakes (error management) and misfires in certain modern contexts (mismatch). On the other hand, we have our slow, reflective and conscious thinking (system 2) that helps us critically analyze the output of system 1 and override it if necessary. System 2, however, is not infallible; it subjects us to a strong confirmation bias - the tendency to be receptive only to confirming information and evidence. Finally, our emotions must also be considered. Our thinking can be affected by our feelings. A prominent bias that arises from this is the so-called ‘ingroup - outgroup bias’: we take on beliefs from within our group rather easily (and mistrust sources outside of the group). These systematic and universal ‘biases’ underlie domains of irrationality or illusions: sets of commonly held beliefs that misrepresent reality. In this chapter, we look more closely at these domains.

Superstition, horoscopes, and palm reading

An important source of irrationality is our ‘overperception’ of causal relations. As explained in the previous chapter, it was generally less costly for our ancestors to see causal relations that were not actually there than to fail to detect causal relations that were. To that end, natural selection has equipped us with a mind that is prone to over-detect causal relations (to make sure we do not overlook any).

This bias leads to superstition. Superstition arises because we think we perceive all kinds of patterns and relations that are not really there. Horoscopes establish links between the positions of celestial bodies and our personal experiences here on earth, and psychics draw relations between the lines in the palm of our hand and our future. Once we have become receptive to these false relations, our confirmation bias reinforces the belief that these forms of divination are truly onto something, because it makes us much more receptive to confirming evidence than to disconfirming evidence.

On a more basic level, we are all prone to make such false connections. Think of the athlete who wears the same pair of ‘lucky’ socks at every competition, the reluctance of some people to walk under a ladder or to go out on Friday the 13th, or the fact that they keep a talisman for good luck. They all see a relation (between an object or number and good or bad luck) that is not there. Superstition is as old as humankind itself and will probably exist for as long as humans do. It is an (unintended) by-product of the fact that the architect of our mental apparatus (natural selection) wanted to make sure that we did not overlook any causal relations in our environment.

Correlation does not imply causation

Even if there actually is a correlation between two events, that correlation - contrary to what we tend to think - is not always causal. Correlation does not imply causation! A good illustration of this occurred at an Israeli air-force base. The instructor noticed the following: when he congratulated a pilot in training after a well-executed exercise, that pilot often performed worse in the next exercise. Conversely, when he reprimanded a pilot for poor execution of an exercise, that pilot’s performance often improved in subsequent attempts. The instructor deduced that punishment works better than reward and that from now on only punishment should be used.

The reason for the correlation, however, had nothing to do with the instructor’s praise or scolding, but simply with the fact that after an exceptionally good performance, statistically, the chances of a weaker performance are much higher than those of a similar or better performance. Conversely, for a weak performance, statistically it is more likely that it will be followed by a better performance. This phenomenon is called ‘regression to the mean,’ and is often overlooked in alternative medical circles (see below).
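
To see that regression to the mean alone can produce the pattern the instructor observed, consider a minimal simulation sketch (illustrative Python with made-up numbers: each score is just random variation around a pilot's personal average, and feedback has no effect whatsoever):

```python
import random

random.seed(42)

def performance():
    # Toy model: a pilot's score on one exercise is pure noise around a
    # personal mean of 0 (no learning, no effect of praise or punishment).
    return random.gauss(0, 1)

after_good, after_bad = [], []
for _ in range(100_000):
    first, second = performance(), performance()
    if first > 1.0:        # unusually good first attempt
        after_good.append(second)
    elif first < -1.0:     # unusually bad first attempt
        after_bad.append(second)

print(f"mean score after a very good attempt: {sum(after_good)/len(after_good):+.2f}")
print(f"mean score after a very bad attempt:  {sum(after_bad)/len(after_bad):+.2f}")
# Both means come out near 0: exceptional attempts are typically followed by
# more ordinary ones, with no causal role for praise or punishment at all.
```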

Order in randomness and randomness in order

Since we are inclined to detect causes and patterns too quickly, we often underestimate the role of chance, randomness and coincidence. People see patterns in random sequences (e.g. in random sequences of numbers), and they unwittingly introduce patterns when asked to produce a random sequence themselves: they build in too much alternation, for instance by never repeating the same number or by never allowing three or more heads or tails in a row in a series of coin tosses – whereas in genuinely random series such streaks occur all the time (see the sketch below). So, we see order in randomness and imitate randomness with order! Noteworthy in this context is that the first iPod Shuffle, which played songs in a truly random order, was quickly modified because many users complained that the succession of songs was not random. The manufacturers ended up putting patterns in the sequence of songs to make it seem random!
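
The claim about streaks is easy to check for yourself. Here is a minimal sketch (in Python; the sequence length of 20 tosses is an arbitrary illustrative choice) that estimates how often a genuinely random series contains three or more identical outcomes in a row:

```python
import random

random.seed(1)

def has_run_of_three(flips):
    # True if the sequence contains three or more identical outcomes in a row
    return any(flips[i] == flips[i + 1] == flips[i + 2] for i in range(len(flips) - 2))

trials = 100_000
hits = sum(
    has_run_of_three([random.choice("HT") for _ in range(20)])
    for _ in range(trials)
)
print(f"fraction of random 20-toss sequences containing a run of 3+: {hits / trials:.2f}")
# Comes out around 0.98: almost every genuinely random sequence of 20 tosses
# contains a streak that people would intuitively judge to be 'non-random'.
```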

Chance blindness

The chance factor is also grossly underestimated. Athletes who score a few points in a row are said to be on a ‘confidence streak’, whilst their colleagues who record a few misses in succession are thought to have ‘broken under the pressure’. While psychology plays a role in athletic performance and such analyses could very well be true, statistical analysis in basketball has shown that these alleged clusters of hits and misses are indistinguishable from chance. Nevertheless, matches are still analyzed in this way, because it intuitively seems that there is a connection between a player’s previous hits or misses and the probability that he will score the next time. For this phenomenon, psychologists coined the term ‘hot hand fallacy’ (Gilovich et al., 1985). This bias does not only affect sports commentators and coaches; we all tend to underestimate the importance of chance. Investment success is often wrongly attributed to insight, and CEOs and coaches are often rewarded after successes or fired after disappointing performances, while both success and failure are, to an important extent, the products of unpredictable external factors.

Finally, when we are confronted with extreme coincidences, we often refuse to see those events as merely accidental. Yet not only is it possible for such coincidences to occur, it is a statistical certainty that they will occur regularly. The chance that you win the lottery may be very small, but the chance that someone wins the lottery is not. Examples of such extreme coincidences abound: the house of the French family ‘Comette’ was destroyed by a comet, James Dean’s Porsche repeatedly brought misfortune to people (look it up!), a baby falling from a window was caught twice by the same man who happened to be walking underneath, and 70-year-old twin brothers both died in a car accident on the same road in Finland within a timespan of two hours. In such cases we tend to think that ‘this cannot possibly be a coincidence’, but of course it is.
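
The lottery point can be made concrete with a back-of-the-envelope calculation (a sketch with purely illustrative odds and ticket numbers, not figures for any real lottery):

```python
# Illustrative (made-up) numbers: a 1-in-8-million chance per ticket,
# 5 million tickets sold per draw, 52 draws per year.
p_ticket = 1 / 8_000_000
tickets_sold = 5_000_000
draws_per_year = 52

p_you_win_one_draw = p_ticket
p_someone_wins_one_draw = 1 - (1 - p_ticket) ** tickets_sold
p_someone_wins_this_year = 1 - (1 - p_someone_wins_one_draw) ** draws_per_year

print(f"P(you win this draw):      {p_you_win_one_draw:.8f}")       # ~0.00000013
print(f"P(someone wins this draw): {p_someone_wins_one_draw:.2f}")  # ~0.46
print(f"P(someone wins this year): {p_someone_wins_this_year:.4f}") # ~1.0
# What is astronomically unlikely for you personally is close to a certainty
# for someone, somewhere, given enough tickets and enough draws.
```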

Causal reasoning errors

As cause-thirsty beings, we also routinely make up causes for events when we do not know their real causes. Comets and solar eclipses were often seen as signs of the wrath of the gods before we had scientific explanations for these phenomena. In short, we are overzealous in finding causal relations: we see many causal relations that are not there, and if we cannot find out the real causes, we just make something up.

Moreover, we also misinterpret causal relations. A common fallacy is that we confuse the probability of A given B with the probability of B given A. We jump to conclusions and think, for example, that someone is angry with us when that person does not answer our telephone call. But the probability that said person would not answer their phone if they were angry with us is not the same as the probability that they are angry with us given that they are not answering the phone. There are many other possible explanations: they might have left their phone somewhere, or may be too busy to respond, etc. Yet, we often fail to consider these other possible causes. This kind of causal fallacy also plays an important role in conspiracy theories. Such theories are especially interesting because they tend to be fueled by a series of different biases and cognitive illusions.
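
The difference between these two conditional probabilities can be made explicit with Bayes' rule. The following sketch uses purely illustrative numbers for the unanswered-call example (none of them come from the text):

```python
# Illustrative (made-up) numbers for the unanswered-phone example.
p_angry = 0.01                   # prior: how often is this person actually angry with us?
p_no_answer_if_angry = 0.90      # if they are angry, they very likely won't pick up
p_no_answer_if_not_angry = 0.30  # but calls also go unanswered for mundane reasons

# Total probability that a call goes unanswered
p_no_answer = (p_no_answer_if_angry * p_angry
               + p_no_answer_if_not_angry * (1 - p_angry))

# Bayes' rule: P(angry | no answer)
p_angry_if_no_answer = p_no_answer_if_angry * p_angry / p_no_answer

print(f"P(no answer | angry) = {p_no_answer_if_angry:.2f}")
print(f"P(angry | no answer) = {p_angry_if_no_answer:.2f}")  # roughly 0.03
# The two conditional probabilities differ by a factor of about 30:
# an unanswered call is very weak evidence that the person is angry.
```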

Conspiracy theories

Conspiracy theories – such as the theory that the moon landing in 1969 never took place, that both former U.S. president John F. Kennedy and the British Princess Diana were assassinated by their governments, that George W. Bush orchestrated the 9/11 terrorist attacks, or that COVID-19 vaccines contain microchips to monitor the population – provide accounts of events that, to say the least, differ strongly from their ‘official’ version. The official version is seen as a cover story set up by the guilty party. Conspiracy theories often arise from basic causal reasoning errors. For example, given that the terrorist attacks of 9/11 helped increase public support for the Bush administration and, to a certain extent, enabled former president Bush to invade Iraq in 2003, the Bush administration is thought to have orchestrated the attacks. But, of course, the fact that 9/11 was used for political purposes and increased support for the Bush government does not mean that it was also staged by said government (Braeckman & Boudry, 2011).

The ingredients for conspiracy theories

Conspiracy theories are not fueled solely by causal reasoning errors. As these theories develop, a whole series of other biases come into play. Unsurprisingly, the confirmation bias is most obviously at work. Conspiracy theorists focus almost exclusively on ‘evidence’ that supports their theory (for example, a single eyewitness who claims to have observed something remarkable) and mainly surround themselves with like-minded people (in the US, the so-called ‘9/11 Truth movement’ holds large gatherings). The latter plays into the so-called ingroup - outgroup bias. Like-minded people belong to the ingroup, in which everything they claim is accepted uncritically. People who reject these theories are part of the outgroup and their arguments often fall on deaf ears. Sometimes they are even accused of being part of the conspiracy!

In this way, conspiracy thinkers shield their theory from any criticism. From the perspective of the conspiracy theorist, harsh criticism is to be expected, because the opposing party naturally wants to suppress the truth that conspiracy theorists think they have discovered. Strong and pointed criticism is therefore not seen as casting doubt on the theory, but is typically interpreted by conspiracy theorists as an indication that they are on the right track! In this sense, conspiracy theories possess a kind of built-in immunization strategy (a feature that protects the theory against refutation). The more one points out flaws in these theories, the more adherents see it as a sign that they are right (Braeckman & Boudry, 2011). Shielding a theory from criticism is a hallmark of uncritical thinking. Such immunization strategies also play a major role in pseudoscience (as we will see below).

Debunking conspiracy theories

Conspiracy theories are usually quite easy to debunk with a healthy dose of critical thinking. In general, conspiracy theories tend to become increasingly less likely as they evolve. The reason for this is that advocates of conspiracy theories make claims that themselves require explanation, which forces them to come up with supporting evidence and arguments that tend to become progressively more far-fetched.

Take the 9/11 conspiracy theory, for example. One of the most important arguments invoked by its advocates is that the WTC towers collapsed too quickly and in too ‘controlled’ a manner. Conspiracy theorists claim that the towers were brought down by explosives that the Bush government had planted inside and detonated shortly after the aircraft flew into the towers. But that raises a whole series of subsequent questions: who put those bombs there? Probably not President Bush himself. If not him, then perhaps a team sworn to secrecy? But how many people are needed for this? To execute such an operation, at least a few hundred people must be part of the conspiracy. All of them Americans, many of whom would have known people in New York. Why did nobody disclose information about the imminent attacks beforehand to warn friends and family? Why would no one have confessed these plans simply out of conscientious objection? All of this seems quite unlikely, especially in an age of whistleblowers.

When faced with such theories, we should remember the second rule of thumb of critical thinking, Occam’s razor: the most economical explanation is usually the most probable one. The more questions a theory raises (all of which in turn require further explanation), the less likely the theory itself becomes. These conspiracy theories often remain rampant because – like pseudosciences – they typically lack an internal, self-critical feedback loop.

Pseudoscience

This, then, brings us to the question of what distinguishes science from pseudoscience. Philosophers have been reflecting on this issue since antiquity. Aristotle thought that science produces knowledge about the true causes of phenomena, whilst pseudoscience only conveys unfounded opinions. But how can we know whether we are actually dealing with scientific theories? Aristotle’s criterion of ‘knowledge of true causes’ appears both too strong (good scientific research often turns out to be partly wrong or at least incomplete; think of Newton’s physics, for example) and useless (we can only determine with hindsight that a theory was wrong).

Popper’s demarcation criterion

Reflecting upon this central issue in the philosophy of science, the 20th-century philosopher Karl Popper (1963) proposed a ‘demarcation criterion’ that remains very influential to this day. According to Popper, a theory is scientific if, and only if, it can, in principle, be refuted (proven wrong) by means of observation. This may sound rather odd. Is it not the likelihood of being true rather than the possibility of being false that makes a theory scientific? Popper turns our intuition about reliable knowledge on its head. It is not certainty, but fallibility that defines science. And there is a good reason for that.

As we saw in the previous chapter, the bias that deceives us most frequently is the confirmation bias. We are primarily focused on evidence that supports our own theory and are selectively blind towards evidence that would refute our theory or belief. A good thinking system, then, is a system that protects us against the confirmation bias. From a Popperian perspective, it is exactly this protection against the confirmation bias that should form the backbone of scientific thinking.

Freud vs Einstein

This insight came to Popper (1963) while he was reflecting upon two very influential theories in the first half of the 20th century: Sigmund Freud’s psychoanalysis and Albert Einstein’s theory of relativity. Both theories revolutionized their respective scientific fields: Freud introduced the workings of the subconscious mind into psychology, and Einstein introduced the relativity of time and space into physics. Both theories also had a wide-ranging explanatory scope: Freud could explain many psychopathological conditions by invoking subconscious psychological dynamics, and Einstein’s theory could explain everything Newton’s theory explained and more, given that it accounted for anomalies in Newton’s system, such as the orbit of Mercury around the sun.

Unsurprisingly, both theories were seen by many as very successful. But, according to Popper, there was one crucial difference between the two: Freud’s theory could not be refuted by observable facts, whereas Einstein’s theory could. For Freud, all possible instances of human behavior could be explained from the same set of principles. When philosopher of science Sidney Hook (1959) asked an auditorium full of psychoanalysts what kind of behavior a child should exhibit for it not to suffer from the Oedipus complex, the room remained remarkably quiet. In other words, no conceivable kind of behavior could refute Freud’s pet theory.

Einstein’s theory, on the other hand, makes precise predictions that are testable, and it has undergone and passed numerous tests. During a solar eclipse, astronomer Arthur Eddington observed that starlight was deflected by the mass of the sun, just as Einstein had predicted. An atomic clock was brought into orbit around the earth at high speed, and it showed a slight difference in elapsed time compared to a clock on earth, as Einstein had predicted. Einstein’s theory is still being tested today, for example at the CERN facilities. If it turned out that a particle in the accelerator reached a speed faster than the speed of light, Einstein’s theory would be falsified.

The importance of criticism

That is the crucial difference between science and pseudoscience. Science remains open to refutation, whereas pseudoscience protects or immunizes its theories against refutation. Certainly, that does not mean that scientists are not susceptible to the confirmation bias. Of course they are – we all are – and good scientists are aware of this. Charles Darwin, for example, made the following entry in his journal: ‘I used the following golden rule for many years: as soon as I noticed a published fact, a new observation or thought that contradicted my general results, I always made a note of it. I knew from experience that such facts are very easily overlooked and forgotten.’ (Darwin, 1958/1887).

However, we cannot expect all scientists to have so much insight into their own psychology and to be as diligent as Darwin. Fortunately, the scientific context and methodology protect the sciences against the scientists’ confirmation bias. By creating an environment of open debate which encourages participants to approach each other’s work critically, the truth-distorting effects of the confirmation bias can be kept at bay. In such a context, the confirmation bias can even be an asset: scientists go the extra mile to defend their hypotheses, while their colleagues go the extra mile to come up with counterevidence and counterarguments. In the scientific context, there is no lack of motivation to refute a theory (Boudry, 2016). The physicist who refutes Einstein’s theory goes down in the history books.

Immunization strategies

This environment of open criticism is typically not present in pseudosciences. In pseudoscientific circles, discussion with skeptical colleagues is not prevalent. Theories are typically not made vulnerable by exposing them to criticism but are shielded from it. This is done by means of so-called immunization strategies. Firstly, pseudoscientists often weaken their claims or reinterpret them when confronted with strong counterevidence. In other words, they set up ‘moving targets’. A good example of such a reinterpretation occurred in the religious community known as Jehovah’s Witnesses. They predicted that Christ would return in 1873. When he did not, they argued that he had indeed returned, but as an invisible spiritual being.

We find a similar move in Freud’s psychoanalysis. According to Freud, neurotic disorders are the result of a frustration of the libido. When many soldiers developed neuroses during the First World War after being exposed to the horrors of war (the ‘shell shock’ phenomenon, currently referred to as post-traumatic stress disorder), which seemed to refute Freud’s view, Freud saved his theory by arguing that war threatened the soldiers in their most desired love object: their own body. The libido theory remained intact, but it was radically reinterpreted.

Another technique frequently used in pseudosciences to protect a theory from refutation is to build enough vagueness into the theory. Chakra stimulation, for example, consists of cleansing ‘chakras’ of ‘bad energy’ and thereby healing the patient of their ailments. If there is no immediate improvement, the therapist can easily explain that away by saying that the chakras are more thoroughly blocked than initially thought. Given that many ailments heal spontaneously over time, there is usually some improvement in the end, after which the therapist gets all the credit.

Healthcare: a perfect storm!

This brings us to the domain where pseudoscientific theories tend to be most prominent: healthcare. This is no coincidence; a combination of factors creates a ‘perfect storm’ when it comes to healthcare. Three factors in particular are at play: first, the confirmation bias of the therapist; second, the placebo effect on the patient; and third, the spontaneous healing (a ‘regression to the mean’ of the health state) of the patient. We already know about the confirmation bias; the placebo effect is the substantial positive influence that the patient’s psychological expectations have on the healing process; and the natural course of most illnesses (with the exception of chronic and terminal diseases) ensures that patients tend to heal spontaneously over time.

Because of those three factors, ‘therapy experience’ – the personal experience that a therapist has with a certain treatment – is a very poor indicator of the effectiveness of that treatment. Firstly, the therapist is influenced by the confirmation bias and is therefore unconsciously more receptive to ‘evidence’ that supports the effectiveness of the therapy than to evidence that undermines it. Secondly, the patient believes in the effectiveness of the therapy and therefore benefits from a strong placebo effect. Finally, the passing of time (spontaneous healing) does the rest. This is why many people strongly believe in the effectiveness of homeopathy, acupuncture, and even ear candling therapy, and why the therapist, often in good conscience, believes that the therapy is effective based on ‘successful’ experiences with it.

Randomized double-blind trials with control group

To avoid these pitfalls, therapies and medication should be tested in ‘randomized double-blind trials with a control group’. A control group is a group of test subjects that is given a placebo, to be compared with the group receiving the actual treatment, so as to check whether the treatment or medication is effective over and above the placebo effect. The allocation of patients to the treatment group and the control group is done randomly, so as to avoid a biased selection of patients (for example, by putting the hopeful cases in the treatment group).

Finally, the research is double-blind. This means that not only do the patients not know whether they receive a placebo or the real medication or treatment (obviously), but neither does the researcher who interprets the results. The researchers should not know which patients are in which group, to avoid being swayed by their confirmation bias – which would typically lead them to look for confirming evidence that the therapy works by interpreting the results from the treatment group in a more positive light than the results of the control or placebo group. When popular ‘alternative’ therapies are tested in such a way, they often show no effect whatsoever over and above the placebo effect.
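
The logic of such a trial can be illustrated with a toy simulation (a sketch with made-up effect sizes, not a real analysis): patients improve through spontaneous healing and the placebo effect, so even a therapy with no specific effect looks impressive on its own, yet the comparison with the control group reveals that it adds nothing.

```python
import random

random.seed(7)

def improvement(true_effect=0.0):
    # Toy model of a patient's symptom improvement over the trial period:
    # spontaneous healing + placebo effect + the treatment's specific effect + noise.
    spontaneous, placebo = 2.0, 1.0   # made-up effect sizes
    return spontaneous + placebo + true_effect + random.gauss(0, 1)

n = 10_000  # patients per arm (assignment to each arm is random in a real trial)
treatment_arm = [improvement(true_effect=0.0) for _ in range(n)]  # therapy with no specific effect
placebo_arm = [improvement(true_effect=0.0) for _ in range(n)]

print(f"mean improvement, treatment arm: {sum(treatment_arm)/n:.2f}")  # ~3.0, looks impressive
print(f"mean improvement, placebo arm:   {sum(placebo_arm)/n:.2f}")    # ~3.0, equally impressive
# A therapist who only ever sees the treatment arm concludes that the therapy
# works; the comparison with the control arm shows it adds nothing beyond placebo.
```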

What about traditional medicine?

The argument that certain forms of therapy are part of age-old traditions (think of all sorts of alternative Eastern therapies) and must therefore be effective may seem reasonable, but it is not a good argument for a therapy’s effectiveness. Just because something has been practiced for centuries does not mean that it is beneficial, or even that it is not harmful. Think of bloodletting, for example. This form of medical treatment was applied in the West for more than two millennia, from Greek Antiquity up until the 19th century. People throughout the centuries thought that all kinds of diseases could be cured by draining blood (and healers also found strong indications for this, misled as they were by their own therapy experience, distorted by the confirmation bias, the placebo effect, and spontaneous healing).

Draining blood (sometimes a few liters!), we now know, is not only ineffective in curing diseases, it is often harmful. Bleeding a weakened body is not exactly the best strategy to cure someone who is ill. George Washington, the first president of the United States, likely died due to bloodletting (and not the disease for which he was being treated, namely laryngitis). Remember how we defined critical thinking in chapter 2 as rational and autonomous thinking. Relying on authority or tradition does not lead to truth.

Religion

This brings us to the next domain where irrationality thrives - the domain of irrationality par excellence, not only because of the blatant irrationality of some of the beliefs, but also because of the scale on which these beliefs spread. That domain is religion. Religion is a strange phenomenon. It is universal (as far as we know, people in all human societies throughout history have entertained supernatural beliefs), but it often comes at an evolutionary cost (with regard to survival and reproduction). These costs range from economic costs such as offering sacrifices to the gods, through reproductive costs (e.g. imposed celibacy), to health costs – sometimes life-threatening – when engaging in extreme rituals. Why, then, are these beliefs and practices so common?

The ingredients for supernatural beliefs

Research in the ‘cognitive sciences of religion’ reveals a series of psychological factors and biases that play an important role in the emergence and transmission of religious ideas. A first important bias is the so-called ‘hyperactive agency detection’ (Barrett, 2000), which we discussed in chapter 3. We tend to detect the actions of an agent too quickly when interpreting events. This bias led people in all pre-scientific cultures to view thunder, lightning, solar eclipses, and other natural phenomena as voluntary actions of one or more supernatural beings (gods, spirits, etc.). Our tendency to attribute causes to events, and to make up causes if we cannot uncover the true cause, also plays an important role here.

Moreover, we are intuitive ‘dualists’. We tend to perceive mental phenomena (consciousness) as strictly separate from physical or material phenomena. We do so because we have specific and very different innate intuitions about the material world (‘folk physics’) and the behavior of others (‘folk psychology’). These two types of intuitions enable us to navigate our physical or natural environment and our social environment, respectively.

We use very different principles in the two contexts. Our ‘folk psychology’ is based on our ability to empathize with others. We attribute intentions, thoughts and emotions to others to understand (and predict) their behavior. Of course, we do not do this with regard to the ‘behavior’ of physical objects; in that context, very different principles are at play. As a result, we intuitively see the mental as radically different and separate from the material (and we still tend to do so even though neuroscience is now mapping out the relationship between the physical processes of the brain and mental activity). From such a strong dualism, it is not too big a leap to disconnect the mental from the physical by positing an immortal soul in a mortal body, and to hypothesize the existence of purely spiritual (immaterial) entities such as ghosts and gods.

Finally, other biases also play a role in forming religious ideas. For example, we seem to have an intuitive preference for ‘teleo-functional’ explanations (explanations in terms of purposes). When children and non-scientifically educated adults are asked why a certain rock is pointy, they tend to prefer answers such as ‘so that animals would not sit on it’ over answers such as ‘by coincidence’ or ‘because the wind or the rain have shaped the rock’ (Kelemen, 2003; Casler & Kelemen, 2008). This explains why creation stories are intuitively compelling: the world is experienced as the product of a divine architect, in which everything is as it is for a reason.

Once these religious beliefs emerge, they spread like wildfire. Another important reason why we easily take on such irrational beliefs is our ingroup - outgroup bias. We are taught these beliefs by people close to us (at a young age), and we usually take them on without critically questioning them. The emotional ties we develop with these ‘sacred’ beliefs make us even less likely to scrutinize them.

The myth of ‘Homo Economicus’

Irrationality, however, is not confined to these higher spheres. In our everyday lives, when dealing with practical problems, our biases also often lead us astray. In this context, behavioral economists have overthrown the traditional view in classical economics that economic actors are always rational. Such a rational economic actor is often referred to as ‘Homo economicus’: an agent who has well-defined preferences and who maximizes their ‘utility’ (the value or pleasure they get from their investment and consumption choices) in a perfectly rational way.

The reality of how investors and consumers act, however, is far removed from this rational ideal. What we are willing to spend on a particular product, for example, is usually the result of ‘anchoring’ (see problem 10 of chapter 2 and the appendix) and not of a rational consideration of how much utility the product will give us (taking into consideration how much utility any other possible purchase at a similar price would give us). Such a consideration, of course, is much too complicated.

In reality, we are typically prepared to pay reasonable prices for the products we want. To determine the reasonableness of these prices, we look at the prevailing prices on the market. When new products are first introduced, there are no price anchors yet. As a result, people feel lost and are often hesitant to buy the product, even when they really want it. For that reason, marketers devised a clever way to overcome such reticence in the absence of anchors: they introduce several similar products at once, such as a deluxe version, a ‘basic’ version, and something in between. In doing so, they give consumers an anchor, and the latter predictably purchase the product in between. This happened, for instance, when home bread-making machines were first brought to the market.

Another important phenomenon is ‘framing’ (see problem 11 – chapter 2). A nice example of how consumer behavior can be influenced by framing is described by Dan Ariely (2008) in his book ‘Predictably Irrational’. The magazine ‘The Economist’ came up with the following price proposition: option 1 offered a 1-year subscription to the online version for $65, option 2 offered a 1-year print subscription for $125, and option 3 offered a 1-year online and print subscription for … $125! Why add an absurd option like option 2? Who would choose the print-only version (option 2) over print and online (option 3) for the same price? Why not just give the customer two options? In practice, however, it did not turn out to be such an absurd idea. When people were given the three options, Ariely discovered, 84% opted for option 3 and 16% opted for option 1. When they only received options 1 and 3, 32% opted for option 3 (print and online) and 68% opted for option 1 (online only). By placing options 2 and 3 next to each other, option 3 suddenly seems much more attractive (‘I will get the online version for free!’). So, adding that second option, which no one ever chooses, changes the preferences of the potential customers considerably, and generates more income for The Economist.
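
Working out what the decoy is worth makes the effect tangible. Using the prices and percentages quoted above, the revenue per 100 subscribers can be computed directly (the calculation below is simple arithmetic, not additional data from Ariely's study):

```python
# Prices and choice shares as reported above (Ariely, 2008); the revenue
# figures are simple arithmetic per 100 subscribers, not data from the study.
price_online, price_print_and_online = 65, 125

with_decoy = {"online": 16, "print+online": 84}     # shares when option 2 is offered
without_decoy = {"online": 68, "print+online": 32}  # shares when it is not

def revenue_per_100(shares):
    return shares["online"] * price_online + shares["print+online"] * price_print_and_online

print(f"revenue per 100 customers with the decoy:    ${revenue_per_100(with_decoy):,}")    # $11,540
print(f"revenue per 100 customers without the decoy: ${revenue_per_100(without_decoy):,}") # $8,420
# The 'useless' print-only option raises revenue by roughly 37%, simply by
# reframing the print-and-online bundle as a bargain.
```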

There is a whole list of irrational consumer and investor behaviors documented by behavioral economists. For instance, there is the Allais paradox (problem 12 – chapter 2), ‘loss aversion’ (appendix) and the ‘endowment effect’, i.e. the fact that we attribute more value to something simply because we own it (appendix). Our behavior appears to be very far removed from the rational behavior that is assumed in traditional economics. This has important consequences, much like our irrationality in other domains also has important consequences. We will discuss these consequences in chapter 6. In the next chapter, you will learn how you can protect your thinking against irrationality.

Summary

Which biases lead to:

  • Superstition? Hyperactive pattern detection, confirmation bias

  • Conspiracy theories? Causal reasoning errors, confirmation bias, ingroup-outgroup bias, lack of self-criticism

  • Pseudoscience? Lack of self-criticism, confirmation bias

  • Religion? Hyperactive agency detection, intuitive dualism, preference for teleo-functional explanations, ingroup-outgroup bias

How do pseudosciences protect their theories against falsification?

  • Moving targets

  • Built-in vagueness

Why is ‘therapy experience’ unreliable?

  • The confirmation bias of the healer

  • The placebo effect on the patient

  • Spontaneous healing of the patient

What is the scientific protocol for testing a medical treatment or medication?

  • Randomized double-blind trials with control group

Further reading

Popper, K. (1963). Conjectures and refutations: The growth of scientific knowledge. Routledge & K. Paul.

Ariely, D. (2008). Predictably irrational: The hidden forces that shape our decisions. Harper Perennial.

Goldacre, B. (2008). Bad science. Fourth Estate.
