As I explained in chapter 1, one of the learning goals of this book is to reflect on what characterizes science and what constitutes the essence and the importance of scientific methodology. To do so, we did not (as is typically done in a course of philosophy of science) take a historical perspective and discuss what the most important philosophers of science have to say about science. Our starting point, instead, was human reasoning. We saw how, why and when our spontaneous thinking misleads us and how we can guard ourselves against reasoning errors. In chapter 4, we also saw that the scientific context protects against these reasoning errors and that this underlies the success of science. In this last chapter, I will elaborate on this.
In the first part of this chapter, we will consider the following question: which aspects of the scientific method make the sciences reliable (or at least more reliable than pseudosciences)? Before we take on this question, we must get a sense of what the scientific method is. It is debatable, however, whether it makes sense to talk about a single scientific method at all. We associate the scientific method with conducting empirical, experimental, quantifiable research with the aim of predicting and understanding a domain of reality. But none of these oft-cited features of the scientific method are present in all of the sciences. Formal sciences are not empirical. Evolutionary biology and astrophysics are not experimental. Psychology, sociology and anthropology are often not quantifiable. Biology typically does not make predictions.
So, we cannot speak of a single scientific method. The domains of the different sciences vary too widely. The difference between sciences is so important that some philosophers and scientists even question whether it makes sense to place all those different attempts at understanding a domain of reality under the same denominator of ‘science’. Leaving aside the formal sciences (mathematics and logic), there are two distinct families of (empirical) sciences. Those are the social or human sciences (such as history, psychology, economics, sociology, anthropology, ...) and the natural sciences (such as physics, chemistry, astronomy, geology, biology, ...).
The human sciences focus on human thinking, acting and interacting. The natural sciences focus on the physical and natural world. The objects of both types of sciences are very different. In the natural sciences, scientists study things like quarks, electrons, atoms, molecules, tectonic plates, genes, etc. In the human sciences, they study human thinking and acting (e.g. in psychology) and the interaction between people (e.g. in economics and sociology).
According to the 19th-century philosopher Wilhelm Dilthey (1989), this means that both types of sciences also have very different goals. The natural sciences seek to ‘explain’ (‘Erklären’): to describe the world in terms of cause and effect and their underlying laws (e.g. near the Earth’s surface, an object in vacuum accelerates at 9.81 m/s² due to the gravity of our planet). The social sciences, on the other hand, aim to ‘understand’ (‘Verstehen’). What led to the French Revolution, according to Dilthey, is impossible to explain in terms of external, universal laws. To do so you must place yourself in the shoes of the actors of that historical episode (in their thinking and feeling). It requires a subjective understanding.
In pointing this out, Dilthey criticized so-called ‘positivists’ such as Auguste Comte, who aimed to provide the social sciences with the same quantitative method(s) as the natural sciences and sought to discover general laws within the domains of the human sciences. The central point that Dilthey makes is that we can explain physical reality with basic entities such as atoms and molecules and their lawful interaction, but not human reality. After all, we cannot explain the French Revolution by invoking neuronal activity in the brains of the actors carrying out the event. Therefore, according to Dilthey, the natural and human sciences not only have a completely different object, but also a very different method and very different aspirations.
For starters, the human sciences cannot accurately predict phenomena (as is the case in the natural sciences). Stars, planets, atoms, molecules and genes behave in a lawful (and therefore predictable) way. Human actors do not. We can predict an eclipse very precisely but cannot do the same for a political revolution or a financial crisis. At most, we can identify a series of factors that increase the chance of these events happening.
Moreover, the two kinds of sciences are looking for different things. The natural sciences attempt to uncover natural laws, while the social sciences look for generalizations. An object in the Earth’s atmosphere will always fall with an acceleration of 9.81 m/s² (in vacuum), but a group of people living in poverty under a cruel dictator will not always start a revolution (and it is certainly not possible to predict exactly when such a revolution will erupt).
Finally, there is another important difference between the natural and human sciences. In the natural sciences there is no interaction between the theory and its object. In the human sciences there often is. The philosopher of science Ian Hacking (1995) calls this the ‘looping effect’: a theory can influence its object in the human sciences (i.e. the people it describes). The reason for this is that a theory in the human sciences can inform the people it describes and that this can influence their thinking and actions.
A striking example of this occurred in psychology. Up until the nineteenth century, women were often diagnosed with ‘hysteria’: a mental illness that caused all kinds of symptoms such as anxiety, fainting, insomnia and irritability. The disease, it was thought, stemmed from the uterus (hence ‘hysteria’, derived from the Greek ‘hystera’, meaning uterus), and to treat the affliction, physicians often removed the organ. Today we know that the uterus does not cause mental disorders, and the disease is no longer recognized. In previous centuries, however, many women who suspected they might be affected by hysterical disorders displayed these symptoms. We see the same thing happening with other mental disorders, such as ‘multiple personality disorder’.
Looping effects also occur in the social sciences. Here too, a theory can influence its object (in this case: society). Karl Marx described the contract between industry owner and laborer as a contract that was not concluded between two free parties but within a power relationship in which the worker has no choice and is therefore not a free party. According to Marx, the laborer was forced to work for a subsistence wage to survive and the industry owner took full advantage of this. By describing the economic relationships in the society of his time, however, Marx would change these relationships (and the society he described). Communist revolutions ensued and different societies emerged.
The natural and human sciences therefore differ greatly from each other in terms of method, purpose and impact on the object of their study. According to some, the term ‘social science’ is an oxymoron, a contradiction in terms, because, they argue, society can never be described scientifically. The natural sciences are commonly referred to as ‘the hard sciences’ and the human sciences as ‘the soft sciences’. In Dutch, people commonly speak of ‘exact sciences’ when they refer to the natural sciences (as opposed to the human sciences).
Yet, (good) human sciences and (good) natural sciences have a very important characteristic in common: namely, that they are ‘self-correcting’. That is the essence of science, according to the famous astrophysicist Carl Sagan (1980). What Sagan means is that the methodology and context (of both the natural and the human sciences) correct for the mistakes made by scientists. How does that work?
Just like everyone else, scientists are susceptible to the reasoning errors we have discussed. Remember the paleoanthropologists in chapter 5. Fortunately, the quality of the sciences does not depend so much on the quality of the scientists but on the quality of the methodology and the framework within which science is conducted. They operate in such a way that they protect against the reasoning errors and biases to which the human mind is susceptible.
First, the scientific methods protect against intuitive reasoning errors (system 1 reasoning errors). They make extensive use of cognitive artifacts (see chapter 5). These cognitive artifacts, such as mathematics, logic and statistics, not only radically extend the scope of the sciences (without mathematics, Newtonian physics, let alone Einstein’s theory of relativity, would not have been within our reach), they also protect against intuitive reasoning errors, such as the belief bias, the gambler’s fallacy, hyperactive pattern detection, chance blindness, the base rate fallacy, the availability bias, etc. By using mathematical and statistical models and computations, scientists do not succumb to their biased intuitions.
Secondly, the scientific framework and context protect against the pervasive biases of system 2. They protect against the confirmation bias (as well as against irrational strategies for cognitive dissonance reduction) through ‘peer review’, which ensures that each theory is critically screened for errors by other scientists before it is published. The scientific context of open criticism also contributes to this. Scientists do not lack motivation to try to revise (or at least refine) the influential theories of their time. The physicist who refutes Einstein’s theory, as I mentioned before, goes down in the history books.
The scientific framework also protects against the overconfidence bias. Experimental results must be reproducible, and the same experiments are often carried out by other researchers to check whether the same results are in fact reached. In doing so, scientists ensure that these results are robust and not distorted by statistical anomalies (or an overconfident scientist).
Moreover, scientists engage in so-called ‘meta-analyses’. They pool all studies on a given phenomenon to reach more robust conclusions and to check whether the results of certain studies deviate strongly from the other studies (a good sign that they are flawed and should be discarded). In this way, they can weed out distorted results (for example, because certain studies had a sample that was too small and therefore not representative).
Finally, scientists will not only publish the results of their research, but also the precise methodology that they have used to arrive at those results. In this way other scientists can critically analyze the hypotheses that are put forward and they can raise problems with the methodology and the interpretation. In short, the context and procedures of ‘the scientific game’ ensure that theories are made vulnerable and can be corrected – if needed – by other scientists. This makes science a self-correcting process.
The scientific framework also protects the sciences against emotional distortion. While groupthink and the bandwagon effect undoubtedly affect research groups (just like other groups of people), their distorting effect is limited by the existence of rival research groups. A theory from a research group at a Dutch university may not be subjected to harsh criticism internally, but it can be critically evaluated by a Chinese research team. The latter do have access to the data and methodology on which the research is based, and there are objective measures for the success of a theory (reproducibility of results, validity of the inferences made, simplicity of interpretation – remember Occam’s razor? – coherence with other empirically supported theories, etc.).
And again: scientists do not lack the motivation to shed a critical light on existing theories. It is this rivalry (between researchers and research teams) that underlies the power of the sciences. That is the beauty of critical/rational/scientific thinking: it is the only universal kind of thinking. Different cultures have different values, customs and beliefs, but logic, mathematics and statistics are the same everywhere, and everyone can expose reasoning errors and help build better theories.
The more scientists join the ranks and try to improve each other’s theories, the faster we move forward. The power of the sciences does not reside in the genius of individual scientists but in the size and diversity of the scientific community and in its self-correcting nature.
The only condition for scientific progress is that new theories, including the methodology, interpretation and the data through which they came about, are shared with the entire scientific community and are thus subjected to possible criticism. We do not tend to subject our pet theories to harsh criticism spontaneously and precisely this is missing in pseudosciences. In the latter, advocates of a theory often stick to it dogmatically and surround themselves with like-minded people.
The importance of scientific progress can hardly be overestimated. Rational thinking in general, and science in particular, is the driving force behind the improvement of living conditions throughout history. Most recently, the explosion of the modern sciences in the 20th century produced an unprecedented improvement in living conditions and longevity. In the year 1900 the average life expectancy in Western Europe was around 46 years; today it is above 80 years (and globally around 73 years). In 1910 more than 74% of the world population lived in extreme poverty, and in 1980 that was still the case for more than 43%. Today it is less than 10%.1
The sciences do not only have a crucial role in improving our living standards and conditions, but also in improving society. Herein lies the importance of the social or human sciences. To improve society, we must start by understanding its ingredients: the people that populate societies. In other words, to improve society we need to develop the human sciences. The insights of social psychology, for example, are invaluable for social problems such as multicultural integration, radicalization and populism.
Ironically, the human sciences are still in their infancy compared to the natural sciences. While the latter are of course very valuable and quench our thirst for knowledge about the world, the former are crucial for the future of our species, and by extension of countless other species. Some years ago, at the University of Oxford, there was a symposium on ‘existential risk’, namely the risk that humans will eradicate themselves. The participating researchers (from all kinds of scientific branches) were asked to estimate the chance that humanity will have destroyed itself by the year 2100. The median of their answers was 19% (Bostrom, 2013)! Personally, I am a lot more optimistic (perhaps it is not surprising that researchers dealing with existential risk are pessimistically inclined), but it shows the enormous importance of reaching a better understanding of what underlies the problems and challenges in our society in order to tackle them better.
Reaching a better understanding of humans and society, however, is often impeded. An important reason for the relative lag of the social sciences is that there are taboo subjects or issues that scientists often steer clear of for fear of causing negative social consequences. A good illustration of this occurred in the 1970s. Edward Wilson, an American biologist who had until then mainly studied ants, suggested that human social behavior could also be explained by analyzing the evolutionary past of our species and was therefore largely determined by our genes. Wilson was called a racist, a sexist and even a Nazi sympathizer. At an academic presentation of his work, students dragged Wilson from the stage and poured a jug of water over his head while chanting ‘Racist Wilson you can’t hide, we charge you with genocide!’ 2
What was it, you might wonder, that provoked such an extreme reaction? Well, the consensus in the social sciences was that the social environment (and not genetics) determined human behavior, and the reasons for this stance were not purely scientific. It was a reaction against the blatant racism and sexism of the 19th century, when it was commonly thought that there were important genetic differences between the races and sexes with regard to intelligence – something that turned out to be false. Nevertheless, it remained taboo for most of the 20th century to study human behavior and human qualities from a genetic, biological or evolutionary perspective. Yet understanding human social nature and its evolutionary origins is an important piece of the puzzle in understanding society and meeting the important societal challenges standing in the way of a peaceful, harmonious global society.
The sciences must therefore be inclusive. They must not engage in self-censorship in the search for truth. That does not mean, however, that everything must be admitted. The question is not only about which research questions and hypotheses should be admitted because they provide valuable insights. It is also about which theories should not be admitted because they are completely unfounded (pseudoscientific). Most people agree that we should not consider astrology as a legitimate science and that chemotherapy is more effective in the fight against cancer than energy healing. But the question remains, how strict must we be? And – equally importantly – based on which criteria do we distinguish legitimate from pseudoscience?
Philosophers of science have addressed this question and proposed demarcation criteria, i.e. criteria to distinguish science from pseudoscience. The most influential demarcation criterion, as we saw in chapter 4, is Karl Popper’s (1963) criterion of ‘falsifiability’. According to Popper, a theory is only scientific if it is testable. This means that it must in principle be possible to refute the theory based on observation (which of course does not mean that the theory will be refuted!).
With his criterion, Popper went against the traditional demarcation criterion of his time, namely ‘verifiability’. According to the latter, a theory is only scientific when observation shows that the hypothesis is right. According to Popper, this is impossible: no theory has ever been verified. Scientists do not prove anything with absolute certainty. The reason is simple: we can never observe everything, so there is always the possibility that future observations will falsify a theory.
But Popper also went against another criterion: confirmation. It is not because a hypothesis is supported by observation that it is scientific. Whereas verifiability is too strong a criterion, confirmation is too weak: many pseudosciences (such as astrology) are ‘supported’ by a long list of confirming observations. Only, these theories are rarely tested: astrologists do not attempt to refute their theories (being under the spell of the confirmation bias). In this sense, Popper’s criterion protects against the confirmation bias by explicitly inciting scientists not to look for confirmation for their hypotheses but instead to look for counterevidence and counterarguments.
According to Popper (1963), scientific progress is driven by ‘conjectures and refutations’. Every time a theory or an aspect of a theory is refuted, another one takes its place, which in turn is tested. In this way theories improve over time. The consequence, however, is that according to Popper we can never state with absolute certainty that a theory is true. If we do so, we slip into dogmatic thinking, the opposite of scientific thinking. Scientific investigation, according to Popper, is – or at least should be – a constant attempt to refute existing theories, not a search for additional evidence for them.
Popper’s demarcation criterion may be particularly influential, but that does not mean it has escaped criticism. Fellow scientists and philosophers of science exposed a series of problems with the criterion. First and foremost, scientists do not practice science in the way Popper envisions it (and in the way his criterion requires). They do not simply throw a theory overboard when confronted with counterevidence. They will often come up with ad hoc hypotheses to explain the observed anomaly.
For example, when it turned out that the orbit of Uranus around the sun was inconsistent with Newtonian laws, Newton’s theory was not discarded. Scientists assumed that there must be another planet affecting Uranus’s orbit, and this indeed turned out to be the case. Astronomers peered into the solar system with ever-improving telescopes and found that planet: Neptune. Falsifying Newtonian physics was not called for, and this is often the case: a theory with a large explanatory scope should not simply be discarded when we are confronted with inconsistent observations.
Another point Popper’s critics made against his demarcation criterion is that pseudoscientists sometimes make falsifiable claims, such as astrologers who make testable predictions about personality and the future based on horoscopes. This does not make these predictions scientific. Of course, when such pseudosciences engage in risky predictions, they are typically falsified sooner rather than later. As such, this is not so much of a problem for Popper’s criterion.
A more fundamental criticism, however, came from the philosopher of science Paul Feyerabend (1970), who considered himself an epistemological anarchist. According to Feyerabend there is not one correct way to understand reality, but many different and valuable ways. The world is much more complex than it is presented in scientific models and theories, and when we limit ourselves to a single perspective on reality (a scientific worldview), we are left with an impoverished worldview.
According to Feyerabend, we should therefore never limit ourselves to one method, both within the sciences and in general. His principle is: ‘anything goes’! He is therefore radically opposed to a demarcation criterion. Such a criterion, he argues, prevents new knowledge from being acquired and thus prevents knowledge from progressing. Major scientific breakthroughs, Feyerabend argues, came about precisely because scientists broke the rules of their time. The Copernican revolution, the atomic model of Bohr, ... they all came about, according to Feyerabend, because scientists ignored the methodological rules of their time. Rules prevent progress, Feyerabend claims. We should let everything bloom instead of constantly weeding out ‘bad’ theories.
In the forceful words of Feyerabend (1970): “It is thus possible to create a tradition that is held together by strict rules and that is successful to some extent. But is it desirable to support such a tradition to the exclusion of everything else? Should we transfer to it the sole rights for dealing in knowledge, so that any result that has been obtained by other methods is at once ruled out of court? This is the question I intend to address in the present essay. And to this question my answer will be a firm and resounding NO.”
Feyerabend’s view fits in the context of postmodernism. According to postmodern thinkers, there are no objective facts, only constructions and interpretations. Scientists are therefore not discoverers of reality but rather sculptors of reality. Science is no better or more accurate than magic or voodoo, just a different perspective, a different construction and as such there is no reason to admit the former, while rejecting the latter.
Feyerabend strives for what he calls the separation of state and science. Analogous to the separation of state and religion – a state in which no religion is imposed on its citizens – no scientific worldview should be imposed on citizens. We should be free, Feyerabend claims, to choose to give our children an education in voodoo, rain dancing, astrology, and/or science. As can be expected, these provocative statements were met with much criticism. When fellow philosopher of science Joseph Agassi (1976) rightly pointed out the absurdity of placing voodoo on a par with science, Feyerabend replied that he did not mean this in a literal sense; it was a matter of rhetoric. A nice illustration of an immunization strategy we discussed in chapter 4: setting up ‘moving targets’!
So, this postmodern constructivism did not convince everyone (to say the least). Alan Sokal, a physicist, responded in a remarkable way – not only to Feyerabend but to all postmodern thinkers who believe that there are no objective facts, only perspectives and social constructions. According to Sokal (1996), this opens the door to a whole lot of nonsense. He believes that objective facts about the world can indeed be known and that we can and should make a distinction between meaningful, empirically supported theories and nonsensical theories about the world (i.e. that we should demarcate between science and pseudoscience).
To drive his point home, he came up with a hoax. He submitted an article to ‘Social Text’, a leading academic journal in the field of cultural studies, and got it published (Sokal, 1996). His article (entitled ‘Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity’) offered a strongly relativistic view of the world (like most other articles published in the postmodern journal). Sokal’s article, however, was purposefully nonsensical. It consisted mostly of grammatically correct but highly esoteric sentences, full of neologisms, that made no sense at all. The hoax hit the intellectual world like a bomb! In a letter revealing the hoax, Sokal explained that it was an experiment to see whether he could get an article past the journal’s review that fit in with its style and philosophy even though it contained nothing but nonsense (Sokal, 1996).
What Sokal rightly denounced is that when we open the doors of what is academically acceptable too widely, we risk drowning in nonsense. Without a demarcation criterion, science inevitably loses its power, for two reasons. First, as we saw above, the sciences can only progress if epistemic and methodological principles are shared by the scientific community, so that others can criticize theories and contribute to their progress. Feyerabend’s ‘anything goes’ deprives the sciences of their greatest strength: the universal standards that allow everyone to contribute and correct each other, regardless of their personal convictions and cultural backgrounds. Secondly, we must not forget that scientists build on the work of others. If everything is admitted, including completely unreliable scientific research, the foundations of the scientific edifice collapse.
In conclusion, I would like to offer a final piece of advice: Try to strike a balance between openness and restriction. That is the takeaway message I want to pass on to you as budding scientists in particular and as people in general. It applies both to scientific research and to our everyday thinking. We must remain open-minded and open to new and sometimes surprising ideas, but we must not open our minds to such an extent that our brains fall out! So, be open to new ideas, possibilities, perspectives, but never lose your critical gaze.
Develop the habit of gauging the reliability of a belief by considering the way it came about. Screen the arguments you are presented with for reasoning errors. And, above all, develop the habit of critically reflecting on your own thinking and beliefs. Critical thinking is an indispensable skill in the information age in which we live. It makes little sense today to cram your head full of facts that are accessible by simply consulting your smartphone. What makes sense is to develop the right filters to process that constant stream of information.
I believe the lack of critical thinking is one of the most important gaps in our education today, and I hope that this book has helped fill that gap for you. Because, as I pointed out in the previous chapter, critical thinking is first and foremost a matter of responsibility. Bad thinking leads to bad outcomes. In light of the important challenges that we face today, one thing is certain: the future will be determined by the quality of our thinking. It is up to you, dear student or reader, to contribute to a better world as a critical thinker!
They are self-correcting.
Cognitive artifacts protect against intuitive reasoning errors.
Framework and context protect against reasoning errors of system 2 and against emotional distortion.
Universal standards for good science ensure that scientists can criticize (and improve) each other’s work.
Scientific progress requires a reliable foundation upon which to build.
Dilthey’s ‘Erklären’ (explaining)
The aim of the natural sciences – to describe the world in terms of cause and effect and their underlying laws.
Dilthey’s ‘Verstehen’ (understanding)
The aim of the social sciences – to come to a subjective understanding.
Hacking’s looping effect
A theory can influence its object (of inquiry) in the human sciences, since it can influence what humans think and how they behave.
Popper’s falsifiability
A demarcation criterion distinguishing science from pseudoscience. Scientific theories must be testable: it must be possible in principle to refute the theory on the basis of observation.