
Cartesian Conundrums

An Introduction to Philosophy of Mind

Published on Aug 24, 2021

1 • Introduction

What happens when a loved one dies? A moment ago, she was still warm and alive, and now something lies dead. Surely the two cannot be the same: the person you loved and the cold body? Where did she go? These questions are as old as humanity, and most religions and world views have answered them in terms of a dualism of mortal bodies and immortal souls. Such dualism comes quite naturally to people, even in present-day, secular societies. For centuries, it was accepted by lay people and scholars alike. But the Scientific Revolution of the 16th and 17th centuries – often called “the mechanisation of the world view” – gave rise to a different dualism, and it was René Descartes (1596-1650) who devised it.

Before the Scientific Revolution, the whole universe was seen as a kind of living organism, and organisms as capable of pleasure and pain; afterwards the whole universe, including the bodies of living organisms, was considered to be a non-living mechanism. But although the mechanical body is not capable of pleasure and pain, something surely is. And that something is precisely the being that thinks about what it is. Descartes therefore divided the universe into two radically different kinds of “stuff”: the physical “stuff” (res extensa), which consisted of the whole physical universe with everything in it, including human bodies, and the mental “stuff” (res cogitans), like our minds, which was capable of both thought and feeling. Unlike the “outside world” of the res extensa, the res cogitans was immediately known, a kind of “inner world”. With this radical sundering of the universe, Descartes set the agenda for all philosophy of mind. The first question, immediately recognised in Descartes’ own time, was that of interaction: how could the mental, thinking and feeling, ever influence physical matter? This is referred to as the interaction problem. During the 19th and 20th centuries, what is called physicalist monism took over as the dominant world view: everything that exists is physical. That seemingly solved the interaction problem. But the problems for philosophy of mind did not go away; they only changed form. The question now was: how can thought and feeling be something physical? That topic still sets the agenda for present-day philosophy of mind: what is the nature of thought and of feeling? Thought is first and foremost characterised by a kind of aboutness: whenever we are thinking, we are thinking about something. Our thoughts have content. And feeling is characterised by consciousness: feelings that are not felt simply are not feelings. Thus, one might say that the two main topics of philosophy of mind are content and consciousness.

In the following, we are going to look at three successive attempts to deal with these questions. We will, incidentally, see that these attempts are, just as in Descartes’ time, heavily influenced by developments in the sciences. And we will also see that the question of content and that of consciousness are seldom tackled together. Often, philosophers grapple for a while with the question of content and then, without any definitive answer being given, shift their focus towards the problem of consciousness, and back again.

2 • First attempt: behaviourism

During the 18th and 19th centuries, the science rooted in the Scientific Revolution gathered momentum and started to yield spectacular results that changed western society. Its prestige grew apace. In order to share in that prestige, psychology began to change its self-conception: instead of a science of the soul – the literal translation of “psychology” – it became the science of behaviour. This movement started in the beginning of the 20th century in the US, where John Watson coined the term “behaviourism” – “a purely objective, experimental branch of natural science which needs introspection as little as do the sciences of chemistry and physics” (Watson, 1913, p. 158).

Behaviourism meant that psychology was no longer viewed as the science of the inner soul or mind, once thought to be immediately known by introspection, but solely as the study of the publicly accessible behaviour of animals and human beings. It was thus a rule of methodology not to talk about this inner world. Psychology should try to establish lawful connections between stimuli and behavioural responses, both described in the objective terms of the natural sciences. All reference to sensations and thoughts was to be avoided. A test animal wasn’t feeling hungry; it had been deprived of food for such and such a time. The subject of psychology, human or non-human, was considered to be a kind of black box: you gave it stimuli as input, and you measured the behavioural output. Whatever went on inside the black box – inner processes of thinking or sensation – was considered to be nothing but so-called intervening variables, postulated only to smooth out the stimulus-response relation.

This refusal to even address the occurrence of conscious episodes in their subjects earned the behaviourist psychologists the accusation that they were “feigning anaesthesia”: isn’t it patently untrue that there are no conscious feelings? The behaviourist Skinner tried to see thought as a kind of behaviour too: it was subvocal speech, and he suspected – but never managed to establish – that thought was accompanied by tiny movements of the speech apparatus.

In philosophy, behaviourism took another form. Gilbert Ryle directly addressed the Cartesian idea of a non-material mind somehow residing in a mechanical body; the “ghost in the machine”, as he named it (Ryle, 1949). According to him, there could not possibly be an inner mental world where the movements of the physical body were planned, devised and subsequently caused. Ryle is often seen as the originator of logical behaviourism, the view that there is no need for inner causes of outer behaviour, because behaviour is all there is. Unlike methodological behaviourism in psychology, logical behaviourism was not interested in describing behaviour in physicalist terms. All it claimed was that everything that Cartesian dualism deemed to be mental and internal was in fact an aspect of public bodily behaviour. Saying that John thinks it will rain means nothing more than that John is disposed to close the windows and take an umbrella when going out. Saying he is in pain means he is disposed to show so-called pain behaviour: moaning or cursing maybe, taking an aspirin, avoiding stress to a certain part of his body. Neither thought nor feeling is ever an inner cause of outer behaviour; both are always aspects of public behaviour, or else dispositions to behave in a certain way.

The problem with logical behaviourism is that it is simply counterintuitive that there are no inner causes of outer behaviour. And if the mental is an aspect of behaviour, what about the Super-Spartan, who is in pain but never shows it? Or the Super-Actor, who can behave angrily without feeling any anger? Behaviourism tried to solve the problem of interaction between mind and body by simply not mentioning, or even denying, the existence of mind, of an inner world, of inner causes for outward behaviour. Yet the theory fits poorly with our own experience of what it is to be a human being. And although behaviourism of some kind ruled supreme for some 50 years in psychology, it was finally abandoned in favour of a kind of psychology and philosophy of mind that paid more attention to what traditionally belonged to the Cartesian mind.

3 • Meanwhile in science 1: the rise of computers and artificial intelligence

WWII saw the beginning of a hugely influential development: that of computers that could perform all kinds of intelligence-demanding tasks. As early as 1950, Alan Turing, the pioneer of computer science, asked whether computers could think (Turing, 1950). Though he proposed to test this in a rather behaviourist way, by trying to see whether a computer could behave indistinguishably from a human being, the existence of intelligent machines gave rise to all kinds of speculations regarding Cartesian dualism. Computers are fully physical devices, without mind or soul. Yet they can perform all kinds of tasks that – for human beings – require intelligence: playing chess, solving problems, recognising patterns. While not having a mind or soul, computers do have rich inner (not outwardly visible) states and processes, and these inner states and processes do cause their outward behaviour. So maybe the Cartesian inner world of the mental could be salvaged without a Cartesian dualism. One could perhaps be a physicalist (materialist) while still talking about mental states and processes. This would of course change the meaning of the concept of the mental considerably. “Mental” no longer denoted a substance distinct from the physical; it only denoted the characteristics of “inner”, “content-bearing” and “influencing (outward) physical matter” that Descartes had imputed to the res cogitans.

Computers were sometimes compared to the brain: both were seen as physical systems consisting of units (bits or neurons) that can be on or off, active or inactive, and whose inner processes cause the behaviour of the whole system. Others considered the computer hardware as the physical substrate, whereas the program was seen as something mental: a meaningful part, distinct from the physical system, that at the same time “runs” on it and controls it. Computers, especially in the latter view, seemed to license mentalistic talk, both in psychology and philosophy. From a philosophical perspective, this was a good thing: it allowed talk about physical (hardware) and mental (program) things without accepting some Cartesian dualism, with its seemingly unsolvable interaction problem.

4 • Second attempt: mind-brain identity theory and functionalism

In the late nineteen-fifties, a new position was developed in the philosophy of mind: the mind-brain identity theory. Identity theorists said that, contrary to logical behaviourism, there are inner causes for outward behaviour: these inner causes are brain states and processes. They also claimed that my headache is not just a (side) effect of a certain brain state but is in fact identical with that brain state, just as the thought “I’d better get some aspirin” is identical with another brain state. As a result of these claims, a description of human behaviour can be provided in terms of a completely physical causal chain running from, say, the intake of alcohol, via a number of brain processes and states, to the intake of aspirin. Behaviour is caused by brain states and processes, but at the same time it is just as literally caused by thoughts. According to those who endorsed the identity theory, this identity of mental states and processes (experiences of headaches and thoughts of aspirin) and brain states and processes was an empirical discovery, just as it was empirically discovered that water is identical with H2O (Place, 1956).

This last claim, however, is not as straightforward as it looks. Empirical studies can never prove this identity; all brain science can ever show is that there are correlations between neurophysiological states and processes and mental states and processes – and new techniques of neuro-imaging bring ever more of these correlations to light. That criticism notwithstanding, it is easy to see the attraction of this theory. It would be a very elegant solution to the Cartesian problem of interaction if the whole notion of two kinds of states and processes that interact with one another, mental and neurophysiological, could be replaced with just one kind. According to identity theory, it is the brain that does all the causal work. When we think “I’d better take some aspirin”, it is true that the thought causes us to take an aspirin, for the thought itself is also a brain state. In this way we can stick to talking about thoughts, desires, and other mental states, as well as about causal relations between those states and our behaviour, as long as we realise that this is just a different way of talking about physical states.

Identity theory was seen to be supported by the existence of computers. Here were completely physical systems that could nevertheless perform actions that seemed to require intelligence. Descartes’ claim that human beings, at least, could not be wholly mechanical systems – because a machine could never do what human beings easily can, namely answer questions – was suddenly disproven. Computers could do just that. The first computers, built in the 1940s and 1950s, were popularly called giant brains (and these early computers were gigantic indeed). This comparison between brain and computer was based on the following: the brain consists of a network of brain cells, called neurons, which are connected with one another. These neurons can produce little bursts of electricity in an all-or-nothing fashion. Via the connections between neurons, these bursts are transmitted to neighbouring neurons, which can likewise “fire”. The combination of billions of neurons can eventually exhibit the mental capacities that human beings have: reading, translating, solving problems, playing games. The same goes for computers. Their basic element is the digital bit: an element that can be a 0 or a 1. It is the combination of these basic elements that gives rise to all the complex, intelligence-demanding things that computers can do. And indeed, some of the earliest machines used for Artificial Intelligence were minimal networks of binary elements, called perceptrons, which were supposed to be self-organising and could learn. These experiments with perceptrons were abandoned because of their limited success (Minsky & Papert, 1969). Other kinds of computers, with a so-called von Neumann architecture, were much more successful in AI. These had a central processor, a memory to store data and instructions, and input and output devices. Such a computer works because it is programmed, but the very same program can run on physically different machines.
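To make the comparison concrete, here is a minimal sketch in plain Python (an illustration only, not the historical perceptron hardware; the function names are invented for this sketch) of a single all-or-nothing unit that “fires” above a threshold and learns by adjusting the weights of its input connections.

```python
def step(x):
    """All-or-nothing activation: the unit either fires (1) or stays silent (0)."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias from (inputs, target) pairs via the perceptron rule."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            output = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
            error = target - output            # 0 if correct, +1 or -1 if wrong
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Logical OR is linearly separable, so a single unit can learn it.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
print([step(sum(wi * xi for wi, xi in zip(w, x)) + b) for x, _ in or_data])  # [0, 1, 1, 1]
```

A single unit of this kind can only learn linearly separable functions (OR, but not XOR), which is the sort of limitation Minsky and Papert analysed.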

In parallel with the dismissal of perceptrons as good models for AI, there was philosophical criticism of the identity theory. The first kind of criticism held that the identity theory was too restrictive. If my thought of food is identical with brain process a, with characteristics F, G and H, then every thought of food has to be identical with such a brain process; just as the identity of this particular bit of water with H2O implies that every sample of water has to be identical with H2O. This has to do with what is known as Leibniz’s law: if a is identical with b, then a and b must have all of their characteristics in common; this is just what ‘identical’ means. But the brain of a cat or dog is different from mine, and Martians or computers have no brain at all. Does this imply that they cannot think of food? Even other human beings do not have exactly the same kind of brain that I have. Am I the only one capable of thinking of food? That seems very implausible. And finally, my own brain does not stay the same: the very brain process that I had yesterday, thinking about food, is not there anymore today; neurons change or die continually.

Identity theory was thus seen as too anthropocentric, allowing only human brain states to be identical with mental states. Maybe my thought of food today is identical with brain process a, while my cat’s thought of food is identical with his brain state b, and maybe a computer can think of food as well, and then its thought is identical with its internal state c. If one wants to do psychology, it wouldn’t do to claim that no two creatures can ever have the same thought, for how would one then be able to say that in general a thought of food leads to a certain type of behaviour? But maybe all these instances of thoughts of food, though identical as thoughts of food, need not be identical in their physical characteristics. If that can be the case, one can still be a physicalist and avoid dualism: a thought is always identical with some physical state. It’s just that the same thought can, in different creatures, be identical to different physical states. This tallies nicely with the fact that the same computer program can run on physically different machines. In computer science this means that programs are what is called multiply realisable. Likewise, it is claimed that mental states are multiply realisable. The acceptance of these claims led to a new position called functionalism.

Functionalism claims that a mental state, such as the thought of food, is not defined by the physical state it happens to be identical with, but by the functional role it has in the causal chain. Every physical state that plays a role in the causal chain between food deprivation, feelings of hunger and food-directed behaviour is in fact a thought of food, whether this physical state is human, feline or canine. Human beings, cats and dogs are all different physical systems, but they can all think of food – and so can computers or Martians, if their internal states play the same role. Just as a mousetrap is defined by what it does, and not by precisely what physical system it is, so a mental state is defined by what it does, and not by what physical (brain) state it happens to be identical with on a certain occasion. Instead of saying that mental states are identical to brain states – but the same mental state can be identical to different brain states on different occasions – functionalists prefer to say that mental states are realised by brain states (or computer states).
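The software analogy can be made explicit. The toy sketch below (Python; every class and function name is invented purely for illustration) defines a “thought of food” by the causal role it plays – taking food deprivation as input and producing food-directed behaviour as output – and lets three very different “substrates” realise that same role, which is the sense in which functionalists call mental states multiply realisable.

```python
from abc import ABC, abstractmethod

class FoodThoughtRealizer(ABC):
    """Anything that plays the causal role of a thought of food."""
    @abstractmethod
    def respond_to_hunger(self, hours_without_food: int) -> str:
        ...

class HumanBrain(FoodThoughtRealizer):
    def respond_to_hunger(self, hours_without_food: int) -> str:
        return "walk to the fridge" if hours_without_food > 4 else "keep working"

class CatBrain(FoodThoughtRealizer):
    def respond_to_hunger(self, hours_without_food: int) -> str:
        return "meow at the cupboard" if hours_without_food > 2 else "sleep"

class MartianSiliconCore(FoodThoughtRealizer):
    def respond_to_hunger(self, hours_without_food: int) -> str:
        return "glide to the nutrient vat" if hours_without_food > 8 else "hover idly"

def psychological_generalisation(agent: FoodThoughtRealizer, hours: int) -> str:
    # Psychology quantifies over the role, not over the neurons, fur or silicon that realise it.
    return agent.respond_to_hunger(hours)

for realiser in (HumanBrain(), CatBrain(), MartianSiliconCore()):
    print(type(realiser).__name__, "->", psychological_generalisation(realiser, 6))
```

The design point mirrors the philosophical one: the generalisation is written against the role (the interface), so it holds no matter which physical implementation happens to be plugged in.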

According to functionalism, the identity theory had seen the man-machine comparison in the wrong way: it was not about brains as computers, but about what human beings and computers can do. One could study mental processes without bothering too much about the physical realisation of those processes, because mental processes and mental states are multiply realisable. The physical constitution of the brain was considered just as irrelevant as the precise physical constitution of a machine. What was important was how they work on a more abstract level. What both computers and human beings (and other animals) do is perform computations on internal states – in other words, symbol manipulation. One could study the mental directly, without bothering about the physical realisation (Levin, 2013).

The second kind of criticism of identity theory went in the opposite direction. If two things are identical, there actually are not two things but one. If Shakespeare is in fact identical with Francis Bacon, there are not two men, one a playwright and one a statesman, but one man who is both. So, if mental states are identical with brain states, all that really exists are brain states. What we have learnt is not that mental states are identical to brain states, but that what we thought were mental states are in fact brain states, just like what we thought were witches were in fact normal – or often slightly abnormal – old women. Real witches do not exist. Likewise, there is no such thing as the mental, and if we want to study what we thought were mental processes, we have to study the brain itself. Some philosophers, so-called eliminativists, believe that in the future we will learn to drop all reference to mental states such as beliefs and desires, and describe ourselves and one another in terms of brain states.

The popularity of identity theory in the philosophy of mind, with its focus on brain states and processes, was linked to the popularity of perceptrons, brain-like computers, in AI in the fifties and sixties. But these perceptrons turned out to be rather unsuccessful. The rise of functionalism in philosophy of mind went hand in hand with the rise of a different computer architecture in AI: that of a program of computations being performed on internal symbols. Yet the idea of brain-like computers was revived in the eighties with so-called neural networks, which were quite successful indeed. These were machines with simple units interconnected by connections of different weights, and they were again inspired by the architecture of the brain. Such machines can learn by adjusting the weights of the connections. The deep learning machines of present-day AI are in fact neural networks that learn their tasks, not by applying logical computations to internal symbols, but by being fed huge amounts of data and changing their connections until they get it right. Whereas in the sixties and seventies identity theory seemed to be replaced by functionalism, on the basis of what was called the first kind of criticism above, the eighties saw a focus on the second kind of criticism of identity theory. Eliminativists or so-called connectionists, encouraged by the success of neural networks, expect that philosophy of mind and psychology will be replaced by neuroscience (Churchland, 1995).
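As a rough illustration of what “changing the connections until they get it right” amounts to, here is a minimal network in plain Python (a sketch only; the layer sizes, learning rate and number of passes are arbitrary choices for this example). With a small hidden layer it can learn XOR, the function a single perceptron cannot represent; whether it converges depends on the random starting weights, so it may need rerunning with a different seed.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

HIDDEN = 4
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]  # input -> hidden weights
b_h = [0.0] * HIDDEN
w_ho = [random.uniform(-1, 1) for _ in range(HIDDEN)]                      # hidden -> output weights
b_o = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
lr = 0.5

def forward(x):
    h = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(2)) + b_h[j]) for j in range(HIDDEN)]
    y = sigmoid(sum(w_ho[j] * h[j] for j in range(HIDDEN)) + b_o)
    return h, y

for _ in range(10000):                      # feed the data again and again
    for x, t in data:
        h, y = forward(x)
        d_y = (y - t) * y * (1 - y)         # how wrong the output unit was
        d_h = [d_y * w_ho[j] * h[j] * (1 - h[j]) for j in range(HIDDEN)]
        for j in range(HIDDEN):             # nudge every connection weight a little
            w_ho[j] -= lr * d_y * h[j]
            b_h[j] -= lr * d_h[j]
            for i in range(2):
                w_ih[j][i] -= lr * d_h[j] * x[i]
        b_o -= lr * d_y

for x, t in data:
    print(f"{x} -> {forward(x)[1]:.2f} (target {t})")
```

Nothing in the trained network looks like an explicit rule or symbol for XOR; the “knowledge” is spread out over the weights, which is part of what attracts connectionists to this picture.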

5 • The problem of content

Let us now turn to the first of the two main problems in philosophy of mind, the problem of content. The Cartesian problem of interaction was in part the problem of how (mental) thought could act on the (physical) body. And identity theory and functionalism answer that question by saying that thought is itself physical. But how exactly? How can the content or meaning of a thought have causal influence on behaviour? If mental states and processes are, in one way or another, identical to brain states and processes, isn’t it the brain that does all the causal work, and not the content of the thoughts? If my desire for a piece of cheese, causing me to go to the fridge and cut myself some cheese, is in fact a brain state, isn’t it the brain state that causes my muscles to lead me to the fridge and not my cheese-desire? For how can a brain state be about cheese? There are roughly three ways of answering this concern.

The first answer is to simply say: yes, it is the brain that does all the causal work, and the mental does not do anything because it does not exist. This is the answer of the eliminativists who want to replace philosophy of mind and psychology with neuroscience, and who predict that we will gradually stop referring to thoughts, beliefs and desires at all. This does away with the problem of content, but at the price of being very counterintuitive. One cannot even say, without contradiction, that one believes that beliefs do not exist.

The second answer is to say that mental processes are symbol manipulations, and that these symbols are physical brain states. This answer compares the brain to a computer, which also has internal states that play a role in the causal chain that leads to its output. It is claimed that it is indeed the physical characteristics of these internal states that do the causal work, but that at the same time these internal states do have content or meaning. The idea is that, in a well-programmed computer, the program rules are such that the meanings of the inner states being manipulated are respected, so that the output keeps making sense. If the meaning of one state is “1”, and of another “+”, the rules are such that “1+1” leads to a state meaning “2”. The problem with this answer is that the physical characteristics and the meaning of an internal state of a computer can easily become unstuck. This is because the computer will work whether its internal states have one meaning or another, or none at all. John Searle (1980) illustrates this with his famous Chinese Room argument. Suppose there is a man in a closed room. The room contains a rule book and a slot in the wall. Through the slot, papers are inserted bearing Chinese characters. The rule book, which is in English, contains instructions for responding to the incoming Chinese characters – identified purely by their form – with other Chinese characters, the output. The man in the room doesn’t know any Chinese and doesn’t know what the characters stand for. But Chinese people outside the room think they are feeding the room with questions in Chinese and consider the answers they get back meaningful and relevant. Searle’s conclusion is that, for the computer, it isn’t the meaning of its internal states that does any work; it is only their physical form. And just like the man in the room, the computer itself has no idea what it is doing. It is only the interpretation of the users of the machine that gives its states meaning. He thinks this is the difference between computers and human beings: unlike computers, human beings do know what they are doing, and their internal states do have intrinsic meaning. It surely seems that Searle is right in this, but even so he still has to account for how internal brain states can have meaning. Thus far, brain scientists and philosophers alike disagree about how to think about meaningful thoughts, in all their linguistic, psychological and social complexity, at the level of our brains, where all there seems to be are neurons and their connections.
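The point can be made with a few lines of code. The sketch below (Python; the tokens are made-up placeholders, not real Chinese) answers incoming “questions” purely by matching their shape against a rule book, and a second rule table shows how rules can be written so that, under an outside interpretation (“1”, “+”, “2”), the outputs keep making sense – even though nothing in the lookup itself involves meaning.

```python
# The "rule book": pairs of shapes, matched purely by form.
RULE_BOOK = {
    "squiggle": "squoggle",
    "blotch": "scrawl",
}

def room(incoming: str) -> str:
    """Pass back whatever shape the rule book lists for the incoming shape."""
    return RULE_BOOK.get(incoming, "shrug-mark")   # a default squiggle when no rule applies

# Rules that happen to respect an arithmetic interpretation of the shapes:
# under the reading one-mark = "1", plus-mark = "+", two-mark = "2",
# the output always "makes sense", yet the lookup never consults that reading.
ADDITION_RULES = {
    ("one-mark", "plus-mark", "one-mark"): "two-mark",
}

print(room("squiggle"))                                       # -> squoggle
print(ADDITION_RULES[("one-mark", "plus-mark", "one-mark")])  # -> two-mark
```

Swap the interpretation of the tokens and the program runs exactly as before, which is Searle’s point: the physical form does the work, while the meaning is supplied from outside.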

The third answer is the one Daniel Dennett defends: there is, actually, no difference between computers and human beings (or human brains). Both are simply physical systems through and through. Both have inner states, but these inner states have no intrinsic meaning. Just as in the case of computers, the inner states of human beings are interpreted from the outside. By whom? Well, by human beings – Searle’s Chinese people outside the Chinese room. But these human beings also have no inner states with intrinsic meaning; these people are also only interpreted from the outside. As Dennett has it: “An implication of the view crudely expressed by the slogan that our brains are organic computers is that just like computers their states can be interpreted [...] by outside observers to have content – and that’s as strong a sort of content as their states can – or could – have. We are both the creators and the creatures of such interpretation” (Dennett, 1982, p. 355). This “interpretationist” answer solves the question of how internal brain states can have meaning: they simply do not have any intrinsic meaning; all meaning is attributed from the outside and it is all a matter of interpretation. But these claims are problematic: doesn’t one have to have meaningful internal states already, in order to be able to interpret someone else’s internal states? And if, as Dennett claims, we interpret our own internal states in the same way we do others’, how can we possibly start doing so? Thus far, the problem of content has no generally accepted answer.

6 • The problem of consciousness

Let us now address the other fundamental problem of philosophy of mind: that of consciousness. Quite suddenly, around 1990, and before there was any definitive answer to the problem of meaning, this other problem became widely discussed in philosophy of mind. How can any purely physical, mechanical thing ever be conscious? Why is it that most of the workings of our brains are completely unconscious, whereas others seem to be conscious or give rise to consciousness? This problem is sometimes called “the hard problem” of philosophy of mind (Chalmers, 1995). Why is there consciousness at all? Couldn’t the brain work perfectly well without there being any consciousness? Couldn’t there be, for instance, a brain state that fulfils all the alarm-bell functions of pain, without it feeling so painful? Part of the problem is that it is so hard to define what we mean by consciousness. What is it, and how can we tell whether some creature or some thing is conscious? Most philosophers agree that it has to do with subjective experience, with what Thomas Nagel famously called a matter of “what it is like” (1974), but there are also philosophers who say that this whole idea of subjectivity, what-it-is-likeness and the qualitative aspect of experience simply doesn’t make sense.

Daniel Dennett (1969, 1991) tries to reduce all sensory experiences of something to judgements that something is the case. If we are looking at a blue curtain, there isn’t first an experience of blue and then a judgement that the curtain is blue; the judgement “blue there” is all that occurs. Thus, he has eliminated the problem of consciousness, and only has to deal with the problem of content, which he solves with the interpretationism explained in the previous section. The problem here is that this solution, though elegant, is doubly counterintuitive: we all have the impression that we do have sensory experiences, and it is hard to see how one could interpret others without having meaningful inner states of one’s own.

Other philosophers opt for an identity-theory solution: consciousness just is a kind of brain state or process, not an effect of it. Pain just is the firing of certain neurons. But the search to pinpoint which neurons, or which aspects of neural functioning, are identical with consciousness is still ongoing, with widely diverse answers being given. Moreover, the hard problem of why this or that aspect of brain functioning should be conscious remains unanswered. One might think that consciousness must be of some evolutionary advantage, but most scientists agree that consciousness itself doesn’t have any causal efficacy. It is the physical brain states and processes that do the causal work. Sometimes, before I know it, I find myself having eaten another piece of cheese. Apparently, my brain knows perfectly well how to indulge me, without my being conscious of any desire causing my behaviour, or even of the wonderful taste of the cheese. And if so much of brain functioning can be unconscious, why not all of it? Moreover, if consciousness is defined as subjectivity, is it even possible that science, which strives to be as objective as possible, can say anything about it? As was the case with the problem of content, there is as yet no generally accepted answer to the problem of consciousness.

7 • Meanwhile in science 2: the rise of robotics

Whereas the first decades of AI were dedicated to programming computers for doing intelligence-demanding tasks offline, without any direct contact between the computer and the real world, in more recent years AI has turned towards creating robots. Robots by definition behave online, interacting with an environment (however artificial it may be). The pioneering work of Rodney Brooks (e.g., 1989) showed that building a robot as a mechanical body with a computer in its head was not the best way of having it navigate its surroundings. One could build a better one if its legs interacted with the environment relatively independently of one another: robot behaviour was not the execution of a previously computed central plan, but the dynamic interaction of the robot-body parts with the environment.

8 • Third attempt: embodied embedded cognition and enactivism

We have now arrived at the third attempt at dealing with the problems in the philosophy of mind. This third attempt was the result of a quite radical change of direction. Inspired by progress in robotics, but also by certain insights from other philosophical disciplines, philosophers came to think that the focus on the brain that had held sway over philosophy of mind was misguided. The “methodological solipsism” – Jerry Fodor’s term (1980) – of exclusively studying the inner world of the mind-brain had to be replaced by a consideration of the brain in interaction with the (rest of the) body and the environment. The turn of the century saw the emergence of Embodied Embedded Cognition (EEC), which claimed that to study the mental one should consider the brain-body-environment as one dynamic system. For instance, a Japanese child doing sums with the help of an abacus is not having cognitive processes in her head/brain/mind which she then turns into instructions for her fingers on the abacus. Instead, it is the whole system of brain and fingers and abacus that does the sums. Cognition was no longer seen as restricted to processes in the brain, but as a process extending beyond skull and skin (Clark & Chalmers, 1998). Likewise, consciousness was no longer seen as a matter of particular brain processes, but as something happening in a whole organism in its environment.

Closely related to EEC is the view known as enactivism. The term “enaction” was coined as early as 1991: there is no pre-given world that a pre-given mind internally represents, but mind and world are “enacted” in a dynamic process of interaction (Varela, Thompson, & Rosch, 2016). Enactivism was at first an attempt to solve the lingering problem of consciousness. O’Regan and Noë, for instance, claimed that to consciously perceive something was a way of doing things (O’Regan & Noë, 2001). The difference between experiencing red and green is a difference in the way we act – literally in the way we move our eyes and bodies and know the interdependence between our own movements and the way the world reveals itself. Experiencing what it is like to drive a Porsche is knowing how the car reacts to your own steering and use of the throttle. Perceiving itself was seen as acting, and not as a passive receiving of input from world to mind. Living creatures, as they develop in evolution, cannot fail to acquire this kind of consciousness of their surroundings and their own place within them (Thompson, 2010). This last claim is called the life-mind continuity thesis: the whole universe is just a physical, non-feeling mechanism, but with the origin of life we get consciousness, as it were, for free. The problem of consciousness is thus solved by claiming that life and consciousness are inextricably linked.

Enactivism and EEC agree that we can no longer approach cognition, and the mind more generally, as things that are restricted to human brains. It is often thought that these views can best account for online, “lower” cognitive processes, but that some “higher” cognitive processes are “representation-hungry” (Clark & Toribio, 1994). How can one, for instance, do so-called mental arithmetic (no abacus or pen and paper) without keeping some representations of the calculation “in one’s head”? These processes, like abstract problem solving or imagining, can better be explained by postulating an inner domain with content-bearing inner states. Yet there are also philosophers who are not willing to make this concession to the need for representations and who go one step further. Radical Enactivists, worried as they are that the problem of content has never been properly solved, claim that no organism has inner content-bearing states, and that no cognitive process is ever in need of inner representations, not even imagining or abstract thinking. Imagining the hallway as it used to be is simply the mentality-constituting interaction of walking (or being disposed to walk) around a table that is no longer there (Hutto & Myin, 2013). For arithmetic they refer to Japanese school children, who start to learn on their abacus, but who are, after training, able to do the calculations with incredible speed without the abacus; these children still move their fingers over the non-existent abacus while doing the sums. Of course, tacitly thinking up sentences to say involves meaningful inner states before the words are said out loud. But those meanings are the meanings of public language, so the radical enactivists claim, and they are only learnt when a child learns its own mother tongue. Radical Enactivism therefore places the great divide between just-physical mechanisms on the one hand and entities that have something mental on the other not between life and non-life, but between language-users and non-language-users. Although language does play an enormously important part in our lives, it seems odd to claim that prelinguistic children or animals have no mental life at all. But in the Radical Enactivist theory, this does solve the problem of content. With the occurrence of natural language, we get content for free. The problem is thus solved by claiming that natural language is the only thing that has content, and that is for semanticists to worry about.

9 • Conclusion

And so, philosophy of mind seems to have come full circle: from the simple denial of an inner mental world in behaviourism, to the claim in Radical Enactivism that there are no inner, content-bearing states and that everything is behaviour. Meanwhile, the problems of content and consciousness have led to many different answers, with lots of sophisticated insights, but they still have to be solved. Ignoring the problems, or claiming that they do not exist, hasn’t made them go away. Everybody has to pay the piper sometime. Although, amongst philosophers, Descartes’ dualism is no longer an option, his idea of an inner ‘something’ where both thoughts and experiences take place still seems, in one way or another, to set the agenda for philosophy of mind.

Bibliography


Bloom, P. (2004). Descartes’ baby: How the science of child development explains what makes us human. Basic Books.

Brooks, R. A. (1989). A robot that walks: Emergent behaviors from a carefully evolved network. Neural Computation, 1(2), 253–262. https://doi.org/10.1162/neco.1989.1.2.253

Chalmers, D. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 3–34. https://doi.org/10.1093/acprof:oso/9780195311105.003.0001

Churchland, P. M. (1995). The engine of reason, the seat of the soul. MIT Press.

Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19. https://doi.org/10.1093/analys/58.1.7

Clark, A., & Toribio, J. (1994). Doing without representing? Synthese, 101(3), 401–431. https://doi.org/10.1007/BF01063896

Dennett, D. C. (1969). Content and consciousness. Routledge & Kegan Paul.

Dennett, D. C. (1982). Comments on Rorty. Synthese, 53(2), 349–356. https://doi.org/10.1007/BF00484909

Dennett, D. C. (1991). Consciousness explained. Little, Brown and Company.

Descartes, R. (2003). Discourse on method and metaphysical meditations. (E. S. Haldane & G. R. T. Ross, Trans.). Dover Publications. https://doi.org/10.5962/bhl.title.44504

Fodor, J. A. (1980). Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences, 3(1), 63–73. https://doi.org/10.1017/S0140525X00001771

Fodor, J. A. (1987). Psychosemantics. The problem of meaning in the philosophy of mind. The MIT Press. https://doi.org/10.7551/mitpress/5684.001.0001

Fodor, J. A., & Lepore, E. (1992). Holism. A shopper’s guide. Wiley-Blackwell.

Hutto, D., & Myin, E. (2013). Radicalizing enactivism: Basic minds without content. The MIT Press. https://doi.org/10.7551/mitpress/9780262018548.001.0001

Levin, J. (2013). Functionalism. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy (Winter 2016 Edition).

Minsky, M. L., & Papert, S. (1969). Perceptrons. MIT Press.

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435. https://doi.org/10.2307/2183914

O’Regan, J. K., & Noë, A. (2001). What it is like to see: A sensorimotor theory of perceptual experience. Synthese, 129(1), 79–103. https://doi.org/10.1023/A:1012699224677

Park, S. M. (1994). Reinterpreting Ryle: A Nonbehavioristic Analysis. Journal of the History of Philosophy, 32(2), 265–290. https://doi.org/10.1353/hph.1994.0043

Place, U. T. (1956). Is consciousness a brain process? British Journal of Psychology, 47(1), 44–50. https://doi.org/10.1111/j.2044-8295.1956.tb00560.x

Ryle, G. (1949). The concept of mind. Hutchinson.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424. https://doi.org/10.1017/S0140525X00005756

Thompson, E. (2010). Mind in life: Biology, phenomenology and the sciences of mind. Harvard University Press.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433

Varela, F. J., Thompson, E., & Rosch, E. (2016). The embodied mind: Cognitive science and human experience. MIT Press.

Vrijen, C. (2007). The philosophical development of Gilbert Ryle: A study of his published and unpublished writings. University of Groningen.

Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20(2), 158–177.
