Building Better Zombies

I’m warning you…

David Chalmers relies in no small way upon his so-called ‘philosophical zombies’ – entities that are like human beings in every respect, physical and psychological, the only teeny, tiny difference being that they lack conscious experience – to make his point that consciousness is a basic property, i.e. one that may neither be functionalized nor explained as emerging from some set of physical states of affairs. However, his example doesn’t (and shouldn’t) convince anyone who is not already friendly towards the sorts of consciousness-views offered by the various dualisms and panpsychisms floating about. To pick but one example, Daniel Dennett doesn’t think that Chalmers’ thought experiment does the trick (and is, in any case, very hostile to the proposition that there may be any mental properties which are not emergent from, or reducible to, physical facts and happenings [1]). And for what it is worth, I don’t think Chalmers’ thought experiment is very compelling either, even though I am more open than Dennett is to possibilities beyond run-of-the-mill reductive/eliminativist physicalist doctrines.

Why do I say this? Chalmers uses his zombies thus:

  1. It is conceivable that there might be some possible world wherein there are creatures with brains and psychologies such as our own, which are functionally identical to our own, but which are lacking conscious experience.
  2. Whatever is conceivable is possible; so such beings could really exist. But if they could, then consciousness is not an emergent property of physical systems assembled (physically or functionally) like our brains – otherwise the zombies in question would be conscious as well.
  3. As such, we may conclude that consciousness is neither simply a product of physical systems arranged in the right way nor of functional systems of the right sort. [2]

Obviously, this is a very simplified presentation of his argument (or at least, of the version of his argument with which I am familiar), but it captures something of its thrust. As should be apparent, it is the second premise that causes the argument to fail: simply because something implies no logical contradiction, it does not follow that it is actually possible (in this world or any other). It may well be that, in actuality, any and every functionally appropriate system (of which human brains are but one example) will be conscious, full stop – which, not incidentally, is precisely the physicalist position promoted by fellows like Dennett.
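
For clarity’s sake, the inference can be set out in bare modal-logic dress. This is my own rough reconstruction, not Chalmers’ official formulation: let P stand for the totality of physical facts, Q for the facts of consciousness, C(·) for ‘it is conceivable that’, and the diamond/box for possibility/necessity.

```latex
% A rough reconstruction of the zombie argument (requires amsmath, amssymb).
\begin{align*}
&\text{(1)}\quad C(P \land \lnot Q)
    && \text{a zombie world is conceivable} \\
&\text{(2)}\quad C(P \land \lnot Q) \rightarrow \Diamond(P \land \lnot Q)
    && \text{whatever is conceivable is possible} \\
&\text{(3)}\quad \Diamond(P \land \lnot Q) \rightarrow \lnot\Box(P \rightarrow Q)
    && \text{a modal-logic truism} \\
&\;\therefore\quad \lnot\Box(P \rightarrow Q)
    && \text{the physical does not necessitate consciousness}
\end{align*}
```

Premises (1) and (3) are comparatively innocent; everything hangs on (2), and it is exactly the move from conceivability to possibility that the physicalist is free to refuse.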

The weakness of the argument is unfortunate because I find the (reductive/eliminativist) physicalist’s explanation of consciousness risible [3]. I suspect that the argument could be reworked in order to appeal directly to and make use of the sorts of intuitions held by physicalists of a Dennettian sort, all while avoiding the use of modal arguments (“in some possible world…”). This would make for a more effective argument against functionalist/physicalist accounts of consciousness since, at the outset, it gives them everything that they say they want but leads to the necessary abandonment of some of it. What follows, then, is my own version of a zombie thought experiment that, although leaning heavily on the intuitions Chalmers is mining with his zombies, (I think) does a better job.

Building Better Zombies:

Imagine for the moment that there existed a complete neuroscience. I mean ‘complete’ in the sense that the theory covers the entire range of properties of neurons, as well as their actions and interactions, in the terms with which ordinary science is comfortable (i.e. objectively observable/measurable properties) – there is no hint of ‘woo’ about it. Moreover, this theory has been subjected to rigorous testing, is as good as proven, and is able to predict with near perfection what any particular sort of neuron will do whenever any stimuli one would expect to find in its ordinary operational environment (whether biochemical or electrical) are applied to it in said environment [4]. So, were we to have a single neuron placed before us in a petri dish, we should be able to apply any neurotransmitter or electrical charge, under any environmental conditions we should choose to subject the cell to, with absolute certainty that it will react in a specific way. This is all well and good, especially since we would not need to make any reference to mental-kind terms (e.g. consciousness, intentionality, representation) – our neuron is simply caused to produce its effect ‘mechanically’, as it were [5]. Indeed, to make any appeal to mental properties at this low level would be to make a claim which no physicalist would suffer gladly – neurons are just too ‘simple’ to be host to intelligence or consciousness.

From this modest beginning, we should be able to attach a second neuron to the first, apply some stimulus to one or the other, and predict with perfect accuracy what each of them will do, considered individually and as a unit. Once again, this prediction will in no way necessitate a resort to mental-kind terms – we are still firmly in the realm of the objectively explicable – and so again with the addition of a third neuron, and a fourth, and a fifth, etc. With each successive neuronal addition, the behaviours of the whole system will become increasingly complex, but without causing any explanatory or predictive troubles (I have, after all, stipulated that this is a complete neuroscience). Iteration by iteration, we build a neuronal assembly ever more similar to our own human brains until, soon enough, we will have succeeded in building one that is physically and functionally identical to an ordinary human brain, which may then be hooked up in the right way to a (presumably custom-built) body. The zombie is ready.
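
To make the flavour of this build-up concrete, here is a toy sketch in Python. The model – a crude, fully deterministic integrate-and-fire unit wired into a feed-forward chain – and every name in it are my own illustrative inventions, not a rendering of the imagined ideal neuroscience; the point is only that, at every stage, stimulus goes in and behaviour comes out without a mental-kind term in sight:

```python
# Toy stand-in for a 'complete neuroscience': every unit is fully
# deterministic, so the assembly's behaviour is exactly predictable
# at every stage of construction. All names and parameters here are
# illustrative inventions.

class Neuron:
    """A crude leaky integrate-and-fire unit."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0      # membrane potential, arbitrary units
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained per step

    def step(self, stimulus):
        """Apply a stimulus; return True iff the neuron fires."""
        self.potential = self.potential * self.leak + stimulus
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True
        return False


class Assembly:
    """Neurons wired in a simple feed-forward chain, added one at a time."""

    def __init__(self):
        self.neurons = []

    def add_neuron(self, **kwargs):
        self.neurons.append(Neuron(**kwargs))

    def step(self, external_stimulus):
        """Propagate one stimulus down the chain; return the firing pattern."""
        pattern = []
        signal = external_stimulus
        for neuron in self.neurons:
            fired = neuron.step(signal)
            pattern.append(fired)
            signal = 1.0 if fired else 0.0  # input to the next neuron
        return pattern


# Build the assembly neuron by neuron, as in the thought experiment;
# the behaviour at each stage is perfectly predictable from the rules above.
assembly = Assembly()
for _ in range(5):
    assembly.add_neuron()
    print(assembly.step(external_stimulus=1.0))
```

Adding the ten-billionth neuron differs from adding the second only in bookkeeping; the explanatory vocabulary never changes, which is precisely the intuition the thought experiment trades on.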

I told you so!

This zombie will now start interacting with its environment, and new stimuli will arise naturally from its sensory perception of its surroundings. This will lead to the zombie behaving in complex ways – e.g. using language and planning its vacation to Maui in the fall – based upon the stimuli it receives. Indeed, its behaviour will seem remarkably similar to our own, as we would expect, given that we designed it to be exactly like a neurotypical human being, but for one crucial difference – its behaviours (ranging from drinking when thirsty to the writing of forlorn love songs) may be explained entirely without reference to mental-kind terms. After all, why should we explain these behaviours by reference to such terms when they are just the working-out of neuronal cause and effect, which our ideal neuroscience already accounts for with ease?

To dispense with such terms, however, is to accept some highly unsatisfactory entailments. If our neuroscience really does explain our zombie’s apparently rich range of behaviours, then, because we have built a brain that is physically and functionally indistinguishable from that of a womb-born human being [6], we should use the same theory to explain our own behaviour. And since everything about the zombie may be satisfactorily explained without any need to refer to consciousness, qualia, thought, etc., explanations of our own activities will likewise need make no reference to such things. The trouble is, we emphatically do have subjective experiences and we are conscious. Given this, there are a number of possible moves the physicalist could make:

  1. Give a reductive/eliminative account of phenomena like volition, thought, consciousness, etc. This is, however, precisely what the stipulated ideal neuroscience has done – it has accounted for all higher-order processes in terms of lower-order neurological functioning – and has led only to the present conundrum.
  2. Deny altogether the existence of the referents of mental-kind terms (or, what is the same, insist on their ‘illusoriness’). This strategy preserves the ‘zombic’ quality of our zombie – our mental-kind term-free explanation is then entirely sufficient – but, because our brains are physically and functionally identical to those of the zombie, we necessarily rob ourselves of consciousness, mind, etc.  This, to me, is a major non-starter [7].
  3. Acknowledge the reality of these higher-order phenomena and give a non-reductive account of their emergence from lower-order ones which don’t exhibit mental properties. This preserves the absence of mentality at lower orders of existence, but introduces problems of its own. Notably, it demands an account of the precise degree of complexity at which emergence takes place. Furthermore, ‘emergence’ strikes me as something of a scientific equivalent of ‘and then a miracle happened’.
  4. Abandon physicalism.  There are plenty of acceptable alternatives (dualism, idealism, panpsychism), although they might not be popular in the faculty lounge.

Endnotes:

[1] At this juncture I should say that I am only very generally familiar with Dennett’s take on consciousness. Consciousness Explained is on my to-read list, but I haven’t yet got around to it. If I have said something strictly wrong about Dennett’s position, ignorance is my excuse (albeit a poor one), but if I have got the gist of his position wrong, feel free to take me to task.

[2] He goes on from there to argue for panpsychism, a doctrine with which I shall not concern myself at present (though I find the idea fascinating, if rather counterintuitive and subject to its own problems).

[3] The problems with physicalist accounts of the mind are, as I see them, several, and I should like to do a post dedicated exclusively to them, but this is not the time. NB: I do not discount physicalism in its entirety, only those strains of it which claim already to have provided a complete, adequate explanation of the world and everything in it. Neither the “near enough” physicalism of Jaegwon Kim nor Colin McGinn’s physicalist ‘mysterianism’ strikes me as problematic.

[4] Necessarily, the same holds in the lab environment. This is, of course, a highly idealized science, but I’m doing philosophy, so I’m able to stipulate anything I wish in order to explore our intuitions. If magical miniature unicorns could do the trick, that would be fair game – so too with scientific theories that require prohibitively complex computations.

[5] Or, rather, biomechanically or biochemically. Or, for that matter, biophysically (especially if quantum minds are something one finds appealing).

[6] I am assuming here that the causal histories of our zombie brain vs. an ordinary human brain will not be relevant, at least insofar as consciousness is concerned – I think it likely that so long as the brain is up and running it should not matter whether it was built in a petri dish or a mother’s belly.  Of course, the different causal histories almost certainly would be of relevance to matters such as personality or learned skills (to name two).

[7] I can’t fathom how consciousness (conscious experience) could be an illusion. For this to be so, we would have to experience the illusion of having experience. The idea is so obviously self-defeating and crazy that I wonder why intelligent people go in for it.