Was David Hume Enlightened?!

Hume needed all that fat to fuel his ginormous brain.

So, I’ve been sitting meditation for a long while.  It’s an interesting pursuit – the more one tries to just stay with whatever is here now, the more strange things seem to pop up.  Anyway, I was thinking about Hume today because of something that made itself so blindingly obvious during my meditation practice that I couldn’t help but make the connection.  Basically, whenever you have a sensory input (say, a fly passes through your field of vision), that input will be followed almost immediately by an involuntary mental reproduction of that sight-event.  This reproduction comes hard on the heels of the original sensory input and is noticeably different in ‘feel’ from a memory of the same event after the fact.

Now, you can’t force yourself to notice this – indeed, trying to force it will either entirely prevent it from occurring or will cause too much mental noise to allow you to notice it (I’m not sure which it is) – but it definitely happens.  And now I think I understand where Hume got his notion about ideas.  Or at least I think I do – my suspicion is that he was up to some sort of practice that would today be recognized as meditation (granted, he probably didn’t sit full-lotus).  Which makes me wonder – where in God’s name did he get the idea to do that?  For goodness sake, he even appears to have figured out anatta!

And so I’m seriously freaked – was David Hume… enlightened?


Building Better Zombies

I’m warning you…

David Chalmers relies in no small way upon his so-called ‘philosophical zombies’ – entities that are like human beings in every respect, physical and psychological, the only teeny, tiny difference being that they lack conscious experience – to make his point about consciousness being a basic property, i.e. that it is picked out by a mental-kind term which can neither be functionalized nor shown to emerge from some set of physical states of affairs. However, his example doesn’t (and shouldn’t) convince anyone who is not already friendly towards the sorts of consciousness-views offered by the various dualisms and panpsychisms floating about. To pick but one example, Daniel Dennett doesn’t think that Chalmers’ thought experiment does the trick (and is, in any case, very hostile to the proposition that there may be any mental properties which are not emergent from/reducible to physical facts/happenings [1]). And for what it is worth, I don’t think Chalmers’ thought experiment is very compelling either, even though I am more open than Dennett is to possibilities beyond run-of-the-mill reductive/eliminativist physicalist doctrines.

Why should I say this? Chalmers uses his zombies thus:

  1. It is conceivable that there might be some possible world wherein there
    are creatures with brains and psychologies such as our own, which are
    functionally identical to our own, but which are lacking conscious experience.
  2. Since it is conceivable that such beings could exist, consciousness is not an emergent property of physical systems assembled (physically or functionally) like our brains, otherwise the zombies in question would be conscious as well.
  3. As such, we may conclude that consciousness is neither simply a product of physical systems arranged in the right way nor functional systems of the right sort. [2]

Obviously, this is a very simplified presentation of his argument (or at least, of the version of his argument with which I am familiar), but it captures something of its thrust. As should be apparent, it is the second premise that causes the argument to fail – simply because something implies no logical contradiction, it does not follow that it is actually possible (in this world or any other). It may well be that in actuality any number of functionally appropriate systems (of which human brains are but one example) will be conscious and nothing else besides – which, not incidentally, is precisely the physicalist position promoted by fellows like Dennett.

The weakness of the argument is unfortunate because I find the (reductive/eliminativist) physicalist’s explanation of consciousness risible [3]. I suspect that the argument could be reworked in order to appeal directly to and make use of the sorts of intuitions held by physicalists of a Dennettian sort, all while avoiding the use of modal arguments (“in some possible world…”). This would make for a more effective argument against functionalist/physicalist accounts of consciousness since, at the outset, it gives them everything that they say they want but leads to the necessary abandonment of some of it. What follows, then, is my own version of a zombie thought experiment that, although leaning heavily on the intuitions Chalmers is mining with his zombies, (I think) does a better job.

Building Better Zombies:

Imagine for the moment that there existed a complete neuroscience. I mean ‘complete’ in the sense that this theory covers the entire range of properties of neurons, as well as their actions and interactions, in the terms with which ordinary science is comfortable (i.e. objectively observable/measurable properties) – there is no hint of ‘woo’ about it.  Moreover, this theory has been subjected to rigorous testing, is as good as proven, and is able to predict with near perfection what any particular sort of neuron will do whenever any stimuli one would expect to find in its ordinary operational environment (whether biochemical or electrical) are applied to it in said environment [4]. So, were we to have a single neuron placed before us in a petri dish, we should be able to apply any neurotransmitter or electrical charge under any environmental conditions we should choose to subject the cell to, with absolute certainty that it will react in a specific way. This is all well and good, especially since we would not need to make any reference to mental-kind terms (e.g. consciousness, intentionality, representation) – our neuron is simply caused to produce said effect ‘mechanically’, as it were [5]. Indeed, to make any appeal to mental properties at this low level would be to make a claim which no physicalist would suffer gladly – neurons are just too ‘simple’ to be host to intelligence or consciousness.

From this modest beginning, we should be able to attach a second neuron to the first and to apply some stimulus to one or the other and predict with perfect accuracy what each of them will do, considered individually and as a unit. Once again, this prediction will in no way necessitate a resort to mental-kind terms – we are still firmly in the realm of the objectively explicable – and so again with the addition of a third neuron, and a fourth, and a fifth, etc. With each successive neuronal addition, the behaviours of the whole system will become increasingly complex, but without causing any explanatory or predictive troubles (as I have, after all, stipulated that this is a complete neuroscience). On each iteration, we slowly build a neuronal assembly that is increasingly similar to our own human brains until, soon enough, we will have succeeded in building one that is physically and functionally identical to an ordinary human brain which may then be hooked up in the right way to a (presumably custom-built) body. The zombie is ready.
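The stepwise construction above can even be caricatured in code. This is only a toy sketch, not a model of real neurons – the `ToyNeuron` class and its threshold rule are invented purely for illustration – but it shows the relevant point: when each unit is a fully deterministic input/output device, the behaviour of any assembly of them is predictable without remainder, and no mental-kind term appears anywhere in the description.

```python
# Toy caricature of the 'complete neuroscience' thought experiment:
# each 'neuron' is a fully deterministic stimulus/response device,
# so any assembly of them is exactly predictable, with no appeal to
# mental-kind terms anywhere in the description.

class ToyNeuron:
    def __init__(self, threshold):
        self.threshold = threshold  # a stipulated, fully known parameter

    def react(self, stimulus):
        # Deterministic rule: fires (1) if and only if the stimulus
        # meets the threshold.
        return 1 if stimulus >= self.threshold else 0

def assembly_response(neurons, stimulus):
    # Chain the neurons: each one's output feeds the next as input.
    signal = stimulus
    for n in neurons:
        signal = n.react(signal)
    return signal

# Build the assembly one neuron at a time; at every step the whole
# system's behaviour remains perfectly predictable.
chain = [ToyNeuron(threshold=1) for _ in range(5)]
print(assembly_response(chain, 2))  # prints 1
print(assembly_response(chain, 0))  # prints 0
```

However many units we add, the description stays at the level of thresholds and signals – which is exactly the predicament the zombie is meant to dramatize.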

I told you so!

This zombie will now start interacting with its environment and new stimuli will arise naturally from sensory perception of the immediate environment. This will lead to the zombie behaving in complex ways – e.g. using language and planning its vacation to Maui in the fall – based upon the stimuli it receives. Indeed, its behaviour will seem remarkably similar to our own, as we would expect, given that we designed it to be exactly like a neurotypical human being, but for one crucial difference – its behaviours (ranging from thirst to the writing of forlorn love songs) may be explained entirely without reference to mental-kind terms. After all, why should we explain these behaviours by reference to such terms, since they are just the working-out of neuronal cause and effect, which our ideal neuroscience already accounts for with ease?

Not to do so, however, would necessitate accepting some highly unsatisfactory entailments. If our neuroscience really does explain our zombie’s apparently rich range of behaviours, then because we have built a brain that is physically and functionally indistinguishable from that of a womb-born human being [6], we should seek to use the same theory to explain our own behaviour.  Since everything may be satisfactorily explained without any need to refer to consciousness, qualia, thought, etc., then explanations of our own activities will also need to make no reference to such things.  The trouble is, we emphatically do have subjective experiences and we are conscious. Given this, there are a number of possible moves the physicalist could make:

  1. Give a reductive/eliminative account of phenomena like volition, thought, consciousness, etc. This is, however, precisely what the stipulated ideal neuroscience has done – it has accounted for all higher-order processes in terms of lower-order neurological functioning – and has led only to the present conundrum.
  2. Deny altogether the existence of the referents of mental-kind terms (or, what is the same, insist on their ‘illusoriness’). This strategy preserves the ‘zombic’ quality of our zombie – our mental-kind term-free explanation is then entirely sufficient – but, because our brains are physically and functionally identical to those of the zombie, we necessarily rob ourselves of consciousness, mind, etc.  This, to me, is a major non-starter [7].
  3. Acknowledge the reality of these higher-order phenomena and give a non-reductive account of their emergence from lower-order ones which do not exhibit mental properties. This preserves the absence of mentality at lower orders of existence, but introduces problems of its own.  Notably, it requires an account of the precise degree of complexity required for emergence to take place.  Furthermore, ‘emergence’ strikes me as something of a scientific equivalent of ‘and then a miracle happened’.
  4. Abandon physicalism.  There are plenty of acceptable alternatives (dualism, idealism, panpsychism), although they might not be popular in the faculty lounge.


[1] At this juncture I should say that I am only very generally familiar with Dennett’s take on consciousness. Consciousness Explained is on my to-read list, but I haven’t yet got around to it. If I have said something strictly wrong about Dennett’s position, ignorance is my excuse (albeit, a poor one), but if I have got the gist of his position wrong, feel free to take me to task.

[2] He goes on from there to argue for panpsychism, a doctrine with which I shall not concern myself at present (though I find the idea fascinating, if rather counterintuitive and subject to its own problems).

[3] The problems with physicalist accounts of the mind are, to my mind, several and I should like to do a post dedicated exclusively to them, but this is not the time. NB: I do not discount physicalism in its entirety, however, but only those strains of it which claim to have already provided a complete, adequate explanation of the world and everything in it.  Neither the “near enough” physicalism of Jaegwon Kim nor Colin McGinn’s physicalist ‘mysterianism’ strikes me as problematic.

[4] Necessarily, also, in the lab environment. This is, of course, a highly idealized science, but I’m doing philosophy, so I’m able to stipulate anything I wish in order to explore our intuitions. If magical miniature unicorns could do the trick, that would be fair game – so too with scientific theories which require prohibitively complex computations.

[5] Or, rather, biomechanically or biochemically. Or, for that matter, biophysically (especially if quantum minds are something one finds appealing).

[6] I am assuming here that the causal histories of our zombie brain vs. an ordinary human brain will not be relevant, at least insofar as consciousness is concerned – I think it likely that so long as the brain is up and running it should not matter whether it was built in a petri dish or a mother’s belly.  Of course, the different causal histories almost certainly would be of relevance to matters such as personality or learned skills (to name two).

[7] I can’t fathom how it is possible that consciousness (conscious experience) could be an illusion.  For this to be so, it would be necessary that we experience the illusion of having experience.  This idea is so obviously self-defeating and crazy that I wonder why intelligent people go in for it.

Where Is the Mind Located?

Which is it – body in mind or mind in body?

If asked whether the mind is located within the body, most people – most Westerners, at least (I cannot speak for how people from other cultures might experience such things) – would immediately and unhesitatingly say “yes, the mind is located within the body.”  Indeed, it often feels a lot like it is.  I was lying awake last night and it was really apparent in the dark and the silence that my thoughts really did seem to be taking place in the physical space between my ears and behind my eyes.  But this, I know, hasn’t always been the case – other cultures have maintained that thinking happens in other parts of the body (by this they did mean thought, not emotion, which I experience as scattered throughout my body), sometimes even disconnected parts!  The heart was a typical one (and Aristotle thought the brain was an organ for cooling the blood).

There are other times, however, when I have exactly the opposite intuition, when I really do feel like my body is actually inside my mind.  It is a strange feeling and I can’t really describe it because it both is and isn’t a matter of physical/spatial location, but that is what it feels like.  Sometimes I oscillate between these two perceptions, back and forth, without any clear priority given to either.  But when I am ‘body in mind’, my thoughts take on a strange non-locality, are not really anywhere, whereas when I am ‘mind in body’, thoughts definitely occur in my head. So I have two questions for everyone:

  1. Do you experience your mind as being located within your body or do you experience your body as being located within your mind?
  2. Do you also experience your thoughts as being in your skull or do you sometimes have thoughts in your heart or left pinky?

No Moral Properties: Morally Relevant Properties

I do not think it overly incautious to say that most people view morality as something that exists apart from themselves.  I take no position between the various ways in which this is explained (whether as the Will of God, or the law of karma, etc.) but I would be willing to go far enough out on a limb to say (without providing any strong evidence in support of the claim, mind you) that the perception of moral valuations as existing ‘out there’ as properties adhering to acts and objects, just as ‘red’ adheres to a fire-engine, is probably innate, the default setting of the human mind.  While it could be that this perception is accurate (who am I to say?), it strikes me as unlikely.  After all, despite many years – our entire species-history since we attained sapience, in point of fact – of believing in morality, of arguing morality, and of enacting (however poorly) morality, we have failed to achieve any robust species-wide agreement upon the content of morality.  Granted, there are such platitudinous agreements that, for example, we ought not to kill, but when we dig deeper into how different cultures and different individuals within those same cultures understand and operationalize such principles, we find that there exists hardly any common ground at all, even relating to such arguably fundamental positions.  Given our extraordinary successes in expanding human knowledge in other domains (e.g. astronomy, physics, medicine), it seems improbable that we should nevertheless have failed to achieve some degree of success in the moral sphere if, in fact, moral properties are obvious and actually existing features of the external world. [1]

Still, we feel very keenly (at least, most of us do) the strength and pull of morality and moral reasoning and we strive with mixed success to act within the boundaries these define.  Quite obviously, morality is a real phenomenon and we experience it as such.  Equally obviously, if morality is not an aspect of the external world, it will necessarily be a feature of the human mind.  The goodness that we observe in kindness and the evil from which we recoil in cruelty are not properties of the acts themselves, but are valuations that we have made and projected (instantaneously) onto the happenings themselves.  Though the universe may be amoral, we most certainly are not.

Now, perhaps this doesn’t strike some of those reading this as plausible, which is fine – it is not really my intention here to change anyone’s opinion on this count.  Indeed, a good many will be sure to remain unconvinced because they know (or claim to) that it is perfectly obvious that there is a moral order independent of ourselves.  There will be others, however, perhaps fewer, who resist the view because they are unsure that they want to accept what they believe to be its consequences.  After all, if we lose the objective moral order, don’t we thereby fling the doors wide open to relativisms of all sorts, not to mention losing the basis of justification for our own actions? [2]  Such concerns, however, lively as they may be, have much less substance than they appear to, for two reasons.  First, and most importantly, there is no danger whatsoever of collapsing into barbarism and immorality/amorality on an account of morality as non-independently existing – what we take to constitute acting morally might change, but we do and shall retain our moral compass (whichever way it points) until evolution has stripped it from us.  Second, although there may be no such things as moral properties, out there and ready-made to guide our actions, it nevertheless remains the case that there are morally relevant properties of objects and states-of-affairs to which we may turn for grounding our moral reasoning.

In order to determine what properties might count as morally relevant, we will first need to determine what exactly morality concerns itself with.  This, as anyone who has tried can attest, is a maddeningly difficult thing to do, at least to everyone’s satisfaction, but one or two tentative and broad definitions are available.  A minimally acceptable definition of morality obviously has something to do with guiding/constraining our actions, but this is too vague to serve as a definition since it says nothing about the reasons why we may declare certain acts impermissible in the absence of objective moral properties.  There is one family of accounts – one which I find compelling – that claims that morality exists as a way of constraining the acts of members of groups of particular species of social animals in order to bring about and maintain a modus vivendi necessary for their flourishing.  Some will find this to be too restrictive, however, arguing that it misses out on the universality of the moral imperative – we generally, they would claim, extend the sphere of moral concern beyond the (sometimes very) narrow confines of our social groups.  Instead, morality is about acting appropriately toward everything.  There is something to this objection/definition, but I don’t think that it disproves the evolutionary account – I think, rather, that it serves as an interesting expression of the particular moral make-up of the human animal. [3]  I don’t see these two as incompatible, as it is surely possible that our in-group moral sense is, like our intellect, far more powerful than strictly required for our survival and reproductive success and, like our intellect, has been applied to problems beyond those it arose to cope with.  But if this account does replace the other which I have given above, then morality will simply be concerned with the prescription of those actions which are of benefit to others and the proscription of those that are detrimental.
For the time being, then, I shall use this benefit/harm criterion as it arguably is a more broadly applicable standard (e.g. how we deal with a biting mosquito has little if anything to do with social cohesion) and is, in any case, widely assented to, even among those who don’t believe it to be the whole of morality. [4]

In order for some property of an object or situation to be morally relevant, therefore, it must be some property whereby an object (whether this object is the one which possesses the property or is some other thing) of moral concern may be brought to harm or benefit.  Unfortunately, this introduces a further complication into the matter – how do we judge whether an object has been harmed or benefitted?  I propose simply that for any action to count as a harm or benefit, it must have been done to an object which possesses some property whereby it can come to harm or benefit and would (had it the ability) judge itself to have been so affected.  This last bit is critical, since this is what allows moral action to be meaningful and coherent.  For example, we are not harming chimpanzees by refusing to educate them in mathematics since chimps’ natures are such that they do not consider themselves to be harmed by such withholding – indeed, they cannot even understand what it is that we are withholding.  If, however, we were to confine them to cages and refuse to feed them, then we would clearly be doing them harm (they have a nature such that they require food and experience its absence as harmful).  Moreover, it is this own-judgement of benefit/harm that allows us to make moral arguments and appeals against the powerful and the opinions of our peers – but for this own-judgement there could not have been any case against slavery, patriarchy, etc.

From this it follows that the greater the number of properties by which an object can be affected, the greater the moral consideration due that thing.  A rock, for instance, cannot judge itself to have been harmed by anything, so is owed no moral consideration, except perhaps derivatively by being of interest to something to which we do owe moral consideration (by being someone’s property, say).  Conscious entities that feel pain and pleasure will deserve some minimal moral concern, while self-aware entities will deserve yet more.  Social animals will merit much, much more, since they can be affected not only by what happens to themselves but also by what happens to members of their social groups.  Humans, finally, will bear the greatest degree of moral consideration since we can be affected in the greatest number of ways (e.g. by having our rights violated or our plans interfered with).

In any case, it should not be difficult, on inspecting an object, to ascertain which of its properties may be morally relevant.  The fundamental morally relevant property must be consciousness, since without consciousness there can be neither perception nor judgement of harm or benefit.  Emotional attachment, plan-making, and others are potential candidates.  But to compile a list would take much more effort than I am willing to expend on a blog!


[1] Unless, of course, our moral science (please pardon the term) is still in its infancy.  Morality may, on this eventuality, actually be a part of the external world as much as is the sphericity of the Earth.  As was the case with the Earth in days of old, ‘natural’ moral properties may not yet be obvious minus some conceptual equivalent of manned space-flight.

[2] As to why the prospect of relativism should be so troublesome to any but the most rigid of religious fundamentalists, I have to admit that I find myself at a loss.  Surely we can all agree that moral relativism is, quite apart from normative concerns, a descriptive fact of human morality as it manifests in practice?  What further harm can result from acknowledging relativism that doesn’t already obtain from the mere fact of it?

[3] I would be interested to know whether some evolutionary psychologist has addressed the question of why it should be that humans are so readily able to extend our sphere of moral concern so far beyond our most intimate acquaintances – as far as other living species and even, on some occasions, to inanimate objects!

[4] And because, furthermore, one could make the case that many of our apparently non-harm/beneficence matters of moral concern (social cohesion, say) could be derived from harm/benefit considerations, it would just take a lot of work.  How this can be should become apparent later.

The Hard Problem(s) of Minds

The explanatory gap, that is

The 1980s and ’90s proved to be the decades of consciousness studies in academic philosophy of mind.  It was during this period in particular that the matters of what consciousness is, whether it exists, and how it is possible were under serious discussion.  Unfortunately, the discipline seems to have moved on to greener pastures since then, but the questions raised during this period are still of the greatest interest.  Probably the most interesting of these was the ‘hard problem of consciousness’.  The idea here is that consciousness – defined as the very fact of experience, or the ‘what-it-is-like’ to be (and for) an experiencing being – cannot be accounted for by appeal to known facts about the material world.  There is an explanatory gap between our physical theories of the world and our lived experience that no collection of data about electron spin or loop quantum gravity seems able to bridge.

Of course, there are those who deny that there is a hard problem, insisting instead that the trouble only arises in the minds of people already predisposed towards a dualist outlook.  Dennett, the Churchlands, Hardcastle, and others are all of the opinion that there is no problem, that we have all the conceptual and scientific tools necessary to unravel the knot of consciousness, and that refusal to acknowledge this comes from some emotional need to preserve the ‘specialness’ of the mind.  Of course, this is rank psychologizing [1] (and not very compelling, at that) – to say that the Kims or McGinns of the world are predisposed to dualism or trying to save mental ‘specialness’ is a grossly misleading statement that borders on slander.  One could here inject speculations about the sources of some people’s refusal to acknowledge the cogency of the case for there being a hard problem, but I shall restrain myself – I am not here for a brawl.

Coming back to the point – how might we get to the hard problem?  [2]

The Weirdness of the Mental

The mind is a really weird thing.  Everything else in the world appears to exhibit certain publicly available properties like mass or spatio-temporal location.  The mind, however, does not have properties like this.  Where, for example, is its location in space?  It might seem like it is in the head, but if much closer attention is paid, it becomes unclear that this is so – sometimes the head seems to be in the mind.  How much space does the mind take up, is it limited or boundless?  If the mind is physical, why does it seem totally unlike all the other stuff we see around us?

The Possibility of Zombies

‘Zombie’ in this context does not refer to the brain-eating undead monsters of the films – rather, we are speaking here about ‘philosophical zombies’.  This sort of zombie is alive, mild-mannered, and doesn’t want your brains because it has its own.  This kind of zombie might live right next door, have a job, wife, mortgage, and kids, and get really excited when it’s playoff season, and you’d never know it was a zombie.  In fact, the only thing this kind of zombie doesn’t have that the rest of us do is consciousness – unlike ordinary humans, there is nothing that it is like to be a zombie.  The point of this sci-fi thought experiment is to demonstrate that because it is conceivable that there could be highly sophisticated cognitive entities that are without consciousness, it is therefore possible that such things could exist and, this being so, the fact of consciousness is difficult to explain.

Building a Better Zombie

If the zombie example seems unconvincing, perhaps that is because we started with a fully formed zombie.  What if we built one from the ground up?  So we take one neuron and study all its actions under various conditions.  Now, we know that whenever chemical c or electrical pulse e is applied under a given condition, the neuron reacts in some specific way, but we don’t imagine that the neuron is conscious.  So we add a second neuron and hook it up in the right way to the first and then apply our chemical or electrical pulse to the first, which reacts in its way and thus causes the second to react in another way.  We still don’t concede that the two neurons are conscious, whether considered individually or collectively.  Then we add a third neuron, then a fourth, then… rinse and repeat.  Since we know what each neuron will do when acted upon in a specified way, eventually we could build a fully functional human brain.  The trick is, since each additional neuron is without consciousness, and we understand perfectly how the entire set-up produces seemingly intelligent responses based on simple stimuli/response action, we would have no reason to make any appeal to consciousness or emotion or any other feature of the mental in describing our new zombie’s activity – it would all simply be the working out of physical cause/effect.  So how are ordinary conscious humans any different?


Intentionality

This is the final problem that I consider a hard problem (or at least sufficiently closely linked to the hard problem to warrant mention here).  Intentionality is the ‘aboutness’ of mental content.  So, for instance, my ‘cat-thoughts’ are about my pet cat, who is just now trying to climb onto my keyboard.  [3]  But intentionality is tricky for two reasons.  Firstly, material objects are not ‘about’ anything at all.  For example, if I see Jesus on my grilled cheese sandwich, the sandwich is not ‘about’ Jesus; I have just made a bunch of hay about a perceived resemblance.  But if my cat-thoughts just are the states of my brain (or some of its subcomponents), how can they be about my cat any more than the cheese sandwich likeness is about the Lord of Toasts?  Secondly, representation is always of something and to someone, but without consciousness, how can cat-thoughts be represented to anyone (anything)?

These, then, are the ways I think it fruitful to construe the hard problem.  I intend in the future to go much more deeply into each of these subjects and, in the interest of disclosure, I am tentatively favourable to the notion that there is a real problem here.  But I remain more than open to going the other way.


[1] Psychologizing is philosophical bad form, generally.  One ought to deal with one’s opponents’ arguments first – if their arguments are bad or nonsensical, however, then psychologizing may be in order (but only as an error-theory).  By psychologizing their opponents in this way, the individuals listed are in essence saying that one cannot rationally disagree with their general outlook, that disagreement on this point is tantamount to arguing against 2+2=4, or that a thing is not identical to itself.

[2] I have deliberately left what follows sketchy and underdeveloped because I want to leave myself something to write about in the future!

[3] Bad kitty!

Reading Kim’s “Physicalism…” Chapter One

I have begun reading Jaegwon Kim’s Physicalism, or Something Near Enough (Princeton University Press 2005) as part of my self-directed research program on the hard problem of consciousness.  The book seems promising, both for its clarity and readability (a somewhat uncommon virtue among works of analytic philosophy) and for its organization.  It is split into six chapters that are intended to be read as stand-alone essays (indeed, most of these began life as lecture notes) but which together form a cohesive whole.  The book is intended to provide an argument to the effect that a thoroughgoing physicalism is not an appropriate theory to be applied to ‘the mind’, but that a near-total physicalism that leaves room for qualia is necessary and, importantly, good enough.  On with it!

Chapter 1 – “Mental Causation and Consciousness: Our Two Mind-Body Problems.”

Kim argues that there are two serious obstacles in the philosophy of mind that any modern physicalist theory will have to tackle.  The first of these is the matter of mental causation, of how it is possible that “the mind [can] exert its causal powers in a world that is fundamentally physical” (p. 7).  Mental causation, whatever its mechanism, is an important concern for physicalists, Kim claims, for three reasons.  Firstly, our understanding of human agency and our moral practice requires that it be our beliefs and desires that cause our actions, not mere physical happenings.  [1]  Secondly, human knowledge requires mental causation (since it is predicated upon our having been in appropriately causally linked “cognitive relations” with external objects and having reasoned from those relations).  Finally, for psychology to serve as a useful descriptive enterprise, mental states/properties must have real causal efficacy – to say that ‘anger’ caused an action is to posit a real thing with causal powers (9 – 10).

The second obstacle for physicalists is consciousness, specifically the problem of “how [there can] be such a thing… in a physical world, a world consisting ultimately of nothing but bits of matter distributed over space-time behaving in accordance with physical law” (7).  Consciousness seems less immediately troublesome for the physicalist, but is vitally important in certain contexts.  Ethics, for example, makes much of the distinction between those things with consciousness and those without and even the average person finds consciousness to be highly valuable for the access it provides to things like sunsets, flavours, etc.  So consciousness matters rather more than even some philosophers (Dennett) suppose and is not, in any case, explained simply by compiling a list of “psychoneural correlations” (10 – 13).

The Problem of Mental Causation:

Kim believes that a “minimal physicalism” requires supervenience (that is, all physicalisms must include it but are not necessarily identical to it), which he defines as “the claim that what happens in our mental life is wholly dependent on, and determined by, what happens with our bodily processes” (13 – 14).  Supervenience cannot do the work physicalists want it to, however, because of several of their other assumptions, such as the

  1. “[principle of] causal closure of the physical domain.  If a physical event has a cause at t, then it has a physical cause at t” (15), and the
  2. “principle of causal exclusion.  If an event e has a sufficient cause c at t, no event at t distinct from c can be a cause of e (unless this is a genuine case of causal overdetermination)” (17), and the
  3. “principle of determinative/generative exclusion.  If the occurrence of an event e, or an instantiation of a property P, is determined/generated by an event c – causally or otherwise – then e‘s occurrence is not determined/generated by any event wholly distinct from or independent of c – unless this is a genuine case of overdetermination” (17).

These assumptions inexorably lead to problems for mental causation on supervenience theses.  M (a mental event) is thought to cause M’ (mental causation), but M’ instantiates because of/is generated by P’ (the physical property upon which it supervenes).  Only one of M or P’ can be the cause of M’ (by appeal to principle 2 above) and, since P’ is sufficient for the instantiation of M’ (whatever else happened in the past), the only way to resolve this problem is by claiming that M causes M’ by causing P’.  Physicalists should reject this, however, for two reasons.  Firstly, it is an instance of cross-domain causation (of a sort excluded by principle 1).  Secondly, M has its own supervenience-base P that is sufficient to cause M’, since it is sufficient to cause P’.  Here, then, is a case of overdetermination of P’ (by both P and M), but in order to avoid it we must abandon mental causation (else we are just iteratively pushing the problem into an infinite regress).  So we have preserved lawful relations between the mental and the physical, but have robbed the mental of all causation, letting all the action take place at the level of the bases of supervenience (19 – 21).
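The chain of reasoning in this supervenience argument is compact enough to set out schematically.  The following is my own reconstruction of the steps summarized above – the numbering, labels, and notation are mine, not Kim’s:

```latex
% Schematic reconstruction of the supervenience argument (notation mine).
% M, M' are mental events; P, P' are their respective supervenience bases.
\begin{enumerate}
  \item $M$ causes $M'$. \hfill (assumption: mental causation)
  \item $M'$ supervenes on, and is generated by, $P'$. \hfill (mind--body supervenience)
  \item So $M$ can cause $M'$ only by causing $P'$. \hfill (from 2, with exclusion)
  \item $M$ itself supervenes on $P$. \hfill (mind--body supervenience)
  \item $P$ is sufficient to cause $P'$. \hfill (from 3 and 4)
  \item So $P'$ is overdetermined by $P$ and $M$. \hfill (from 3 and 5)
  \item Barring genuine overdetermination, $P$ excludes $M$. \hfill (exclusion, closure)
\end{enumerate}
% Conclusion: the causal work is done at the level of the supervenience
% bases, leaving M without causal efficacy of its own.
```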

The problem of mental causation is summed up for Kim like this:

“Causal efficacy of mental properties is inconsistent with the joint acceptance of the following four claims: (i) physical causal closure, (ii) causal exclusion, (iii) mind-body supervenience, and (iv) mental/physical property dualism – the view that mental properties are irreducible to physical properties” (21 – 22).

And since physicalists cannot reject (i) or (iii) without losing physicalism, and (ii) seems a reasonable metaphysical principle, only (iv) can be given up – reduction of the mental to the physical seems to be in order, but…

Can We Reduce Qualia?

We must be clear about what it is to reduce something.  Kim wants to avoid Nagel’s bridge-law reductionism [2], finding it inapt both for its vacuity (bridge laws could equally accommodate many dualisms) and for reasons he promises to elucidate in a later chapter.  Instead, Kim’s model of the reduction of a mental property (e.g. pain) is as follows.  First, we must functionalize the property in question – that is, identify the role it is thought to play, the mechanisms that bring it about, and its probable consequences.  Then, we may identify its ‘realizers’, the things that are necessary for its manifestation (C-fibers, neural activation patterns, etc.).  Kim addresses the matters of multiple realizability, specific realizers, and the loss of a singular ‘pain’ concept, but none of this is particularly germane here – the upshot is that, should functional reduction work, we will have saved the causal powers of pain’s realizers, and with them mental causal efficacy.  It should be noted, however, that this will only work if mental properties actually are functionally reducible – so are they (22 – 27)?

Kim says that intentional and cognitive properties are, but phenomenological properties (qualia) are not.  To make this case there is no need to resort to “anything as esoteric and controversial as the ‘zombie’ hypothesis much discussed [in philosophy] recently” – rather, all that is required is the modest metaphysical prospect of qualia inversion (to be discussed later in his book) (27).


[1] Kim does not elucidate why this should concern us, familiar as he presumes his audience to be with the philosophical literature, but the broad idea is probably something like this – if what one does happens without reference to things like one’s desires or beliefs, then one’s actions no longer seem to be done for reasons, and, if not done for reasons, then they are no more meaningful, praiseworthy, or blameworthy than a tree falling in the woods.  Moral responsibility cannot obtain, since it is dependent upon the reasons-responsiveness of people and, on this conception, people would not be reasons-responsive.

[2] Wherein a property has been reduced when it is appropriately connected with some “nomologically coextensive property [or properties] in the base domain” by means of these bridge laws (22 – 23).