Was David Hume Enlightened?!


So, I’ve been sitting meditation for a long while.  It’s an interesting pursuit – the more one tries to just stay with whatever is here now, the more strange things seem to pop up.  Anyway, I was thinking about Hume today because of something that made itself so blindingly obvious during my meditation practice that I couldn’t help but make the connection.  Basically, whenever you have a sensory input (say, a fly passes through your field of vision), that input will be followed extremely shortly thereafter by an involuntary mental reproduction of that sight-event.  This comes almost immediately after the original sensory input and is noticeably different in ‘feel’ from a memory of the same event recalled after the fact.

Now, you can’t force yourself to notice this – indeed, trying to force it will either prevent it from occurring entirely or will create too much mental noise for you to notice it (I’m not sure which) – but it definitely happens.  And now I think I understand where Hume got his notion about ideas.  Or at least I think I do – my suspicion is that he was up to some sort of practice that today would be recognized as meditation (granted, he probably didn’t sit full-lotus).  Which makes me wonder – where in God’s name did he get the idea to do that?  For goodness’ sake, he even appears to have figured out anatta!

And so I’m seriously freaked – was David Hume… enlightened?

Building Better Zombies


David Chalmers relies in no small way upon his so-called ‘philosophical zombies’ – entities that are like human beings in every respect, physical and psychological, the only teeny, tiny difference being that they are lacking conscious experience – to make his point about consciousness being a basic property, i.e. that it is a mental-kind term that may neither be functionalized nor emergent from some set of physical states of affairs. However, his example doesn’t (and shouldn’t) convince anyone who is not already friendly towards the sorts of consciousness-views offered by the various dualisms and panpsychisms floating about. To pick but one example, Daniel Dennett doesn’t think that Chalmers’ thought experiment does the trick (and is, in any case, very hostile to the proposition that there may be any mental properties which are not emergent from/reducible to physical facts/happenings [1]). And for what it is worth, I don’t think Chalmers’ thought experiment is very compelling either, even though I am more open than is Dennett to possibilities beyond run-of-the-mill reductive/eliminativist physicalist doctrines.

Why should I say this? Chalmers uses his zombies thus:

  1. It is conceivable that there might be some possible world wherein there are creatures with brains and psychologies such as our own, which are functionally identical to our own, but which are lacking conscious experience.
  2. Since it is conceivable that such beings could exist, consciousness is not an emergent property of physical systems assembled (physically or functionally) like our brains, otherwise the zombies in question would be conscious as well.
  3. As such, we may conclude that consciousness is neither simply a product of physical systems arranged in the right way nor functional systems of the right sort. [2]

Obviously, this is a very simplified presentation of his argument (or at least, of the version of his argument with which I am familiar), but it captures something of its thrust. As should be apparent, it is the second premise that causes the argument to fail – simply because something implies no logical contradiction, it does not follow that it is actually possible (in this world or any other). It may well be that in actuality any number of functionally appropriate systems (of which human brains are but one example) will be conscious and nothing else besides – which, not incidentally, is precisely the physicalist position promoted by fellows like Dennett.

The weakness of the argument is unfortunate because I find the (reductive/eliminativist) physicalist’s explanation of consciousness risible [3]. I suspect that the argument could be reworked in order to appeal directly to and make use of the sorts of intuitions held by physicalists of a Dennettian sort, all while avoiding the use of modal arguments (“in some possible world…”). This would make for a more effective argument against functionalist/physicalist accounts of consciousness since, at the outset, it gives them everything that they say they want but leads to the necessary abandonment of some of it. What follows, then, is my own version of a zombie thought experiment that, although leaning heavily on the intuitions Chalmers is mining with his zombies, (I think) does a better job.

Building Better Zombies:

Imagine for the moment that there existed a complete neuroscience. I mean ‘complete’ in the sense that this theory ranges over the entire range of properties of neurons as well as their actions and interactions in the terms with which ordinary science is comfortable (i.e. objectively observable/measurable properties) – there is no hint of ‘woo’ about it.  Moreover, this theory has been subjected to rigorous testing, is as good as proven, and is able to predict with near perfection what any particular sort of neuron will do whenever any stimuli one would expect to find in its ordinary operational environment (whether biochemical or electrical) are applied to it in said environment [4]. So, were we to have a single neuron placed before us in a petri dish, we should be able to apply any neurotransmitter or electrical charge under any environmental conditions we should choose to subject the cell to, with absolute certainty that it will react in a specific way. This is all well and good, especially since we would not need to make any reference to mental-kind terms (e.g. consciousness, intentionality, representation) – our neuron is simply caused to produce said effect ‘mechanically’, as it were [5]. Indeed, to make any appeal to mental properties at this low level would be to make a claim which no physicalist would suffer gladly – neurons are just too ‘simple’ to be host to intelligence or consciousness.

From this modest beginning, we should be able to attach a second neuron to the first and to apply some stimulus to one or the other and predict with perfect accuracy what each of them will do, considered individually and as a unit. Once again, this prediction will in no way necessitate a resort to mental-kind terms – we are still firmly in the realm of the objectively explicable – and so again with the addition of a third neuron, and a fourth, and a fifth, etc. With each successive neuronal addition, the behaviours of the whole system will become increasingly complex, but without causing any explanatory or predictive troubles (as I have, after all, stipulated that this is a complete neuroscience). On each iteration, we slowly build a neuronal assembly that is increasingly similar to our own human brains until, soon enough, we will have succeeded in building one that is physically and functionally identical to an ordinary human brain which may then be hooked up in the right way to a (presumably custom-built) body. The zombie is ready.


This zombie will now start interacting with its environment and new stimuli will arise naturally from sensory perception of the immediate environment. This will lead to the zombie behaving in complex ways – e.g. using language and planning its vacation to Maui in the fall – based upon the stimuli it receives. Indeed, its behaviour will seem remarkably similar to our own, as we would expect, given that we designed it to be exactly like a neurotypical human being, but for one crucial difference – its behaviours (ranging from thirst to the writing of forlorn love songs) may be explained entirely without reference to mental-kind terms. After all, why should we explain these behaviours by reference to such terms, since they are just the working-out of neuronal cause and effect, which our ideal neuroscience already accounts for with ease?

To dispense with such terms, however, would necessitate accepting some deeply unsatisfactory entailments. If our neuroscience really does explain our zombie’s apparently rich range of behaviours, then, because we have built a brain that is physically and functionally indistinguishable from that of a womb-born human being [6], we should seek to use the same theory to explain our own behaviour.  Since everything may be satisfactorily explained without reference to consciousness, qualia, thought, etc., explanations of our own activities will likewise need to make no reference to such things.  The trouble is, we emphatically do have subjective experiences and we are conscious. Given this, there are a number of possible moves the physicalist could make:

  1. Give a reductive/eliminative account of phenomena like volition, thought, consciousness, etc. This is, however, precisely what the stipulated ideal neuroscience has done – it has accounted for all higher-order processes in terms of lower-order neurological functioning – and has led only to the present conundrum.
  2. Deny altogether the existence of the referents of mental-kind terms (or, what is the same, insist on their ‘illusoriness’). This strategy preserves the ‘zombic’ quality of our zombie – our mental-kind term-free explanation is then entirely sufficient – but, because our brains are physically and functionally identical to those of the zombie, we necessarily rob ourselves of consciousness, mind, etc.  This, to me, is a major non-starter [7].
  3. Acknowledge the reality of these higher-order phenomena and give a non-reductive account of their emergence from lower-order ones which don’t exhibit mental properties. This preserves the absence of mentality at lower orders of existence, but introduces problems of its own.  Notably, it requires an account of the precise degree of complexity required for emergence to take place.  Furthermore, ‘emergence’ strikes me as something of a scientific equivalent of ‘and then a miracle happened’.
  4. Abandon physicalism.  There are plenty of acceptable alternatives (dualism, idealism, panpsychism), although they might not be popular in the faculty lounge.


[1] At this juncture I should say that I am only very generally familiar with Dennett’s take on consciousness. Consciousness Explained is on my to-read list, but I haven’t yet got around to it. If I have said something strictly wrong about Dennett’s position, ignorance is my excuse (albeit, a poor one), but if I have got the gist of his position wrong, feel free to take me to task.

[2] He goes on from there to argue for panpsychism, a doctrine with which I shall not concern myself at present (though I find the idea fascinating, if rather counterintuitive and subject to its own problems).

[3] The problems with physicalist accounts of the mind are, to my mind, several and I should like to do a post dedicated exclusively to them, but this is not the time. NB: I do not discount physicalism in its entirety, however, but only those strains of it which claim to already have provided a complete, adequate explanation of the world and everything in it.  The “near enough” physicalism of Jaegwon Kim or Colin McGinn’s physicalist ‘mysterianism’ do not strike me as problematic.

[4] Necessarily, also, in the lab environment. This is, of course, a highly idealized science, but I’m doing philosophy, so I’m able to stipulate anything I wish in order to explore our intuitions. If magical miniature unicorns could do the trick, that would be fair game – so too with scientific theories which require prohibitively complex computations.

[5] Or, rather, biomechanically or biochemically. Or, for that matter, biophysically (especially if quantum minds are something one finds appealing).

[6] I am assuming here that the causal histories of our zombie brain vs. an ordinary human brain will not be relevant, at least insofar as consciousness is concerned – I think it likely that so long as the brain is up and running it should not matter whether it was built in a petri dish or a mother’s belly.  Of course, the different causal histories almost certainly would be of relevance to matters such as personality or learned skills (to name two).

[7] I can’t fathom how it is possible that consciousness (conscious experience) could be an illusion.  For this to be so, it would be necessary that we experience the illusion of having experience.  This idea is so obviously self-defeating and crazy that I wonder why intelligent people go in for it.

Where Is the Mind Located?

Which is it – body in mind or mind in body?

If asked whether the mind is located within the body, most people – most Westerners, at least (I cannot speak for how people from other cultures might experience such things) – would immediately and unhesitatingly say “yes, the mind is located within the body.”  Indeed, it often feels a lot like it is.  I was lying awake last night and it was really apparent in the dark and the silence that my thoughts really did seem to be taking place in the physical space between my ears and behind my eyes.  But this, I know, hasn’t always been the case – other cultures have maintained that thinking happens in other parts of the body (by this they did mean thought, not emotion, which I experience as scattered throughout my body), sometimes even disconnected parts!  The heart was a typical one (and Aristotle thought the brain was an organ for cooling the blood).

There are other times, however, when I have exactly the opposite intuition, when I really do feel like my body is actually inside my mind.  It is a strange feeling and I can’t really describe it because it both is and isn’t a matter of physical/spatial location, but that is what it feels like.  Sometimes I oscillate between these two perceptions, back and forth, without any clear priority given to either.  But when I am ‘body in mind’, my thoughts take on a strange non-locality, are not really anywhere, whereas when I am ‘mind in body’, thoughts definitely occur in my head. So I have two questions for everyone:

  1. Do you experience your mind as being located within your body or do you experience your body as being located within your mind?
  2. Do you also experience your thoughts as being in your skull or do you sometimes have thoughts in your heart or left pinky?

Gee Whiz!: Extensive Exuberance and Cognitive Confabulation


Andy Clark and David Chalmers present a view of the mind in their appropriately titled article, The Extended Mind, that is, in a word, incredible.  It is my aim in this paper to demonstrate why one should find their view to be so, in terms both of its functionalist foundations and for its strongly counter-intuitive consequences (particularly as these concern personal identity).  Before I can present my criticism, however, the view itself must be presented.

Put succinctly, Clark and Chalmers’ view is that cognition and the mind literally extend beyond the boundaries of our physical bodies and out into the surrounding environment – as they put it, “[c]ognitive processes ain’t (all) in the head!”  One may, at this juncture, be tempted simply to proclaim their insanity and leave it at that (I certainly am), but the pair has a reasonable-seeming set of conditions that, when satisfied, makes their view much less obviously crazy.  The first condition states that “[i]f, as we confront some task, a part of the world functions as a process which, were it done in the head, [sic] we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is… part of the cognitive process.”  However, it is not simply the presence of something outside the skull that performs a cognitive function employed by another entity1 that makes the external thing a part of the entity’s mind.  In order for this to be so, a second condition must be met – if any feature of the external world is to be considered part of a mind, then it must be integrated with other components (one of which might be, for instance, a human mind as ordinarily understood) into a ‘coupled system’.  These coupled systems are characterized by dynamic, two-way interactions among their constituent parts, all of which “play an active causal role” (Clark and Chalmers 8).  These are, of course, functionalist criteria – whatever plays a certain functionally defined role for a system just is whatever that role picks out (e.g. pain) (Putnam 161-163).  Examples help to make clear how this might work, and the dynamic duo is kind enough to indulge their readership with two.

The first is a study conducted on the mental rotation of objects by human subjects with and without the aid of Tetris.2  The upshot of the study was that although players are capable of altering the positions and rotations of the game’s variously shaped blocks, when the game itself was used to do these tasks at the behest and on behalf of the human player, the speed at which these tasks were accomplished improved dramatically.  This is all to say that, insofar as a human subject of the study was instructed to manipulate shapes and the Tetris program was used to facilitate that task, the program itself became part of the subject’s cognition (Clark and Chalmers 7-8).

The second example is a story about two people, Inga and Otto, who want to visit the Museum of Modern Art.  Inga is a neurotypical human being and so, when she decides that she wants to go to MOMA, she simply remembers (perhaps with a little effort) where it is located (53rd Street) and is off on her merry way.  Otto, too, wants to go to MOMA, but it is not so simple for him – he (sadly) has a strange disorder that prohibits him from remembering information like telephone numbers or addresses.  Fortunately, he has a notebook that he uses to store just this sort of information, so when the urge to go strikes him, he looks up the address and then is also off on his merry way.  Clark and Chalmers assert on the basis of this example that both Inga and Otto have the belief that MOMA is located on 53rd Street – the only difference between the two cases being that Inga’s belief is stored internally and Otto’s is stored ‘externally’ in his notebook (Clark and Chalmers 12-13).  Not to belabour the point, but the externality of the belief that is referred to here must be interpreted as ‘physically external to the organism Otto’ and not ‘external to Otto’s mind’, since the point that Clark and Chalmers are trying to make precisely is that the belief is still internal to Otto’s mind, never mind where it is physically stored.

If these examples prove Clark and Chalmers’ view right, then they do so by demonstration that the functionalist criteria that the view rests upon can be satisfied by external objects.  On first glance, they appear to do just this, but looks can be deceiving.  First, there is the matter of the functional roles these facets of the external world are supposed to play in the examples.  Secondly, there is the question of whether these external features of the world do in fact form dynamically coupled systems with human agents.

Regarding the Tetris case: although the program is doing some of the shape-rotating for the subject, it’s not clear that when she uses the program she is any more engaged in the mental rotation of shapes – instead, it seems she has shifted to doing something rather like manipulating physical blocks.  The key to understanding this, I think, is that she is only aware of the blocks as ‘objects’ in her visual field (one of the ways we are aware of physical objects we can’t touch) and that when she presses the ‘rotate’ button, the program represents them as reoriented in physical space.  It is unclear to me how this is substantially different from moving physical shapes with her own two hands.  On this analysis, the program actually satisfies the functionally defined role of physical manipulation and not that of mental rotation.  Moreover, if I am correct about this, then the seemingly ‘dynamic’ mutual causal interaction no longer seems so dynamic.  This is so because the subject is relating to the program as though she were relating to actual physical objects instead of to a dynamic computational mechanism – and to physical objects specifically of the sort that do not initiate causal sequences on their own.


Then there is the matter of Otto and his notebook.  Clark and Chalmers insist that Otto, just like Inga, has the belief that MOMA is on 53rd, only that his belief resides in what is written in his notebook.  What seems important about this example, however, is that the notebook stands in for Otto’s memory, not his beliefs3, so the question is really whether what is written in the notebook counts as memories – that is, does it perform the functionally defined role of memories?  On the one hand, it is stored information authored by Otto, so the notebook actually seems like a good candidate for memory.  However, there are some common counter-examples that belie this appearance.  For example, if one were to unearth a piece of work one evidently did but has no recollection of (say, old papers from grade school), it would seem incorrect to say that, in reading it, one is ‘remembering’ what one wrote – it seems more accurate to say that one is ‘learning’ what one wrote.  This example exactly parallels Otto’s, since Otto has no recollection whatsoever of what he has written and (presumably) only knows that he is its author precisely because it is in his notebook.  So there is something strange about the contents of the notebook being memories.

Furthermore, there is the problem of whether Otto and his notebook truly comprise a dynamic coupled system.  Again, it is not at all clear that they do – especially since all the obvious causal activity comes from Otto’s side, whether that means his writing in it, searching through it to find pertinent information, etc.  Clark and Chalmers might respond that, nevertheless, what is written in the book causes some effect or other in Otto’s brain, which effect plays the role of ‘memory’ and forms the basis of statements about belief.  However, what is written in the notebook is mere ink on paper and requires interpretation before it can do anything like direct Otto to 53rd Street.  Thus for human infants, the illiterate, non-English speakers, and animals (to name a few) the notebook and its contents would not have this sort of causal efficacy.  The question seems to be whether the effect ostensibly ‘caused’ by what is written in the notebook can really be substantially attributed to it in light of the fact that practically all the action is taking place within Otto’s brain.  Once the words have been taken in and fed to the brain’s interpretive faculties (via the visual processing centres, etc.) the notebook has no further essential causal role to play in Otto’s ‘remembrance’.  I think that, while it is necessary to have causal inputs to have meaningful engagement with the external world, it makes little sense to say that the external world becomes a single thing with any agent.

This last brings me, finally, to the view’s consequences.  Leaving aside the matter of whether the functionalist criteria give us what Clark and Chalmers suppose, if it is granted that they do, truly bizarre results emerge.  Firstly, say that Otto and the notebook have formed a dynamically coupled system and that the notebook is serving as his memory.  This raises an important question – who exactly is ‘remembering’?  It is unclear whether it is Otto, the notebook, or the whole system that is ‘remembering’ MOMA’s location, or which has beliefs about it.  It does not follow from Otto’s being a necessary part of the cognitive system that he has the memory, just as it does not follow from the eyes’ being a necessary part of the visual system that they see anything at all.


Secondly, the view makes a complete hash of personal identity.  If someone is using a search engine, the computer will have become part of his mind, until he stops using the computer and moves on to playing music (say), at which point a musical instrument might well count as part of his mind.  There no longer remains a clear or meaningful boundary that delimits self from not-self on this view, but experientially it does not seem that way – he and the computer very much ‘feel’ like different things to him.  Then there are cases where two or more minded beings could plausibly be said to be dynamically coupled systems – say, in a predator-prey relationship.  The fox and the field mouse are definitely mutually causally efficacious and dynamically so.  Furthermore, they could plausibly be construed as playing a functionally defined mental role for the other (‘predator’ or ‘prey’).  If this works, it has two very strange consequences.  One, the predator and prey actually don’t have any clearly distinct identities, but sort of bleed into one another.  Two, part of the prey’s own mind is trying to kill it!  Obviously, these are not good consequences for the view.

In this paper, I presented Andy Clark and David Chalmers’ view that cognition and the mind literally extend into space.  I explored their functionalist account of how this might work and attempted to demonstrate that its criteria cannot actually be met by such external features of the world.  I then offered some considerations of the counter-intuitive consequences the view would have, were it correct.  On the basis of these arguments, I believe I have proven true my thesis that their position is incredible.


1 Although it is not discussed by Clark and Chalmers, it is not obvious that there is any reason why some other species of animal could not also have its mind or cognitive states extend into the environment.  Indeed, it is probable that, should the view pan out, there are species known to us for whom this is the case.

2 Tetris is a popular video game in which the player “arranges the blocks that fall endlessly from up above.”  See: http://www.youtube.com/watch?v=hWTFG3J1CP8

3 I must admit some perplexity as to why they decided to focus on Otto’s beliefs.  In more cynical moments, I suspect it is because beliefs are slippery things to functionally define in a way that memories are not and, as such, allow lots of problems to go unnoticed.

Memory and Symbolic Thought

I followed some links on Sabio’s blog and found my way over to this page of beliefs that people once had, but no longer do.  Pretty interesting reading.  What caught my attention most was the change of belief by Joseph LeDoux, a neuroscientist, who went from thinking that memories are stored once and thereafter merely retrieved to thinking that they are regenerated anew each time they are called for.  I have heard this elsewhere, but it reminds me of something else that I have thought about (or read/been told about, but cannot remember where).

Hardly anyone remembers anything from before about the age of four.  What is also interesting is that this is right around the time that language skills are coming into their own (if there are any developmental psychologists out there, please feel free to correct me/supplement this).  I have a suspicion that it is actually our ability to use language (symbolic thought) that gives us the ability to ‘remember’ as much as we do.  So while non-linguistic animals might have an experience and need to store reams of data about the particular qualities of that experience, we are able to store symbolic instructions that may be used later to reconstruct the event for us from a much smaller set of stored sensory details.  I find this a fascinating notion (and the implications are neat).
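Purely as an illustrative analogy (mine, not anything from the research mentioned above, and certainly not a model of actual neural storage), the difference between storing raw experiential data and storing a symbolic recipe for regenerating it on demand can be sketched in a few lines of Python; all the names here are invented for the sketch:

```python
# Analogy only: "raw" memory keeps every sensory sample verbatim,
# while "symbolic" memory keeps a compact rule plus a procedure
# for reconstructing the experience when it is called for.

# Raw storage: reams of data, one record per moment of the experience.
raw_memory = [("red", x, x * 2) for x in range(1000)]

# Symbolic storage: a short description and a generative rule.
symbolic_memory = {"colour": "red", "rule": lambda x: (x, x * 2), "n": 1000}

def reconstruct(mem):
    """Regenerate the full experience from the stored symbolic recipe."""
    return [(mem["colour"], *mem["rule"](x)) for x in range(mem["n"])]

# The reconstruction matches the raw record exactly, yet the symbolic
# form stores a rule rather than a transcript -- far less to keep around.
assert reconstruct(symbolic_memory) == raw_memory
```

On this analogy, LeDoux-style regenerated memory looks less like replaying a recording and more like rerunning `reconstruct` each time, which also suggests why such reconstructions can drift.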

Just a thought.

Essay: On the Impression of Time


What you will find below is one of the short papers written for the Hume seminar I took last year.  I think it safe to finally start posting essays without danger of being kicked out of school.  So without further comment:

 On the Impression of Time

In this paper I shall investigate the explanation Hume gives in A Treatise of Human Nature of how the human mind comes to perceive, in the changing series of its perceptions, a flowing of time, as discussed in “Of the other qualities of our ideas of space and time.”  I will provide reasons for believing that this view is untenable as it stands.  I will also argue that his account of time cannot be rescued, for the best available means of doing so would have significant knock-on effects on his general understanding of perception, effects which are contrary to experience.  Interestingly, any successful revision to his account of the perception of time will likewise have effects upon his understanding of perception.

For Hume, “[t]he idea of time [is]… deriv’d from the succession of our perceptions of every kind, ideas as well as impressions,” and without this “succession of ideas and impressions… [it would not be] possible for time alone ever to make its appearance.”  There are several reasons to suppose this plausible.  Firstly, Hume points out that for a man who is asleep there is no experience of time, nor is there for one who is “strongly occupy’d with one thought,” though in this latter case he actually means that “according as his perceptions succeed each other with greater or less rapidity, the same duration appears longer or shorter to his imagination…”  Further, Hume illustrates how, when we spin a burning coal, we are able only to perceive a ring of fire as a static object, and perceive no change in it and therefore no ‘time’.

Hume hastens to point out, however, that “[t]he idea of time is not deriv’d from a particular impression mix’d up with the others, and plainly distinguishable from them; but arises altogether from the manner, in which impressions appear to the mind, without making one of the number.”  Our experience of time is not something over and above our experience of the impressions (and the ideas of these impressions) themselves; rather, it is the collection of these that furnishes the experience of time.  He illustrates this with the example of five notes played in succession on a flute: the notes are played and they are perceived (as impressions and ideas), and there is no sixth idea or impression of reflection that constitutes the experience of time.  The only thing of importance here is, as Hume emphasizes, the manner in which they arise: “here it only takes note of the manner [emphasis original], in which the different sounds make their appearance…”

Is this a satisfactory account of how we come to have the impression (and idea) of time?  In its broadest outlines this account certainly seems to accord with how humans experience the world: I have never known a feeling of time while in a deep (i.e. dreamless) sleep, and everyone has experienced the ‘timelessness’ of a wandering mind.  But accounting for these common observations is a low hurdle for any explanation of human temporal perception to clear; the real difficulties for Hume lie in the technical aspects of his theory.  Since Hume is building upon the account of perception that he provided at the beginning of the Treatise, the perception of time must be consonant with that account.  So first I shall grant his claim that without a continually changing series of perceptions, whether of impressions or ideas, there could be no experience of time, since this seems a plausible point.

The objection is as follows.  Hume is insistent that there is no idea or impression over and above the series of perceptions that may be identified as a separate impression of time.  If this point holds then, when hearing the five notes of the flute, I can only perceive a particular impression of a note (or the idea of that impression) at any given instant.  If this is true, then would it not appear to my perceptual faculty that, at any given instant, whatever it is that is being perceived would appear to be an unchanging object?  A reason to think this might be the case is that while perceiving that impression I am denied, by Hume’s account, any impression of reflection combining the ideas of a previously heard note with the idea of the present one.  If this is the case, I can have no recollection or knowledge of any previous impression(s) or idea(s) to indicate that the current impression has not always been.  Without such an impression of reflection it seems to me that change could not be perceived; that in the next instant there would be the perception of a different idea or impression does not resolve the matter, since that too would have the appearance of an unchanging object for the same reasons as given above.  And since, for Hume, the perception of time is a function of the perception of change, then without any change being perceptible, time must also be unable to appear to us.

If anyone should wish to rescue Hume’s account from this difficulty, she would necessarily be confined to the use of our original ideas and impressions of the notes, since the impressions of reflection have been ruled out due to their derivation from our ideas (1.1.2).  How might such a defense be made?  A possible solution might be to allow a ‘piling-up’ of our impressions and ideas, such that their durations overlap and are perceived simultaneously; there would then be awareness of change without any need for impressions of reflection.  Although Hume does not address whether our perceptions persist for durations or whether they may be so layered, it is plain that if this attempt to save his account of temporal perception succeeds, it will do so by virtue of how it emends his account of basic perception.  If the manoeuvre fails, it will fail precisely because humans are not in actuality such inveterate multi-taskers.

While some would no doubt disagree, upon careful self-reflection, I cannot find any evidence that I am capable of simultaneous perception – it appears to me that I can only perceive one thing at a time (although what is perceived changes extremely quickly).  Since it seems that I am not actually able to perceive more than one idea or impression at a time, I must conclude that this method of layering perceptions cannot be a valid means of rescuing Hume’s position.  Consequently, I believe that his account requires revision.

It would be a simple matter to make his theory of the perception of time consonant with both his general theory of perception and our lived experience, but it requires affirming just what he denies – namely, that the perception of time is a species of impression of reflection.  On this view, the perception of time is a complex idea that compares current impressions with the ideas of previous impressions, noting the relative vivacity of each and their progressive diminishment in vivacity as new impressions arise.  Thus, the perception of time does not arise, without remainder, simply from an experience of the manner in which perceptions arise.  Happily, this hypothesis actually helps explain how it is that one who is lost in thought experiences time more slowly than otherwise – the sustained focus upon the thought reduces the number of other impressions crowding upon the mind, so the vivacity of the idea of the thought competes with fewer other ideas and therefore appears stronger than it otherwise would.  It would therefore appear to change more slowly, and consequently time would appear slower as well.  Unhappily, however, even this view would have implications for Hume’s basic understanding of perception.  Specifically, since it does appear that we only perceive the impressions of the flute notes and their ideas, our perceptual apparatus must operate much more quickly than we typically take for granted!

Although it is presumptuous in the extreme, I have here outlined Hume’s account of how we come to perceive time and the reasons why I find it untenable.  I explored possible resolutions to the difficulty and found in favour of revising his account to one that classifies the perception of time as an impression of reflection.  I did so because to believe otherwise would require accepting a state of affairs about human perception (namely, simultaneity) that I do not believe obtains.  Consequently, I asserted that the impression of time must be an impression of reflection, contrary to Hume’s position.

Cognitively Structured Reality

The world is not as it appears to us in naive conscious perception; rather, we perceive reality only after it has already gone through a process of cognitive structuring.  At this very moment, none of us is engaging with reality as it is; instead we are, quite unbeknownst to ourselves, projecting reality.  This seems obviously wrong, of course – is there a way to compellingly demonstrate what I mean?

Indeed there is: reading.  When we are looking at what is ‘written’ on a page or computer screen we do not perceive patterns, colours, or shapes but, rather, we perceive ‘meanings’.  Take the following statement:

Today is Friday

We all know precisely what it means – indeed, we cannot help but see what it means.  Now, take this statement:

اليوم هو يوم الجمعة

Do we know what this means (if it even means anything at all)?  According to an online translator, it also means ‘today is Friday’.  Unless one has learned to read Arabic, however, it means nothing – to those who don’t know the language, it is just a bunch of squiggles on a page.  Now, turning our attention back to the phrase in English, try to look at the words again and see them not as meanings but as shapes devoid of meaning – that is, try to see them the way we see the written Arabic: as nothing more than squiggles.  We can certainly see that there are shapes present in the visual field, but it isn’t actually possible to look at an English word and not perceive its meaning (or, at least, I am unable to do so).  Note that the meaning is instantly apparent to us – there is no delay between the conscious visual perception of the shapes and the conscious perception of the meaning those shapes encode; the two arise simultaneously.  Note also that the shapes in the visual field and the meaning of the shapes appear to be coextensive.

There are (at least) two implications that fall out of this.  First, meaning isn’t out there in the world.  It is something we project onto the world – our minds cognitively structure reality before it is even available for us to consciously engage with.  Second, because this projection or structuring happens in the way it does, it is completely transparent to us, so we cannot even be certain whether we are dealing with a feature of the world or merely a projection onto it.

These implications are troubling, and not just because of the havoc they wreak on traditional epistemological concerns – there are political and interpersonal worries too.  In dealing with any individual or institution, I cannot be certain whether I am actually engaging with the reality of that person/thing, or whether I am engaging only with what I imagine to be there.  This can lead to problems like lack of empathy, or the elimination of more-or-less tolerable though necessary institutions, or even violence against groups/individuals who do not really ‘deserve’ it.