I read Dennett’s Sweet Dreams a few months ago. Although I am also interested in dreams, this book is about consciousness … indeed it is subtitled “Philosophical Obstacles to a Science of Consciousness”
The book is largely about one message – that a scientific study of consciousness can only take into account third-party-accessible knowledge about first-person experience. In other words, I can only base a scientific study on what I can tell of other people’s consciousness from their actions, words and any available brain scanning, etc.
Dennett has a meticulous rhetoric, but I found two broad facets of his argument weak, one more a matter of rhetoric and one of substance.
First, somewhat disingenuously, he does not say that a scientific study of consciousness would yield a complete account of consciousness, but effectively the implication is there. That is, he does not say that consciousness is no more than its phenomenal effects … but implies it.
Second, being a philosopher he focuses on incontrovertible evidence, whereas for scientists, and for humans generally, reasonable evidence is often sufficient.
The first point is obvious and yet easily underestimated. A ‘scientific’ study of literature could formulate many known literary rules (alliteration, rhyme, etc.) and may even find new ones, and indeed poets in particular are happy to perform such analyses. However, we do not expect such rules to be a complete account of literature.
The second point is more substantive, but does interact with the first.
Dennett takes issue with philosophers who posit some form of non-sentient zombie (often called ‘Mary’) who/which nonetheless behaves otherwise exactly like a human including things that might appear to be conscious. They then say “but of course Mary is not conscious”. Dennett objects to the ‘of course’, which is really a statement about prior beliefs/assumptions (although Dennett, of course, frequently does the same with his beliefs!).
Dennett posits a Robo-Mary who is entirely mechanical/electronic and yet emulates perfectly the brain circuitry of a person, and so can work out how the person would react and then react similarly. From the outside and by all her (emulated) subjective reactions she appears to be conscious. She would pass any ‘Turing Test’ for consciousness, and yet many, perhaps most, would say she is not. The implication (from the first weakness) is that we are no more conscious than she (it?).
Actually I don’t object to the idea that such a creature may indeed be conscious, but I’d need more evidence than I would for a human, not because Robo-Mary is a machine, but because she is designed to appear conscious.
Robo-Mary is in fact a Robo-Mata-Hari, a spy, a robot in human clothing.
A good enough actor may convince you he is feeling happy, sad, or in love, and you may not be able to tell the difference between the act and the real thing, but that does not mean happiness, sadness and love are no more than their appearance.
As a philosopher, you cannot have incontrovertible evidence that a person’s emotions are real, not just a facade. However, as a human it would be unreasonable to therefore dismiss all expressions of emotion.
Some (well, many) years ago, I worked with people at York who were creating one of the first ADA compilers. There was a validation suite of programs that had to compile and run correctly for the compiler to get an official stamp from the ADA standards agency. I used to wonder about writing a program that recognised each of the test cases and simply spat out the right code for each one. Any other program given to it would simply print an error message and stop. The program would pass the test suite and could get the stamp as a validated compiler, and yet would be completely useless. It would be a cheat ADA compiler.
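The idea can be sketched in a few lines of (hypothetical) Python — the test names and canned outputs below are invented purely for illustration, and of course the real validation suite was far larger:

```python
# A sketch of the "cheat compiler": it recognises each known validation
# test by name and emits a pre-baked result, and fails on everything else.
# Test names and outputs here are invented for illustration.

KNOWN_TESTS = {
    "validation_test_001.ada": "canned output for test 1",
    "validation_test_002.ada": "canned output for test 2",
}

def cheat_compile(source_name: str) -> str:
    """Return the pre-baked 'compiled' result for a recognised test case.

    Any program not in the validation suite is rejected -- so the
    'compiler' passes the whole suite yet is completely useless.
    """
    if source_name in KNOWN_TESTS:
        return KNOWN_TESTS[source_name]
    raise RuntimeError("error: cannot compile")  # every real program fails
```

The point is that a lookup table passes any fixed test suite perfectly, which is exactly why the suite only validates things *designed to be compilers*, not things designed to pass the suite.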
Imagine if I sold such a cheat compiler. Any judge would regard it as fraud – whilst it passed the test, it is clearly not an ADA compiler. The test is there to validate things that are designed to be ADA compilers, not things designed to pass the test. So, the cheat ADA compiler is not adequately validated by the test, just because it is designed to pass it.
Robo-Mary is designed to pass the consciousness test … indeed any consciousness test. We perhaps could never incontrovertibly tell whether Robo-Mary was conscious or simply acting conscious. However, when faced with another human being, an ordinary Mary, who is not designed specifically to appear conscious, it is reasonable to assume that she experiences similar things to me when she describes her experience in similar terms. I can never incontrovertibly tell that Mary is conscious, but it is reasonable to believe so. And it is equally reasonable to base a scientific study on such defeasible observations.
Turning back to Robo-Mary: convincing machine consciousness would not come from machines designed to appear conscious, but more ‘by accident’. Perhaps one day my intelligent automated vacuum cleaner will say to me “Alan, have you ever watched those dust motes in the sunlight?”
Zombies are not often called Mary. Dennett is bringing together two traditional scenarios here. Mary is usually a colour-blind scientist who knows all there is to know about the science of colour vision. The question then is, if her colour-blindness was somehow cured, when she first saw a colour would she learn something she didn’t know before? I tend to say yes, but it depends what you mean by “learn” and “know”. She’d certainly have an experience she’d never had before, and that’s not of no significance.
But I’m in vague sort-of agreement with most of the views expressed here, particularly the point about machines that are designed to appear conscious, which I used myself in a blog post just yesterday, entitled Conversing Computers: http://www.robinfaichney.org/?p=13
You’re right Robin, I got my Marys and zombies mixed up!
In Turing Test Two, two players A and B are again questioned by a human interrogator C. Before A gives his answer (labelled aa) to a question, he is also required to guess how the other player B will answer the same question; this guess is labelled ab. Similarly B gives her answer (labelled bb) and her guess of A’s answer, ba. The answers aa and ba are grouped together as group a, and similarly bb and ab as group b. The interrogator is first given the answers as two separate groups, with only the group labels (a and b) and without the individual labels (aa, ab, ba and bb). If C cannot correctly tell which of aa and ba is from player A and which is from player B, B gets a score of one. If C cannot tell which of bb and ab is from player B and which is from player A, A gets a score of one. All answers (with the individual labels) are then made available to all parties (A, B and C), and the game continues. At the end of the game, the player who scored more is considered to have won the game and to be the more “intelligent”.
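The scoring rule for one round can be sketched as follows — this is my own minimal reading of the protocol above, with the interrogator’s two identification attempts passed in as booleans:

```python
# One round of "Turing Test Two" scoring, as described above.
# Group a = (aa, ba): A's own answer plus B's guess of A's answer.
# Group b = (bb, ab): B's own answer plus A's guess of B's answer.

def score_round(c_identified_group_a: bool, c_identified_group_b: bool):
    """Return (score_a, score_b) for one round.

    If C cannot tell which answer in group a came from which player,
    the guesser (B) scores a point; symmetrically, if C is fooled by
    group b, then A scores a point.
    """
    score_a = 0 if c_identified_group_b else 1  # A's guess of B fooled C
    score_b = 0 if c_identified_group_a else 1  # B's guess of A fooled C
    return score_a, score_b
```

So a player scores not by answering well, but by modelling the other player well enough that their guess is indistinguishable from the genuine answer — which is what makes the variant a test of a model of mind.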
Had a quick peek at your paper huoyangao (actually a bit confused, as the website is just your paper??) – I agree entirely with your point that a model of mind may be a better model of intelligence than straight answers. Of course, answering simple questions does that also, as if I asked you a question you would interpret it in terms of what you infer I want to know, not necessarily the literal words I say. However, this is perhaps least well exercised in the standard Turing Test, as the subtler aspects of language and modelling of intention take place when we are in the real world … “it’s cold in here” means “shut the door”.
Actually (and one day I will get round to posting about it), it seems likely that self-consciousness (as in consciousness of self, not shyness) may be an accident of the need to understand the intentions of others …