Very recent research into Post-Traumatic Stress Disorder (so recent that there aren’t even web pages to cite) is turning up individuals whose brains seem to be “filling up”, leaving them unable to lay down new memories. The researcher I was discussing this with said it is related to the “resource allocation model” of memory. These poor folks are so traumatized by an event that the stress and anxiety keep laying down threads with strong associations until the system gets overloaded. Hard to believe, but we seemingly can exhaust the possibilities within a lifetime.
The Chinese room experiment can easily be adapted to account for memory. Simply have the guy in the room keep copies of the inputs and outputs and use these in following the directions. This is exactly what a computer program is, so IF consciousness can be achieved in theory by a conventional computer system, it follows that we ourselves may in fact be (and probably are) Chinese room systems. But we can never determine scientifically whether a computer has the same sort of conscious awareness that we have.
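The memory-augmented room described above can be sketched as a tiny stateful program (the rule and names here are made up purely for illustration): the man’s rulebook consults a log of past exchanges, which is exactly what a program with memory does.

```python
# Hypothetical sketch of the memory-augmented Chinese room:
# the "rulebook" is a function of the current symbol PLUS a log
# of all previous inputs and outputs -- i.e., a stateful program.

def room_step(symbol, log):
    """Follow the rulebook using the current symbol plus the log."""
    # Toy rule: echo a new symbol, but answer "again" on a repeat --
    # a behavior impossible without copies of past exchanges.
    if any(entry[0] == symbol for entry in log):
        reply = "again:" + symbol
    else:
        reply = "echo:" + symbol
    log.append((symbol, reply))  # the man files copies of the I/O
    return reply

log = []
print(room_step("ma", log))  # echo:ma
print(room_step("ma", log))  # again:ma
```

The point is only that adding memory does not change the room’s character: it is still pure symbol manipulation by rule.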
Of course, I can’t prove that you have the same sort of consciousness that I have. You certainly behave as if you do, but you could do that without actually being conscious. There is no reason to think that your behaviours have a different sort of origin than mine do, so it makes sense to think that you are in fact conscious. A computer is different, however. Since the computer’s own experiences (if it has any) cannot, by definition, be experienced by me, the only way for me to build a conscious computer is to build a computer that behaves as if it is conscious, and to hope that it is. There is by necessity another perfectly reasonable but entirely non-testable explanation for the computer’s behavior besides its manifesting consciousness.
Is this paradox navigable? Of course it is, in the sense that we can ignore it successfully–if we act as if Cmdr. Data is conscious, we will never be proven wrong. Eventually, I think our understanding of the brain and our ability to mimic its form and functions will advance to the point that the gap will barely be noticeable–we will in effect become Antipodeans (Richard Rorty’s hypothetical beings who have no language to describe, and profess no awareness of, subjective mental experience apart from its objective neurological description).
Off to Great Debates.
DrMatrix - GQ Moderator
“I am convinced that I exist” implies an address on behalf of an entity under that impression. That description applies to this entity called SentientMeat also. The question is whether there might be a solely physical explanation for consciousness, thinking and self.
That was me, actually. And it wasn’t a physicalist versus a non-physicalist, it was a physicalist versus an eliminativist (a kind of “hardline fundamentalist” physicalist, but both physicalists nonetheless).
Entire system, yes, not just the fellow with the envelopes.
As you say, we can’t determine scientifically whether a human has that either. I agree that tests are exceedingly difficult, if not impossible, but I would suggest that AI is straying off the point. We might explain consciousness in terms of vastly complex memory, chemical emotion and sensory input without being able to engineer it in the same way that we might explain but not engineer black holes or the weather.
As for the “what Mary didn’t know” and “what it is like to be a bat”, I don’t find these supposed paradoxes particularly challenging. Mary has never “seen red”. I have never “heard an ultrasonic echo”. All this means is that a certain signal (in these cases EM radiation of ~630 nm and an acoustic wave of >22 kHz, respectively) has not been received by a particular type of sensory apparatus attached to a brain. If this constitutes knowledge then one can never know “all there is to know”, since there will always be hypothetical signals receivable by hypothetical apparatus which one has not yet attached to one’s brain. These paradoxes therefore seem to me to rest on simple legerdemain with the word “knowledge”.
I am also unfamiliar with those Antipodeans, Alan. However, if the paradox rests on the impossibility of determining what certain neuron firings actually mean to the neuron firer, this is no more fundamental than the impossibility of working out which computer program is being run solely by reference to chip activity, or of decrypting a message encoded using a one-time pad. Again, the entire system, including the original inputs which formed those neural connections, must be considered. Most of these paradoxes appear to arise only when an essential element of the brain is arbitrarily discarded, thus essentially constructing a strawman.
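The one-time pad analogy can be made concrete (the messages and keys below are arbitrary examples): the ciphertext alone, like raw chip or neuron activity, underdetermines the message, because any plaintext of the same length is consistent with it under some key.

```python
# One-time pad: XOR the message with an equally long key.
# Without the key (the "original inputs"), the ciphertext is
# consistent with ANY message of the same length.

def xor_bytes(data, key):
    """XOR two equal-length byte strings."""
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"red"
key = b"\x01\x02\x03"
ct = xor_bytes(msg, key)

# The true key recovers the true message...
assert xor_bytes(ct, key) == b"red"

# ...but a suitably chosen different key makes the very same
# ciphertext "decrypt" to an entirely different message.
fake_key = xor_bytes(ct, b"hot")
assert xor_bytes(ct, fake_key) == b"hot"
```

So looking at the ciphertext (or the chip activity, or the neuron fire) in isolation discards exactly the element needed for interpretation.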
Why try to explain a mystery with a mystery?
Spelling and grammer subject to change without notice.
And, to clarify what I mean by “explanation”, I consider that it is possible to put forward a feasible explanation for an experience without actually having that experience: a feasible explanation for sight can be given to a blind man.
Is, say, memory that much of a mystery?
Umm, more junk. As with anything, the nature of anything named is not fully delineated by its definition. You think the definition of the brain includes something about atoms? That would be curious to an atomist in, say, 1700, when atomic theory had not even been proven.
The first step in understanding the brain is to admit that we’re almost completely ignorant of its nature.
Can we please dispense with all this “Oooh the brain is all scarily mysterious and we don’t understand it at all” melodrama?
I think that what such people are saying is that they don’t understand the enormous advances made in neuroscience, cognitive science, neuropsychology, experimental psychology or any number of entire fields which study the brain and its associated entities.
Granted, they have only been around for decades rather than centuries, and a full and falsifiable explanation of every single facet of this thing called “mind” is the challenge of the millennium, but to say that we are ignorant of practically anything to do with this kilo of offal in our skulls is, well, ignorant.
See, that’s why I’ve ended up with transcendental idealism. All this matter is really irritating and infuriating to explain. It is quite obvious to me what a mind is, even Descartes found a degenerate solution to the self, but explaining the physical is… well, downright mysterious sometimes. Forces and particles and vacuum potential: the world is a strange place no matter how we look at it.
Of course, the point is always that it is us looking at it. The bitch about qualia is that our entire foundation of science is directly or indirectly founded on it. Science is per se empirical investigation, and empirical investigation per se is experiential. As any fan of Descartes knows, experience needs someone to have it, and that someone is us. Using science to explain ourselves is, in some ways, viciously circular unless we find and accept some interesting a priori methods of reasoning that solve or otherwise remove the circularity. Descartes tried, and IMO failed, but he failed for interesting reasons.
Is thinking the motion of atoms in the brain? I think it is important to understand what we mean by this question. If we mean, “Does someone with a damaged, impaired, or removed brain do anything we consider thinking?” the answer is pretty unequivocal: no. Impairing the brain also impairs what we call thinking. The other question is, “If my brain was damaged, or removed, would I still consider that I was thinking?” This is the qualia angle. Our methods of inter-subjective determination require questioning that which is publically available, and qualia are not publically available. We can say that people with damaged brains act and report problems with thinking and thought-related activities (stroke victims, for instance). But how powerful is that evidence in answering our question?
I recently read a paper from 2001 that linked response time to moral dilemmas with brain activity that indicated an emotional state. Pretty interesting paper, if anyone has access to Science mag it is Volume 293, page 2105. It is available online to subscribers. They showed that response time to moral questions (where the answers were “appropriate behavior” or “inappropriate behavior”) varied considerably based on the level of emotional commitment, which they measured by brain activity in various areas. Surely responding to moral problems is an activity centered around thought. More or less, I feel comfortable with their methodology. So there is, if we accept the paper’s results, certainly a correlate between brain activity and pure thought (moral questions, after all, are purely hypothetical situations). Someone like SentientMeat is more or less content to let the matter lie: the brain is the source of thought, it is not just the “cause” of thought but the brain’s activity, in a strict sense, is thought. Those of us who would care to argue are left the task, then, of explaining what thought is that the brain is not sufficient for (I don’t think many would say that the brain is not necessary).
I don’t really think qualia serves this purpose. Qualia raises interesting questions about investigative techniques (Dennett’s treatment in “Consciousness Explained” is interesting) which in turn qualify our conclusions and cause us, conservative as we are in science, to hedge our bets, but not so much so that strong assertions like “the brain is sufficient for thought” are completely without evidence. In fact I think there is some compelling evidence there. If I were not already previously committed to thought as the source of atoms then I might be quite persuaded.
Sometimes I think you are a peerlessly erudite and lucid nutjob, erl, but that might simply be your transcendental idealist mind-rays messing with my brain atoms.
Gotta take my cues from Hollywood and end on a shocking note.
No. It is.
“Enormous” (get some literacy) advances were made in chemistry in the 1700s; bigger advances were to be made later.
No sh*t, Sherlock.
“Offal,” there you go. You’re one wise dude, dudeman.
Well then, Aeschines, in absence of any serious rebuttal (and I’m sure I spelled “enormous” correctly), perhaps you could offer a particularly “mysterious” aspect and we could go from there?
“Enormous” has pejorative connotations. If you were one of us literati, you’d know that.
Mystery? I don’t think there is any mystery at all: all is pattern.
Ehrm, Aeschines, I do not think that word means what you think it means. Enormous can just mean big, unless you are restricted to a definition many dictionaries label as archaic.
Yes, well, I suppose it does make a little more sense that way. It was a great line, even if I failed to learn appropriately from it. (I finally found it [url=http://boards.straightdope.com/sdmb/showthread.php?p=4556608#post4556608]here, if anyone is curious. I kept searching for “supervenient physicalism”, but it was “supervenience” that you used.)
Looking back at my posts in light of this, I see that I fell for a common trap when arguing for a position one doesn’t actually hold–making it resemble more closely than it ought one’s own position. (See Chronos, I live by my own advice. I am far more often a dire warning than a good example!) In this case, I made non-physicalism into a sort of strong non-eliminativism, which forced me unintentionally to adopt eliminativism as a straw man for physicalism. Thanks for keeping me honest.
The truth of the matter, of course, is that as a non-eliminativist physicalist, I almost entirely agree with you. It would have been a crying shame, though, if we’d all started agreeing with each other, especially when DrMatrix just moved us into GD. So it’s a good thing erislover finally showed up.
Oh, yeah–Antipodeans. The eliminativist Richard Rorty posited a planet inhabited by beings with a very highly advanced brain science. This technology had become so integrated into their culture, that they no longer spoke, for example, of feeling happy, but of being in brain state 113-alpha, and might with equal ease speak of seeing red or of being in brain state 9421-G. They had absolutely no language to describe what we call qualia, and were completely baffled when human scientists asked what it is “like” for them to be in brain state such-and-such. These human scientists and philosophers were divided on the question of whether these beings actually had qualia but had forgotten how to recognize them, or were “zombies”–non-conscious automatons outwardly indistinguishable from humans (apart from their strange language). Rorty’s contention was that since the Antipodeans’ language appears to be complete–that is, they can describe every experience a human might have–but without reference to inner, subjective experiences, therefore these inner subjective experiences must be non-existent, and are actually an artifact of our language and our inability to properly identify brain states.
The Antipodeans were so named to mock certain of Rorty’s Australian colleagues.
PS–Sorry about the broken link above. Just cut and paste to get to SentientMeat’s post. It’s worth the extra trouble.
I just don’t get the pejorative “offal”; materialists apparently not only believe that matter is all, they despise it, too.

Here are some criticisms of Penrose’s theories:
http://c2.com/cgi/wiki?MistakesOfRogerPenrose
http://philosophy.ucsd.edu/EPL/Penrose.html
http://psyche.cs.monash.edu.au/psyche-index-v2.html

The second link talks about artificial neural nets and their ability to do things like learn patterns from incomplete data and infer things without complete rules.
I haven’t gone through all the links which you posted, but that argument about neural networks is simply irrelevant. Neural networks are still algorithmic in nature. They may not have complete data, but their behavior can certainly be described by very specific rules.
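To illustrate the point that a neural network is still rule-governed, here is a minimal sketch (weights hand-picked for the example, not trained): once the weights are fixed, the net is just fixed arithmetic, and the same input always yields the same output.

```python
# A single artificial neuron: a weighted sum plus a threshold.
# However its weights were arrived at, its behavior is described
# by this very specific rule -- it is entirely algorithmic.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs followed by a hard threshold."""
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

# Hand-picked weights making the unit compute logical AND.
AND_W, AND_B = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], AND_W, AND_B))
```

“Incomplete data” affects what the net was trained on, not whether its resulting input-output behavior is rule-describable, which is all that matters for Penrose’s argument.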
Penrose’s argument is that we can mathematically demonstrate the existence of problems which cannot be solved algorithmically, but which can be solved by human beings. The question of “complete data” is simply not relevant to that issue.