When I hear someone say, “We don’t all see the same colors,” I interpret that to mean “It’s possible that what you see as red, I see as blue,” and vice versa. I didn’t interpret that to mean “When I see dim red, you see bright red,” because, as I indicated, I knew that all along. (Well, for a long time, anyway.)
While culture plays a role in influencing our preference for certain colors, it doesn’t explain why red and blue are the two most popular colors used in national and state flags. (I think.)
I have never before heard that some people see random colors when totally deprived of light. It certainly doesn’t happen to me. I see total, absolute BLACK (unless I rub my eyes).
Cecil Adams his own self is apparently color-blind. (Look at the last paragraph.)
I think we’ve beaten this dead, bloody-red horse enough.
Weak sauce, man. Just calling me solipsistic doesn’t make it so. But given the time and effort spent refuting your accusations, I certainly welcome anyone else’s contributions to this debate…
Put simply, if any of those assumptions is false, then so is your equivalence (brain=computer), BickByro.
Firstly, even if this is not required for consciousness, it is required for your equivalence. And while much of our behaviour is hardwired, as you say, it is fairly obvious that not all of it is. How else can you explain that some people speak English and some speak Greek, or that some used to speak Minoan? Or how there was a first person who figured out a way to calculate pi?
The argument for non-conscious DNA giving rise to conscious minds is a good one. It seems to me that that fourth condition may not be necessary for your equivalence. Too bad the third one is, and appears to be false.
I’d say the fact that flatworms are not extinct is proof that this isn’t so. Seriously, I don’t think so. Maybe an earthworm, but even that is stretching it. I don’t think we have any good evidence that either worm is conscious in the sense we are talking about.
Trading in your solipsism for mysticism? I certainly don’t know of any non-observable effects of consciousness, in my head or anyone else’s. If you haven’t observed them, what makes you think they are there?
I haven’t accused you of anything, Brick. But it is apparent that I am failing to make myself understood. Please note: I said understood, not agreed with.
I am simply declining to spend more time on an exercise of diminishing returns.
Spiritus: As far as I can tell, you’ve been making yourself understood (and cut me some slack here–consciousness can mean so many things to so many people that it’s necessary to do a lot of chiseling before any progress can take place).
And you most certainly have been accusing me of something: solipsism.
tourbot: Hey, thanks for picking up the debate.
Umm… somehow I feel I’m being patronized here (that’s patronised for you Brits)…
Let me state my equivalence again:
The computer makes no decision it was not programmed for.
If consciousness is nothing more than the result of neurons having fired, true free will cannot exist (I did not state this part explicitly in the first place, and I apologize. But it seems necessary, to me: if consciousness isn’t really a force in and of itself, but merely the by-product of mechanical forces [p.s. this is what Chalmers and I are saying, though it might not seem like it], how could we believe a consciousness could “make a choice”?)
In the absence of true free will, it can be said that no human can make a decision he/she was not programmed for.
Spiritus’ relevant condition again for the record:
Okay, now let’s go on…
Okay. Prof. Pinker would argue vehemently that we are programmed to speak a language that fits under certain organizational and relational criteria. So, in a very real sense, we are all programmed to be able to speak English, Greek, Minoan, etc. We also appear to be programmed to invent languages in the event that we have none of our own, and to continually modify our existing ones to keep up with changes to external conditions (new people, new things, etc.).
So, program a computer with Pinker’s Universal Grammar framework, a fixed (but large) number of sounds, and a command to identify and name objects, attributes, and relationships by combining those sounds (a tall order, I know, but I don’t think it’s ridiculous), and as you introduce the computer to more and more new things, I think you’d end up with a language every bit as coherent and “valid” as one of our own. Would that count as self-modifying behavior or not, do you think?
As for how we discovered pi, well… if there is a Universal Grammar framework, there could well be a Universal Mathematics framework. And if I’m not mistaken, calculating pi seems to have been independently invented in several cultures. I can’t say for sure, of course, but it seems possible that discovering pi was just an inevitable result of our programming, just like inventing languages seems to be.
This isn’t by any means an iron-clad proof that neural networks are incapable of self-modifying behavior. But I think it raises some worthy questions as to what “self-modifying” really means.
I was also thinking this (and it may well be wrong): Doesn’t my computer engage in “self-modifying” behavior when it performs a defrag?
I thought an earthworm was more evolved than a flatworm, not less. But in any case, can you tell me what sense of consciousness you believe we are talking about, so I can be sure we’re on the same plane?
Observable from the outside is what I meant, of course. There’s nothing mystical about it: consider a food you particularly like or dislike. In my case, I can think of few drinks more refreshing than a glass of icy cold club soda. My girlfriend, on the other hand, would sooner drink a glass of seawater. I can observe, from the outside, that she makes outward indications of disgust whenever I convince her to “give club soda another chance.” She drinks the soda and immediately wrinkles her nose and sticks out her tongue. As an intelligent outside observer, I can deduce that she still doesn’t like the stuff (were I truly naive, I might deduce that she makes faces whenever she tastes something delicious).
But for all this, I am incapable of myself “observing” the phenomenon “disliking club soda” (a phenomenon she has “observed” every time she’s tasted the stuff). Whenever I drink it, I like it, and I cannot observe the contrary state. Neither can she observe mine.
Not a full response, just a list of apparent misunderstandings.
http://www.davidchess.com/words/poc/solnote.html
Epistemological solipsism is a family of doctrines that hold that we can know only facts about the self, and that other facts cannot be known, or can be known only in some secondary sense.
[sub]I note that that site has Chalmers listed as a panpsychist, which agrees with your light bulb statement.[/sub]
I have argued no such position, though I see no reason to eliminate it as a possibility.
It is a prerequisite of free will. Your case has been predicated upon a model of consciousness that rejects “zombies”. You made an analogy to computers apparently to argue that consciousness is “something else”. I prefer not to have to defend your axioms.
No. These were elements that seem minimally necessary for the analogy that you proposed to be valid. They do not represent arguments on my part.
Then your initial point that computers gathering data was an example of empiricism without consciousness seems to have been contradicted.
Show me where I have done so.
No. It does not.
It implies that I think your computer lacks some element that I find necessary for consciousness. I do not recall stating that “complexity” was a necessary condition. I am certain that I never argued any single condition was sufficient.
Ask me again when I actually make such an assertion.
Because it changes the meaning. You do not have a direct experience of consciousness. It does not get a free pass through the veil of perception.
I responded to “don’t pretend for a second that it isn’t an assumption. And that’s not too empirical.” by noting that the assumption was, in fact, necessary to empiricism. The assumption was not empiricism. Nor did I present the assumption as demonstrated by empiricism.
Thus, if your premise is true you have never heard circular reasoning.
Then you either are willing to abandon empiricism entirely or you have failed to understand my statements. As to distinctions, you have the case reversed.
As does everything that we can “verifiably agree upon”.
No. Those are the things we already know about consciousness (or perceive that we know, at least). I do not need science to tell me how I experience red.
The above is not at all “like” what I said. Blue would be unnecessary for car, not absent from car. There can be no necessary conditions for consciousness which are unique to me if other things are conscious. Nothing in my statement holds any requirements for sufficient similarity.
These are things that I have said before: there is absolutely no refutation for the position.
There is no way (rigorously) around solipsism. One can only reject it.
The alternative to solipsism is the decision that my individual perceptions are not the only valid measure of reality.
Let me be explicit, then. The rejection of solipsism can only be made at the level of axiom, a priori assumption, or initial epistemological assertion.
There is nothing at all logical about the decision to reject solipsism. Logic cannot pierce the phenomenological veil.
Things I have said before: solipsism falls into three subsets: moral/ethical, metaphysical, epistemological
Solipsism, as I hope I stated clearly above, has more than one face. Your arguments thus far have repeatedly relied upon epistemological solipsism.
Solipsism is not a fact, though it is irrefutable
Perhaps this confusion stems from my penchant for treating epistemology rigorously in these types of discussions.
By understanding them as distinct. How else? When asked for a definition of consciousness I do not say “phenomenological experience”. The element is not the set.
It would be if we had anywhere else discussed the evolutionary advantages/disadvantages of consciousness. Or if we had fully explored the existing context of the debate. Or maybe even if you had attempted to introduce a new context for the entire discussion.
None of those things happened.
I disagree.
I have described the position that the valid epistemology for consciousness is unable to pierce the veil of perception as epistemological solipsism. That is not a particularly radical statement on my part. That you see this description as a personal accusation is between you and your perception.
If whole language were completely innate, any individual would be able to read any language they were exposed to immediately.
Pinker’s hypothesis does state that the ability to learn language is innate in the structures of the brain. However, I doubt he has ever ventured that the individual words, sounds and syntax are hard-wired. Those are made up from whole cloth (though each generation may add its own strand to the loom), within the limits allowed by the innate structure. A better test would be to have two computers built with similar innate structure and see if they can arrive at an agreed upon language, and then see if they are able to teach it to a third such computer. Yes, I would consider that self-modifying behavior.
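tourbot’s two-computer test can be sketched in miniature. This is my own toy construction, loosely in the spirit of “naming game” models; the sound inventory, the adopt-the-speaker’s-word rule, and all the names here are my assumptions, not anything tourbot specified:

```python
import random

# Two agents share only innate machinery: the same sound inventory and the
# same adopt-the-speaker's-word rule.  The lexicon itself is not built in.
SOUNDS = ["ba", "de", "ki", "mo", "ru", "ta"]

def coin_word(rng):
    # Invent a new word by combining innate sounds at random.
    return "".join(rng.choice(SOUNDS) for _ in range(2))

def play(n_agents, objects, rounds, seed=0):
    rng = random.Random(seed)
    lexicons = [{} for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        obj = rng.choice(objects)
        # Speaker uses its existing word for the object, coining one if needed.
        word = lexicons[speaker].setdefault(obj, coin_word(rng))
        lexicons[hearer][obj] = word  # hearer adopts the speaker's word
    return lexicons

lexicons = play(n_agents=2, objects=["rock", "tree", "sun"], rounds=200)
# After enough rounds the two agents agree on a name for every object seen.
```

Running the same game with a third agent added would approximate the “teach it to a third computer” step: the agreed words spread to the newcomer by the same rule.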
While a person with normally functioning vision (i.e. not color-blind) may see some slight gradations or differences in red as perceived by another person, it seems highly likely that all people see red in generally the same way (i.e. red to them is red to everyone else, barring small fluctuations in intensity, hue, etc. particular to that individual). I base this conclusion on an explanation given at the web site http://van.hep.uiuc.edu/van/qa/section/Light_and_Sound/Properties_of_Light/982241220.htm
Here’s a quote from it:
Works for me unless someone has evidence that this reasoning is flawed.
The assumption that “seeing green” means seeing the frequency of light associated with green.
The assumption that a displacement of color perception (even after we accept assumption 1) affects frequency but not the breadth of frequencies perceived in each color band.
The assumption that distinctions in color perception could only be represented by a linear displacement along the “color line”.
For ref:
340-400 Near Ultraviolet (UV; Invisible)
400-430 Violet
430-500 Blue
500-560 Green
560-620 Yellow to Orange
620-700 Orange to Red
Over 700 Near Infrared (IR; Invisible)
I don’t think it works at all. Your red is red still–still the same wavelength, even if it doesn’t look like what anyone else would call red. The fatal flaw in the explanation given is that he(?) is claiming that to see “orange” as “red” the actual orange wavelengths (620 nanometer) must somehow become red wavelengths (about 700). Therefore, the red wavelengths would actually have to become infrared, which we cannot see. But that’s not what I or I think anyone else is saying about color perception. Colors may appear different to two different people (if you could see from their perceptions), but the actual wavelength of the light does not change. Your perception of colors is different, the colors themselves are not.
I don’t think it’s really terribly likely that people perceive colors significantly differently, but it is logically possible given that we cannot accurately compare them. This is just an issue of perception, not objective reality; people don’t actually see a different spectrum, they just may perceive one. If you see orange and red as looking like what I would call red and purple, purple is red to you and you do not need to see into the infrared scale to have a complete spectrum. “Red,” the 620-700 nanometer wavelength band, is always red; the actual color displayed inside your head may not look the same as the color I see inside my head.
Think of the differing color perceptions as like pairs of glasses.* Let’s say there’s two types: one changes the color of everything you see, one has no effect. With the changing glasses, you don’t see any more or less colors, but the ones you see look different. Now imagine you’ve had a pair of one of these two types of glasses permanently affixed to your head since birth. When someone points to a color you have learned to identify as “red” you call it red. But how do you tell if you have one of the changing glasses or one of the non-changing ones?
*[sub]The analogy is not all that great since glasses actually do change the wavelengths of light. But these are magic glasses that exist only in your brain and don’t interfere with actual light wavelengths.[/sub]
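Gaudere’s magic-glasses point can be put in code: a fixed, consistent remapping of inner percepts is invisible to any naming test, because color words are learned on top of whatever the glasses deliver. A minimal sketch (the three-color palette and the particular shift are my own inventions):

```python
# "Changing" glasses: a fixed remapping of inner percepts, present from birth.
SHIFT = {"red": "blue", "blue": "green", "green": "red"}

def percept(true_color, glasses):
    # The wearer's inner experience when shown true_color.
    return glasses[true_color] if glasses else true_color

def learn_labels(colors, glasses):
    # During upbringing, each inner percept gets tagged with the public word.
    return {percept(c, glasses): c for c in colors}

def name(true_color, glasses, labels):
    return labels[percept(true_color, glasses)]

COLORS = ["red", "blue", "green"]
plain_labels = learn_labels(COLORS, None)   # non-changing glasses
shift_labels = learn_labels(COLORS, SHIFT)  # changing glasses

# Both wearers call every card by the same public word, so a naming test
# cannot tell which pair of glasses anyone has on.
agree = all(name(c, None, plain_labels) == name(c, SHIFT, shift_labels)
            for c in COLORS)
```

Since any bijective remapping cancels out once labels are learned, the result holds for every possible shift, not just this one.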
Hmm… I’m not sure this explanation works for me just yet. Let’s say we have a hypothetical person A who can distinguish between colors as easily as any normally functioning person but who sees those wavelengths of light as a different color than they do. While it may be possible for such an individual to exist, I think the possibility that a sizeable number of such people exist is very remote. Even 1 or 2% of the population having this characteristic seems unlikely.
Here is the scale as produced by Gaudere originally:
Configuration 1
While leaving the frequencies alone let’s create a hypothetical group of people whose colors are shifted relative to how we see them. Something like this:
Configuration 2
Note that the colors are in the same order, just shifted a couple of notches. Before continuing I think I need to address a point made by Spiritus as to why I think color perception would need to remain in the same order.
I am indeed assuming that distinctions of color perception have to be represented linearly (meaning, lined up in this fashion). If color perception isn’t linear, doesn’t the use of color-based camouflage become an inadequate defense due to the contrasts between the colors? In other words, a blue-green animal gains great camouflage with a blue-green background and some camouflage with either a blue or green background. The lesser amount of protection with either a blue or green background would be due to the color change and not a change in color hue, which might stay the same. On the other hand, an animal perceived to be yellow (but with purple perceived directly on one side of yellow and blue directly on the other) would only receive its camouflage benefits when its background very closely matched its own coloring, due to the contrast between the lighter color of yellow and the darker colors of purple and blue.
If we accept that color perception needs to be in this order, then we need to determine whether Violet is perceived just above UV and Red just below IR (just like configuration 1) for nearly all people, or whether other configurations (like configuration 2) exist but are rare (say 1% or greater). I think this determination can be made based on color intensity. Let’s say we have a number of pairs of magic glasses which, instead of changing perceived colors, put the world into greyscale. Then we show a group of people wearing the magic glasses several colored cards and ask them to indicate the order of the cards from lightest to darkest. The first card is colored a bright yellow, the second a bright red, and the third a bright purple. Based on contrast alone it would be possible to tell if all people perceive colors the same, since if they didn’t you would have situations like:
Person A
Actual Card Colors:
Card 1: Yellow
Card 2: Red
Card 3: Purple
Assuming Person A isn’t colorblind and can distinguish colors as easily as Person B shouldn’t it be easy to tell that they’re seeing a different color since different colors have different contrasts when viewed in black and white?
Am I missing something? Please point it out if I am.
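One piece of the card test is easy to make concrete: computing how light each card looks in greyscale. The sketch below uses the real ITU-R BT.601 luma formula; the RGB values standing in for the card colors are my own choices:

```python
# Greyscale step of the card test.  Luma per ITU-R BT.601:
# Y = 0.299 R + 0.587 G + 0.114 B  (perceived lightness, 0-255).
def luma(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

CARDS = {
    "yellow": (255, 255, 0),
    "red":    (255, 0, 0),
    "purple": (128, 0, 128),
}

# Rank the cards lightest-to-darkest, as a glasses-wearing subject would.
ranking = sorted(CARDS, key=lambda c: -luma(CARDS[c]))
```

Note this only shows what a single observer’s ranking would be; whether a color-shifted observer would produce a different ranking is exactly the point in dispute.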
Well – you ignored the question of a shift in the size of each band. For instance:
**standard**
340-400 Near Ultraviolet (UV; Invisible)
400-430 Violet
430-500 Blue
500-560 Green
560-620 Yellow to Orange
620-700 Orange to Red
Over 700 Near Infrared (IR; Invisible)
**deviation**
340-400 Near Ultraviolet (UV; Invisible)
400-410 Violet
410-460 Blue
460-510 Green
510-560 Yellow to Orange
560-700 Orange to Red
Over 700 Near Infrared (IR; Invisible)
Also, I do not believe that hue is an important characteristic of camouflage. Tigers do not live in an orange-and-black jungle. It would seem that intensity/luminance and patterns of contrast are the key elements to natural camouflage.
I think shifts based on frequency range would be easier to detect than if the colors were simply shifted with the frequency bands intact. For instance, a person perceiving the standard could be shown colored light in the range 400-430, all of which he would call violet, whereas a person perceiving the deviation would call some of that same range violet and the rest blue. Obviously we could determine that he’s perceiving this range differently.
Hue may not have been a factor in the development of all animal camouflage (like Tigers as you mention) however it seems likely that it was a factor for the development of at least some camouflage (like the beige/light brown coloring of a desert dwelling lizard).
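The band-width version of the test can be sketched directly from the two tables above. Under a naive model where each observer names light by the band it falls in, the two tables disagree over a detectable range of wavelengths (the dictionary encoding and the name-by-band rule are my own simplifications):

```python
# Each (lo, name) entry covers wavelengths [lo, next_lo) in nm; past 700 nm
# nothing is visible.  Values transcribed from the two tables in the thread.
STANDARD = [(400, "violet"), (430, "blue"), (500, "green"),
            (560, "yellow-orange"), (620, "orange-red"), (700, None)]
DEVIATION = [(400, "violet"), (410, "blue"), (460, "green"),
             (510, "yellow-orange"), (560, "orange-red"), (700, None)]

def band(nm, table):
    for (lo, name), (hi, _) in zip(table, table[1:]):
        if lo <= nm < hi:
            return name
    return None  # outside the visible range

# Wavelengths where the two observers would (naively) apply different names:
disputed = [nm for nm in range(400, 700)
            if band(nm, STANDARD) != band(nm, DEVIATION)]
```

For example, 420 nm is "violet" under the standard table but "blue" under the deviation, while deep reds like 650 nm are named alike. Of course, the rebuttal that color labels are taught rather than innate applies to this naive model too.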
You’re thinking that if there is a perceptual color shift of red to yellow, a person looking at a very light yellow card would perceive a medium red one, and I don’t think that must be true; I would think they would perceive a bright light red (which, of course, they have been taught is bright yellow). I am guessing that color intensity/contrast across people’s perceptions is probably pretty similar, although people will judge different colors as being the darkest/most intense so it’s unlikely that they’re exactly the same.
Now, look at the red, yellow, purple cards I have set up here. The top row is how a “normal” color vision person sees the cards. They rank them, light to dark, as 1, 2, 3. How does Person A rank them? …1, 2, 3. See them in grayscale. Yes, the colors seen by Person A look wonky to us. But to them, they are perfectly normal, red, yellow and purple, arranged in that order because that is how that person’s world looks. Or look at the difference between this and this picture; in both the contrast is the same, colors can be distinguished, yet they’re quite different (or as different as I could get with a few moments’ noodling around in Photoshop. I didn’t do a full red==green, orange==blue switch because it’s too tough for me to “fake” in anything outside of flat color blocks.). It is possible for people to be perceiving quite different colors, and yet still pass color-identity and color-contrast tests. Again, I don’t think radical differences are by any means certain, but I think they are possible and can’t be blithely disproven with the tools we currently have on hand.
But he wouldn’t call them blue. He would just perceive blue, but he has been taught that part of the range of 400-430 is called violet. Just because you don’t see colors the same doesn’t mean you can’t perceive the difference between various shades; even if you see blue and purple as both being blue, you can tell the difference between the blue-you-see-that-is-called-purple and the blue-you-see-that-is-called-blue.
Definitions of colors are taught, not innate. There is no red, orange, yellow, blue, green, purple or indigo; they’re arbitrary cultural classifications of light wavelengths that have no clear-cut objective boundaries. So even if I call something “blue” that you consider “purple” it doesn’t mean at all that I am seeing a different color than you. I may have simply been taught that purplely-blue is blue, and you were taught that purplely-blue was purple.
On the color shift, you seem to be arguing that there is a clear and unambiguous difference between “blue” and “violet” which turns exactly at a given wavelength. Haven’t you ever disagreed with someone over whether a color was “yellow” or “green”?
Likely? Why? If we can demonstrate that intensity and contrast (and maybe value) are sufficient to provide an advantageous camouflage, then why would it be likely that the hue was necessary too? For that matter, this entire line of reasoning is predicated upon the idea that visual perception among the predators/prey for any given camouflaged animal are sufficiently similar to human eyesight.
I am not at all sure that is valid. I certainly do not think the observation that began this line of discussion is a “flawless” demonstration that all people perceive colors identically.
I’m about halfway through it and all I have to say is “This is a lot more about color perception than I ever wanted to know”
So far it looks like perceived color differences may be possible but only in specific orientations… I hope you read the above link because I sincerely don’t want to try and summarize its contents.
Spiritus:Okay, I’m beginning to understand your frustration. But thanks for bearing with me and at least trying!
Let me clear up a few points of my own:
Yes, I am arguing for Chalmers’ theory, but other than the basic premise (science will never satisfactorily explain consciousness), there aren’t many hard-and-fast conclusions that go along with the theory, so if I seem to jump around and not have a definite “party line,” it’s because I don’t. Just like you, I am exploring possibilities. Just because I state a possibility does not mean I believe that possibility is the truth.
Telling someone repeatedly that they are engaging in solipsism is precisely the same as accusing them of solipsism. If “accuse” conjures up visions of personal attacks in your mind, I’m sorry. I don’t consider it personal. But the fact remains: you have repeatedly accused both me and Chalmers (based on my description) of solipsism. If you can’t fess up to this, we really are having a pointless conversation.
You state in your last post that “You do not have a direct experience of consciousness.” In other words, consciousness can only be known in some secondary sense. If consciousness can only be known in some secondary sense, it seems certain that the outside world cannot be known in a primary sense. Thus, it appears, you are every bit as guilty of solipsism as you have claimed me to be (does ‘claim’ work better for you than ‘accuse’?).
Did not this debate between the two of us begin with my proposal of Chalmers’ theory and your attempts to poke holes in it? Seeing as how Chalmers’ theory is that understanding the network of neurons CANNOT give us an understanding of consciousness, you were indeed, by arguing against Chalmers, arguing for such a “purely material consciousness,” at least hypothetically (for I also understand that just because you argue something does not mean it is your own belief).
How did free will get involved in this? Are you saying that we have free will?
Actually, the way it worked is that I said computers are empirical, and you said they are not because empiricism requires a conscious mind (the assumption on your part being that computers lack consciousness). I wouldn’t mind trying to disprove that empiricism can only be practiced by the conscious (an assertion of yours that seems rather groundless), but I’m equally comfortable proposing that, in fact, the computer is, at some level, conscious, in which case there is no reason to suspect they are not capable of empiricism, even if your assertion is correct.
Not in so many words. But when you say things like:
it seems to me that you are taking a list of “ways we extrapolate consciousness” and using it as a litmus test to pass judgment on whether the subject is conscious. Not so?
Then there’s this one:
The basic point is the same: a certain list of traits observed in human consciousness is used as a litmus test for all possible consciousnesses.
First of all, I’m trying (albeit in a roundabout way) to determine what conditions you do believe are sufficient for consciousness. And no, you certainly did not state that complexity was a necessary condition—as I said, you implied it. And I stand by that; what possible reason do you have for declaring your computer non-conscious? It seems to me that, if you believe a human brain could (given sufficient technological advances) be replicated in silicon (am I wrong to think you believe this?), the only difference between a Pentium 4 and the Silicon Brain would be a degree of complexity in organization.
Again, I never said you made the assertion. But, given a materialistic view of consciousness (electrical charges among neurons=consciousness) and your belief that computers are not currently conscious, your options, as I see them, are either (1) The computer’s “neural” organization is not sufficiently complex to produce consciousness or (2) The computer, lacking true neurons, cannot be conscious. I welcome dissent from this conclusion, of course.
Moving on… why are you so hung up on the idea that “only my consciousness is verifiable to myself” = “in the entire cosmos, only my consciousness is valid”? The second simply does not follow from the first. There is a hidden assumption in solipsism: that only that which is verifiable is valid. I never made the mistake of assuming that.
As a means to explain consciousness, yes, I am willing to abandon empiricism entirely—that was my first premise, and it is the main thrust of Chalmers’ thesis. But I hardly have the case reversed; all this talk of “my red” in this thread seems like proof enough that our perceptions of color are a constituent of our consciousnesses—if it were not so, we’d see no “personal” colors at all. That does not mean that empiricism no longer has any function, simply that it’s useless here. I suspect you are defining consciousness differently than me, but you seem reluctant to admit that you are defining consciousness at all, so it’s hard to go much further on this point.
Yes, for each of us, everything DOES only exist in our private worlds—where else would it exist? What other world do we inhabit? Certainly we are not viewing our surroundings from any sort of objective viewpoint. But just like with the colors, though I may see teal and you see blue-green and someone else see green-blue, we can verifiably agree that the color has a certain CMYK or RGB value. That’s where empiricism is useful.
Well, perhaps it is your lack of curiosity on the subject of consciousness that is contributing to your frustration with the conversation. Certainly the most interesting thing about consciousness, to me, is not why or how you are conscious (for hell, I’ll never know for sure that you actually are anyway), but why or how I am conscious (something I’m quite sure of). It is of much more interest to me to know why I have any phenomenological vantage point from which to observe the universe at all than to know why it looks like you do.
My point is that obviously your individual phenomenology is not required for a consciousness, but it certainly is required for your consciousness. Or, more directly, an individual phenomenology is a prerequisite for (or possibly identical to) consciousness. And as I’ve stated earlier, your individual phenomenology is probably more than 90% identical to mine, so the distinction is a fine one. But it is there.
Please explain how any of the things you’ve said contradict my assertion that the first clause of the solipsistic axiom (only my consciousness is verifiable to myself) is an observable fact. I don’t see anything particularly rigorous about ignoring my clear and repeated statements that the second part of the solipsistic axiom need not be a conclusion of the first.
“Just do it,” eh? Again, I’m not catching the “penchant for rigor” here. I’d like to see you at least attempt to give some arguments for why you disagree with me that the two are identical concepts (for clearly this is a major difference in our thinking).
Oh, don’t be a sourpuss. You said that if consciousness were an illusion, there’d be no philosophical “meaning” to be found in it. I disagree—anyone with a philosophical inclination would next attempt to discover why we are so bound to the illusion. Kind of a reverse allegory of the cave, you know?
tourbot:
I believe Pinker does posit that a certain amount of syntax is hard-wired. But your general point is correct—there’s no reason, say, for one language to have inflections and another not to. But to claim that words, sounds and higher-level syntax are “made up from whole cloth” seems questionable. I don’t see any reason to suppose that we don’t have some sort of random-number generator working within us, churning out sounds for things should we need them. It’s a short step from sounds to words.
Now, the reason why one language would develop inflections and another not is difficult to explain, I’ll admit. I can’t deliver a logical proof on the spot for it. But it doesn’t seem insurmountable, somehow. We are programmed to be able to speak a language with or without inflections—couldn’t syntax, perhaps, just amount to a roll of the dice inside the ol’ Language Center? If the “inventor” of your language happened to have the inflection-using part of his brain activated when he started building the language, you’d end up with an inflected language. Am I totally off-base in this?
I agree in full, except that I’d think the “teaching to a third computer” wouldn’t be necessary to determining “self-modifying,” for there’s nothing self-modifying about being forced to speak the language of your parents and your society (which is how we are taught). Even the modification of languages is a utility-based process (e.g. English dropping its inflections to ease communication with foreigners), which doesn’t have much to do with self-modification in the “no-external-stimulus” sense—only at the origin of language do I find a real problem of “where did it come from.” Yet “unadulterated free will” seems strangely unsatisfying…
I have repeatedly stated that the position that consciousness is not open to empiricial study is grounded in epistemological solipsism. I say so again now. How you imagine that I might be unable to “fess up to” something I have stated many times is a mystery.
It works better for me if you understand epistemology.
Yes, nothing external to perception is known except in a secondary sense. This is a problem only if you restrict your epistemology to primary perception. That position is known as epistemological solipsism. Other epistemologies accept the validity of indirect knowledge. You are arguing that solipsism is the proper epistemology to apply to the question of consciousness (though you seem to get angry when I note this).
I disagree.
Claiming that I am “guilty of solipsism” simply because I understand the precepts upon which it rests is absurd.
Well, to be strictly precise I was arguing that it might be possible. In other words, I see no justification for ruling it out.
Well, in the part of the paragraph which you chose not to quote, I said: Your case has been predicated upon a model of consciousness that rejects “zombies”. You made an analogy to computers apparently to argue that consciousness is “something else”.
In other words, if you argue that consciousness must have an extra-material element to prevent us from being “zombies”, that seems to imply that if we were “zombies” we would not really be conscious. ‘Saying “well, this neuron received an electrical charge” tells us absolutely NOTHING about why we feel things, why we aren’t just consciousness-less robots (‘zombies’ is Chalmers’ pet term) responding to these electrical charges without giving them a second (or even a first) thought.’ In other words, response to stimulus without intervention of free will is excluded from our definition of consciousness.
I say again, I feel no obligation to defend your axioms.
Incorrect.
You clearly argued that computers recording data are not conscious minds. If you will not “fess up” to that, then this conversation is worse than frustrating. It is dishonest.
You dropped this point after I said, “That would be the word relying. Again, empiricism is a decision making tool employed by conscious beings.” If you had meant to keep contesting the point, you should have said something. Calling it groundless now is hardly encouraging to further discourse.
“Litmus test” implies a level of certainty which I do not feel is justified. You may recall quoting me as saying Thus, if all of the observed effects point to consciousness, then I feel secure in saying, “it’s probably conscious”. I do not drop liquid on litmus paper and say “it probably has pH above 7”. You chose not to quote my next sentence, which says: **If we reach a point of understanding where the effects are more specifically and finely understood, then I will even feel comfortable dropping the “probably”.** I would have thought that clarified my position. Apparently not.
I said: **A working definition is the best we can hope for so far, since we know it only by its effects. These include: the ability to perceive, awareness of self, the ability to determine actions, awareness of the passage of time. I make no claims that this list is exhaustive.** That does not imply either that the traits are only observed in humans or that it applies to all possible consciousnesses.
*Computer acting strangely -- posting this before it gets lost in a crash. Frankly, I'm not certain whether I will finish the rest or not.*
Grim_Beaker
I’ve only had time to scan the article. More when I’ve digested, but this passage seems pertinent.