The problem with Gnosticism is: we all suffer from it

I gave a list. Complexity was not on it. Stand by what you wish.

It possesses, to the best of my knowledge, no sense of self, no ability to determine actions, and no awareness of the passage of time.

If that proves to be true, and if the silicon brain exhibits all observable effects of consciousness, it implies that a certain arrangement of silicon is sufficient for consciousness. Whether the requisite element is complexity, computational ability, characteristics of inputs, mechanisms for interpretation (programming/perceiving), et al would be interesting questions to explore.

It would also have implications, in the general sense, for the possible nature of other material consciousness. It would have no implications for the necessary nature of other consciousnesses.

And it certainly says nothing about a “flash point” at which a computer would suddenly become conscious.

I dissent. I find 1 to be overly simplistic, as noted above. I find 2 to be absurd. I find your assertion that these represent the universe of possible options to be unjustifiably reductionist.

Except in the realm of examining consciousness, where you have asserted it repeatedly.

You have the case reversed, because I am not the one arguing for a special distinction between some perceptions that we know only through the veil of phenomenology (“things we all agree on”) and others (“characteristics of individual perceptions other than those we all agree on”). You are the one drawing that line.

You have misunderstood, again, the sentence: disqualify everything else perceived or measured by mind from the milieu of empiricism. It does not specify only those things perceived or measured which we can “all agree on”. “All agree on” is not an epistemological category that I am comfortable using as a litmus test.

Again, you inspire me to repetition. A working definition is the best we can hope for so far, since we know it only by its effects. These include: the ability to perceive, awareness of self, the ability to determine actions, and awareness of the passage of time. I make no claims that this list is exhaustive.

Where you find reluctance in that statement is not obvious to me. I am, of course, reluctant to declare that I have the answer. Declarations of certainty in the absence of evidence are an aspect of ignorance.

What I lack is patience for this type of conversational tactic. If you find lack of curiosity about consciousness in my posts then I am done talking with you.

It turns out I wasn’t cooler after all.

The pace of work is picking up, so I will probably have to drop this for at least a week (giving ample time for cooling off on both sides).

But I’ve got a little bit o’ time:

And since I have been arguing that consciousness is not open to empirical study, you have been accusing me of epistemological solipsism. This is a simple fact, and it’s ridiculous that we’re even arguing about it.

Considering that you reject the idea that it is *im*possible as solipsism, I hardly see what options you’ve left for yourself. If it can’t be impossible, it must be possible, right?

                          quote:

                          How did free will get involved in this? Are you saying that we have free will?

Hold on a minute–free will? Again, I never said that the “extra-material element” is equal to “free will.” What makes you equate them?

I will admit that, and I apologize for the confusion. My intent was this: computers are “not conscious” in the sense that they do not appear to show the signs of consciousness that we have (this kind of goes along with your not-quite-litmus test). At the same time, they may well be conscious. So your claim that empiricism requires a (human-like?) conscious mind made me think of two attacks simultaneously—for one, I’m not sure you’re right that empiricism DOES require a conscious mind, and two, I’m not sure that computers might not possess a certain consciousness (though admittedly not the type we have that would lead you to say consciousness is required for empiricism). I ended up pursuing the latter point when I should have spent more time on the former (as you point out). Does that make any sense?

                          quote:

                          The basic point is the same: a certain list of traits observed in human
                          consciousness is used as a litmus test for all possible consciousnesses

You gave a list of what appear to be observable effects of consciousness that may be sufficient to determine whether something is conscious. You did not provide a list of causes of consciousness that may be sufficient to create a consciousness. Complexity, in this case, would have no place on a list of effects, but it would certainly have a place on a list of causes.

A list of observable effects, only presumably due to consciousness, which you admit is incomplete and should not be used as a litmus test. So at best you can declare your computer probably not conscious (at least not in as sophisticated a manner as we are).

Again, we’ve reached an impasse of definitions. I don’t see how computational ability, characteristics of inputs, mechanisms for interpretation, et al are not, ultimately, functions of the organizational complexity of neurons/silicon chips. Could you explain?

When?

Okay, okay, I take it back. You just seem so non-committed to any line of thought (other than declaring Chalmers a solipsist) that it is difficult for me to get much “traction” in this argument. If you advanced a hypothetical position, you realize, I would not automatically assume that you are proclaiming yourself the Knower of Answers. I hope you also realize that I am not doing that either.

Sorry about the italics, everybody!

Spiritus: Well, I’ve been thinking more about why I fumbled on the computer-as-conscious-being-capable-of-empiricism question.

Here’s part of what threw me off, I think: I believe that animals of all levels of development (perhaps plants as well) possess corresponding levels of phenomenology (which, as I’ve mentioned, is how I’m defining the “simplest form” of consciousness for the purposes of the Chalmers debate). This belief is informed by my rejection of the “flashpoint” model of consciousness, which would seem ridiculously anthropocentric if used in this context.

Now I don’t know if you agree with me on this, Spiritus, but I assumed you did, and I quickly lost myself in the argument because of the assumption.

How did this work? Well, you appeared to have presented two possible and mutually exclusive paradigms for a conscious being to observe the universe: solipsism and empiricism. This is what it looks like to me, anyway…

So, if animals have phenomenology and are thus conscious (ie not Chalmers zombies, and again, my assumption, not yours), do they practice empiricism or solipsism?

Wouldn’t you say empiricism? If we consider only the most “intelligent” animals (ie dolphins and chimps), we certainly see evidence of trial-and-error methodology, which strongly implies an empirical worldview. One could even make the argument, I believe, that solipsism is impossible until a conscious being reaches a certain “level” of consciousness, at which point it becomes possible to conceive of the notion that the outside world is not, in fact, a “sure thing.” Which would seem to put all the “dumber” animals firmly in the empirical category as well.

So when you said that a computer cannot practice empiricism because it is not conscious, all these assumptions kind of tumbled over each other.

I agree with the idea that the computer is not conscious in the way we are, which is how I interpreted your statement. If you hold, however, that being conscious the way we are is a prerequisite for empirical thought, I’ve got to take issue with you (see above re: animals and consider it in the context of the “flashpoint” option I had previously claimed you to be faced with).

On the other hand, I believe it possible that the computer is conscious in a way much simpler than we are (only the most rudimentary phenomenology, in other words, and nothing more). This is informed by my belief that a silicon neuron would function identically to a real neuron, which is itself informed by my assumption that the electrical impulses are the important part, not the medium.

Given those beliefs, and my previously stated feelings about phenomenology/consciousness (hand in hand with empiricism) being present in even the simplest animals, it seemed more worthwhile and ultimately more consistent for me to argue that computers are conscious (and therefore capable of practicing empiricism) than to argue that non-conscious beings are capable of empiricism. So I switched arguments. D’oh…

Anyhow, I’m sorry, Spiritus, for doing that. So how about we start over by having you dissect all the things I’ve said in this thread? :slight_smile: