We are on such vastly different wavelengths that it probably won’t be productive for me to respond to your other points, but I felt compelled to at least address this statement, which I find particularly odd. You suggest that the onus is somehow on us to justify a position supported by all scientific evidence, and contradicted by none? I cannot prove the position is correct, but I sure as hell can justify it! Can you point me to a research article in which a dissected piece of a human is found to contain material whose behavior is inconsistent with the physical laws that govern all other matter, whose characteristics have been studied and confirmed to incredible precision?
If both of us were actually nothing more than computers produced by evolution, then how would it be possible for us to be on “vastly different wavelengths”? What can you possibly mean by “wavelengths” in that statement, other than using the term metaphorically to imply that we’ve chosen different philosophical presuppositions? And wouldn’t that undermine your entire argument in this thread? No two computers have ever chosen different philosophical presuppositions.
It seems to me that by phrasing your position in such a ridiculous manner, you’re in effect admitting that it can’t be upheld in a reasonable manner. A human life is a process, and as with any process you can’t learn anything by dissecting a small part of the material that was involved after the process has ceased. As you say, the behavior of particles can be predicted to incredible precision. If human beings were nothing but collections of particles acting under physical laws, then you ought to be able to predict human behavior with the same precision. Can you predict what I’m going to write in my next post, what Barack Obama will say in his next press conference, or what the actions of millions of investors will cause the stock market to do tomorrow? If not, then why should I believe your claim that “humans are, like the rest of the universe, nothing more, ultimately, than a collection of particles moving dumbly according to the same basic laws, blind to any greater process of which they are a part of”? What is the evidence that you put forward to justify this claim?
And, as a side note, if you believe that humans and computers are the same, why don’t you just program your computer to print out an endless series of messages agreeing with you, as opposed to debating with human beings in this forum? Aren’t they both, according to you, the same?
Consciousness is awareness is the mind. (What is mind? No matter. What is matter? Never mind!)
But seriously, your position is logically impossible. Say you were just a computer being fed strings of data that granted you the wherewithal to have this conversation in the first place. That is not impossible. What is impossible is that you could be convinced that you are conscious of it, or aware, when in fact you are just being deluded by false data. Why? Because there is no you to even have a concept of yourself to begin with, let alone convince yourself you are something that you are not, because you are not there to begin with. You are blackness, void, a computer, whatever you want to call it, but you are not an aware, sentient being. Therefore you may be able to state that you are conscious or, in your case, not conscious, but it is impossible to convince nothing that it is something.
I hope that makes sense by some stretch of the imagination.
Sure, but you are misrepresenting my position. It is surely possible for an “unconscious” computer (note the quotes – let’s not start wrongly calling me out again for tautological use of a term I am rejecting) to have written everything I have written in this thread. Do you disagree?
I’m cool with that.
I agree. But like I said, if the computer is “unconscious” or unaware, then consciousness may not exist, but who was there to build it? I’m not saying verbal and written communication were results of design, I’m saying that they were a result of intelligent manipulation of our senses and abilities to speak and make marks on things. Such manipulation, or even senses or abilities, wouldn’t arise or be of any use without awareness (which is how I have simply defined consciousness. Are you aware of that? ;))
I am making a distinction between “awareness” in an informational sense, and “awareness” in any sense in which more meaning is attributed to it (as when phrases like “subjective experience” are used).
Correct…
…incorrect.
Does that preclude the possibility?
Yep, just give me a computer far more powerful than any currently available and the tools to quickly and systematically record the structure of the brain on a microscopic level (we actually aren’t that far away from this being possible IMHO).
This is, believe it or not ;), a good question, which I addressed in the last paragraph of my OP.
You’re overestimating humanity’s technological progress like whoa. We can’t even predict “easy” stuff like weather, earthquakes or disease. The brain is one of, if not the, most complicated systems we know about.
The alternative – that the brain is not made up of particles and that it’s acting under non-physical laws – is not exactly favorable to empirical inquiry or logic. I’m not exactly sure how one would tell the difference anyway.
No, I disagree that an unconscious computer could have written what you wrote.
Unless you mean that in the trivial sense, because the computer on my desktop “wrote” all those messages, and the computer on my desktop isn’t conscious.
But I deny that it would be possible to create a computer system that could generate and sensibly respond to natural English sentences about a complex and wide-ranging and unbounded topic like this one, and then claim that computer system wasn’t conscious.
That’s a variation on the notion of philosophical zombies–people that interact in every way like conscious people, but who aren’t really conscious. Except the notion is impossible, because in order to act as if you were conscious, you’d have to actually be conscious. Or to put it another way, there’s no difference between acting as if you were conscious, and actually being conscious. They’re the exact same thing.
In order for a system to act as if it were conscious, it would have to have a memory, be capable of learning, be aware of its surroundings, be aware of its internal states, be capable of modifying its internal states, be capable of modeling the internal states of other systems, be capable of modeling its own internal state based on counter-factual hypotheticals (“If I get that promotion to assistant manager, then I’ll be happy!”), and so on.
Such a system wouldn’t be simulating consciousness, it would be conscious, because that’s what consciousness IS.
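The list of capabilities above can be made concrete with a toy sketch. To be clear, this is purely illustrative, assuming nothing about how a real system would implement any of these; every class and method name here is an invention for this example, not a real API:

```python
# Toy sketch of a system with the listed capabilities: memory, learning,
# awareness of surroundings and internal state, modeling other systems,
# and counter-factual self-modeling. All names are illustrative inventions.

class ToyAgent:
    def __init__(self):
        self.memory = []          # a memory
        self.mood = "neutral"     # an internal state

    def learn(self, fact):
        """Capable of learning: store an observation."""
        self.memory.append(fact)

    def sense(self, surroundings):
        """Aware of its surroundings: record what it perceives."""
        self.learn(("saw", surroundings))

    def introspect(self):
        """Aware of its internal states."""
        return {"mood": self.mood, "memories": len(self.memory)}

    def set_mood(self, mood):
        """Capable of modifying its internal states."""
        self.mood = mood

    def model_other(self, other):
        """Capable of modeling the internal states of another system."""
        return other.introspect()

    def imagine(self, event, resulting_mood):
        """Counter-factual self-model: 'If I get that promotion, I'll be happy!'"""
        hypothetical = ToyAgent()
        hypothetical.memory = list(self.memory) + [("imagined", event)]
        hypothetical.mood = resulting_mood
        return hypothetical.introspect()

agent = ToyAgent()
agent.sense("a pen on the desk")
print(agent.imagine("promotion to assistant manager", "happy"))
```

Whether ticking each of these boxes adds up to consciousness is, of course, exactly the point under dispute in this thread; the sketch only shows the criteria are individually mechanizable.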
And if that’s all there is, fine. No problem. We can look at it and measure it. But I think most people associate consciousness with a little something extra. They want a “there” or “what it’s like”-ness in the machine. I think what iamnotbatman is saying (correct me if I’m wrong) is that the robot wouldn’t have a “there” hanging in the void, or its own internal world. And we don’t either – we just think we do.
Just to clarify – you mean as opposed to a “conscious computer”? Or do you mean you disagree that any deterministic machine could have written what I wrote?
Just to allay your concern, no, that’s not what I mean ;)
It sounds like you are defining consciousness to be that which is present when one “acts conscious”? I guess it’s the other way around though, because below you define “acting conscious” in terms of a definition of consciousness. Which is a little confusing (genuinely – I’m not being sarcastic).
I agree that however one defines consciousness, it must be no different than any sufficiently complex attempt to simulate it. If you stick to defining consciousness as nothing more than what you describe above, I agree with that definition.
But the machine WOULD have its own internal world, just like we do. Because it would have to think about things, imagine things, hypothesize things, remember things, learn about things, forget about things, think about what it’s thinking, think about what others are thinking, and so on.
That’s what an internal world is, and that’s what the machine would have to have to be able to carry on a conversation like a human being.
Of course I don’t believe there’s a non-material component to human consciousness, rather I contend that a conscious machine (such as the human brain) can be constructed of ordinary matter arranged in ordinary ways and obeying the ordinary (hah!) laws of physics.
If you can look at an object (like the pen on my desk), and close your eyes and remember what the object looks like, and imagine turning it around in your mind, and imagine touching it and imagine what it would feel like, and imagine the sound it would make if you click the clicker or tap it on the desk or scratch it against paper, then you’ve got an internal world, and you’re conscious.
If you can’t do that–if the pen is just an object and you can’t create an abstract representation of that pen in your mind–then you’re not conscious. You might be able to interact with that pen in certain ways, like how animals can react to various stimuli (like the frog responds to insects). But you wouldn’t be conscious. And we can tell the difference between a system that is conscious like a human being, and a system that isn’t, like a frog, even though both are able to catch flies. And the conscious human system isn’t anything more than a bunch of non-conscious subsystems exactly analogous to the frog’s brain, wired together in a self-referential way.
What about a computer program that can take a picture of the pen on your desk, use object recognition to create an abstract representation of it, categorize its color and topology, and compare it to representations of other objects in its memory? It could extrapolate a 3-dimensional representation, apply rotation transforms to it, and record them for future playback; record the sound of the pen tapping along with its association to the abstract representation; and analyze and compare its wave-form to those in its memory banks. Provide this program with an algorithm that periodically re-reads the data, re-analyzes it, and re-records it, perhaps using the previous abstract representation to re-create an approximation of the original picture and then using that picture as input to re-create another abstract representation, and so on. The program could also include an algorithm that periodically takes a snapshot of its own internal state and creates an abstract representation of its own algorithmic structures, file structures and content. Would this computer program be ‘conscious’?
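The program described above can be sketched in miniature. Real object recognition and wave-form analysis are far harder than this; every function here is a crude invented stand-in, meant only to show the perceive/transform/self-snapshot loop:

```python
# Miniature stand-in for the described program: perceive an object, build an
# abstract representation, transform it, and snapshot its own internal state.
# All functions are toy placeholders, not real vision or audio code.

def recognize(pixels):
    """Stand-in for object recognition: reduce an 'image' to features."""
    return {"color": max(set(pixels), key=pixels.count), "size": len(pixels)}

def rotate(representation, degrees):
    """Stand-in for applying a rotation transform to a 3-D model."""
    rep = dict(representation)
    rep["orientation"] = degrees % 360
    return rep

class SelfModelingProgram:
    def __init__(self):
        self.memory = {}

    def perceive(self, name, pixels):
        self.memory[name] = recognize(pixels)

    def reanalyze(self, name, degrees):
        # Periodically re-read stored data, transform it, and re-record it.
        self.memory[name] = rotate(self.memory[name], degrees)

    def snapshot(self):
        # Abstract representation of its own internal state.
        return {"objects": sorted(self.memory), "structures": len(self.memory)}

program = SelfModelingProgram()
program.perceive("pen", ["blue", "blue", "silver"])
program.reanalyze("pen", 450)
print(program.snapshot())
```

The interesting part is the last method: the program’s own state becomes one more object it represents, which is the self-referential wiring the thread keeps circling around.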
I think that I don’t like that wording. There is something in the formulation “consumer of signs” that implies more passivity on the part of the consumer and more active meaning inherently embedded in the sign than I’d be inclined to ratify.
To be conscious is to be an entity to whom a sign can have meaning. To be an interpreter of signs, perhaps. Signs cannot possess meaning unto themselves. Meaning inheres in the interaction between sign and consciousness.
It works for me. Consciousness is just an arbitrarily complex level of self-reporting on internal states and modelling of that and others’ internal states. I’d add something about anticipating future states as well, so a predictive component (not necessarily accurate) is, IMO, essential to the definition.
Add in some arbitrary level of unavoidable error in memory writing; some weirdness in processing of sense inputs before they are written to memory the “conscious” systems can access (filling in “blind spots”, independent inaccessible pattern-recognition algorithms, arbitrarily linking previously known olfactory input patterns to associated event memory spaces with a replay flag, etc); some (large, even a complete copy) sectors of memory and computation that aren’t accessible to the “conscious” part but run the same algorithms and can (randomly) write to it (not an abstract representation, but another whole system – dreams, basically); and add the ability to rewrite most of its own algorithms (learning) too, but not all of them (instinct, reflex). Now make, say, 1000 copies of the whole system (but give the gestalt the ability to create more/delete some (but not all) on the fly), link them all to the *same one set* of sense/physical inputs and outputs, and memory space, and have them compete for access to both, as well as the ability to communicate with each other. The *gestalt* will be conscious at some level. The individual “programs” (systems, really), not so much.
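The competition structure in that last step can be sketched very minimally. This is my own illustrative reduction, with invented names and a handful of copies instead of 1000; it shows only the “many identical subsystems competing for one shared input/output and memory space” skeleton, nothing about the error, dreaming, or learning machinery:

```python
import random

# Minimal sketch of the gestalt: many identical subsystems compete for
# access to one shared sensory channel and one shared memory space.
# All names are invented for illustration.

random.seed(0)

class Subsystem:
    def __init__(self, ident):
        self.ident = ident

    def bid(self, stimulus):
        """How strongly this copy wants control of the shared outputs."""
        return random.random()

class Gestalt:
    def __init__(self, n_copies):
        self.copies = [Subsystem(i) for i in range(n_copies)]
        self.shared_memory = []   # one memory space for all copies

    def step(self, stimulus):
        # Every copy sees the same input; the highest bidder wins the
        # shared output channel and writes to the shared memory.
        winner = max(self.copies, key=lambda s: s.bid(stimulus))
        self.shared_memory.append((winner.ident, stimulus))
        return winner.ident

g = Gestalt(10)   # 10 copies instead of 1000, for brevity
for stimulus in ["light", "sound", "smell"]:
    g.step(stimulus)
print(len(g.shared_memory))
```

The shared memory ends up recording a single serial stream even though no individual copy produced it, which is the sense in which the gestalt, not any one program, would be the candidate for consciousness.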
We are not at this level of computation, AFAIK. I’m not sure that exact set up is even being attempted by AI researchers.
Note that I was merely providing an example that met the criteria given by Lemur866. The point was that it was something currently achievable by AI researchers.
But a machine can coldly power through this with symbol manipulation and zero understanding (this is where Searle comes in). I don’t see any reason why any of these processes requires subjective experience. Why does an entity need to experience anything to recognize self from non-self? Or plan ahead?
It just seems like the next juicy target for the wrecking ball of materialistic reductionism. First gods, then souls, then the will, and then next up is the experience itself. We already know the self is often grossly mistaken about the hows, the whys, or the origins of its decisions. The next logical (although deeply counter-intuitive) step is to show that the self is grossly mistaken about what it’s like to be a self, too. Then we can all be happy p-zombies.
Frogs seem pretty conscious to me. Not as conscious as we are, but it’s a continuum. I don’t know if they think they have an experiencing self, but it’s hard to tell if anything does. Stupid problem of other minds.
That’s where I’m disagreeing.
For instance, human-style pattern recognition has proven really, really difficult to do algorithmically - so far. Things like reification, multistable figures and modal completion are all currently lacking in artificial perception. So no, to my mind, the type of perception a computer is currently capable of is *nothing* like that a human is capable of. This isn’t special pleading, it’s experimental observation of the way humans perceive the world. Until we teach computers how to do these sorts of things, their internal models (abstractions, if you will) will not be analogous to those of people. They just don’t see the same things we do.
I’m also unaware of AI research that creates an unconscious for the AI, but I’m not widely read in the field. But I do think this is essential for the other things to happen - a lot of the things that our consciousness is conscious of are the products of automatic behaviours like pareidolia and reification.