i’d go a step further and say: this is all that the particles humans are made of do.
you seem skeptical of any materialist claim. indeed it is a tricky thing to accept that there may in fact be no “me”. at the same time, there is no way to prove that “i” do, in fact, exist.
as for the copies being like identical twins, i think the two pairs are so unlike each other that the comparison is uninteresting. twins are born at the same time with no memories and no concept of self. the copies have the same memories and the same concept of self. they both think they are the person who existed before the surgery. which one is correct? which one would you be? is it safe to assume that you still exist after the surgery? indeed, is it a safe assumption that you exist at all?
Turing’s Machine, link here, breaks human consciousness into several different parts, each of which is comparable to a part of a machine. One thing about real life that really supports a lot of Turing’s ideas is that we truly aren’t aware of any of our deeper mental processes. For example, when we read a word on a page, we almost immediately come up with a meaning for it within the context of the sentence. Our brain has to see the word, recognize it as the word it is, pull up the definitions of that word, then figure out which definition applies in that particular case. All of that is done without any of our knowing, ample evidence that we don’t understand and can’t control any of our lower mental processes. Our brain takes in an input and returns to us an output. I don’t see why you would somehow differentiate between a human doing this and a machine doing this. For a machine to perfectly emulate human consciousness, it would have to be able to convince one or many third parties that it was a human over a long period of time. If something were able to do that, why would it be any different from you or me? The only reasoning I can think of is that because it’s “artificial”, it must not truly be “human” and so doesn’t deserve the same respect.
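The see-the-word / pull-the-definitions / pick-the-one-that-fits pipeline described above can be sketched as a toy program. The tiny lexicon and context keywords here are invented purely for illustration; a real brain (or a real word-sense disambiguation system) is vastly more complex, but the input-in, output-out shape is the same:

```python
# Toy sketch of the unconscious word-lookup process: retrieve a word's
# candidate meanings, then pick the sense that best fits the sentence.
# The lexicon is invented for illustration only.

LEXICON = {
    "bank": [
        ("river edge", {"river", "water", "shore"}),
        ("financial institution", {"money", "loan", "deposit"}),
    ],
}

def disambiguate(word, sentence):
    """Return the sense of `word` whose context keywords best overlap the sentence."""
    context = set(sentence.lower().split())
    senses = LEXICON.get(word, [])
    if not senses:
        return None
    # Score each sense by keyword overlap -- none of these intermediate
    # steps is visible to the "user" of the result.
    return max(senses, key=lambda s: len(s[1] & context))[0]

print(disambiguate("bank", "I need to deposit money at the bank"))
# -> financial institution
print(disambiguate("bank", "We sat on the bank of the river"))
# -> river edge
```

The caller only ever sees the final answer, which is the point of the analogy: the lookup and the scoring happen entirely below the level of the output.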
If it’s simulating personality to the extent that you can’t tell the difference, then you should feel horrible about mistreating it; it would have to be as human as you. If I tie you to a chair and threaten to kill you, and you begin crying, how can I really tell that you’re not just faking it? How do I know you’re not just pretending to be scared of death? Truth is, I don’t, but if you exhibit it to such an extent that you can convince me (while I’m clear of mind), then I must accept that you probably are scared for your life and crying for that reason.
I agree, we are not aware of the functions of our own brains, and as such consciousness or awareness does not influence brain activity. If it doesn’t influence brain activity, does it do anything besides observing and possibly choosing what to observe?
And I think you’re confusing consciousness with what it is aware of.
If I cannot know with certainty whether another human is conscious, meaning all I can do is guess, then I don’t see how I can know if a machine is conscious.
TVAA wrote:
For all we know, consciousness is not energy; after all, it cannot be observed even as a cause of an effect. And that which does not exist cannot be said to be energy.
What I mean to say is that we can approximate the functioning of our brain with a basic computer system, and from our own experience, that is more or less how it works. Our minds are, in a most basic sense, organic computers or organic input/output devices. Our brains take in input through eyes, ears, mouth, nose, and skin, and return it to our foremost consciousness as awareness of our environment. One big thing about Turing’s Machine is that you must recognize that you are not your brain, and your brain is not you. Your brain would be the executive and the storage component put together; it holds information and determines what is to be done with that information, and the whole process altogether is our conscious behavior, our ‘sentience’.
Conscious != sentient; if you mean you cannot tell if another human is sentient, then that’s absolutely true. The only way to be absolutely sure that someone else were sentient would be to experience their thoughts and experiences; only then would you be able to tell with 100% certainty that they were sentient and self-aware. As it is, we are never able to have that 100% certainty, so we simply go on what observations we make. If you want to try to refute that, you can, but I don’t think it’d be easy. So, if we can’t even be absolutely sure that everyone around us is sentient, the only thing we can do is make educated guesses about the sentience of the people around us. Given that that is the case, I don’t see why a machine capable of fully emulating a sentient human would be considered anything less than human.
I’m talking more about the idea of a soul than of consciousness, but even with consciousness, can you point out to me where in our brain it exists? What happens to this part of the brain when we go to sleep? What happens to it when we are knocked out by a sharp blow to the head? What happens when we die? Unless we can somehow point out exactly where in our brain consciousness exists, we can’t be absolutely sure that it exists solely as a physical component of the brain. Thinking about it a little more, I suppose you could say that all the components in our brain + body functioning together are our consciousness, and that actually makes sense. So yeah, I was more talking about our soul or spirit being a form of energy, and that was proposed simply as something to think about, not as an argument in and of itself.
Yes, consciousness is not self-conscious. (Dualistically speaking)
If a machine could fully emulate a sentient human it wouldn’t be a machine; it would cease to be a machine, since there remains nothing about it that distinguishes it as a machine. If you emulate something fully or perfectly you are no longer “emulating” you are that thing. For all we know there are machines among us who have become human.
I don’t think a machine can have all the characteristics of what it is to be human (whatever they are) but not look human. If you don’t look human how can you act human or think like a human?
If you never have the urge to urinate then you are lacking one characteristic of what it feels like to be human, indeed to be an animal. If you, as a machine, do not know what it ‘feels’ like to have a human body to take care of and to feed, and do not know what it feels like to breathe, then you cannot think like a human, because humans cannot “think” without “feeling”; human thinking is grounded, contextualized, conditioned, and informed in and by a feeling body.
Some so-called eastern philosophies (Buddhists, nondualists, Vedantists, etc.) equate soul and spirit with consciousness.
I asked, “Isn’t the validity of dualism exactly the whole crux of this debate?”
And specifically is consciousness the sum of behaviours and/or the sum of internal physical states, or is consciousness dependent upon some ethereal other thing, the “soul”, or whatever. Hence, this debate is indeed about the validity of dualism.
When Mangetout suggested that “It may well be argued that this is all that humans do [i.e. they don’t actually “think” at all; like Searle’s Chinese Room]”, you argued from an anti-dualist perspective.
I would ask anyone who claims that a machine “merely” programmed to respond deterministically to inputs cannot be conscious to provide a cite that suggests that we humans are anything more than such a machine.
Agreed. Just a thought… so would it be helpful to think about consciousness/sentience as a sort of simulation program? As I said earlier, minds don’t just arrive, they develop. Babies imitate long before they could know what it is they are imitating. Children emulate and pretend…
perhaps we pretend to be sentient, and in doing so, become so.
Both are correct in thinking they are the person who existed before surgery. You can’t ask which one would “you” be because “you” now have two bodies. Given that memories are continually edited in the light of new experiences, it would be interesting to interview them a couple of years after the surgery and see how far their memories of pre-surgery life coincided.
Babies cannot imitate consciousness, because it cannot be observed. Nor can they emulate feelings or thinking for the same reasons.
They mirror and ape behavior.
Good point: **Babies imitate long before they could know what it is they are imitating.**
They also think and feel before they know what thinking and feeling are. They probably exist before they know they exist (generally speaking, since we (I, you) don’t know that we even exist, we being “self”).
We have consciousness (or consciousness has us) yet we don’t know what consciousness is.
So babies are not unique in their ‘self’ ignorance.
How can we build a machine that knows what it’s like to be human and knows what it’s like to think as a human, with human-like intelligence, when we are so ‘self’-ignorant? You can’t make a copy or replica of something unless you know what it is you are copying.
I agree. Though as a contemporary philosopher said, “Materialism/dualism has to be right because the alternative (idealism, mentalism, nondualism) is unacceptable.” … But if it’s true, it’s true.
And yes, agreed, we cannot know for sure whether it would be conscious, because we don’t even know for sure if other humans are conscious; it cannot be proven or observed.
But what would it be conscious of? Can it be programmed to “feel”, to have bodily sensations, to know what it feels like to be hungry? Can it be programmed to feel pain? If not, how human can it be?
In part, to be human, (to think as humans do) is to be conscious of what humans are conscious of.
Hmmm. I don’t know about that, it might make it easier to know what something is, but I can conceive of copying a thing widget by widget, gizmo by gizmo, link by link, piece by piece and ending up with an accurate facsimile.
I don’t understand this at all, materialism is the antithesis of dualism, is it not?
Was it ever our “intention” to make an Artificial Human? Surely we just care whether something can display consciousness/sentience. Are there some reasons why only a humanoid could experience these things?
Yeah, I don’t understand the want to make this new sentience into some sort of exact copy of a human. When I said that a sentience would have to convince us it was a human I did not mean to say that it had to act exactly like a human. I meant to say that it had to have the mental ability to converse and act like a sentient being. I don’t believe something has to have the same emotions, body parts, and nervous system as us to be able to be sentient. I guess my last post was a bit carefree in the use of the word human.
When NoCoolUserName observed that I’d been reading too much sci-fi or watching too many “Terminator” movies, he (she?) may not have been as far from the truth as I would have liked to think. I was moved by fictional creations (H. A. L. and Lt. Commander Data in particular) to speculate whether such “beings” were possible in the context of foreseeable technology, and whether creating them would be a good idea or not. Further, if they were possible, I wondered what our responsibility to them as independent “beings” would be, and what duty, if any, they would have to us.
Some of you have touched on those specific questions. But the discussion has grown into something much more interesting than I ever anticipated. I was proposing a question on “sentient” beings without defining sentience. That was my mistake. Also in the back of my mind was an ultimate comparison between human/sentient machine and god/human. I mistakenly thought that the latter question would naturally flow from the former.
Suspending disbelief, as I suggested earlier, can we re-state the question in more specific terms? If H. A. L. was able to recognize himself (rather than itself, since the device was given a masculine voice), had he not the right to defend himself from the perceived threat posed by Dave? And should he/it have been developed to that level in the first place? Further, did Dave have the absolute right to “kill” H. A. L. because he was “just a machine”?
The character of Commander Data is even more interesting, because his self-awareness is so great that he knows he is not human, but strives to “become” human. This ultimate servo-mechanism is conceived as being “independent” yet “programmed” not to harm humans: a dichotomy. The response to Data is very complex and illustrates part of my original question. The humans surrounding him in a military context must regard him as an equal, or in some cases even a superior, but are aware that he can be “turned off.” If it were possible, should such a “being” be created, and if it is, would its “life” be as sacrosanct as our own?
My compliments to all the posters in this thread so far. This has been very stimulating, and beyond my poor knowledge of philosophy. At 61, I am still striving to become less ignorant. I appreciate the help.
You agree that we do not know what human consciousness and thinking are or how they come about, and therefore cannot copy them as such, and yet you say, “but I can conceive of copying a thing…”, and that “thing” is human consciousness and/or thinking.
We end up with an accurate facsimile of what?
Or put it this way;
Can I conceive of copying a “thing” piece by piece if I don’t know what that thing is?
If I don’t know what it is I am not copying it.
Yes there are many dualisms. I was thinking of the dualism of the so called material/physical world, and the mental one.
If we are trying to mimic human intelligence I don’t see how it can exist in a vacuum and be separate from and blind to what it is like to be human.
Human intelligence is not distinct from being an animal.
I also do not believe/think that something has to have the same emotions, body parts etc. as us to be sentient.
But if a “copy” doesn’t have the same body, nervous system etc., is not aware of what humans are aware of in terms of being aware of their human embodiedness then it is not a copy of a human being.
You appear to be considering something like a dog that has the intellectual capacity of a human.
Imagine the ideas that might spring forth that no human ever had.
So would it have a ‘self’ awareness? Does it identify itself as being what it is aware of, as the thoughts it observes? Does it think?
If so, what is “thinking”?
Does thinking produce thoughts? If humans are not aware of pre-thought brain activity, of what gives rise to a thought, then we may not be aware of our own thinking, we may not “think” at all. We may just be aware of a succession of thoughts.
All the “thinking activity” appears to take place unconsciously or outside of our awareness.
Would a machine or “human intelligence copy” be in the same predicament?
And does it identify itself as having a body? Etc.?
It may appear that I am arguing from a dualist perspective, but I’m trying hard not to; I think that it is probably true to say that my personal beliefs reflect some kind of dualism, but I fully recognise the futility of applying such ideas in a scientific context.
Another point though (courtesy of Greg Egan): if our consciousness is nothing more than an interesting side effect of a vast interlinked set of biochemical calculations, is it not (theoretically, if not practically) possible that these selfsame calculations could be performed with pencil and paper, and if so, would such an activity still generate consciousness?
Permutation City truly is a great book, I recommend it to all.
Iamthat, I think you’re getting my posts mixed up. I guess that’s somewhat to be expected, since I’m talking about a few different things and they can be confused. The “new sentience” I was talking about in my previous post would be the fabricated sentience that we somehow ‘programmed’. I was not referring to the end result of copying a person.
Say I had a model kit. I don’t know what the model is supposed to be of, or what most of the parts are, but they’re all in this box with directions. So I dutifully attach piece to piece according to the directions, having no clue about the function of each piece or anything like that, and I end up with a fully constructed, functioning model of, ummm…, a Gundam. That’s how you can construct something without knowing what it is or knowing the function of its parts. If I had the right parts, I could probably duplicate an AC adapter or a calculator just by putting stuff together so that it looks like the original. It’d be hard of course, and I’d probably have at least one failure, but it could be done.