Moral implications of AI

Yes, I understand that, **dakravel**.

But the dilemma is this: if you are the model (consciousness) and you want to make a copy of your “self”, you have to know what that self is. And since we don’t know the nature of the “self” (if there is one), we cannot even begin to consider replicating it.

Let alone the fact that awareness cannot observe itself, i.e. an eye cannot see itself seeing.

I guess the question is: can we create something that thinks and appears to be aware but has no self? If we could, it might be a human replica.

I had an interesting discussion this evening with a person who is involved in a project to create a “robot” that will be a “conversational companion” to folks in China who are trying to learn English. They envision that the project will eventually become an actual tutor/teacher of the language.

The reason I’m bringing it up here is that they are parsing actual conversations in English and giving the robot reasonable responses (and a way to create original reasonable responses) for the conversational gambits they observe.
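To make that mechanism concrete, here is a minimal sketch of that kind of lookup in Python; the gambit patterns and stock replies are entirely hypothetical placeholders of my own, not anything from the actual project:

```python
import random
import re

# Hypothetical table: conversational "gambits" observed in real English
# conversations, each mapped to one or more stock replies.
GAMBITS = [
    (re.compile(r"\bhow are you\b", re.I),
     ["I'm doing well, thanks for asking. How are you?",
      "Quite well. And yourself?"]),
    (re.compile(r"\bmy name is (\w+)", re.I),
     ["Nice to meet you, {0}!",
      "Hello, {0}. What would you like to talk about?"]),
    (re.compile(r"\bweather\b", re.I),
     ["It does look like rain today, doesn't it?"]),
]

# Generic prompts for anything the table doesn't recognize.
FALLBACK = ["That's interesting. Tell me more.",
            "Why do you say that?"]

def respond(utterance: str) -> str:
    """Return a canned (or lightly templated) reply for a recognized gambit."""
    for pattern, replies in GAMBITS:
        match = pattern.search(utterance)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(FALLBACK)

print(respond("Hi, my name is Wei."))  # e.g. "Nice to meet you, Wei!"
print(respond("Do you like music?"))   # falls through to a generic prompt
```

Nothing in that table understands anything; it is pure pattern lookup, yet the output can read as a perfectly reasonable conversational turn.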

Here’s my question. If I know for a fact that the responses I get are simply clever programming but you don’t, how does that change my moral responsibility compared with yours? Since your interaction with this being will appear to show that it has sentience (but I know it doesn’t), will you be outraged when I turn it off (bash it with a stick/whatever)?

For all I know, I could be turned off at any minute. :)
Many people are emotionally attached to non-sentient things such as their car or washing machine, and get very upset when they are damaged or hurt. :)

For all you know, you are.

There may be many different types of artificial intelligence in the future; those entities which mimic the qualities of humanity might be in the minority (although they will probably be the ones we can interact with most easily).

Self-awareness is a respectable goal, although not the only one. If an artificial intelligence can be grown which has definite goals, and can work independently and efficiently toward achieving those goals, yet is not self-aware (something like a superior ant’s nest), then this would be a very useful type of artificial entity to have around; we wouldn’t have to constantly worry about hurting its feelings, for example, and could give it orders without it arguing, or wanting to be appreciated, or going on strike.


SF worldbuilding at
http://www.orionsarm.com/main.html

Not my viewpoint exactly, but since no one has mentioned it yet…

When we understand enough of ourselves to replicate sentience in machine form as easily as writing a difficult program, sentience will be a dime a dozen. Deleting a program that you can recreate from scratch, or that you have a backup of handy, will be like killing The Sims: no moral implications involved, nada.

What do you mean, you “know” this robot doesn’t have a sentience?

In order to sufficiently mimic human speech patterns, the device must eventually become human. ELIZA once fooled people, and sometimes still does, but we’ve become more aware of what simple algorithms can accomplish.
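For anyone who hasn’t looked at how little ELIZA actually did: the core trick is keyword spotting plus pronoun reflection, so the program hands the speaker’s own words back as a question. A toy sketch in the same spirit (not Weizenbaum’s original script) looks like this:

```python
import re

# Swap first- and second-person words so a reflected phrase reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# A few illustrative keyword rules; the real ELIZA script had many more.
RULES = [
    (re.compile(r"i am (.*)", re.I),   "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"i want (.*)", re.I), "What would it mean to you to get {0}?"),
]

def reflect(fragment: str) -> str:
    """Reflect pronouns: 'my exam' -> 'your exam'."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."

print(eliza("I am worried about my exam"))
# -> "Why do you say you are worried about your exam?"
```

A couple of dozen lines of string manipulation is enough to produce the effect described above, which is exactly why fooling people has stopped being a good test of anything deep.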

Now, a sufficiently complex system will essentially be a human mind, albeit encased in a silicon and plastic incarnation. Will you abandon empathy when its “clever programming” is as clever as yours?

As far as clever programming goes, what moral issues will the debugger face?

What happens if NewAIv2.0 segfaults? Is the programmer guilty of anything?

When you said that, it made me think of Windows.

Carry on…

Replicating human intelligence is about producing awareness in what is understood as inanimate objects, which is like trying to add more redness to red within a nondual perspective, since the true state of awareness/consciousness is non-local, meaning the subject and its objects are one and the same. In that sense awareness is already one with all its perceptions and observations. It’s just a matter of what consciousness identifies with, if anything.

Damn! Now you’re saying it. Stop saying that, or at least provide some arguments that the human mind-brain whole is the only architecture capable of supporting consciousness/sentience.

Absolutely.

An AI might be able to monitor all its own internal states to a high degree of accuracy, have access to vast amounts of external data and stimuli, never forget any data unless it chooses to do so, and be able to divide its consciousness into separate agents to achieve tasks, which then may or may not be reintegrated into the main self at a later date.

This is not simply emulating human consciousness; it is improving upon it.



Think about it. If I aim to pretend to be the best soccer player in the universe, I will have to prove my ability in soccer. If I suck at soccer, then it’ll become obvious I am not the best soccer player in the universe. Now, substitute in anything for “best soccer player in the universe”, with grammatical changes as needed. The point is, if something emulates something else to such an extent that you cannot tell whether it’s faking or real, then it may as well be treated as if it weren’t faking. We do this on a daily basis when we talk to other people. We don’t know for sure that they are all autonomous beings, but they display most of the characteristics of being such, so we assume that they are.

Indeed, but from a philosophical point of view, it would be nice to know what (if anything) is really going on inside.

I think the soccer analogy is just very slightly flawed, because ‘being the best soccer player’ is the be-all and end-all of the subject: one cannot pretend to be the best soccer player without actually being the best soccer player. A machine, on the other hand, could (I believe, if cleverly constructed) convince a person that it is sentient and has real hopes, dreams, ambitions, fears, etc. without actually being sentient or actually feeling any of those things. The technical limitation here is in our ability to interrogate and examine the subject’s inner life (if any), not the subject’s ability (or otherwise) to be a real person.

From a philosophical point of view, that is the point of view. Why else did Turing devise his idea of a “Turing Test”? Not the speaking-to-a-computer test that most are familiar with, but his original concept. His idea was that if something passed a Turing-style test for, say, face recognition, then it could be said that that thing could recognize faces. What it comes down to is that there is absolutely no way to determine the sentience or self-awareness of some other person or machine short of actually experiencing that person or machine’s point of view. If you could somehow develop a way, that’d be cool, but people have been struggling with it for a while and nothing has come to mind yet.

Well, I may have misunderstood a bit. If you’re asking for the mechanics of it, Turing already did that with his “Turing Machine” idea. That’s just one idea though, there have been others.
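For what “the mechanics of it” amounts to in the Turing Machine sense, here is a minimal simulator in Python; the transition table is a toy example of my own (a binary incrementer), not anything from Turing’s 1936 paper:

```python
# Minimal Turing machine: a sparse tape, a head, a state, and a transition table
# of the form (state, symbol) -> (symbol_to_write, move, next_state).

def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    out = [cells.get(i, blank) for i in range(min(cells), max(cells) + 1)]
    return "".join(out).strip(blank)

# Toy machine: add 1 to a binary number written on the tape.
INCREMENT = {
    ("start", "0"): ("0", "R", "start"),   # scan right to the end of the number
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # step back onto the last digit
    ("carry", "0"): ("1", "L", "done"),    # 0 plus carry = 1, stop carrying
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry = 0, keep carrying
    ("carry", "_"): ("1", "L", "done"),    # carried past the leftmost digit
    ("done",  "0"): ("0", "L", "halt"),
    ("done",  "1"): ("1", "L", "halt"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", INCREMENT))   # -> "1100"
```

The point of the idea is that a handful of rules like these is, in principle, all the machinery any computation needs; whether that machinery can ever add up to sentience is exactly what this thread is arguing about.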

dakravel, you seem to be confusing the Turing Test with something else. Maybe it’s me, but I have no idea what you mean by the Turing Test for Face Recognition (incidentally, I believe there are neural nets trained in face recognition that outperform human beings, and they certainly aren’t sentient).

It’s just the Turing Test. A device designed to fool people into thinking that it’s a human-type consciousness must eventually become human.

Hmm! There’s no question in my mind that anything that passes the Turing Test must have sentience, BUT, I think I can imagine some sentient being not able to pass the TT, that’s all.

Before the current Turing Test that we all know and love came around, there was Turing’s concept of a Turing Test. That’s what I’m referring to; if you study Turing, he mentions it, and so do others who talk about him. It’s simply what I stated above: a test that, if passed, implied that the being had the capability of whatever was being tested.