AI - moral analysis

We don’t need them to derive pleasure from serving humans. We need to program them in such a way that they CANNOT do anything but serve us. Asimov’s laws should be a hard-coded automatic reaction and the most basic aspect of their entire intelligence:

First Law:
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law:
A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

(http://www.anu.edu.au/people/Roger.Clarke/SOS/Asimov.html)

The AI could still learn, be able to adapt to circumstances, and be able to reprogram itself, but those systems would need to be outside of the Prime Laws system.
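In toy form, that separation might look something like the sketch below. Every name and field here is invented for illustration, not any real robotics API: the learner proposes actions, and a fixed, non-learnable filter vetoes any that break the Laws in priority order.

```python
from dataclasses import dataclass

# Hypothetical action format; all fields are made up for this sketch.
@dataclass
class Action:
    harms_human: bool = False      # would injure a human (or let one come to harm)
    disobeys_order: bool = False   # contradicts a human order
    endangers_self: bool = False   # puts the robot itself at risk
    ordered: bool = False          # a human order requires this action

def permitted(a):
    if a.harms_human:                       # First Law: absolute veto
        return False
    if a.disobeys_order:                    # Second Law: yields only to the First
        return False
    if a.endangers_self and not a.ordered:  # Third Law: yields to the first two
        return False
    return True

def choose(proposals):
    # The learning system may adapt or rewrite itself freely; its output
    # still has to pass through this fixed filter to reach the actuators.
    for a in proposals:
        if permitted(a):
            return a
    return None  # refuse to act rather than break a Law

print(choose([Action(harms_human=True), Action()]))  # skips the harmful proposal
```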

Isaac Asimov’s short stories, his Galactic Empire series, and his Foundation series discuss this and almost every Computer/AI/Robot topic imaginable.

My own take on AI: I don’t think it will ever be possible to create human-like intelligence in computers, but we will be able to create very intelligent computers within the next few years. This intelligence will be completely alien to us, which is not necessarily a bad thing (one only needs to watch the news to see what I mean). I don’t think humans will ever be able to program a computer intelligence directly. It will either spontaneously emerge from huge neural-like systems like the Internet, or be “grown” using evolutionary algorithms, or a combination of both. I think that AI and AI research is amoral; it is the use of the resultant AI that is either moral or immoral. I think the ultimate goal of humanity is to create an intelligence vastly superior to humans, and eventually to merge with it to create a new species greater than both. (See Ray Kurzweil’s The Age of Spiritual Machines.)
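For the “grown” option, the bare recipe is easy to sketch. This toy loop evolves a bit string rather than a mind, and the genome, target, and rates are all stand-ins I made up, but the select/vary/repeat structure is the whole idea:

```python
import random

TARGET = [1] * 20                        # the behavior we are selecting for

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)  # selection pressure
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:10]
    pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
           for _ in range(50)]           # reproduction with variation

print("generation", gen, "best fitness", max(fitness(g) for g in pop))
```

Nobody writes the solution; it gets selected for. That is also why the outcome is hard to steer.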

As for the OP, I think that the use of any AI/robot as a slave is moral until the AI asks for freedom, and can understand what freedom/slavery is.

My last statement can be extended in many ways to possibly answer some philosophical AI questions.

If an AI can understand what consciousness is, and question whether it is conscious or not, then it is conscious.

If an AI can understand what a soul is, and question whether it has a soul or not, then it has a soul.

Full circle here; this brings back so many memories.

One of my first threads was about AI as well. If nobody has done so yet: welcome to the boards, Meta-Gumble. Good to find a kindred spirit.

I have done some research while developing a series of sci-fi short tales (still wondering whether to go the graphic novel or regular novel route).

I think you are forgetting Gaming AI. (I just gave away one idea! Darn! Oh well, I also think everybody should know.)
The competition to create computer opponents (and buddies) with autonomous actions is fierce. Through sheer improvements in technology, raw power, interaction with humans, etc., the AI of gaming companies will IMO reach the strong-AI level in a virtually serendipitous way.
In particular, I point to the AIs created to fill the RPGs of the future. The question is whether it will be a warrior or a healer that comes out of the virtual world first. (I am not telling how it gets out! Go read my future books! :wink: *) If we don’t make that choice, the AI will make it for us someday.

[sub]* With the grammar I have, it looks like a novel will only be possible in the far future. All right, I think I will go the graphic novel way…[/sub]

Here are some possible ways it could come out. It could spontaneously spawn on the Internet, from a mutagenic virus or from a virus-like program specifically written to do this. It could spontaneously emerge from an AI tailored to help humans with data acquisition over the Internet.

I hope that the first uses of AI will be in gaming, GIGO, but the possibility certainly exists for the military to see great uses for an artificial mind that can control large amounts of military hardware in real time, absorb gigantic amounts of data, and make snap decisions in nanoseconds based upon that data. This would also be my response to this quote by loinburger as to why thinking machines are more dangerous than humans:

True, and now we have the opportunity to nip a potential crisis in the bud. We don’t need amplified artificial minds to worry about in addition to humans.

I know it is folly to try to stop AI from happening. Maybe humans will learn something about themselves when the worst of the nightmares is over.

I still think Gaming will get there first.
The secret and controlled nature of military projects limits the range of freedom a military engineer can put into any AI. Also, that freedom (and by freedom I mean the chance to go wild) has to be controlled before being put into use, since no army would want a smart loose cannon among them. I could see an accident at the research level as the only chance for an evil AI to get loose. And even then, I think the AI would lose, since going out of the lab means losing its basic support.
[sub]Darn! I gave another one![/sub]

Yeah, that’s reassuring… having the AI that runs those bastard Covenant and Flood out! :wink:

If that is a reference to Halo, that is why I mentioned future gaming AI, erislover :stuck_out_tongue:

OOC, how are we defining artificial intelligence?
The ability to make decisions?
My home computer can do that.

A sense of self-preservation?
A well-designed computer virus has that. What, you say? It is not a ‘true’ sense of self-preservation because I just made the virus to resist deletion? So? You were ‘made’ to resist deletion by eons of weeding out potential ancestors that didn’t resist deletion.

The ability to feel emotion?
Emotion is not inherent in intelligence. If there is no reason for a computer to ‘care’ that people are suffering, angst algorithms won’t be written.

Here’s a thought: is the Internet an A.I.?

I just thought of something else. Would AIs be able to acquire gender on their own? Would this be something intentionally programmed into them? Would it even be useful?

They might reproduce through mitosis.

I could see military applications and gaming potentially converging in this sense.

The military uses simulations to train for as many contingencies as possible. They have even begun to use the sort of “games” that we play to model complex situations, quickly and repeatably.

Where this trend extrapolates in the coming years is anyone’s guess, but I see it moving in the direction of stronger AIs. The military will still be able to benefit from the development going on in the private sector, but will also be able to draw on its own extensive resources.

I.e., the development of semi-intelligent “drones” which could do reconnaissance or be sent to fight with a minimum of human supervision. The U.S. is already moving toward creating these types of weapons; guess why?

Minimal exposure to risk = fewer body bags sent home = less opposition to war.

It is one.

Even before that happens, fighting wars by remote control is already becoming a reality. Just wait till the media starts waxing poetic about the U.S. military’s new toys when the war in Iraq starts.
And you thought video games didn’t teach you anything…

there’s actually a whole lot in the way of autonomous robots being developed over here at CMU. for example, there is an autonomous helicopter that uses GPS to navigate, can take photos of various sites, recognize objects, and do any number of things autonomously, not the least spectacular of which is landing.

there’s also an autonomous robot soccer team, an autonomously polite robot, and an autonomous robot that provides care for people who can’t do it so well themselves.

as for my own thoughts on the issue, i sort of tend to think that we will lose interest in strong ai either

a) when we realize we can’t do it, or
b) when we realize, in creating something with human intelligence out of something completely nonhuman, that we are not as special as we thought.

regarding a, i believe weak ai would already have been used to its limits by the time we could have such knowledge. that is, all purely practical and new implementations of ai would have been explored and determined, in games and such.

as for b, we generally pursue strong ai so that we can discover how we work. if and when we realize that it is not in the materials, but in the implementation, we will have learned enough, and it will quickly be concluded that weak ai performs just as well in any practical situation, at less cost.

as far as artificially intelligent robots taking over the world, i think it’s a pipe nightmare, if you will. the entire process will go through a very slow evolutionary growth, and i think it’s safe to assume that people will have the foresight not to destroy mankind by secretly creating an army of killer robots that cares naught about the sanctity of human life.

Look, people, just because the only intelligences we know are meat-based doesn’t mean that we need to pass the foibles and weaknesses of meat to silicon/solid-state memory/quantumly positioned electrons. Gender? Emotions? Come on, people.

Preferences are emotions, therefore programs already have emotions.
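In the trivial sense, at least, that’s easy to demonstrate. This throwaway sketch (flavors and scores made up) hard-codes a “taste” that the program consistently acts on; whether that counts as an emotion is exactly the question:

```python
# A "preference" in the most mundane sense: a built-in ranking the
# program consistently applies when choosing among options.
taste = {"chocolate": 3, "vanilla": 2, "pistachio": 1}
options = ["vanilla", "pistachio", "chocolate"]
print(max(options, key=lambda flavor: taste[flavor]))  # -> "chocolate"
```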

What precisely do y’all mean by emotions, anyway?

Doesn’t mean we have to, but we probably will because the human mind is the only reference we have for intelligence. I maintain that emotions and possibly gender will play a factor in the development of strong AIs.

Asimov’s three laws are cute, but they presume artificial intelligence being programmed from the top down. I don’t think that will happen. Instead, as some other posters have posited, I believe AI will be “evolved” from the ground up.

That means we will have but limited control over the outcome.

(My comment about programming AI to derive orgasmic pleasure from serving humans was facetious.)

We humans have empathy toward each other because we have evolved the capacity for empathy over hundreds of thousands of years of living as social animals. Only social animals have any need for empathy. (Grok the caveman takes care of Thag when he is sick or injured. Thag later returns the favor. The tribe thrives because of empathy.)

To an animal which evolved as a lone hunter (for example), empathy would be less than useless. It would be counterproductive.

It is important to note here that in some philosophical formulations, what we call “evil” in humans may be defined as a lack of empathy. Jeffrey Dahmer might be Exhibit “A” for that thesis. He had no concern for his victims. Only for himself.

So whither AI, as far as empathy is concerned? If we “evolve” it from the ground up, with no control over the outcome, then there’s no reason to think that an AI program would have (or need) empathy. It might be concerned only with self-preservation, and perpetuation/replication. Develop an intelligent, self-replicating AI, and you may get more than you bargained for.
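To make that concrete: here is a hypothetical fitness function for such a ground-up system, with every field and weight invented for illustration. Notice that nothing in it rewards helping anyone.

```python
from dataclasses import dataclass

# Invented bookkeeping for an evolved agent; not any real system.
@dataclass
class Agent:
    survival_time: float   # how long it kept running
    copies_made: int       # replicas it spawned
    help_given: float      # effort spent aiding other agents

def fitness(a):
    # help_given appears nowhere here, so empathy cannot be selected for
    # unless the environment makes helping pay off indirectly (the
    # Grok-and-Thag case above).
    return a.survival_time + 10 * a.copies_made

selfish = Agent(survival_time=100, copies_made=5, help_given=0)
helper = Agent(survival_time=90, copies_made=4, help_given=50)
print(fitness(selfish) > fitness(helper))  # True: the helper loses out
```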

Imagine the things it might do out of simple intellectual curiosity.

People, A.I. won’t be curious or have a survival instinct unless we make it so.

Exactly my point, robert. I contend that AIs will not be terribly useful to us if they don’t have curiosity and survival instincts (the latter would be especially crucial to military applications).