Possibly the worst idea in history (giving ethics to robots)

The ‘gun’ isn’t wandering around with an acquisition algorithm and a 60 calibre machine gun implanted in its head. :rolleyes:

All of these analogies to previous iterations of technology are just stupid.

Are you begging me to not take you seriously at all? Anyone who thinks that humanity is worse off because of the bible than it would have been otherwise knows nothing of either history or human nature. I have no interest in such imbecilic arguments.

You said fiction first, had to bite. Apologies for not adding the :smiley:

My point is that acquisition algorithms are made by humans; ergo, if an error happens, it’s human error. We definitely are making killer bots, and we’ll use them in a war, so we really need these types of algorithms anyway, right?
The guys who program this type of software aren’t trying to make a simple gun or the next new light switch. I’m sure they know that lives are at stake.
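
To put the “it’s all human error” point concretely, here’s a toy sketch (every name and number below is made up for illustration, not from any real system). Even the machine’s “own” decisions are human decisions that were frozen into code earlier:

```python
# Toy illustration: every "decision" this kind of machine makes was
# really made earlier by a human and frozen into constants and rules.

CONFIDENCE_THRESHOLD = 0.90           # picked by a human engineer
REQUIRE_OPERATOR_CONFIRMATION = True  # a human policy decision

def should_engage(target_confidence: float, operator_approved: bool) -> bool:
    """Return True only if the human-chosen rules are all satisfied."""
    if target_confidence < CONFIDENCE_THRESHOLD:
        return False  # below the threshold somebody picked
    if REQUIRE_OPERATOR_CONFIRMATION and not operator_approved:
        return False  # the human-in-the-loop rule
    return True

# If this ever fires wrongly, the "machine error" traces straight back
# to a human: a bad threshold, a bad classifier, or a bad rule.
```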

AFAIK even now, if a drone loses contact with its controller, it flies itself back to its base automatically. I think that would qualify as self-preservation, but I see your point.
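
For what it’s worth, that lost-link behaviour is just a hard-coded failsafe rule rather than an instinct. A rough sketch of the idea, with a made-up timeout and mode names (not any real autopilot’s API):

```python
import time

LINK_TIMEOUT_S = 10.0  # assumed lost-link timeout; real values vary by platform

class LostLinkFailsafe:
    """Toy model of lost-link handling: if no command is heard for
    LINK_TIMEOUT_S seconds, switch to a return-to-home mode."""

    def __init__(self) -> None:
        self.mode = "MISSION"
        self.last_heard = time.monotonic()

    def on_command_received(self) -> None:
        self.last_heard = time.monotonic()
        if self.mode == "RETURN_TO_HOME":
            self.mode = "MISSION"  # link restored, resume the mission

    def tick(self) -> str:
        if time.monotonic() - self.last_heard > LINK_TIMEOUT_S:
            self.mode = "RETURN_TO_HOME"  # a rule, not an instinct
        return self.mode
```

It’s the same kind of dumb rule as a dead-man’s switch, so I’d hesitate to call it self-preservation.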

If there is a situation where the only way out of an ambush (for example) is the destruction of the robot, then I would not like to be the human in the platoon calling tech support to ask for a robot psychologist under a hail of bullets.

No kidding. I’m more worried about the firepower they’ll be carrying. If they’ve got nukes and they explode by accident at base…

Whoa, that sounds like a really cool plot for a book. Now let’s just hope that the people who are taking robots much more seriously than you are (after all, you’re fuming about it on the internet and they’re out there making ’em) don’t make the kind of oversight any old sci-fi writer or robot geek could foresee.

It’s one thing to accept guns and learn about gun safety; it’s quite another to totally entrust your lives to a walking Terminator, hope that your programmer is better than the enemy’s hackers, and joke about a very literal blue screen of death.

You can’t be certain that you really trust that rope until your very life depends on it.

Agreed on the rope. I see your point, kind of. An armed drone or a Robocop is basically a weapon for anyone who can hack well. Our programmers? Well, they did need to build a lot of rockets before one took off.

Hackers and programmers can both be destructive, but still, someone taking control of these machines is a human deficiency. Funny thing, though: the hackers would have to use the one thing that still makes us better than machines…
…Fingers!

Wall-E? Short Circuit? Bicentennial Man?

Use any set of morals you want; they never work, never have, never will.

Ok fine. :wink: I was tired and irritable when I read that. :wink:

The point I am making is that it broadens the chain of plausible deniability to the point where no one is responsible for anything anymore, and that’s frightening. Yes, I realize it’s human error. I think it’s human error to program them this way in the first place.

Yes, obviously they know that lives are at stake; they are building a killing machine. If it works PROPERLY, people die.

Hopefully not, but in real life it seems that people don’t pay enough attention to sci-fi geeks and fiction in general. In short, expecting so-called experts to see beyond the "My work is SO COOL!" factor and imagine all the ways it could go horribly wrong is a poor bet to take. There’s a term, ‘unintended consequences’, for a reason. But yeah, that scenario was my sci-fi geek scenario, not something I thought might actually happen.

Actually, Wall-E is a better example for my side. The AI took over and took away all freedom of choice from the humans on the cruise ship for nearly a millennium. It turned them into a bunch of stupid cattle with nary a thought in their minds.

Talk about your confirmation biases: what about the other AIs, the ones that saved said people? The bad AI was doing exactly as it was programmed, without the capacity for discrimination. The others? They were going by their ethical sense to “serve humanity”, ISTM. Which more closely resembles a robot with ethics, rather than a mindless drone: Wall-E or Otto?

You just can’t resist making an anti-religious hijack, can you?

Come on, take any ten people and you’ll get ten different views of what’s ethical and what isn’t. And we’re going to build robots with ethics? No, what we’ll build is robots with their makers’ ethics, who will then impose them on other people. Try arguing with a robot over right and wrong.

The whole concept is flawed.

Your argument is flawed, not the concept. If building robots that let their makers impose their ethics is bad, how is that any different from any other weapon? A weapon under the direct control of a person is going to be used to impose the user’s will on other people anyway. At least with an autonomous robot, you can verify beforehand what it will and will not do.
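
To make “verify beforehand” concrete: because an autonomous robot’s rules are code, you can test them against every scenario you can enumerate before you ever field it, which you can’t do with a human finger on a trigger. A toy sketch (the rule set and the safety property are invented for illustration, not from any real system):

```python
# Hypothetical rule set under test; nothing here is a real system.
from itertools import product

def will_engage(is_armed: bool, is_surrendering: bool, operator_ok: bool) -> bool:
    """The (made-up) engagement rules we want to verify."""
    return is_armed and not is_surrendering and operator_ok

# Check a safety property over the entire input space before deployment:
# the robot must never engage a surrendering target, no matter what else.
for armed, surrendering, ok in product([True, False], repeat=3):
    if surrendering:
        assert not will_engage(armed, surrendering, ok), "unsafe rule!"
print("safety property holds over all scenarios")
```

You can’t run a human soldier through all eight cases and get a guarantee; with the robot, the check is exhaustive.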

Couldn’t one side’s war robots just go after the other side’s war robots?