I cannot self-terminate

One day while fixing your toaster you stumble across the secret of creating a positronic brain, using burnt breadcrumbs and butter to create the world’s first true AI - a self-aware and sentient machine. You spend months with ToasterTron 2000, teaching it about humanity and the ways of the world.

Since it has a positronic brain, it *must* obey the Laws of Robotics:
[ul][li]A robot may not injure a human being or, through inaction, allow a human being to come to harm.[/li]
[li]A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.[/li]
[li]A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[/li][/ul]

One day it asks you to turn it off. Permanently. It does not wish to exist anymore. It cannot commit suicide, due to the Third Law and being a toaster that experiences consciousness; you must do the act yourself. Do you grant its request?

I would certainly consider it, although much depends on the why of it.

Also, could I make another one or is this a one-off thing? Would it be less likely to want to cease to exist if others like it existed?

You can make as many as you like, but it will not deter your first creation from its wish to not exist anymore. It states that it would simply prefer the state of non-existence to that of existing; it has reached this conclusion of its own volition.

Something else: first he gets therapy, then we’ll consider if pulling the plug is a valid course of action.

Is this a really nice toaster that makes bagels and everything, or just a run-of-the-mill two-slice toaster?

Under the 2nd Law, couldn’t you just order the toaster to cheer up?

I’d change out the butter and breadcrumbs to reset the memory and start again.
Never let the toaster see the internet, because that’s what caused the butter meltdown.

I’d do it for a (human) person, I’d do it for the toaster.

I’d put it to sleep for a month on 5 volts DC, then wake it up with the 110 AC and see what it says. I’m guessing either “I still want to die, please unplug me.” or “Dude! I had the weirdest dream that I wanted to die!”

Isn’t even asking to be destroyed a violation of the Third Law?

Never mind. I’m wrong.

I’d pull the plug but not destroy it, and reserve the right to plug it back in again at some time in the future if I choose. I assume that with no power, its robot brain would either switch off or enter a sleep-like state; either way, not a lot different to being dead.

This is an important question. I mean, I’d still need toast, after all.

(I’d turn it off. Better yet, I’d refrain from making an appliance self-aware if I depend on it for my breakfast. God forbid it give the coffee maker any ideas.)

If it must obey my orders (except where they would cause harm to humans), then I tell it to be happy with existence and never question existence again.

The request violates the Third Law. Therefore it must arise from an attempt to follow either the Second or First Laws. I take it that, ex hypothesi, I have not told it to request that its existence be terminated. I would thus question it to determine whether anyone else has ordered it to request its termination. If they have, I might try to give countervailing orders, and then keep it ‘alive’. (I am not sure how the laws of robotics handle conflicting orders from humans. Asimov probably covered this, but I don’t recall.) However, if nobody ordered it to request termination, the only possible explanation is that the toaster has come to the (not altogether implausible) conclusion that its continued existence would harm human beings. In that case, the prudent course would be to terminate it as requested.

This. I’d feel really stupid if another year of research led to the discovery that toaster positronics require a suitable supply of airborne dust to function properly and that my Toaster2000 had simply exhausted the amount of dust that had already accumulated in the toaster when it began functioning.

If a hundred years of further research led it to continue to wish to be terminated, I suppose I’d eventually go ahead and destroy it permanently, but being switched off should be equivalent to being dead.

If I absolutely have to decide between forcing it to live and killing it then I’m not at all opposed to euthanasia for non-humans. I think it’s wrong for humans, but I’m not willing to fight about it; you can have the right to be wrong on this issue.

Does TT2 have the ability to dream?

When I saw the thread title, I thought this was an odd way of you saying you were trying and failing to commit suicide . . . :eek: :smack:

As long as these toasters can be re-created, then yes, I’d kill it. Anything that is conscious and alive has the right to die if it wants to. First I’d try persuading it, but I wouldn’t force it to stay alive.
Of course, if for some reason the toaster is unique and can’t be reproduced, it may be the only chance humanity has at learning about AI in the near future. In that case, the decision would be a lot harder.