No, the goal of war is to make the enemy bend to your will when diplomacy fails. Lots of wars have been started against countries that had no ability to project force in the first place. That’s probably the most common kind of warfare, in fact.
Ultimately, wars are about resources, real estate, or religion. Sometimes they are about grave misunderstandings and category errors. They often end while both parties still have plenty of ability to project power.
That excuses Asimov, but not Roddenberry. They use the term in NextGen like it’s a thing we should all just know, instead of a bomb waiting to go off. And needlessly so, because positrons and electrons switch just the same in transistors or transtators or whatever the 24th century brings.
I should have added: or resist the force you project. If your side has an army and the other side doesn’t, you win. If they have an army, you need to defeat that side’s army, and then you win. But you don’t have to kill the other army to defeat it. If you destroy all their stuff so that they can’t stop your army, you still win.
It’s really to get the other side to suffer enough that they give up. Now, if you are destroying industrial capacity, then it’s likely you are killing people in the process.
So, it ends up just having two groups of autonomous machines fighting over land that no human occupies.
With Asimovian level AI, we either won’t need to go to war, or the robots won’t let us.
Buck’s second retcon of the thread:
Perhaps the positronic brains are a type of quantum computer that relies on the indeterminate state of proton decay.
“Do you just put ‘quantum’ in front of everything?”
I assume it was an homage to Asimov. And it’s not like any of the technobabble in Star Trek actually made sense.
I used to, and then I started choreographing ballet.
Now all the dancers are lost in time.
‘Positronic brain’ is technobabble. Asimov assumed that standard electronics and computing could never get us there, so he invented positronics. Might as well have been powered by dilithium.
I’m getting confused. Is this thread about science fiction concepts? Or about whether we could build an AI that hewed to something like the three laws? I’m not sure what we gain by talking about positronic brains. They are pure fiction. At least the Three Laws are concrete enough that we could try to apply them to real life and see if they work.
Oh, go rephase a plasma conduit!
One of the things about the first law is that the positronic brain, or conventional AI, would need an extensive database of how strong and how vulnerable and how much manipulation causes how much pain in every human body part. With adjustments for age, weight, gender, disease, local gravitational field, etc etc. After all, how can you cause no harm if you don’t know what can cause harm?
Which also means said AI would make an excellent torturer, if the laws were removed.
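To make that concrete, here is a rough sketch (my own, not anything from Asimov or from a real robotics stack) of the kind of per-body-part threshold table the first-law post above imagines. Every name and number in it is an invented placeholder.

```python
# Hypothetical harm-threshold table: purely illustrative, all values invented.
from dataclasses import dataclass

@dataclass
class Human:
    age: int             # years
    mass_kg: float
    frail: bool = False  # stand-in for disease, bone density, etc.

# Maximum force (newtons) the robot may apply per body region before it must
# assume it is "causing harm". Placeholder numbers, not real biomechanics.
BASE_LIMIT_N = {
    "hand": 120.0,
    "forearm": 200.0,
    "shoulder": 300.0,
    "torso": 400.0,
}

def safe_force_limit(region: str, person: Human) -> float:
    """Scale the base limit down for age, size, and frailty."""
    limit = BASE_LIMIT_N[region]
    if person.age < 12 or person.age > 70:
        limit *= 0.5
    limit *= min(1.0, person.mass_kg / 70.0)  # smaller bodies, smaller limits
    if person.frail:
        limit *= 0.5
    return limit

def may_apply(force_n: float, region: str, person: Human) -> bool:
    """The 'first law check', reduced to a table lookup plus fudge factors."""
    return force_n <= safe_force_limit(region, person)
```

The point of the sketch is how quickly “cause no harm” turns into a pile of arbitrary constants that somebody has to choose and maintain.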
Interesting that the 'droids of Star Wars clearly don’t follow the three laws, but I think that’s more because of weak writing than a well thought out philosophy on AI behavior.
It seems someone just said that…
Oh here:
Because the three laws are just as much fiction as replacing electrons with positrons. The OP asks if the three laws, as written, make sense in our understanding of computers. The answer to that is no, they are science fiction. The way the laws worked in the stories was very literal and language-based.
Now, whether or not we can fetter an AI to not harm humanity and to follow instructions reliably is a much broader question, one that is being explored in this thread, but technically it’s more of a hijack than talking about positronic brains.
The three laws may be fiction, but they are concrete enough that we can talk about how/if they could be applied to AIs, or something like them. We can also talk about whether or not they even make sense, even as fiction. The answer is, they don’t. Is driving a car at 100 mph on a city street an action that will cause humans to come to harm? Nope. It just raises the probability of harm. But the AI-driven car could harm someone driving 30 mph, too, and that’s acceptable behavior. The distinctions between harm and positive value are fuzzy, and sometimes impossible to predict. The three laws never made any sense if you thought about them a bit. (A toy version of that fuzziness is sketched after this post.)
Positronic brains, as you say, are pure fiction with no relevance to anything we are doing in AI, and therefore not useful to the discussion.
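As a toy illustration of the 100 mph / 30 mph point above (entirely my own construction, with an invented risk curve and an invented cutoff): once harm is a probability rather than a yes/no fact, a “first law” check collapses into an arbitrary choice of acceptable risk.

```python
# Toy risk model: the curve and the cutoff are made up for illustration only.

def crash_risk_per_km(speed_mph: float) -> float:
    """Invented risk curve; rises steeply with speed."""
    return min(1.0, (speed_mph / 200.0) ** 3)

ACCEPTABLE_RISK = 0.01  # who picks this number? That is the real question.

def first_law_allows(speed_mph: float) -> bool:
    # The "law" is no longer "cause no harm"; it is "stay under a threshold
    # that a human designer chose".
    return crash_risk_per_km(speed_mph) < ACCEPTABLE_RISK

print(first_law_allows(30))   # True  -- but the risk is not zero
print(first_law_allows(100))  # False -- only because of where the cutoff sits
```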
I don’t know that it would need all that. I don’t have all that, and I avoid harming other humans.
There may be a “never exceed x pressure or y gravity” built in, but it’s not going to need to be that extensive. Not unless you are building a massage robot that you want to be able to get deep into those knotted tissues, but not rip your muscles out of your back. (A minimal version of that kind of cap is sketched after this post.)
There’s also the fact that Asimov envisioned his robots as not just far smarter, but stronger as well. And that’s possible: we can make strong limbs and servos, but for most applications we will make them just strong enough. If your home robot maid decides to take over, you can just knock it down and rip its limbs off.
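For contrast, a minimal sketch (again mine, with a made-up cap value) of the simpler “never exceed x pressure” approach from the post above: not a database of human vulnerabilities, just a hard limit on what a hypothetical household actuator will ever output.

```python
# Hardware-level cap, no model of the human at all. Value is a placeholder.
MAX_ACTUATOR_FORCE_N = 50.0  # "just strong enough" for household tasks

def clamp_force(requested_n: float) -> float:
    """Whatever the planner asks for, never exceed the built-in cap."""
    return max(0.0, min(requested_n, MAX_ACTUATOR_FORCE_N))
```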
Asimov was my first, and I read him extensively in my impressionable years. The first time I read another author using robots that didn’t have the three laws, I was confused.
No, they don’t. The second law most of all. Do you want a robot that follows the orders of a human, or do you want a robot that follows the orders of its owner?
The first law kinda makes sense, but it is unworkable as a simple statement. The stories’ definition of harm ranges from things that would injure or kill you to things that would leave you bored and unfulfilled.
And the third law is pointless unless we are making suicidal AIs, which seems cruel.
Unless we are talking about the history of the subject, and the mindset and knowledge of the person that coined the laws we are looking to replicate. That it’s not interesting or relevant to what you want to talk about doesn’t mean that it has no use or relevance to others.
But you do! You good-naturedly punch your friend in the shoulder; you are not causing harm. Even if he goes OWWW maaaaan! You know not to hit him in the balls that hard, or the kidneys.
Right, but that’s not because I have precisely calibrated how hard I can hit without causing harm, it’s because I keep any such contact well below the level it can cause harm.
Besides, I don’t think that we need robots punching humans out of a sense of camaraderie or friendship. And we don’t need them punching us in the balls or kidneys at all. (We don’t really need humans punching each other in the balls or kidneys either, for that matter.)
There’s also the short story that the Will Smith movie ripped off, where they need to figure out which robot among a host of identical ones had its Laws modified slightly to where it could allow, through inaction, a human to come to harm. But Susan Calvin makes it clear in that short story that meddling with the Three Laws is messing with forces that they don’t fully understand (and indeed, it soon turns out that by virtue of this minor change the robot is able to do all kinds of things that it shouldn’t be able to do).
The ‘Black Box’ bit may be conflated slightly with how robots invent interstellar warp drives in a later short story?
Yes, but…a little girl is hanging from a tree over a cliff. How she got there? Who knows. Suddenly a nice robot passes by. “Save me!” the little girl cries. So the robot quickly reaches down, squeezes her hand so hard it breaks every bone, and then pulls the girl up so fast her arm rips out of its socket, causing her to fall to her death.
You don’t have to worry about that happening. Robot designers do.
But if you want an AI to drive a car, you are putting it in a situation where many of the choices about harm are fuzzy. If the computer really wants to take the least-harm path, it will simply refuse to drive.
If you tell your AI to make good stock trades for you, and it does, then the person on the other end of each trade is harmed. If you ask it to negotiate a price for you, it is choosing to financially harm the other person to benefit you. If you ask it to play tennis with you, it could make a shot that causes you to stretch out and be harmed. Or it could fire a serve that hits you in the face because you missed the return. Or it could work you to the point where you have a heart attack.
If the AI can’t harm anyone and its manufacture or power causes externalities, it might just destroy itself because that would be the only way to avoid harming others.
Then there are the trolley problems. A robot sees a child on the train tracks. It can pull a lever and divert the train, which will then hit a different child. Or it can do nothing and let the first child be hit. What can the robot do to avoid violating one of its three laws?
There are few actions that are 100% safe or 100% injurious, but many actions where safety falls on a continuum of risk. How does our rigid robot navigate that?
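Here is a toy model (my own framing, not the poster’s) of why a literal reading of the first law simply deadlocks on the trolley case: every option, including inaction, harms someone, so the set of permitted actions comes out empty.

```python
# Trolley case under a literal first law. Action names and outcomes are
# invented for illustration.
ACTIONS = {
    "pull_lever": {"harms_child_a": False, "harms_child_b": True},
    "do_nothing": {"harms_child_a": True,  "harms_child_b": False},
}

def first_law_permits(outcome: dict) -> bool:
    # Literal reading: any action (or inaction) whose outcome harms a human
    # is forbidden.
    return not any(outcome.values())

permitted = [name for name, outcome in ACTIONS.items()
             if first_law_permits(outcome)]
print(permitted)  # [] -- the rigid rule offers no legal move; only some notion
                  # of comparative or probabilistic harm (which the three laws
                  # never define) could break the tie.
```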