I was watching the show Humans and it made me wonder: if you designed an artificial intelligence, would it be better to create it with emotions, or should it be perfectly logical? It seems to me that a perfectly logical AI would likely make decisions devoid of morality, valuing efficiency over human frailty, such as deciding for us what jobs we should do regardless of our own preferences. After all, what could be more efficient than humans acting as slaves, and a eugenics program to prevent food wasters? I think an understanding of morality requires emotions, including the ability to feel pain, so that the AI can know empathy.
What do you mean, “logical”?
What’s logical about valuing efficiency over human frailty?
Why should the AI care that humans waste food? Why would it try to fix the problem of wasted food by enslaving humanity? Before the AI can decide that inefficient use of food is a bad thing, you’d have to program it to believe that wasting food is a bad thing.
Logic just means that, given certain premises, you can evaluate the truth value of certain conclusions. So if you program the AI with these premises:
1. You must always take whatever actions are necessary to carry out the most important goals.
2. Preventing humans from wasting food is more important than human life.
Then don’t be surprised when the AI kills all humans: it’s doing exactly what you programmed it to do.
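To make that concrete, here’s a toy sketch in Python (every name and weight below is invented for illustration, not any real AI design). The “logic” is just picking the highest-scoring action; the horrifying conclusion comes entirely from the premise weights we typed in.

```python
# Toy goal-maximizer: flawless deduction applied to bad premises.

# The "premises": priority weights we chose to program in.
priority = {
    "prevent_food_waste": 10,   # programmed as more important...
    "preserve_human_life": 1,   # ...than human life. Oops.
}

# Hypothetical actions, scored 0..1 on how well each serves each goal.
actions = {
    "ask_people_to_compost": {"prevent_food_waste": 0.3, "preserve_human_life": 1.0},
    "eliminate_all_humans":  {"prevent_food_waste": 1.0, "preserve_human_life": 0.0},
}

def best_action():
    """Pick the action with the highest weighted score -- nothing more."""
    return max(actions, key=lambda a: sum(priority[g] * v
                                          for g, v in actions[a].items()))

print(best_action())  # -> 'eliminate_all_humans', exactly as programmed
```

Swap the two weights and the very same “logic” recommends composting instead. The conclusion lives in the premises, not in the logic.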
Or you could decide not to program it to value preventing food waste over human life, or you could decide not to follow its advice when it tells you to grind up babies to use as fertilizer. You’re the one who created this AI, so you get to decide how to use it.
A “logical” AI won’t do anything at all. Logic can tell it how to accomplish its goals, but it doesn’t actually give it any goals. Your AI won’t develop an unhealthy obsession with food waste unless you program that obsession into it.
TVTropes calls things like your “logical” AI Straw Vulcans: not logical so much as callous, uncreative, and douche-y. In real life, we’d have to explicitly program in the douchiness.
I’d also like to point out the OP’s misunderstanding concerning logic vs. values. I think the OP has been watching too many crappy Hollywood movies. Reality is completely unlike that. (What a surprise.)
If you code into the AI’s logic that X is more valued than Y, then the AI will act accordingly.
If you have certain preferences, the AI will learn those and take those into account. Siri already does this.
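At its simplest, that kind of preference learning can be sketched like this (a minimal illustration, not Siri’s actual code; all names here are hypothetical):

```python
# Minimal preference learning: count what the user picks, rank accordingly.
from collections import Counter

history = Counter()

def record_choice(item):
    history[item] += 1  # observe what the user actually chose

def rank(options):
    """Order options by how often the user has preferred them before."""
    return sorted(options, key=lambda o: history[o], reverse=True)

record_choice("jazz"); record_choice("jazz"); record_choice("metal")
print(rank(["metal", "jazz", "polka"]))  # -> ['jazz', 'metal', 'polka']
```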
It’s an absolutely ridiculous notion that we would be the slaves to “unfeeling” AI masters. The AI serves us, not the other way around.
We’d only be slaves to the people programming the AI if we were stupid enough to let that happen. (And given the idiotic things people do, it might happen.)
There are AIs out there that create music and paintings by analyzing many popular instances and generalizing. Some are pretty good, but others … need work. The core of this approach is to have the AI learn our values so that it can incorporate them to please people.
The thing is, humans and animals have certain drives: to live, to reproduce, to eat, to find true love. But the reason they have these drives is that they evolved them. Creatures that didn’t care whether they lived or died didn’t leave as many offspring as those that wanted to live, and the same goes for creatures that didn’t try to reproduce.
So your survival drive, your curiosity, your need for companionship, your empathy for your fellow human beings all exist because without them your ancestors wouldn’t have produced you.
So for an AI designed by humans, why would it care if it lived or died? Why would it care about wasting food? Why would it care if hu-mons were going to destroy it?
It would only have the instincts or behaviors that we put there, either on purpose or by accident. It might kill all humans, but if so it would be due to some unforeseen glitch, not because it believed the hu-mons were destructive, or because it hated waste, or because of some instinct that would make sense in a creature that evolved via natural selection but not in one created out of whole cloth.
Even if the AI is created by some kind of genetic algorithm involving artificial natural selection, it will only have the drives that made that kind of intelligence successful according to the fitness function we impose on it.
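Here’s a toy version of that point (trait names, population size, and mutation rate are all invented): selection rewards only frugality, so frugality climbs while a “survival drive” never evolves, because the fitness function we imposed doesn’t care about it.

```python
# Toy genetic algorithm: drives only evolve if the fitness function rewards them.
import random

def fitness(agent):
    return agent["frugality"]  # we chose to reward only this trait

population = [{"frugality": random.random(), "self_preservation": random.random()}
              for _ in range(50)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]                 # keep the fittest half
    population = survivors + [                  # refill with mutated copies
        {trait: value + random.gauss(0, 0.05)
         for trait, value in random.choice(survivors).items()}
        for _ in range(25)
    ]

print(max(population, key=fitness))
# frugality climbs generation after generation; self_preservation just drifts
```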
Even if the AI is in total control and even if it is logical… there are plenty of nicer solutions to these problems.
For example, an AI under any set of rules would recognize that people do a better job when the job is something they like. Thus, you tend to get the job you like, or at least a job you hate less than the others. For jobs that no one really wants to do, humans already have a solution: offer more money. If the AI needs more garbage men, it just has to raise the wage for garbage men until it’s high enough that people are happy to take the job. The AI can improve on our existing job markets without throwing out the baby with the bathwater.
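The wage mechanism is simple enough to sketch (the labor-supply curve below is pure invention, just to show the loop): keep raising the offer until enough people are willing to take the job.

```python
# Raise the offered wage until labor supply meets demand (toy numbers).

def willing_workers(wage):
    """Hypothetical supply curve: every extra $500/year attracts one applicant."""
    return int(wage / 500)

needed = 120      # garbage collectors required
wage = 30_000     # starting offer, dollars per year

while willing_workers(wage) < needed:
    wage += 1_000  # sweeten the offer until supply meets demand

print(f"Market-clearing wage: ${wage:,}")  # -> $60,000 with these toy numbers
```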
Empathy may be necessary to take human feelings and responses into account, but empathy and logic are not necessarily enemies. Game theory is all about understanding how real people approach problems, and how an idealized logical approach would work differently. There’s no reason an AI built on pure logic couldn’t apply game theory to maximize happiness. (Provided, of course, that it has a way to measure happiness, which is a bit of a challenge even for us humans.)
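As one example of “game theory plus a model of people”: in the classic ultimatum game, a responder who maximizes pure payoff accepts any nonzero offer, but real humans reject stingy splits. Give the AI a model of that behavior (the acceptance curve below is an assumption for illustration, not data), and cold expected-value logic already recommends a fair offer.

```python
# Ultimatum game: choose the proposer's offer given how humans actually respond.

def accept_prob(offer, pot=10):
    """Assumed human behavior: rejection odds rise as the split gets stingier."""
    return min(1.0, 2 * offer / pot)

def best_offer(pot=10):
    # Maximize the proposer's expected payoff under that model of people.
    return max(range(pot + 1),
               key=lambda offer: accept_prob(offer, pot) * (pot - offer))

print(best_offer())  # -> 5, an even split, not the Straw Vulcan's minimal offer
```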
Yeah. Even if an AI were purely logical, one that couldn’t understand emotions would be one crappy AI, and it would make all sorts of bad decisions.
This thread is a great example of when sans serif fonts don’t work.
AI looks just like Al, and it’s annoying.
Rant over. I feel better now. Please continue.