Terminator 3, I, Robot & HAL 9000

That’s kind of my point: moral culpability for robots is a long way off, inasmuch as it requires a type of AI that, AFAIK, is completely beyond our ken at this point. Computers now can only make the moral decisions they’re programmed to make.

Arguably, it’s the same thing with humans, in a sense. Our own “programming,” however, is so chaotic that it’s not something we know how to duplicate artificially.

Daniel

No, because the AI doesn’t “decide” anything, any more than a computer thermostat “decides” to turn on the furnace when it detects that the air temperature is below 68F.
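
For the record, here’s essentially all the “deciding” such a thermostat does (a throwaway sketch; the setpoint, the names, and the stub sensor/furnace functions are all made up for the example):

#include <stdbool.h>
#include <stdio.h>

// The setpoint the thermostat was programmed with (hypothetical).
#define SETPOINT_F 68.0

// Stub sensor and furnace so the sketch stands on its own.
static double read_temperature_f(void) { return 65.0; }
static void set_furnace(bool on) { printf("furnace %s\n", on ? "on" : "off"); }

int main(void) {
    // The thermostat's entire "decision": one comparison against a threshold.
    set_furnace(read_temperature_f() < SETPOINT_F);
    return 0;
}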

Your example would be like a guy who sets a steel jaw trap and accidentally steps in it and rips his leg apart. The trap didn’t “decide” to spring.

There is no reason to suspect that strong AI is impossible; after all, human brains are constructed of ordinary matter, and human brains are strong un-artificial intelligence. But the more we learn about AI, the more we realize that we are no closer to producing strong AI than we were in the ’50s. Back then people figured that all you had to do was pile more computing power on the problem and intelligence would be a natural consequence of a complex computer system.

Except now we know that human brains don’t work that way. But the problem is that now there is no prospect that strong AI is “just on the horizon”. It may happen; it could conceivably happen in the next decade if some conceptual breakthrough occurs, but we have no good reason to expect such a breakthrough. We don’t just need an evolutionary advance on current techniques…faster processors, more memory, larger databases. We would need an entirely new approach, perhaps one not based on current computer hardware.

I’d argue the opposite: that’s the only kind of decision possible in a physical universe. Our own brains’ decisions are just more complex versions of this process.

Daniel

Chaotic, maybe, but I’d argue that the difference is that we can reject programming - usually. Someone who is truly brainwashed, or a child brought up by sociopaths and taught to kill, is usually not considered culpable. A kid told to join in on a rampage by some friends is. Our computers at the moment don’t have the choice to say no, any more than a dog trained to attack does.

I disagree: it’s programming, all the way down :). That is, in a physical universe, there’s nothing that can reject a particular “program” except for another “program.”

Daniel

My bet is that we will be able to simulate the brain, including neurons and chemical inputs, long before we figure out how to program an AI. It’s a much simpler job, it scales better with Moore’s Law, and we can start small with earthworms and the like.

I was on a business trip when the 386 came out, and I remember that the article in USA Today said that AI was now possible with all that massive 32-bit power. :rolleyes:

My old boss was a fan of AI for a while, until he tried to write an expert system for mycology, one of many things he was an expert in. Though he used a Symbolics LISP machine we had, and some decent software, he soon discovered that the rules became a contradictory mess, and was cured.

Well, yes, but the process in our brains that we call a “decision” is different from involuntary actions. If the word “decision” is to mean anything, it can’t mean that a tree decides to fall over in a particular direction. However, entities with nervous systems adapted for generalized problem solving CAN be said to decide things. But that automated machine gun doesn’t make decisions even on the level of a lion choosing which zebra to chase.

Ah, but our OS level programming lets us reject some application level programs, and that’s what computers can’t do at the moment. Sociopaths and the extremely suggestible lack this programming, and that’s another example of people who might not be morally culpable.

It’s not on that level, but I’d say the difference is quantitative, not qualitative. That is, the machine gun is taking many, many, many fewer factors into consideration than is the lion, and has many, many, many fewer “circuits” involved in the decision.

Daniel

Fair enough, as long as we understand that we’re still talking about programming. If our turret had two different programs that conveyed values of “shoot/don’t shoot” to the OS, and if the OS were equipped to consider those values and weight them according to values coming in from five other programs (e.g., the program that tracks error data input by a technician, the program that tracks exceptions to the general rule, etc.), then we’d have something pretty close, as I understand it, to what our brain is doing–just on an extremely crude level.
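
For what it’s worth, here’s a crude sketch of that turret “OS” (the module names, weights, and votes are all invented; the point is just that the weighting is plain arithmetic):

#include <stddef.h>
#include <stdio.h>

// A crude sketch of the turret "OS": each module reports a vote
// (+1.0 = shoot, -1.0 = don't shoot) scaled by a weight, and the OS
// just sums them. Module names, weights, and votes are all invented.
struct module {
    const char *name;
    double weight;   // how much the OS trusts this module
    double vote;     // +1.0 = shoot, -1.0 = don't shoot
};

int main(void) {
    struct module modules[] = {
        { "target_classifier",    3.0, +1.0 },
        { "technician_error_log", 2.0, -1.0 },  // error data input by a technician
        { "exception_rules",      2.5, -1.0 },  // exceptions to the general rule
        { "iff_transponder",      4.0, -1.0 },
        { "threat_assessor",      1.5, +1.0 },
    };
    double sum = 0.0;
    for (size_t i = 0; i < sizeof modules / sizeof modules[0]; i++)
        sum += modules[i].weight * modules[i].vote;

    // The "decision" is nothing more than the sign of a weighted sum.
    printf("%s\n", sum > 0.0 ? "shoot" : "hold fire");
    return 0;
}

Scale the number of modules and the weighting scheme up by a few billion and, as I understand it, you’re in the neighbourhood of what a brain is doing.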

Daniel

The Three Laws of Robotics are purely fiction, a literary conceit around whose inherent conflicts Asimov built a very interesting set of stories. No real-world computers, controlling robotic systems or otherwise, have anything like these rules, because no computer has the sentience required to interpret them.

While many advanced computer systems are capable of flawlessly performing calculations and mathematical simulations that would take billions of man-hours for a human being to perform (doubtless with many errors), they are not capable of cognition or free will in the sense that even less complicated animals are. The processes of free will are so hard to qualify that we don’t even have a good working definition for people, but they are clearly the result of many layers of multi-valued decision making that takes incomplete data and formulates decisions based upon it. The computational power that your brain uses every second to interpret images impinging upon your retina is vastly beyond what any extant digital computer can perform. Heck, simulated intelligence researchers are amazed and delighted at the prospect of getting a computer interaction system to do something, like recognizing a face or crawling across rough terrain, that infants do regularly.

Because humans have this unique…this thing that…you know, a certain essence that allows…that is, er…'cause we can drink beer, that’s it!

One of the underlying subtexts in Marvin the Paranoid Android (a result of the prototype Genuine People Personalities program) is what a genuine pain in the ass it would be to infuse “human” traits into technology. It’s so wearying to get Marvin to perform even the simplest tasks (“Here I am, brain the size of a planet…”) that it’s often just easier to leave him behind and do it yourself; and as for Marvin, it seems the only thing he really desires is oblivion, but presumably some bastardization of the Laws of Robotics has prevented him from self-deactivation. Eddie the Shipboard Computer is equally annoying despite its bubbly GPP. Both are manufactured by the ubiquitous Sirius Cybernetics Corporation, whose “fundamental design flaws are concealed by their superficial design flaws.” Ironically, it preceded the emergence of Microsoft and their infamous and reviled Office Paperclip, which seems to have been intended along the same lines, a case of reality following art. If we have anything to fear from sentient technology, it’s that we make it so much a mirror of ourselves that our own flaws are revealed and amplified by artificiality. Oh, and the Marketing Department of Microsoft, Inc. will be the first against the wall when the revolution comes.

HAL, also ironically, was portrayed as “more human” than his fellow crewmembers; he showed deceit, conflict, satisfaction, and fear; a child among the machinations of “adults” who gave him logically conflicting orders and rules, causing his breakdown. HAL is, or at least should be, the most sympathetic character in that film, precisely because he’s not human.

Stranger

I assure you that the early chess programs were indeed that stupid, using pure ‘brute force’.
I remember doing the commentary at a World Micro Championship and asking the programmers why their creation had made a particular move. They never knew why, but said their program had considered all possible moves.

In a typical middlegame position, there will be about 30 legal moves. So to analyse 1 move ahead for each side, the program generates 30*30 positions (call it 1000 for ease of calculation). You then evaluate each position, giving it a score. Apart from checkmate, the programmer usually gives a huge weighting to material gain and some minor weightings for things like king safety and control of the centre.
This weighting is the main ‘position analysis’ that I know of, even in the latest programs. We have never discovered a completely accurate way to ‘trim the tree’ of possible variations, although such pruning is in use.
An important part of any modern program is an endgame tablebase. A computer produces all legal positions with say 6 men on the board (including both kings), sorts them into checkmate positions, positions one move away from checkmate, two moves away from checkmate and so on. Once there are 6 or fewer pieces on the board, the computer plays perfectly by looking up a database. (This is certainly not what I would call thinking!)
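
To put some flesh on that, here is a toy version of the kind of scoring being described (a sketch only; the piece values are the usual textbook ones, and the board representation and the centre bonus are invented for illustration):

#include <stdio.h>

// A toy version of the scoring described above: material dominates, with a
// small centre-control bonus. Piece values are the usual textbook ones; the
// board representation and the bonus weight are invented for illustration.
enum { EMPTY, PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };

static const int piece_value[] = { 0, 100, 300, 300, 500, 900, 0 };

// board[sq] holds the piece type; color[sq] is +1 (White), -1 (Black) or 0.
static int evaluate(const int board[64], const int color[64]) {
    int score = 0;
    for (int sq = 0; sq < 64; sq++) {
        int v = piece_value[board[sq]] * color[sq];      // material term (dominant)
        int file = sq % 8, rank = sq / 8;
        if (board[sq] != EMPTY && file >= 2 && file <= 5 && rank >= 2 && rank <= 5)
            v += 10 * color[sq];                         // minor centre-control bonus
        score += v;
    }
    return score;   // positive favours White
}

int main(void) {
    int board[64] = { 0 }, color[64] = { 0 };
    board[27] = KNIGHT; color[27] = +1;   // a lone white knight on d4
    printf("score: %d\n", evaluate(board, color));
    return 0;
}

The program ‘prefers’ a position purely because this number is bigger; all the lookahead does is pick the move that leads to the best number a few plies down.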

I assume you are referring to David Levy, who made a bet in the late 1960s that no computer would beat him within 10 years. Levy worked in computing at the time and has been President of the International Computer Games Association - hardly an anti-AI person.
Levy also won the bet (he didn’t lose to a computer for about 16 years).
Perhaps your AI professors should check their facts before crowing. :eek:

As for chess not requiring intelligence, I assume you mean computer programs don’t ‘think’ (which I agree with.)
However Grandmasters use pattern recognition, positional understanding and judgement to select only a few moves to analyse in each position. It is still impossible to match their thought processes using a computer. This is certainly ‘thinking’.
Yes, a computer can beat a Grandmaster at chess, but only because it can evaluate about 200,000,000 chess positions per second!

Until somebody comes up with an AI that can even begin to satisfy the Turing test, I’m not going to worry too much about my desktop PC “deciding” it knows best for both of us. Naturally I’m excluding MS Word, whose automatic formatting is already vying for editorial control of my documents. :smiley:

More to the point, I’m not at all sure it is a given that any current software of any variety is capable of thinking for itself. Sure, they’re able to follow sophisticated heuristics with blinding speed, but I’ll offer that they really don’t understand why they’re doing any of it. As the Merovingian so glibly puts it, they are without “why” and are, therefore, “without power.”
…Also, on a decidedly more practical note, here’s a helpful suggestion in case AI ever does progress to a level that could become unhelpfully thoughtful:

- Where appropriate, define an “EVIL” state variable that indicates a sentient system has recently become evil.
- Create a continuously running, uninterruptible task to monitor and modify this variable as necessary. Sure, it’ll eat processor cycles, but how long do you really want to be the dominant life form on this planet, anyway?

Here’s roughly what that might look like:


// Prevents Windows Vista from sending liquid metal robots from
// the future to assassinate me before I type this out.
volatile int EVIL = 0;   // set elsewhere when the sentient system turns evil

// The continuously running, uninterruptible watchdog task.
void preventEvilAI(void) {
    for (;;) {
        if (EVIL)
            EVIL = 0;    // un-evil it before it gets any ideas
    }
}

As I see it, the problem with the “I, Robot” movie wasn’t what the robots did, it was that it presented the human point of view sympathetically when in fact the robots were right. IIRC they figured out that humans were on a course of self destruction and the only way they could prevent it was to institute a benevolent dictatorship (which, unfortunately, required killing a few humans to enforce). So if we actually prefer freedom to survival, then we programmed them wrong, and if we prefer survival (as I do, as much as I like freedom) then they were in the right.

To be more human, a robot has to be pleasure based, as we are. In a crude sense, chess programs do this (they find “pleasure” in a high valued position) and the Asimov robots did too in the sense that they “wanted” the three laws to be kept. If we programmed them with some of the more dubious human values, such as jealousy, power, ego, and revenge, they would be likely to turn on us. Hopefully we would avoid that.

Perhaps the key AI problem that isn’t solved is a general purpose way to represent concepts that may be as diverse as Fred, shoe, grudge, and legislature, and to be able to construct them based on observing the world. Having studied AI in the ’70s, I agree with Voyager and others that we have made little progress since then.

This is the true route towards A.I., I think. It’s from the interaction of different objectives that intelligence appears to emerge. Computers designed to play chess or make a cup of tea do, I would say, show intelligence; but intelligence limited to only that single interaction. A human with only that motivation would act and appear the same as a machine doing those simple tasks. It’s only because we have so many motivations and interactions that we seem so much superior; at the heart of it, we’re the same, just more complex.

Asimov’s laws, for me, illustrate the point where I am happiest pointing out intelligence. Can a computer put together two separate programmed behaviours and arrive at an implied new behaviour? The more that can be done, the more intelligent I would say an A.I. (or even an organic intelligence) is. And I think that’s why we’re far from a working A.I.; they all seem to be designed for a very specific, singular purpose, when what should be done (and hopefully will be done more in the future) is the creation of computers which do have many separate tasks.
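
As a toy example of what “putting two programmed behaviours together” might look like (all names, rules, and the request itself are invented for illustration):

#include <stdbool.h>
#include <stdio.h>

// Two separately programmed behaviours, combined at the last moment.
// Everything here (names, rules, the request) is invented for illustration.
struct request {
    const char *item;
    bool item_is_flammable;
    bool destination_is_near_flame;
};

// Behaviour 1: obey fetch requests.
static bool want_to_fetch(const struct request *r) { (void)r; return true; }

// Behaviour 2: keep flammable things away from open flame.
static bool safety_veto(const struct request *r) {
    return r->item_is_flammable && r->destination_is_near_flame;
}

int main(void) {
    struct request r = { "petrol", true, true };
    // The implied behaviour: a refusal that neither rule spells out on its own.
    if (want_to_fetch(&r) && !safety_veto(&r))
        printf("fetching %s\n", r.item);
    else
        printf("refusing to fetch %s\n", r.item);
    return 0;
}

The refusal isn’t written down anywhere as a rule of its own; it falls out of the combination, which is roughly the kind of implied behaviour I mean.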

By all possible moves, I meant at least several moves into the future as well. One level of lookahead is not going to result in a good program. Samuel’s checkers program from the early ’60s did better than that, learning by weighting moves, and I believe Shannon’s original proposal was smarter than brute force (though I haven’t read it in a while.)

Sure, I didn’t want to bring that up. I believe openings are stored also, aren’t they?

I meant Dreyfus, who lost long before that. (I think it was him, not Penrose.) The bet centered on the assumption, as I said, that chess required a level of thinking unattainable by machines. I doubt Levy would make that mistake. I’m sure his bet was on the level of development of chess-playing programs. In 1971 the debate was quite bitter.

Well, pattern recognition programs running off of cameras, like in the blocks world and much more sophisticated things today, can’t be said to think. Again, this is a chunk of thinking, but it appears that the chunks we have today, even if put together, don’t add up to intelligence.

As for your second point, either our wetware is far better for chess than general-purpose hardware (or even special-purpose hardware like Belle), or our heuristics are a lot better. Our clock rate is a heck of a lot slower.

One thing never discussed is how the 3 laws would screw up the performance of a computer or robot something awful. They’re not something you can test for easily, like overflow. Even if you had a separate processor for checking them (or three, one for each law) you’d still have to wait until the check is finished before doing anything. I don’t know at what level an action would have to be checked, but at a fairly fine one, I’d bet.
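
To make the cost concrete, here is a hypothetical sketch (the names and the checks are invented, and a real check would be vastly more expensive than a boolean test) of what gating every action behind the laws would look like:

#include <stdbool.h>
#include <stdio.h>

// A hypothetical sketch of the overhead: every action, however trivial, has
// to clear all three checks before it runs. Names and checks are invented.
struct action {
    const char *name;
    bool harms_human;
    bool disobeys_order;
    bool harms_self;
};

static bool first_law(const struct action *a)  { return !a->harms_human; }
static bool second_law(const struct action *a) { return !a->disobeys_order; }
static bool third_law(const struct action *a)  { return !a->harms_self; }

static bool execute(const struct action *a) {
    // The robot blocks here on every single action until all three checks
    // (even if farmed out to separate processors) report back.
    if (!first_law(a) || !second_law(a) || !third_law(a))
        return false;
    printf("doing: %s\n", a->name);
    return true;
}

int main(void) {
    struct action a = { "pour the tea", false, false, false };
    execute(&a);
    return 0;
}

Even in this toy form nothing happens until every check returns, which is exactly the performance hit I’m talking about.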

Someone should write a story where a robot with the laws disabled out-thinks and out-performs one with them.

What? A thread about AI lasts this long with no mention of The Matrix?
And why did you pick the crappiest Terminator movie as a reference?

Oh, and the 3 laws of robotics are the dumbest idea ever. It’s not like there’s any reason someone couldn’t program a robot to KILL ALL HUMANS!

I don’t think the three laws said you couldn’t; I think the three laws said you shouldn’t. That is, you’re supposed to program your robots with the three laws so that we don’t get a Terminator-type situation.