AI rights

Yeah, I think it’s dangerous to assume that an “intelligence” will necessarily be benign, or that it will be closely analogous to human intelligence.

Hell, will we even recognize AI when it happens? Or will it be so dissimilar to human intelligence that it will pass beneath our notice? (Or maybe that should be above our notice.)

Well, it’s pretty self-evident that we need some set of criteria to establish something as “worthy of rights”. We can’t just assign rights to every rock and toaster we see. So what should those criteria encompass? I think “intelligence” is a starting point. Now, what kind of rights are we talking about? The right to vote? The right to exist? Let’s start with that last one. The right, speaking of a computer program here, not to be dragged into the Recycle Bin, so to speak.

How do we define intelligence, then? If it can add really fast? If it can make decisions? My video games can do that. Have a favorite color? That’s trivial. Choose its own favorite color? Have distinct and non-deterministic favorite things? Again, not hard to program (depending upon how you define “non-deterministic”), and these are all things I could find in a decent sim-type computer game. Choices and the ability to make decisions are things programs can already do today, and I would guess that we don’t want to rush out and give property rights to the people from The Sims. So what does that leave? Self-awareness is the only thing I can think of. It’s at least consistent with existing laws: we currently give limited rights to things that aren’t that intelligent, but that are sentient. Dogs, cats, and such. I wouldn’t trust my kitties to do my taxes, but they at least recognize that they exist enough to meow at me incessantly when I forget to feed them.
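To underline how trivial this is, here’s a minimal sketch (the class and names are mine, purely illustrative) of an “agent” that chooses its own favorite color and makes non-deterministic decisions, in just a few lines:

```python
import random

# A toy "agent" with a self-chosen, nondeterministic favorite color.
# The point: this behavior is easy to program and clearly isn't sentience.
class SimAgent:
    COLORS = ["red", "green", "blue", "violet"]

    def __init__(self, seed=None):
        self._rng = random.Random(seed)
        # The agent "chooses" its own favorite color at creation time.
        self.favorite_color = self._rng.choice(self.COLORS)

    def decide(self, options):
        # A "decision": rank the options by random preference and pick one.
        return max(options, key=lambda _: self._rng.random())

agent = SimAgent()
print(agent.favorite_color)                      # one of the four colors
print(agent.decide(["eat", "sleep", "play"]))    # one of the three options
```

Whether that counts as “non-deterministic” depends, as noted, on your definition; seed the generator and it becomes perfectly repeatable.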

Complexity doesn’t imply that something is intelligent, or worthy of respect. Climate simulation programs (though lousy at actually telling you much about the climate) are extremely complex, and mathematically very impressive. I greatly respect the programmers who created them. But do I have respect for the software, in the sense that you’re talking about? Hardly. And I feel the same way about the current batch of programs that are stabs at actual AI. Nice, but I don’t have “respect” for them. I certainly wouldn’t feel a sense of moral conflict over whether or not to delete one of them.

Jeff

Yes, neural nets are one way to develop a program that does more than churn through a list of preordained macroscopic conditions and responses; there are probably others, and there are probably other methods of implementing neural nets than the ones I’ve read about (at the moment, they all seem to work on the idea of computed ‘frames’; I think a more reactive analogue system would be nice). Edward De Bono made some interesting statements about self-organising systems in his book Water Logic which, although they were not actually directed at AI, may be applicable (in short, the network could organise its own connections, as well as just adjusting the ‘weights’).
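The distinction between adjusting weights and reorganising connections can be sketched in code. This is a toy of my own devising, not De Bono’s model or any standard training algorithm: the network both reinforces link weights (the usual neural-net move) and rewires itself by pruning weak links and sprouting new ones.

```python
import random

# Toy self-organising network: it tunes weights AND rewires its own
# topology, pruning weak links and growing new random ones.
class SelfOrganizingNet:
    def __init__(self, n_nodes, seed=0):
        self.n = n_nodes
        self.rng = random.Random(seed)
        # weights[(i, j)] is the strength of the link from node i to node j.
        self.weights = {}
        for _ in range(n_nodes * 2):
            i, j = self.rng.randrange(n_nodes), self.rng.randrange(n_nodes)
            if i != j:
                self.weights[(i, j)] = self.rng.uniform(0.1, 1.0)

    def reinforce(self, i, j, amount=0.1):
        # Ordinary weight adjustment, as in a conventional neural net.
        if (i, j) in self.weights:
            self.weights[(i, j)] += amount

    def reorganize(self, prune_below=0.2):
        # Beyond weight tuning: drop links weaker than the threshold,
        # then sprout one new random connection.
        self.weights = {k: w for k, w in self.weights.items()
                        if w >= prune_below}
        i, j = self.rng.randrange(self.n), self.rng.randrange(self.n)
        if i != j and (i, j) not in self.weights:
            self.weights[(i, j)] = 0.5
```

Whether the frequently reinforced links survive and the unused ones disappear is decided by the network’s own activity, not by a fixed wiring diagram, which is roughly the “organise its own connections” idea above.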

Interesting. I feel much the same way about human beings. :wink: