So why do we seek to create an artificial intelligence?
As a tool? As a sentient companion?
AI research has already produced robots that perform given tasks efficiently, under the control of human programming.
So in the future, do you think robots, if they reach a high degree of intelligence (almost human, say), will be free of human control?
I personally see the desire for intelligent machines to do man’s work for him as a substitute for slavery. Now, considering they are only machines, this is ethically forgivable, but at what point do they stop being machines and acquire moral rights?
Since I believe that true machine intelligence (if it ever comes about) will arise from a process not unlike the development of a human child (start with a machine capable of containing a mind and stimulate it in such a way as to provoke the development of one), rights may originate as a matter of practicality: if a great deal of effort has been invested in bringing the mind into being, then the agencies involved would most likely be quite interested in preserving its existence.
Whether an AI would desire rights is quite an interesting issue in itself; is self-preservation a necessary precursor to intelligence? Would the AI view the end of its own existence with something analogous to fear?
A big difference, I suppose, is that the artificial being would be entirely dependent on hardware (the machine in which the AI resides) which was the property of a human individual or agency.
I don’t think humans are free of other humans’ control, so no. But they will approach our legal freedom, given that people will eventually empathize with machines that emote.
We’re “just” machines, too. When did we acquire moral rights? When we developed a method of calling things ‘bad’?
Vorlon, I disagree. Some of us do reason about morals and moral rights. While they may not be the ultimate or final determiner, I think they are helpful to clear away confusion.
The movies may actually help on this one; how many people, I wonder, watched Short Circuit and thought to themselves, “it’s just a malfunctioning machine, take it offline and fix it”? Admittedly the entire thrust of the thing was aimed at creating audience empathy, but I do believe that people would be prepared to accept that an artificially intelligent entity could be, in some sense of the word, a person.
Short Circuit was an example of a movie where the machine was so heavily anthropomorphized that it was impossible not to think of it as a person. As machines become intelligent, it may be far more difficult to identify with them and make the transition from considering them tools to considering them moral equals.
This is an issue our children’s children will most likely have to confront and I cannot be certain what their attitude towards technology might be. Will we distrust machines that act with their own agenda in mind instead of ours? Will the machines distrust us? It is an interesting question, but still quite open-ended.
I’m sorry, simulate, not emulate. Sorry 'bout that.
I would say this is obvious. There is no collective consciousness that thinks about its constituent parts. But we make laws (in principle) by checking with individuals, and individuals debate whether a law should be passed. If we reason it out, so mote it be, more or less.
When and if we develop a machine capable of independent rational thought, our problem won’t be in forcing it to have empathy for humans, but rather in convincing it that that very empathy isn’t artificially induced. Any independent entity that looks at the sum of human history is more than likely going to consider humans to be bloodthirsty savages who fanatically kill based on simple beliefs.
Imagine an intelligent computer or robot after it has processed this. It is a potentially immortal being with near-infinite potential surrounded by a bunch of easily riled monkeys with clubs. I personally wouldn’t be surprised if the grimmest predictions of science fiction at least enter its mind as possibilities.
To paraphrase Blade Runner: Humans are just like any other machines; they’re either a benefit or a hazard. If they’re a benefit, they’re not my problem.
The main problem with technology like this is that we will develop it, if it is possible to do so. There is no technology I know of that humans have simply given up on because it was too dangerous. Nuclear and biological weapons are about as dangerous a topic as one can imagine, but that didn’t stop the monkeys. Nanotechnology could conceivably be just as dangerous, if not more so, but the monkeys are hammering away, all trying to be the first to really pull it off.
If we make them as clever and inquisitive as we are, but they can think at, say, 20 GHz, we could really have a problem on our hands. The whole “do they deserve to be treated like folks?” question is a little beside the point. Just keep your hand on the plug.
To poorly quote Michael Crichton (of all people) from memory: “They thought they knew what they were doing. I hope that’s not written on humanity’s tombstone.”
I am completely against AI, or any other thinking machines. I will start my own Jihad against them if they come into existence in my time.
[sub]Before you get all crazy on my Jihad comment, I will tell you that this is an obscure reference to a certain famous sci-fi novel series that you need to read. But I stand by my sentiment that I will not tolerate the invention of true AI.[/sub]
I think that one question we’ve gotta figure out is whether emotions are only possible from all of the chemical muck in our brains, or whether a machine made of silicon and wire, with a distinct lack of chemical muck, could feel emotions.

If the latter is the case, then I see no reason why strong AI would be impossible, and I also see no reason why its development should be taboo (if the machines are capable of emotion like we are, then I’d call it a safe assumption that they’re capable of empathy). If the former is the case (and machines made of silicon and wires are incapable of feeling emotions), then I’d say that a strong AI would be exceedingly dangerous (since the machines probably wouldn’t feel any empathy without emotions) but would fortunately also be impossible IMO (I believe that emotions are a necessary component of a being’s “will,” i.e. that a being incapable of feeling emotion would have no driving force and would not be “alive,” or at the very least would not be sentient).

The question of “can silicon and wires feel emotions?” will likely be answered by the development of artificial replacement neurons for brain-damaged patients before it is answered by the development of an actual strong AI. Implantation of artificial neurons would provide evidence that could falsify or affirm the possibility of strong AI, whereas pure engineering can only affirm the possibility (which, if such affirmation is possible, would probably be quite some time in coming).
Strong AI would also be capable of other human emotions like hate, prejudice, and selfishness.
Also, the first applications of a strong AI would most likely be military in nature. And the military would not want an AI to feel universal empathy for humans.
The development of AI is dangerous because they will be created in the image of the human mind, with all its pettiness, anger, depression, neuroses, cleverness, and ruthlessness. AI will be especially dangerous when it has unlimited access to resources such as factory floors, armored mecha, and central intelligence agencies.
The worst part of it is that AIs will be developed one day, and the nightmare scenarios will play themselves out. I can only hope that I won’t be alive to see that day.
True; it remains to be seen how the first AIs will be packaged and presented to us - it’s not inconceivable that there might be an element of anthropomorphism involved there too.