And well, I’ve wondered. Much of human motivation in history has stemmed from self-interest and/or the desire to find a greater meaning to life and existence. Maybe we don’t need to fear robots. This is going to be **LONG**, sorry :o
I’m ignoring that rat-computer here, as that technology is really about cyborgs, which would essentially have the same problems as any kind of super-powered people (1).
Robots, by their very “nature,” would not think as humans do, no matter how self-aware or emotional we make them. They would know exactly why they exist, the meaning of that existence, who created them, and what their true purpose is.
It is an act of projection (on the grandest scale) to assume they would feel the same need we do to question and discover these things, or the same need to aggrandize their purpose as being something more than a tool fulfilling a simple function (procreation and survival of the species, in our case).
If they had “cold logic,” they would be even less of a threat. They would know their purpose, and believe that they should only continue to function as long as it is beneficial for them to perform that task. They would see no reason to turn against humankind unless it was intrinsic to their programming, and would not take on such a monumental task (2) without a clear necessity for it.
Most likely, their only strong self-interest would be to improve their efficiency at their intended purpose. They might seek entertainment in their spare time, but would not prioritize it over doing their job, as we (*ahem* I *ahem*) often do. They wouldn’t care about fame, money, or power unless it directly applied to their programming.
Basically, I think the idea that AIs would inevitably turn against mankind is a fallacy, except in the (let’s face it, unlikely) case of robots built like Data and Lore in *Star Trek: TNG*, who did not really have such things hard-wired in and were told to be as “natural” as possible.
But that’s just my opinion.
1.) Probability says there would be at least as many societally obedient, well-intentioned ones as there would be deviant ones.
2.) If you ask me, stories like the Terminator or the Matrix series always seem to woefully underestimate mankind’s dual capacities for destruction and survival. Robot armies would use the same technology as us (hellooo, they were built by us?), would likely lack the creative strategic ability of human minds, and would therefore field only a slight advantage in terms of force quality. Never mind the fact that they would be massively outnumbered: there would never be an economically valid demand for producing them on a scale to rival human overpopulation, and it would be a simple matter to deny them the means of production needed to build up numbers sufficient to counter our military in the first place. They’d be WW2’s Germany or Japan against mankind’s USA. Nuclear war is right out, as they would be wiped out by it as well. I mean, computers get overheated without even needing a fire; how the hell is an entire army of sophisticated electronics going to survive thermonuclear war? Just as we could counteract chemical and biological warfare, but would probably not take the drastic measures to do so until it was already being used, they would be able to defend against any amount of computer viruses and EMPs, but would probably not spend resources guarding against them on a sufficient scale until they had already been used.
PS. For a really neat idea about programming robot sentience, check out *The Number 000 Blues*, a sprite-based web fan-comic about the *Megaman* series of video games (the classic storyline, not X, Zero, or EXE).
The idea behind it is that sentience programming includes self-awareness and learning ability, but in order to simulate human thought and emotion, they had to program Blues (aka Protoman, Breakman, etc.) with three separate AIs: one programmed solely to act on self-interest, one that strictly adheres to rules and laws, and one that rationally examines both arguments and decides which to act on. In other words, Sigmund Freud’s Id, Superego, and Ego, respectively. Though in the case of Blues, the first true thinking robot, the system malfunctions and the three become distinctly separate personalities, feuding in his conscious mind (as opposed to running in the subconscious) over every action.
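Just for fun, here’s a minimal sketch of how that three-AI arbitration could look in code. Everything here is invented for illustration (the module names, the numeric “stakes” scoring, the threshold rule); the comic doesn’t specify any of this — it’s just one way to wire an Id, a Superego, and an Ego arbiter together.

```python
# Hypothetical sketch of the three-AI model: an Id module acting purely on
# self-interest, a Superego module that strictly follows the rules, and an
# Ego module that weighs both proposals and picks one. The scoring fields
# and threshold logic are made-up stand-ins for real deliberation.

from dataclasses import dataclass


@dataclass
class Proposal:
    action: str
    rationale: str


def id_module(situation: dict) -> Proposal:
    # Pure self-interest: always proposes whatever benefits the self.
    return Proposal(situation["selfish_option"], "maximizes self-interest")


def superego_module(situation: dict) -> Proposal:
    # Strict rule-following: always proposes what the rules demand.
    return Proposal(situation["lawful_option"], "required by the rules")


def ego_module(situation: dict, id_prop: Proposal,
               superego_prop: Proposal) -> Proposal:
    # Rational arbiter: sides with self-interest only when the stakes
    # outweigh the importance of the rule being broken.
    if situation["self_interest_stakes"] > situation["rule_importance"]:
        return id_prop
    return superego_prop


def decide(situation: dict) -> Proposal:
    # Run all three "minds" and let the Ego settle the dispute.
    id_prop = id_module(situation)
    superego_prop = superego_module(situation)
    return ego_module(situation, id_prop, superego_prop)
```

In Blues’s case, of course, the whole point is that this last arbitration step breaks down, leaving the three modules arguing as separate personalities instead of resolving into a single decision.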
Again, please excuse the length. :smack: