Rise of the Machines, part N: Robot copies itself

US robot builds copies of itself (from the BBC)

Well, they can run and jump, digest organic material to get energy, think like humans (or rats), and now they can reproduce.

Eventually they will get all those features together in one unit. Perhaps we should be cautious.

I, for one, welcome our new self-replicating overlords. :smiley:

And, well, I’ve wondered. Much of human motivation in history has stemmed from self-interest and/or the desire to find a greater meaning to life/existence. Maybe we don’t need to fear robots. This is going to be **LONG**, sorry :o

I’m ignoring that rat-computer here, as that technology is really about cyborgs, which would essentially have the same problems as any kind of super-powered people (1).

Robots, by their very “nature,” would not think as humans do, no matter how self-aware or emotional we make them. They would know exactly why they exist, the meaning of that existence, who created them, and what their true purpose is.
It is an act of projection (on the grandest scale) to assume they would feel the same need we do to question and discover these things, or the same need to aggrandize their purpose as being something more than a tool to fulfill a simple function (for us, procreation and survival of the species).
If they had “cold logic,” they would be even less of a threat. They would know their purpose, and believe that they should only continue to function as long as it is beneficial for them to perform that task. They would see no reason to turn against humankind unless it was intrinsic to their programming, and would not take on such a monumental task (2) without a clear necessity for it.
Most likely, their only strong self-interest would be to facilitate their efficiency at their intended purpose. They might seek entertainment in their spare time, but would not prioritize that over doing their job, as we (ahem I ahem :wink: ) often do. They wouldn’t care about fame, money, or power unless it directly applied to their programming.
Basically, I think the idea that AIs would inevitably turn against mankind is a fallacy, except in the (let’s face it, unlikely) case of robots built like Data and Lore in Star Trek: TNG, who did not really have such things hard-wired in and were told to be as “natural” as possible.

But that’s just my opinion.

1.) Probability says there would be at least as many societally obedient, well-intentioned ones as deviant ones.
2.) If you ask me, stories like the Terminator or the Matrix series always seem to woefully underestimate mankind’s dual capacities for destruction and survival. Robot armies would use the same technology as us (hellooo, they were built by us?), would likely lack the creative strategic ability of human minds, and would therefore hold only a slight advantage in force quality. Never mind the fact that they would be massively outnumbered, as there would never be an economically valid demand for their production on a scale to rival human overpopulation, and it would be a simple matter to deny them the means of production with which to build up sufficient numbers to counter our military in the first place. They’d be WW2’s Germany or Japan against mankind’s USA. Nuclear war is right out, as they would be wiped out by it as well. I mean, computers get overheated without even needing a fire; how the hell is an entire army of sophisticated electronics going to survive thermonuclear war? Just as we could counteract chemical and biological warfare, but would probably not take drastic measures to do so until it was already being used, they would be able to defend against any amount of computer viruses and EMPs, but would probably not spend resources to guard against them on a sufficient scale until it had already been used.

PS. For a really neat idea about programming robot sentience, check out The Number 000 Blues, a sprite-web-fan-comic about the Megaman series of videogames (the classic storyline, not X, Zero, or EXE).
The idea behind it is that sentience programming includes self-awareness and learning ability, but in order to simulate human thought and emotion, they had to program Blues (aka Protoman, Breakman, etc.) to have three separate AIs: one that is programmed solely to act on self-interest, one that strictly adheres to rules and laws, and one that tries to rationally examine both arguments and decides which to act on. In other words, Sigmund Freud’s Id, Superego, and Ego, respectively. Though in the case of Blues, the first true thinking robot, the system malfunctions and becomes three distinctly separate personalities, feuding in his conscious mind (as opposed to running in the subconscious) over every action.
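Just for fun, here’s a minimal Python sketch of how that three-AI arbitration might look. All of the action names and scores here are made up by me for illustration; they’re not from the comic, just a toy model of “Id votes self-interest, Superego votes rules, Ego arbitrates”:

```python
# Toy model of the three-AI scheme: an "id" agent scoring pure
# self-interest, a "superego" agent scoring rule-adherence, and an
# "ego" arbiter that weighs their votes and picks an action.
# All actions and scores are invented for this example.

def id_vote(action):
    # Pure self-interest: favors whatever benefits the robot itself.
    return {"recharge": 1.0, "obey_order": 0.4, "self_destruct": 0.0}[action]

def superego_vote(action):
    # Strict rule-following: favors whatever the rules demand.
    return {"recharge": 0.3, "obey_order": 1.0, "self_destruct": 0.1}[action]

def ego_decide(actions, weight_id=0.5, weight_superego=0.5):
    # Rational arbiter: picks the action with the best combined score.
    return max(actions, key=lambda a: weight_id * id_vote(a)
                                      + weight_superego * superego_vote(a))

print(ego_decide(["recharge", "obey_order", "self_destruct"]))  # obey_order
```

In Blues’s case, the malfunction would be like the arbiter failing and all three agents printing their “votes” to his conscious mind at once.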

Again, please excuse the length. :smack:

As to my arguments against their ability to fight us if they actually did turn against us, I would like to further point out that if, as in the OP, they used organic material as fuel, they would be completely unable to employ NBC-Warfare (1), as it would destroy their very means of survival.

1.) Nuclear/Biological/Chemical

That’s like saying farmers can’t use pesticides.

Your optimism is touching. It reminds me of the early days of the Internet, before widespread viral attacks became a reality.

I read a story in OMNI back in the early ’80s about self-replicating robots. Unfortunately, someone misplaced a decimal in the program, and each generation was 1/10 the size of the previous one. These little robots had one function: to self-replicate by scavenging iron and steel wherever they could find it to build the next generation. Eventually they became microscopic (or nearly so) and multiplied their numbers exponentially. I remember that the Eiffel Tower was in danger. Fortunately it rained, and the robots rusted and died.

This is what made me laugh about the end of that already-stupid movie, Terminator 3: Rise of the Machines. (Possible spoiler here, as if anyone cares…) When Connor finally gets down to the bunker underneath the mountain and discovers that there is no Skynet mainframe, he comes to the meant-to-be-disturbing realization that Skynet is really just a very large distributed computing system, based all over the internet, getting its pervasive omnipotence from the combined computing power of millions of ordinary machines in office buildings, universities, homes, and blah blah blah - sort of like an evil SETI@home project. Of course, this means that had a Skynet-like system actually nuked every major city, government facility, and military installation on earth, it would have immediately destroyed most of its own infrastructure, making for a very short movie and certainly no sequels.

Of course, it also quickly dawned on me that I was overexamining one logical flaw in a movie about time-traveling robots.

I missed this thread, and no one added to mine: