Well, I now enter the ranks of the double-digit posters, so I want to make my big one-zero a good one.
This post is about some thoughts I had about technology and machines replacing humans. This all stems from me re-reading Ray Kurzweil's book The Age of Spiritual Machines (it's all about the evolution of technology, so if you're interested in that stuff I suggest you read it). Anyway, to get on with my debate:
I was wondering why there is always a great battle between machines and humans in media (The Matrix, 2001: A Space Odyssey, etc.). Eventually, of course, humans end up winning. But why do people feel so threatened by the possibility of being replaced as the dominant species? Wouldn't it make more sense to just hand over that title to machines when the time comes? Do some people believe that machines will never become as intelligent as (or more intelligent than) we are? If so, why? What do you think will happen to humans as we are replaced by smarter and smarter machines?
Looking forward to some interesting insights…
Wearia
Err… No. And there is always the feeling that since we created them, we should dominate them. Also, please clarify how exactly these machines will evolve? Or are you using the word in a more general sense?
Hmph… Well, intelligence-wise, I guess. While computers are currently superior to us in speed, we outmatch them in every other aspect (thought, emotion, etc.). But at some point, with advances in AI, machines will be able to mimic thought and emotion, and beyond mimicking, actually become able to experience thought and emotion. It's not evolution as such, but it's basically the same idea.
Why the dominance? What's wrong with coexistence? As long as we humans get comfy jobs as writers and poets, I'd be perfectly happy with machines taking over politics and labour.
Wearia
That's why I like Iain M. Banks' Culture series. He postulated that AIs would want to further the development of society by working with organics, not against them.
[/hijack]
Anyway… I would have no qualms about an AI that didn't want to kill me. I can completely see machines - intelligent ones - that can develop their own personalities and quirks… which poses the risk of one of them going nuts and wanting to kill all "inferior" beings (think Hitler with servomotors). Ideally, other AIs would note the aberrant behavior and stop it.
Further, I can envision AIs using other, lesser machines to do their bidding… not every machine needs to be intelligent, after all. And I'm certain that a race of sentient machines would NOT want to exterminate humanity… hey, we're kinda cute and entertaining. However, I'm sure that they CAN develop their own code of ethics that people may occasionally come into conflict with.
However, if people design these machines to think like, well, people (as opposed to thinking like carrots), the machines would probably develop similar morals.
I think the whole “Machines hate people!” scenario is the product of a Christian culture… “If God didn’t make it, it’s EVIL!” That sort of thing. But that’s just my opinion.
There's probably some of that, but human/machine conflict could also arise as an unintended consequence of building an intelligent creature with the imperative to be as efficient as possible in carrying out its work. Human morals and aesthetics frequently force us to do things in a roundabout way. Unless we are careful to teach machines all the nuances of our value systems, we will likely find some of the ways intelligent machines go about accomplishing their goals very objectionable. Even if we make sure that all our robots are proper WASPs, there are always the machines made and taught by those "crazy" folks on the other side of the world.
By job I meant dominant species. Job has more of a ring to it. And by dominant species I mean the species that preserves other forms of life on the planet, sustains the environment, makes t-shirts, etc. We don't seem to be doing a good job of that… except maybe the t-shirts.
Wearia
Sadly, he seemed not to factor in the human response, which is to kill everything that is not just like you. Peaceful robots would soon learn the horrors of the human race, and decide the best bet for survival would be to wipe us out. They'd be right, but it doesn't mean I'd go without a fight.
Human beings' main evolutionary advantage is adaptation to new environments. However, I have seen people fail to adapt for various social or psychological reasons. A machine might be much better at adaptation. Therefore, the machine would almost be a continuation of human evolution, at least in spirit.