OK, I was browsing through crap and found the new hype about the PlayStation 3. It seems it will incorporate technology that lets it go online and use the idle processors of other systems for computing power.
If you see where I’m going with this, you’ll know what my worries are about. What happens when this is incorporated into a system that actually has AI? Not the crappy AI of today, but AI that gives it a mind. It has a language. If they can give it a mind, it will surpass that Czech scientist’s theory. And once a computer can, say, make all the computers across the globe do mathematical calculations for it, it should be able to evolve rather quickly.
I don’t have much of a point here, just a random burst of paranoia. Any thoughts on this topic would be great.
Personally, I think this thread would garner more results if posted in “In My Humble Opinion”, or even “Great Debates”. GQ is for questions which have factual answers. You can email a moderator and ask to have it moved.
Welcome to the Straight Dope! Not bad for a first post, but consider your placement.
Computers aren’t like brains. Neurons, if you lump enough of them together, find new ways to interact with each other and form new things. Bigger, fancier computers just sit there doing bigger, fancier calculations. They don’t evolve. They do exactly what you tell them to, no more, no less. If you tell a computer to add 1+1 over and over forever, that’s exactly what it will do. It will never figure out the futility of it, optimize it, or anything.
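To make that concrete, here’s roughly what that instruction amounts to (a minimal Python sketch, purely for illustration):

```python
# Told to add 1 + 1 forever, a computer will do exactly that, forever.
# It never notices the redundancy, optimizes it away, or stops on its own.
while True:
    result = 1 + 1  # recomputed endlessly, exactly as instructed
```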
Computers aren’t general purpose enough for any sort of AI program to take one over. It’s kind of like lumping together tractors in a field. No matter how many John Deeres you stack out there, they aren’t going to develop a hive mentality and take over the farm.
I can’t possibly see how a stand-alone game machine could borrow processor power from other computers across the planet. I mean, who wants lag when they’re playing a multiplayer game? And who wants to add it to a single-player game? Can you say “marketing dead-end”?
The big, unsolvable problem with distributed computing is that the network is always slower than the computer, so your tasks have to be architected specifically to be divided up, solved, and then recombined (there’s a rough sketch of that shape below). Most end-users’ computing tasks aren’t suited to that, so if an average person hooks up to a distributed system, they are essentially giving away their resources and not getting anything in return.
Outright charity has carried a few common-interest projects, but won’t work for everything.
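To illustrate what “divided up, solved, and then recombined” looks like in practice, here’s a minimal Python sketch of a task with the right shape (the function names are hypothetical, not from any real distributed system): a big sum is split into independent chunks, each chunk could be solved on a remote machine, and the partial answers are merged at the end.

```python
# Hypothetical sketch of a task that distributes well: summing a huge list.
# Each chunk is independent, so the pieces could be solved on separate machines.

def split(data, n_chunks):
    """Divide the work into independent pieces."""
    size = max(1, len(data) // n_chunks)
    return [data[i:i + size] for i in range(0, len(data), size)]

def solve(chunk):
    """Solve one piece; this is the part a remote machine could handle."""
    return sum(chunk)

def recombine(partials):
    """Merge the partial answers back into a single result."""
    return sum(partials)

data = list(range(1_000_000))
partials = [solve(c) for c in split(data, n_chunks=4)]  # imagine these running remotely
print(recombine(partials) == sum(data))  # True: same answer either way
```

Most of what an end-user’s machine does moment to moment doesn’t break apart this cleanly, which is the point above.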
If you are paranoid now, try reading Vernor Vinge’s essay on the singularity. I’m not sure how accurate his timing is, but I have a feeling he’s much more right than wrong.
I have a feeling that he’s full of sh*t. What is intelligence? What would make a machine smarter than a human? The ability to make calculations more rapidly? Pah. Computers already do that, and they’re stupid. AI can only be programmed with what we already know; it can just do it faster. No big deal. A computer capable of learning might “end the human era,” but I rather doubt it. Ingenuity and cunning can never be programmed. Neither can what I can only term ‘spirit’ (which sounds silly, but whatever). I imagine that being a self-aware computer would suck immensely.
engineer_comp_geek says:
Computers aren’t like brains. Neurons, if you lump enough of them together, find new ways to interact with each other and form new things. Bigger, fancier computers just sit there doing bigger, fancier calculations. They don’t evolve. They do exactly what you tell them to, no more, no less. If you tell a computer to add 1+1 over and over forever, that’s exactly what it will do. It will never figure out the futility of it, optimize it, or anything.
Computers aren’t general purpose enough for any sort of AI program to take one over. It’s kind of like lumping together tractors in a field. No matter how many John Deeres you stack out there, they aren’t going to develop a hive mentality and take over the farm.
No offense, and with all due respect for my own newbie status, but you seem to be missing the point on this particular issue.
It’s true, a field of tractors will never develop a hive mentality. That’s because the tractors don’t interact with each other. Computers do.
A single computer, connected to a network, can handle simple tasks.
A program can determine ways to carry out tasks, given proper programming.
When you let a computer write the programs… and let the computer tell other computers to write programs… and let them decide for themselves how to interconnect… you have something remarkably similar to what you find in a neural network (there’s a toy sketch of one below).
Obviously it’s a rather large step to go from something like this to something like a global computer intelligence utterly determined to destroy humanity…
but the hardware is there. A complex system, able to alter itself. Let it run, I say. Let’s see what we can make! If all else fails, we’ll just unplug 'em!
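For what it’s worth, here is roughly what a single node in an artificial neural network does in software (a toy Python sketch with made-up weights, just to ground the analogy):

```python
import math

def neuron(inputs, weights, bias):
    """One node: a weighted sum of its inputs, squashed to a value between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # the "activation"

# Two upstream signals feeding one node; the numbers are arbitrary.
print(neuron([0.5, 0.9], weights=[0.4, -0.2], bias=0.1))
```

Networks of these only get interesting when the weights change in response to feedback, which is the “able to alter itself” part.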
I would feel better about your protestations that computers will never develop consciousness if I thought you or anyone else had a handle on how consciousness developed in organic life. Obviously consciousness can develop spontaneously – it already has happened once that we know of. With conscious beings to midwife computers along, intentionally or otherwise, it seems to me very likely that it will happen a second time.