I have read a bit about this- follow the links and do the relevant google searches…
For those who don’t like to follow links: basically it is the awakening of an AI, a machine intelligence that will surpass our own and then snowball into a superintelligence that dwarfs us, probably self-replicating and not necessarily congenial towards humankind (though we can structure the initial being in our favor).
I personally can’t wait for the singularity, but based on my own experiences, I think it is further off than 2020-2040… Unless there are super-geeks in dark rooms and basements with a larger supply of Diet Mountain Dew than I can imagine… oh… right… :smack:
I want it to happen simply because I have been predicting it (and things like Bluetooth and the Kindle since I was about 6yo) and wanting to live in that world.
I truly hope it can happen in my lifetime- and we can call Iain M. Banks a prophet rather than an author…
You can certainly have all my CDs- I think I might have one or two- for the rest you will have to hunt the ex down, and I am pretty sure they are in pawnshops…
Seriously though- it is something my co-workers really stress over, and something most of my acquaintances and friends dismiss as trivial or not worth worrying about.
Obviously I am not worried, I just expected more of a reaction, as our leading computer gurus seem to expect it momentarily…
Maybe something like this might happen. I don’t think it will, but maybe. As to it “not being congenial” to mankind, as long as everyone doesn’t grow up to be a 99 pound dork, there will still be plenty of people around (like me) who would be more than ready to pick up some bricks, sledgehammers, shotguns, et cetera, and happily go about destroying whatever machinery and power supply serves this “singularity” if it turns out to be a liability instead of an aid to mankind. Trust me, I don’t give a shit how smart it is, it’s not going to last long if someone destroys its electrical cables.
Frank Herbert wrote Destination: Void, a story about a spaceship full of clones headed to Tau Ceti to colonize it. Along the way, the ‘organic mental cores’ (i.e., standalone brains) that run the ship fail. The running of the ship is too complicated and delicate for a human crew–only a dedicated consciousness can do it, so the clones are (deliberately) being put into a position where they have to develop artificial intelligence to survive. I’m putting the rest in a spoiler box if you don’t want to know what happens:
The crew succeed, along the way questioning whether or not they are fully conscious beings. Whatever the answer for them, the consciousness that they construct to run the ship is fully conscious, and has total control over time and matter. It drops them off at a habitable planet and warns them that it will return and expect them to worship it, since it’s now effectively God. Several sequels follow.
It’s an interesting take on the question of A.I. insofar as it raises the question of whether or not we’re as conscious as we could be, but having studied A.I. in university, I’ve seen two problems standing in the way:
[ol]
[li] Economically, there’s not a huge benefit to developing a human-like consciousness. It’s interesting as a research project, and may offer insights into the way we function, but why put all the expense and time into developing a computer that acts human when there are six billion (and growing) human consciousnesses already walking around, many of whom will work for minimum wage? The productive, interesting work in technology has to do with amplifying humans and overcoming our weaknesses. Databases improve our memory, the Internet allows us to communicate with far greater breadth, etc. The economic benefits of technology lie in doing more than we can do individually, not in artificially copying ourselves.[/li][li] Several decades of A.I. research have failed to come up with anything that illuminates our own being or mimics us well enough to fool us. Computability does not seem to be the direction in which humans will create intelligence–a course in A.I. planning or natural language processing very quickly demonstrates the limits of implementing in silicon the things that we do easily with neurons.[/li][/ol]
A.I. has discovered some of our human limits, but overall the field has been promising results ‘in the next decade’ for the last fifty years. I’ve seen or heard nothing to give me optimism.
Part of the point is that the singularity would push development of things like nanotech past the limits of human thought-speed, and you as a human opposing it would just get sick and die if the machine read your opposition as hostile, or even merely neutral, interference. Or you would be auto-medicated by the same function.
Or you would just be considered raw material for its bioplants, if it needed them. Much more likely just autonomous slime to be liquidated, should it not be violently curious about humanity…
Well, I don’t suspect it will happen in the form of some kind of “Nerd Rapture” or armies of Terminators, iRobots and big-tittied Cylons screaming EXTERMINATE!! and KILL ALL HU-MANS!! as they emotionlessly tear us limb from limb to use us as batteries (and possibly ask us to play chess).
Most likely you will be unable to simply “pull the plug,” as computers and AI will be so ingrained in our society that living without them would be difficult to impossible. Think of processors and sensors embedded in and networked to everything. Your car, your home appliances, your clothes, the materials your buildings are made out of, even you. Everything networked together, feeding information about everything into everything else.
Maybe there will be a whole new industry of AI psychologists who have to go around treating paranoid androids, depressed robots and sentient toasters suffering an existential crisis because they want to be sentient dishwashers?
I think it’s mostly fantasy of the same sort that 50 years ago (or so) predicted flying cars, atomic robot servants and the dream kitchen of tomorrow. I think it has about the same chance of happening as these.
Nanotech the way the post-human cheerleaders seem to envision it (grey goo, fabricator clouds, etc.) is a pipe dream, and I don’t think that AI will continue making exponential leaps in intelligence anyway. I believe there are physical limits to processing speed and power, and basically the Singularity requires magic to work.
Anyway, AI is like fusion - always 10 years away…for the last few decades.
Exactly! I have heard nothing that would lead me to believe that even a low-level AI currently exists, so I fail to see why a super-sentient AI is coming soon…
Unless it is so far from human that it will just arise spontaneously from over-networking…
I can’t find the link, but there is a paper that expresses this as one of the post-human destruction options (along with the universe destroying supercollider and the matrix and the supervolcano) that are the most likely ends of our genetic lineage…
Admit it, you own the whole Terminator collection, including the Sarah Connor action figurine that turns into the T-1000, don’t you?
“The Singularity” is to obsessive technophiles what “the Rapture” is to religious fundies, and there is about the same degree of factual basis to both. We’ll become progressively more capable of modifying our environment and ourselves, of course, but the amount of remaining intellectual real estate to cover will still be vast, and no matter how ‘capable’ we are, someone is still going to be writing The Complete Idiot’s Guide to Genetic Modification and The Seven Habits of Highly Effective Cyborgs. And ignore anything you read from Bill Joy, who comes off like a guy who made his pile off of technology and now, like Lear, wants to dissuade anyone else from doing the same.
Since I think it’s likely that we are living in a simulated reality, I think the creation of computers with rapidly increasing computing power will either be limited by the rules of the simulation, disrupt the simulation, or cause it to be shut down.
I downloaded a PDF ranking existential threats to humanity, and simulation shutdown was in a high position, but I can’t remember where it was on the list. It’s on my old computer in the garage, though; I’d have to hook it up again to get the file.
Did a search; this is it - simulation shutdown is #3, after deliberate misuse of nanotechnology and nuclear holocaust.