I'm afraid of the "Singularity"

I’ve been aware of this idea of Ray Kurzweil’s for a while, but I didn’t really investigate what it entailed until recently. I don’t like the future that he describes, but I don’t see anyone else complaining much. Which surprises me.
The picture that is painted is of a type of “Borg collective,” and it’s not a pretty picture to me. I, for one, don’t want to become part of a machine. I don’t want the Earth, the Solar System, and the unexplored portions of the universe to be incorporated into the support structure of a vast intelligence. The outcome of the “Singularity” sounds like a dead universe with an intelligent machine in it, and that’s it. How awful.

Do people take this theory seriously, and if so, why isn’t anyone worrying about it more? Is there something I’m misunderstanding about this?

Do you also spend time worrying about the eventual death of our sun? 'Cause that’s just a few billion years away, don’t you know!

We are so far away from even the possibility of that future, really, that nothing you say or do now will affect anything in that far-flung future.

Also I’m not sure why a sophisticated AI wouldn’t be considered alive.

There is no Singularity. There’s a horizon. But the thing with horizons is, you don’t fall off the edge of them. You never even reach them. When you approach the horizon, you find that you’re now facing a new horizon, still just as far away.

Basically, the whole notion of the Singularity is that technology advances so fast that the technology of 30 years in the future is so far beyond what we have now that we can’t even guess about what it might entail. And that’s true. In 1982, nobody could have dreamed of our world of smartphones and the omnipresent Internet. From the perspective of someone in 1982, we right now are living in that scary future world. But for us right now, it’s not scary at all.

Well, you know, my statement that I was “afraid” was exaggerated to get some responses. I was hoping I wouldn’t have to explain that. Besides, by the time the sun is set to expire, we’ll either be extinct, or have figured out a way to avoid catastrophe. But please don’t deflect this question. Kurzweil says the “Singularity” will happen in 2045.

A series of documentaries have explained that Skynet will kill us all rather than absorb us into the collective. So the point is moot. :wink:

Seriously, though, humans tend to be remarkably passive about hypothetical dangers. Nothing ever gets fixed until AFTER somebody gets killed.

The good news is that the singularity will not come until after we perfect talking computers, flying cars and fusion energy. That means it will always be at least 50 years from now, no matter how far into the future you go.

But even if we were worried about it, what would we do?

I somewhat agree with the OP.

If we all merge into one sorta giant AI, that doesn’t really appeal to me, to be honest.

Now if it just makes me hyperintelligent and able to enjoy the Simpsons in a 5th dimensional way then that doesn’t sound too bad.

We of course cannot imagine what life might be like post-singularity. But I wouldn’t use fiction like the Borg as a reference.
For example, who says that we’ll become emotionless? If anything, I expect we’d want to make our experiences more vibrant (within the constraint that of course some emotions do hinder us).

And of course the notion of us merging into a single AI is baseless. But if it happens, it will be because we think it will be better.

I agree with that but following the analogy through, we’re accelerating towards new horizons. First crawling, then walking, now we’re on a pedal bike and some guys reckon they might be able to fit a motor to it…

There, there. It is not going to happen, it is just a figment of Kurzweil’s fervid imagination, fueled by his neurotic fear of death.

The idea of the singularity is also the product of a congeries of confusions about the nature of intelligence. We may indeed have intelligent, conscious robots one day, but just because machines can be intelligent it does not follow that they can be very significantly more intelligent than we are already. Intelligence is not on a linear scale. Furthermore, even if machines could be in some sense superintelligent, it does not follow that that superintelligence would give them any very special powers. In most respects, after all, smart people do not consistently do better in life than average or fairly stupid people do. If intelligence did confer such powers, Dopers would rule the world, and have hot and cold running babes coming out of golden taps.

Also, true Artificial Intelligence has always been (at least) 50 years in the future, and Moore’s Law is not really a law and, thanks to real laws of nature (not to mention economics), is bound to break down at some, probably not very distant, time in the future.

You are quite right that it is a very good thing that the Singularity is not going to happen, but these adolescent nerd fantasies are no cause for alarm.

Well, Kurzweil is a bit of a nut. The singularity will never touch him because his head is in the clouds.

To make my point clear, I don’t think any singularity of his imagining is coming, and certainly not any time soon.

On the other hand, have a read of this short story. It is only 2 pages long.


Didn’t he originally say it would happen in 2010? Even if not that exact date, he was making predictions 20 years ago about dates that have passed that still haven’t come true.

You need to be afraid of the Singularity the way you need to be afraid of the Mayan Apocalypse. Many people have been saying that for the past 20 years. But that doesn’t sell books.

Well, if the current state of AI programming in computer games is any indication, you have absolutely nothing to worry about for the foreseeable future (even with the caveat that games are ostensibly quite a bit behind the alleged “leading edge” of AI research).

[del]No it’s not.[/del] Yes it is.

Unless Moore’s Law stops in the next 20 years or so, regular home computers will almost certainly be able to do pretty much anything a human brain can do.

By 2030-2035, as a result of this, my guess is that food production, mining, construction, and manufacturing (in the First World first) will become, functionally speaking, completely automated. Cars will be 100% self-driving (Google already has a largely self-driving car now).

As a result of all these things being completely automated, hundreds of millions of people simply won’t have jobs. As a result, there will be some very fundamental shifts in the world economy. I seriously doubt that people will be content to starve to death, so I imagine they’ll work something out so that everyone gets fed, clothed, and housed.

All of the processes I mentioned above are already partially automated. This isn’t wild-eyed insanity. It’s the natural result of Moore’s Law continuing for another 20 years or so. If it DOESN’T continue, then those things won’t happen, or at least not in the same timeframe. There’s no avoiding those things happening if Moore’s Law continues.
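The “Moore’s Law for another 20 years” claim above can be sanity-checked with a quick sketch. Both FLOPS figures below are rough assumptions for illustration (a roughly 100-GFLOPS desktop, and a Kurzweil-style estimate of 10^16 operations per second for the brain), not established facts:

```python
import math

# Back-of-the-envelope check on "home computers reach brain scale."
HOME_PC_FLOPS = 1e11   # assumed: a circa-2012 desktop, ~100 GFLOPS
BRAIN_FLOPS = 1e16     # assumed: Kurzweil-style brain-compute estimate
DOUBLING_YEARS = 1.5   # classic Moore's Law doubling period

def years_to_reach(start, target, doubling):
    """Years of exponential doubling needed for start to reach target."""
    return math.log2(target / start) * doubling

gap = years_to_reach(HOME_PC_FLOPS, BRAIN_FLOPS, DOUBLING_YEARS)
print(f"~{gap:.0f} years of doubling to close the gap")  # ~25 years
```

With a two-year doubling period instead of eighteen months, the same gap takes about 33 years, which shows how sensitive the 2030-ish dates are to the assumed doubling rate.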

You are equating thinking with doing.

A monkey could lay roofing or dig for coal. Yes, COMPUTATIONAL ability is growing fast. But how FAST are robots that actually do shit growing? Particularly ones that can do shit without human intervention, supervision, maintenance, blah blah blah.

While it gets short shrift these days, most of today’s comfort is due to somebody somewhere making or physically doing shit. The computers and robots are helping some, but IMO that’s about it.

The singularity is a fuzzy line and we won’t notice we’ve crossed it until after it happens. In the modern world it might as well have happened already. People are doing things all over the world that affect our lives and we don’t know who they are or what they’ve done, and we can’t predict what they’ll do next. When the singularity comes computers will be doing those things instead of people, but we won’t be able to tell the difference.

Kurzweil is not the only one who has predicted the Singularity, and it’s not the product of wishful fantasies. You do a graph of the rate of technological change, and somewhere around 2030 it goes right through the roof. There’s plenty of room for dispute … just because a graph exists, it does not mean the real world will follow the trend lines. But the trend lines are what lead to the idea of the Singularity, not vice versa. My suspicion is that, failing a major collapse of civilization, something like the Singularity is likely in our future. Hell, the Internet alone is accelerating change, by allowing people to conduct research more easily and quickly.

Try billions. In the third world, I can EASILY see hundreds of millions, possibly billions starving, because the people on top have NO sense of identity with the people on the bottom. The phenomenon of the One Percent in the US leads me to believe that things will be really rough here too. Europe with its higher sense of social cohesion may do better. Africa and Asia are gonna be hellholes.

I am under the impression that the Singularity is the point when the technology we have created is more capable of creating and sustaining its successor than we are.

This new, incomprehensible technology then creates its successor, and so on, in an accelerating cycle. We move further and further from understanding, and therefore meaningfully participating in, the world that is forming around us, a world that is more and more likely to be shaped to serve its creators than to serve us.

At some point, given the assumed vast disparity between the needs of the human race and the needs of a machine super-intelligence, we will find ourselves in a world entirely unsuited for human survival.

Isn’t that the Singularity?

It’s not necessarily my belief, just my understanding of the idea of “Singularity”.

I guess we better keep one hand on the switch.

Sigh. I remember being on a business trip when the 386 was announced. The USA Today confidently predicted that AI would now work since we had 32-bit computers. We in Bell Labs, who had been using 32-bit superminis for some time, laughed our asses off.

Faster computers do not mean smarter computers.

Self-driving cars <> Singularity. To automate all of manufacturing and agriculture, we need far better robots than we have now, not smarter ones. We also need to make them cheaper than people. Even today some factory automation projects don’t happen because the pay rate of the workers is less than the depreciation on the machines that could replace them.

New process nodes are already slowing down, not only due to technology but because fabs for them are so expensive it takes longer to be economically justified. Plus we are running out of room - structures today are not that many atoms thick. I work at the bleeding edge of this stuff and I’m glad I’m going to be retired before we get to two process nodes from now.
Notice that computers are not getting that much faster? Physics says that the faster you go, the more heat you dissipate. We’re going to see more relatively slow cores, unless you want your laptop to burn a hole in your pants. That is not going to make smarter machines, either.
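The heat argument above follows from the classic CMOS dynamic-power model, P ≈ C·V²·f. A minimal sketch, assuming (as is roughly true in practice) that supply voltage has to scale with frequency:

```python
# Classic CMOS dynamic-power model: P ~ C * V^2 * f.
# Assumption: to run a circuit faster you must raise its supply
# voltage roughly in proportion, so power grows like f^3.
def dynamic_power(cap, volts, freq):
    """Switching power of a CMOS circuit (arbitrary units)."""
    return cap * volts**2 * freq

base = dynamic_power(1.0, 1.0, 1.0)     # baseline chip
doubled = dynamic_power(1.0, 2.0, 2.0)  # clock doubled, voltage with it
print(doubled / base)                   # 8.0: 2x the clock, 8x the heat
```

Two cores at the original clock give you 2x the throughput for only 2x the power, which is why the industry went multicore instead of chasing clock speed.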

I took AI in 1972, and in a fundamental sense we are no closer to it now than we were then. Most of the problems presented in my class have now been solved: Google Maps and Mathematica both solve things we discussed. But making computers conscious? No progress at all.