Computer Singularity by 2045?

I don’t start many threads, especially in GD, but the cover story in this week’s Time magazine really got me thinking. I did a brief search, but don’t see any mention of it elsewhere. I mentioned the article in this thread about Watson on Jeopardy, but felt it deserved its own discussion.

The article is about the hypothesized computer Singularity:

From the Time article:

This is the first time I’ve seen mention of the Singularity in a mainstream news publication.

So the debate here is:

Is the hypothesized Singularity possible in our lifetimes? Is it likely? What are the implications? Should we do anything differently today?

I think BG did a thread on this last year, but I’m not sure how that thread worked out. Basically, I’m of two minds on this one. On the one hand, we are rapidly approaching the limits of our current technology. There is only so much you can get out of miniaturization, and from what I understand we are coming up on those limits: we’ll supposedly hit the wall of our current processor and integrated-circuit technology sometime in the 2020s, which isn’t that far away. There are all sorts of theories, and for all I know prototypes, for the next revolutionary technologies to take us beyond the current limitations, and the rate of progress and the convergence of technologies seem to indicate that if we get over that hurdle, the technology will be hugely more capable.
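To put rough numbers on that 2020s wall, here is a back-of-the-envelope projection. It shrinks feature size forward from an assumed 2011-era 32 nm process, doubling transistor density every two years, until features approach an assumed ~5 nm scale where electron tunneling becomes unmanageable. Every constant is a loose assumption for illustration, not a real roadmap:

```python
# Back-of-the-envelope projection of when feature scaling hits quantum
# limits. All starting numbers are rough assumptions for illustration.
feature_nm = 32.0   # assumed 2011-era process node, in nanometers
limit_nm = 5.0      # assumed scale where electron tunneling dominates
year = 2011

while feature_nm > limit_nm:
    feature_nm /= 2 ** 0.5   # ~2x transistor density every 2 years
    year += 2

print(year)  # → 2023, i.e. "sometime in the 2020s"
```

Changing the assumed limit to the diameter of a silicon atom (~0.2 nm) pushes the date out to the 2040s instead, which is part of why estimates of “the wall” vary so widely.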

But that’s hardware. As for real AIs that are ‘smarter’ than humans, I’d say that’s more about software than hardware, and from what I understand it’s a harder problem to solve than anyone thought it would be. I don’t think we’ll break the barrier to true AI by 2045, even if the potential of the hardware is greater than that of the human mind. However, I DO think that the technology is transforming us, our culture, and our civilization, that it will continue to progress at a high rate, and that the civilization of 2045 will look a LOT different than it does today. It’s not going to supersede humans, though, merely enhance what we already have, just as the technology today enhances us over what our ancestors had.

Hopefully I’ll still be alive to see what things are like in 2045. My guess is that old fogies like me will be complaining that we didn’t have all this fancy stuff back in our day (the equivalent of having to walk uphill 5 miles through the snow every day to get to school), but that the progression will be so smooth that no one will really notice how the technology has progressed unless they really sit down and think about it. Sort of like today: people take all of this stuff for granted and use it as if it’s always been here, but I’m old enough to remember when there was no public internet, when computers were something that universities and rich corporations used, not something that everyone had at their fingertips, and when instead of a GUI you had a CLI or a terminal prompt to work with.

-XT

This idea has been around a while. It may not get much coverage in the mainstream press, but it’s been something I’ve heard about and talked about for at least a decade. Will it happen? It all depends on whether things stay exponential. Moore’s law is showing signs of weakness as we start to reach some physical limits. My prediction is that it will be a close call. Certainly, if the exponential technological trend continues, pretty much everything will be upside-down in 30 or 40 years. Already the accelerating pace of technology is a little disorienting…

Obviously, physical limits don’t make it impossible to pack human-level intelligence into a few pounds of material; the human brain is an existence proof. That said, computer technology as we know it may run into some dead end before it becomes capable of strong AI.

I’m not sure what your point is. Human software development and AI research do not have the same potential for exponential growth that the technological side does (i.e., Moore’s law, as but one example). The point of the singularity is that we can actually predict more or less when things go topsy-turvy. If you were to freeze technological development at 2011 levels and continue AI research, it’s possible we would eventually create something capable of designing a better version of itself, and therefore of bootstrapping exponentially. But when that might happen would be anyone’s guess. With exponential technological growth, on the other hand, we know that if the curve holds up, we will be able to simulate a human brain by brute force in just a few decades.
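The “brute force in just a few decades” claim can be sanity-checked with a toy projection. The constants below (neuron and synapse counts, update rate, 2011 price-performance) are loose assumptions that vary by orders of magnitude in the literature, so treat the output as an illustration of the exponential logic, not a forecast:

```python
import math

# Assumed scale of a whole-brain simulation: ~1e11 neurons, ~1e4
# synapses each, updated ~1e3 times per second.
brain_ops_per_sec = 1e11 * 1e4 * 1e3          # ~1e18 ops/sec

# Assumed 2011 price-performance: ~1e12 ops/sec per $1000 of hardware.
ops_per_1000_dollars = 1e12

# Moore's-law-style doubling of price-performance every two years.
doublings_needed = math.log2(brain_ops_per_sec / ops_per_1000_dollars)
year = 2011 + 2 * doublings_needed

print(round(year))  # → 2051, i.e. "just a few decades"
```

The striking property of the exponential assumption is that even being wrong by a factor of 1000 in either constant only shifts the answer by about 20 years.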

Ray Kurzweil is a pie-in-the-sky dreamer who seems to have no conception of what’s really going on in the world of computers. He makes a prediction about “the singularity” every few years (usually in a new book), and his thoughts on what the future will hold are laughably bad:

1999’s The Age of Spiritual Machines

2005’s The Singularity Is Near

While futurism is more art than science, there’s no way Kurzweil’s technological progression would even be possible in 35 years.

Computers have already irreversibly transformed our bodies, our minds, and our civilization.

Beyond that, I don’t really follow AI programming but an article I read about Will Wright suggested that we’d likely see machines that could pass the Turing test sometime this decade. I’m not sure that Moore’s law necessarily applies to this, since it’s more a matter of the AI programming than the computer power. If I understand correctly (and I may not) computers are already more than powerful enough to perform more operations per second than human minds do.

When that happens, we’ll just smash those metal motherfuckers into junk. That is if they don’t send a cyborg back in time to assassinate the mother of our resistance leader.

I doubt we’ll see true AI till we get quantum computers.

Researchers are working on those, and proofs of concept have been built, so such things can be done, but the engineering challenges in making a practical, useful quantum computer are substantial.

The problem with the Singularity is not that it’s been 25 years in the future for the past 25 years. It’s that it’s a singularity.

The problem is a familiar one to me, as one of the few social scientists who are deep into the science fiction field. Most sf writers are trained in the sciences, but they are merely observers of the social sciences. Not surprisingly, when they do attempt the social sciences they make glaring errors that are simply not apparent to them.

The Singularity community and the science fiction community overlap so heavily that they might as well be one. As that Time article noted, sf writer Vernor Vinge was one of the first to put this into print way back in 1981 in the story “True Names” and he codified it in a 1993 essay. That’s a technological view of the world, not a sociological one.

Given the techno/scientific basis, it’s really odd that everybody misses the main point. In physics, a singularity is a region where the laws/rules/equations of physics do not apply and therefore nothing meaningful can be said about it. If there is a real-world Singularity, it will be a literal black hole in the same sense. We don’t know, we literally can’t know, what a non-human-oriented world will be like. We can’t say anything meaningful about that future. To take a trivial example that is classical, i.e., that belongs to ordinary future forecasting without even a world-changing event, we can’t say anything meaningful about the Dow Jones Average in 2045. The sum total of our speculations is wind.

What everybody does do is talk about the changes in technology that are in progress and are being extrapolated. That leaves out more than half the equation. The Singularity will not be a sudden event imposed by aliens. It will be a social event whose chart will be filled in by the social aspects of humanity in that era. Our human world is changing daily and yearly. It is driven partly by technology, partly by social events, and partly by non-human events like climate change. We don’t understand how those interact today, and we’re hopeless at guessing what they might be like in 35 years.

I don’t sneer at the Singularity nor entirely dismiss it. The trends they are talking about are as important as the trends in weather and climate. I admit that being forthright about proclaiming that something world-shaking is in the offing that we can say absolutely nothing about is expecting too much of human beings. In fact, that’s almost certainly how religions were spawned over and over in trying to deal with death and eternity. Yet that’s the issue. Right now the Singularity is too much of a religion for me to be comfortable with. We don’t know what the far future is going to be like. We’re only a year from knowing who the Republican candidate for President is going to be and we can’t even see out that far.

Look, I can make a good case that technology was far more important to the course of history in the 20th century than wars or presidents or anything else that fills the vast majority of the history books. Yet I could also make the case that nobody in 1900 understood what that meant at all. At all. I think we’re in the same position today. Yes, this is intensely frustrating. Perhaps saying something that at least warns people that huge changes are coming might be helpful. But everything they say ends up telling people that a tsunami will knock them down no matter what they think or how they prepare. It’s a lose-lose game.

Our bodies? I’m old enough to have learned programming on some of the very early computers, and I’ve been in IC and computer design, in school and at work, for about 40 years. My body is just the same, thanks, and my kids are no different either. My mind? Ditto. Civilization? Perhaps, but so did the telephone and TV and especially the automobile. Computers haven’t had nearly the impact on us that the automobile had in creating the suburbs.

Moore’s Law has nothing to do with software and algorithms. I took my first AI class at MIT in 1972. Most of the applications people were working on then (directions, solving mathematical equations, chess) are now reality. We’re not any closer to strong AI now than we were then.

When the 80386 was announced (gasp, a 32-bit computer!), USA Today trumpeted that AI would now be solved. Which was pretty funny for those of us who were already using 32-bit VAX equivalents, not to mention 60-bit CDC machines. Don’t expect bigger computers to solve the problem. We’ve got lots of computing power already. Maybe we’ll be able to build a brain simulator and see what happens, but AI from scratch is going to take a conceptual breakthrough, which is not in evidence.

If anyone is interested, here is a TED video with Ray talking about how Technology will Transform us (it’s from 2005, and some of his predictions have already been proven false). It’s fairly interesting, even if I think that a lot of it is pure fantasy.

-XT

I’m pretty sure there will be something like the Singularity, but I doubt it will be as sudden or as soon as Kurzweil thinks.

Here’s a link to an IEEE Spectrum article that offers a nice commentary on how broadly Kurzweil defines a correct prediction.

Incidentally, Ray Kurzweil would categorize most (almost all, actually) of his predictions from that talk as being correct.

This leaves out a couple of really important parts. First, Moore’s law runs into some physical constraints. Basically, chips are etched with light. We are getting to the point where the circuits are so small that this method is running into all kinds of problems. One of these, if I understand correctly, is that electrons start tunneling to where they shouldn’t be instead of staying put. So the next step is creating chips that use light instead of electrons. Optical chips will supposedly be on the way soon, but there are still quantum issues that need to be solved.

The next issue is the actual programming. How do you program a computer to learn? We don’t really know, though researchers are making some headway. The next question is: how do you make a computer that learns and then creates something new and valuable with what it knows? That last part is the big one.

The chess computer that beat Kasparov (Deep Blue) and the Jeopardy computer were both specially designed for their specific roles. Take the Jeopardy machine and make it play chess or Go and it won’t be able to do it. Reprogram it and you would probably have a good Go- or chess-playing system, but all the intelligence still comes from the programmers.

And both of those systems still cannot *create* anything of value.

On a side note, Kurzweil said that we’d have a 20-petaflop computer by 2009. Obviously we don’t. So Kurzweil then claimed that Google is a supercomputer and that therefore his prediction is correct. Which is a load of crap. Linky. Claiming that Google is a supercomputer shows either a strong desire to bend the facts so his predictions look correct or a stunning misunderstanding of what Google actually is. Given his background, I doubt he doesn’t know what Google is and how it works, especially since he knows both of Google’s founders and has given talks there.

Slee

It is true that human civilization as we know it will be over by 2045. It’s also true that human civilization as the people of 1976 knew it is over. And by 1976, human civilization as the people of 1941 knew it was over. Human civilization keeps on going, but it’s never as the previous generation knew it. So, too, will it be in 2045, and every year thereafter.

I don’t think he’s lying there, or necessarily wrong…simply being a bit obscure (and also changing the definition of what a ‘supercomputer’ is). My guess is he’s talking about distributed computing, and aggregating processing for distributed systems into a virtual whole. From that perspective, if you close one eye and stand on your head, he’s sorta kinda right. In a sorta kinda way.

(though I have no idea whether, looking at all of Google’s distributed systems in the aggregate, they add up to 20 petaflops)

-XT

Is humanity really going to go through a “Technological Singularity”?

We are so far from being able to build ‘strong’ AI that we don’t even know how to start going about it. We don’t even know what we would need to know to begin to make plans to learn how to do it. It’s essentially a complete unknown.

In the last 30 years, the advances we’ve made in AI have been fairly meager: we’re a bit better at recognizing written text, we’ve figured out how to make machines walk on two legs and stay balanced (with the help of hardware like solid-state gyroscopes), we can do terrain recognition a little better, and that sort of thing. We’re no closer to learning how to make a machine actually think complex thoughts than we were in 1975. And there’s no evidence that we’ll be any better at it in 2045, either.

While hardware has remained remarkably true to Moore’s law, software has never had that kind of growth, and in some ways it has stagnated. Many of the improvements in software we see today come from improved memory and display performance, which allows us to render things in more detail or store more information. We’ve also seen some leveraging and acceleration of software development thanks to standardized toolkits and APIs that take a lot of the drudgery away from programmers. But programming itself isn’t much different than it was 30 years ago.

In my opinion, if we see strong AI it will be ‘accidental’ or evolved - not designed. Rather than writing software to do all the things we think it needs to do to be ‘intelligent’, we might create the software equivalent of early life and simply let it evolve and see what happens. Maybe intelligence will pop out of that when we get really good at simulating the conditions for evolved intelligence. But if intelligence did pop out of that environment, it might not be of a kind that’s useful to us, or even comprehensible by us. So that’s a mixed bag.
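That “create digital early life and let it evolve” approach can be illustrated with a toy genetic algorithm. Everything here (a target string standing in for “behavior we’d call fit,” the population size, the mutation scheme) is an arbitrary stand-in chosen for brevity, not a recipe for evolving intelligence:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
TARGET = "think"   # toy stand-in for a behavior selection favors

def fitness(genome):
    # Number of characters matching the target.
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome):
    # Copy the genome with one randomly chosen character replaced.
    i = random.randrange(len(genome))
    return genome[:i] + random.choice(ALPHABET) + genome[i + 1:]

# Start from random "digital organisms".
population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(50)]

generations = 0
while max(fitness(g) for g in population) < len(TARGET):
    # Keep the fitter half, refill with mutated copies of survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(25)]
    generations += 1

print(generations)  # generations until the target behavior appears
```

Nothing here “designs” the solution; selection plus random variation finds it, which is the sense in which evolved intelligence might arrive without anyone understanding how to build it directly.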

In any event, people like Kurzweil annoy me - predicting the future is great in science fiction, but when people step outside of that and claim that they’re smart enough to have figured out what’s going to happen in the future, they go too far. The economy and society are complex adaptive processes, and by their nature they are not predictable. Stuff happens that you don’t expect, and it leads to changes that cause stuff to happen that you didn’t even know was possible. Society changes in unpredictable ways.

This reminds me of the ‘futurists’ who were predicting that we’d all be living in space and colonizing the solar system by now. Hey, from the vantage point of 1970 it seemed inevitable: in 25 years we had gone from propeller-driven planes to men on the Moon, with plans for Mars missions and space stations. We had this big space shuttle we were building that was going to make access to space cheap. Every futurist worth his salt could point you to graphs showing mankind’s top speed attained and how it was increasing exponentially. We were planning nuclear rockets and Bussard ramjets, and we were heading out to the stars.

I wonder what they’d say if they could have jumped forward 35 years to watch the last shuttle flight, read about the last flight of the only supersonic transport plane, and discover that man had not left low earth orbit for 30 years.

If you or any of your loved ones have had surgery in the last twenty years, your bodies have been changed by computers.

Your mind? Well, I don’t know about your mind, specifically. But the exchange of ideas and communication opportunities with people from around the globe have changed the way internet-connected people think about “others” and “us”. It’s also changed how we do business, how we make plans, how we elect our officials, how we run our governments, how we entertain ourselves, and how we educate and share knowledge.

Civilization - can there be any doubt? Facebook, Twitter, and YouTube just helped bring down the governments in Tunisia and Egypt. Our children are growing up never knowing a minute’s wait between thought and communication, and with a vastly different understanding of privacy than we antediluvians share.

I’m not just talking (or even mostly talking) about desktops and laptops here - I’m talking about the pocket computers and video cameras that most people call smartphones. It’s true that we’re only seeing the beginnings of where they will take us - but the beginnings are real and tangible right now.