Human extinction or immortality in the next 40 years?

[QUOTE=muldoonthief]
The predictions for when affordable computers will reach the same computing power as a human brain depend on Moore’s law continuing for another 10+ years. But Moore’s Law isn’t like Boyle’s Law or the Law of Gravity. It was just an observation of a trend, and that trend is coming to (or has reached) an end. Just google “Moore’s law end” for articles about why it’s not applicable anymore.
[/QUOTE]

This of course also rests on some assumptions of its own, the prime ones being that Intel is the only company able to innovate, that we are bound to silicon and its limitations, and that no one will figure out a way to jump to a new paradigm…despite the fact that several promising alternatives are being explored by many different groups.

So, your prediction is that we will hit the wall in 1-2 years and level off technologically, at least wrt computer hardware? You’re not alone in that thinking, but I believe most computer scientists would disagree. Even if you are correct, however, I think 10+ years is still feasible: even if we HAVE hit the wall (or will in 2 more years) wrt transistor miniaturization, we haven’t really started optimizing the architectures yet. When you Google ‘Moore’s law end’, be sure to look at some of the papers by wild and crazy groups like MIT that talk about this.

Obviously, like you, he is making certain assumptions. One of those is that hardware capacity will continue to improve, and that the improvement will be exponential…thus his prediction that by the late 2020s the average $1000 computer will have processing power roughly equivalent to that of the human brain. As he notes, the most powerful supercomputer today is already within striking distance of roughly the computing power of a chimp. Perhaps you see that as a far-fetched prediction, but looking at where we were when I first got into this field and where we are today, and at the multiple lines of technological development and investment, it doesn’t seem all that far-fetched to me.
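
Just to make the arithmetic behind that kind of extrapolation concrete, here’s a toy back-of-the-envelope calculation. Both starting figures are my own ballpark assumptions, not numbers from the article (~10^16 ops/sec is one commonly cited estimate for the brain; the $1000-machine figure is a rough guess):

```python
import math

# Toy extrapolation: how long until $1000 of compute reaches a
# brain-equivalent throughput, IF exponential doubling holds?
# Both starting figures below are rough illustrative guesses.

BRAIN_OPS_PER_SEC = 1e16    # one commonly cited ballpark estimate
DOLLAR_1000_OPS_NOW = 1e13  # assumed throughput of a $1000 machine today
DOUBLING_TIME_YEARS = 2.0   # classic Moore's-law cadence

doublings_needed = math.log2(BRAIN_OPS_PER_SEC / DOLLAR_1000_OPS_NOW)
years = doublings_needed * DOUBLING_TIME_YEARS
print(f"{doublings_needed:.1f} doublings -> ~{years:.0f} years")
# ~10 doublings -> ~20 years. The whole prediction hinges on the
# doubling cadence actually continuing, which is exactly the dispute.
```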

10 years ago the thought of making an AI that could take on and beat a top-ranked Go player would have seemed impossible. I’m not sure many people grasp what an achievement that is, or WHY it’s such an unbelievable achievement, but it blew my mind when I read about it. I used to think like you do, since I grew up in the IT world when a lot of the predictions about advanced AI were hitting the wall of reality and computer science types were backing off of any sort of predictions about human-level AI. But I’ve slowly changed my mind over time, looking not only at where we are but at where we are going.

Instead of a few people in a few countries working on this stuff, we now have people all over the world doing so, and there is such a convergence of technologies happening today, along with this worldwide communications thingy we are using to, among other things, wrangle with each other on this message board, that I think some truly interesting things are going to happen in the next few decades. Will we all live forever in the tech singularity? No idea…maybe so, maybe no. As you noted, it’s hard to see where technological change and innovation will take us or what form they will take, and predictions are often terrible, even from true experts in their fields…but the rate of technological change around the world is undoubtedly increasing over time when you look at things historically.

I think you dismiss the article as crap and wide eyed credulity too quickly, but I guess we shall see. :slight_smile:

I don’t think we’ve hit a wall technologically, but many experts think the Moore’s law “doubling in power every 18 months” is no longer true.

And this quote from the article clearly shows he’s either full of crap or yanking our chains:

I’ve met lots of 4 year olds. Persuading them to do something they don’t want to do is NOT a trivial task.

I don’t think that’s true. I think most experts think we might be nearing the end of Moore’s Law sometime in the next year or two (though the famous ‘they’ have been predicting this for years now and it still hasn’t exactly happened). I agree it’s possible we might hit a plateau in the ability to go smaller and smaller (i.e. double the number of transistors every 2 years) with our current medium and paradigm…in fact, it’s simple physics: we WILL hit such a plateau sometime. But I think we are on the cusp of several technologies that will move us over that hump wrt processing power, even if we can’t keep shrinking transistors to double their count on an integrated circuit every 2 years. So, while it might soon be true that ‘Moore’s law … is no longer true’, I think that’s going to be a moot point, since doubling transistors on an integrated circuit every 2 years will no longer be the relevant metric for increasing processing power.

I didn’t see that quote, but I agree with you there…getting a 4 year old to do what you want (especially after they learn the word ‘no’) is certainly not a trivial task. :slight_smile:

I haven’t read the article but I have to comment on some silliness here that is being stated as universal fact and is independent of any particular article:

Who, indeed? Technological prediction is hard. But “who is to say” is a rather superficial glossing over of some well-grounded ideas and implies a false equivalence between knowledgeable theoreticians and random dreamers.

Bullshit. Total bullshit. “Vested interest” sounds like the favorite mantra of global warming deniers that climate scientists have a “vested interest” in proving that there’s something there worth worrying about, with the implication that of course there isn’t! As a rule tenured academic researchers may have favored theories but the reputable ones rarely have financial conflicts of interest. The overly optimistic AI predictions made back in the 60s and 70s by a few of them were honest if misguided projections based on the extraordinarily rapid advances being made at the time. Today some of the leading theorists in AI and cognition are not only frank as always but sometimes downright pessimistic about some of the things we still don’t know about human cognition – which may or may not be relevant to the further development of machine intelligence.

Moore’s Law has nothing to do with it. It has little or no connection, for example, to the feasibility of producing massively parallel computing systems at reasonable cost, or entirely new AI computing architectures based on neuromorphic principles.

Perhaps in part from the above. Or, for an extreme example, think about quantum computers. If and when a quantum computer uses Shor’s algorithm to factorize a number using 500 orders of magnitude more computing resources than the visible universe appears to contain, where did those come from?

“Ridiculous” is a terrible word to use in this context. The only thing that is reasonably certain, based on past history, is that we tend to grossly overestimate some advances while equally underestimating others. In defense of the skepticism about the 30-year timeframe, we generally tend to optimistically overestimate short-term advances while being completely oblivious to foundational transformations in the long term.

“Our capacity…to deliver much of its prosperity to 3/4 of the world” is purely a political obstacle. If we elected to do so, we could provide potable water, a minimum nutritional diet, and basic health care to the entire developing world for a fraction of what we spend on dysfunctional weapon systems or invading other countries. The problem is compounded by implementation; when, say, an NGO decides to build a hospital or school in some impoverished African country, it generally hires a foreign contractor to come in and construct a large facility that is heavily dependent upon modern infrastructure (e.g. external water, power, supplies, security), and when the NGO leaves and takes its funding with it, the facility is abandoned and falls to ruin, as the local authorities have neither the expertise nor the budget to operate and maintain it.

We could change this issue in short order if we had the political will and social consciousness to do so, notwithstanding the impediments of greed and avarice which often sidetrack those efforts. But the problems have nothing to do with economics or technology.

Stranger

My comments are specifically about claims in the article, not “universal fact”. I suggest you read it if you’re going to nitpick my comments.

I’m saying technological prediction is hard, and has been unreliable in the past, so expecting these particular predictions to be nearly inevitable is foolish.

Sorry, I expressed myself badly; that’s not how I meant it at all. The author tries to make it sound like the general consensus in the entire scientific community is that superintelligent AI is inevitable. But when he gets down to it, it’s AI researchers and theorists who are saying that. Don’t you think they’re a little bit biased towards a positive outcome? If they didn’t think AI research was worthwhile, would they still be involved in it?

And yeah, AI predictions have been like fusion power predictions for a long time now. Forgive me if that leaves me pessimistic about the current predictions.

He’s the one hanging the predictions on Moore’s law, not me.

OK, you’re getting outside my knowledge base. And to be fair to the AI researchers, it appears the author is the only one predicting some incredibly short (like hours or days) to go from human intelligence to super intelligence. The researchers are predicting years or decades, which at least gives the hardware time to catch up.

Please, read the article. I’m commenting on his feelings that this particular transformation is near inevitable, not that it’s likely, not that it’s feasible, not that it’s in the realm of possibility, but almost inevitable.

[QUOTE=muldoonthief]
OK, you’re getting outside my knowledge base. And to be fair to the AI researchers, it appears the author is the only one predicting some incredibly short (like hours or days) to go from human intelligence to super intelligence. The researchers are predicting years or decades, which at least gives the hardware time to catch up.
[/QUOTE]

Right, he’s saying that the increase in AI intelligence will be exponential…once they are able to create a learning AI and basically turn it loose to learn on its own, with the goal of recursively increasing its own ability to learn. That’s not something that’s going to happen tomorrow, but in the author’s opinion, backed up by some in the AI and computer science fields (definitely not all…he later notes that only something like 1 in 4 think it COULD happen this way), it will start slowly, and once it starts increasing it will do so at an exponential rate. So going from the low end of human intelligence to the high end and past it will be a fairly small leap (after going from zero to ant to mouse to chimp over years…)…hours or days, after decades of development, design, and implementation, and then basically running the thing and allowing it to improve itself.
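
To make the “recursive improvement” intuition concrete, here’s a toy simulation. The growth model and every constant in it are my own invention for illustration, not the article’s math: the idea is just that if each gain in capability also speeds up future gains, progress crawls for ages and then explodes.

```python
# Toy model of a "recursive self-improvement" takeoff (purely
# illustrative). The per-step gain scales with capability squared:
# a smarter system improves itself faster.

capability = 0.001   # start far below "human level" = 1.0
k = 0.05             # improvement efficiency (arbitrary constant)
step = 0
crossed_human = None

while capability < 1000.0:                 # run until 1000x human level
    capability += k * capability ** 2      # feedback: smarter -> faster gains
    step += 1
    if crossed_human is None and capability >= 1.0:
        crossed_human = step

print(f"human level at step {crossed_human}, 1000x at step {step}")
# Roughly ~20,000 steps of crawling to reach human level, then only a
# couple dozen more to overshoot it a thousandfold. That lopsided curve
# is the "decades of work, then days of takeoff" claim in a nutshell.
```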

This is basically akin to the AI that beat a world-class Go player. It’s simply impossible to search through all of the possible moves in Go the way you (more or less) can in chess. The AI was basically given a set of objectives and then allowed to play the game over and over…badly at first…until it started to get better, slowly at first. Eventually it was able to play and beat a world-class player in a series of, IIRC, 5 games. That’s a very early example of how they are already doing this stuff.
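
Here’s a bare-bones sketch of that “play itself and get better” loop. To be clear about what’s hedged: the real system combined deep neural networks with Monte Carlo tree search and pre-training on human games; everything below is my own toy construction on a far simpler game (Nim), just to show the shape of self-play learning.

```python
import random
from collections import defaultdict

# Bare-bones self-play learning on Nim: a pile of 21 stones, each
# player takes 1-3 per turn, whoever takes the last stone wins.
# The loop: play yourself, reward the winner's moves, repeat.

Q = defaultdict(float)        # (stones_left, action) -> learned value
EPSILON, ALPHA = 0.1, 0.1     # exploration rate, learning rate

def choose(stones):
    actions = [a for a in (1, 2, 3) if a <= stones]
    if random.random() < EPSILON:                      # explore sometimes
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(stones, a)])  # else exploit

for episode in range(50_000):            # self-play training loop
    stones, history = 21, []             # history[i] = (state, action)
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # Last mover won: reward their moves, punish the loser's.
    for i, (s, a) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, a)] += ALPHA * (reward - Q[(s, a)])

# After training, the policy typically rediscovers the known optimal
# strategy: always leave the opponent a multiple of 4 stones.
for stones in (21, 10, 7, 5):
    best = max((1, 2, 3), key=lambda a: Q[(stones, a)])
    print(f"with {stones} stones left, take {best}")
```

The “badly at first…until it started to get better” part falls straight out of the loop: early on the value table is empty and the play is random, and the win/loss signal slowly shapes it into competent play, with no moves ever being programmed in explicitly.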

Will it happen? I think so, but it might not. I think most computer science and AI-designer types think it’s really just a matter of time, and that this is going to happen (for good or bad…hopefully good). Whether that’s in 40 years (the author’s prediction) or 60 or 100…or less, as some predict…is up for grabs. And all of the wilder predictions the author makes in the second part are certainly purely speculative.

He says AI will be 170,000 times smarter than us! What does that even mean? Is intelligence even something that can be 170,000 times greater?

Well, he’s predicting it will be almost godlike in its intelligence. Obviously some parts of the article are more speculative than others, and I’d say past a certain point it would be moot just how much more intelligent than the smartest human it really was. I don’t buy all of what that article is saying, by any means, but I think an AI that is as smart as or smarter than humans is certainly a possibility…assuming the rate of technological growth in hardware, as well as continued innovation in software, keeps up its current expanding pace. Even THAT is speculative, of course…anything beyond that is like being the guy in 1900 trying to predict the next 100 years of technology. You are probably going to get most of it wrong; the things you THINK will be logical progressions won’t be, while the things that actually happen will mostly come as a shock or surprise.

We’ve had artificial superintelligence for millennia now, and it hasn’t led to extinction or immortality yet.

The Singularity is actually a horizon: It’s 30 years away, but that’s because it’s always 30 years away.

Our inability to “elect” to do so is exactly the obstacle I spoke of. As the Oligarchy intensifies its stranglehold on the world’s inhabitants, this is the outcome for which I have diminishing hope.

So, if anyone is interested in how this is playing out today, here is a YouTube video that discusses Google’s DeepMind project, how it learns (it’s a general-purpose AI), and how it beat a world-ranked Go player. It’s pretty interesting (and it’s a pretty lightweight video, just skimming things for fun).