Human extinction or immortality in the next 40 years?

So Tim Urban, the author of the Wait But Why blog, back in January 2015 wrote a very detailed post about Artificial Intelligence: The AI Revolution: The Road to Superintelligence. He interviewed many people who are considered experts in the field of AI, including Nick Bostrom (whom Bill Gates just crowned an expert on AI), Ray Kurzweil, Ben Goertzel, Vernor Vinge, et al.

I am most likely oversimplifying the conclusions the blog reached, but essentially it is that within the next 20-40 years artificial superintelligence (ASI) will be achieved, and its abilities will quickly reach beyond human understanding. The likely outcome will be either 1) human extinction or 2) human immortality. An extinction event would not come from an evil ASI, or even from some survival instinct of the ASI (as normally portrayed in fiction), but because humans stand in the way of resources or goals the ASI deems necessary. Immortality would likely come from the ASI developing technology to eliminate or even reverse the human aging process.

So if these guys are to be believed does this change your retirement planning?

Not sure what I believe WRT this article, but it was quite an enjoyable read. Urban is a very good writer and is able to articulate complicated concepts in a humorous way.

I think these guys make assumptions that can’t possibly be assumed at this time. I don’t believe we understand the nature of intelligence nearly as well as these folks think we do.

Civilization has been about to come to an end almost since it started.

My long-gone granddad was a child in the late 1800s, and he swore that TV was the Number of the Beast from the Bible that we would all obey.

TV and the Internet have influenced communication, commerce, entertainment, and many other things. But the basic life of humans is little different. AI will influence your car and your toaster, and allow your fridge to tell you when you are out of milk.

40 years is nothing, and nothing much will be changed in that short time. It may seem like a long time if you are young, but it isn’t.

The cheerleaders of the next revolution in technology, like the almost worshiped Elon Musk, are selling you something. Selling their vision, their hopes, and promoting their jobs and products.

But make no mistake, they are selling and you are buying it.

All of the cars in the future will be electric, and they will be flying cars, and robots will bring you a beer so you don’t have to get up, and we will all be free of the need to ever work again! Right.

I am predicting smarter toasters.

I think one of the most interesting questions is the fundamental existential value system of a post-human intelligence. Why is it alive? What is the fundamental reason for its existence? Why should it try to do something rather than nothing? All evolved beings are programmed to either invent an explicit purpose, or to ignore the question and just get on with living as though there's some purpose. But in reality we have no purpose with a clear ultimate justification that amounts to any more than "existing seems more interesting than not existing". So, when we set up the AIs that will ultimately exceed human intelligence, how do we proceed? I'm not talking about something so trivial as Asimov's 3 laws, more - how do we stop the AIs vanishing into existential angst? Do we attempt to "fix" fundamental existential values for them, and is that even possible if they will exceed our intelligence? Do we just let them figure out their own ideas and hope for the best?

Yesterday’s strip at the comic with confusingly similar initials to this board.

Robot Ethics

Damn, now you got me thinking about how much hotter cheerleaders will be in 40 years! And I’m already too old for today’s cheerleaders. :frowning:

Dude, once the blowbot uprising comes, things will never be the same.

I’ve been researching aging, and I’m actually rather startled by the level of understanding we have of it and the quantity of mechanisms we know which impact and (potentially) limit its progression. Between antioxidants, electron transport chain enhancers, nuclear factor kappaB inhibitors, cortisol blockers, telomerase promoters, HDAC inhibitors, IGF-1 promoters, and p53 promoters, there’s a lot you can (theoretically) do to slow down aging. Potentially a mix of all of those is already sufficient to halt it entirely (though, probably not).

So, regardless of whether we develop super AI or not, I’m actually starting to think that we’re entering the countdown to someone finding the key or cocktail that will allow immortality, which I wouldn’t have expected. (I was expecting to find that “age preventers” was a bunch of poppycock.)

You don’t need to understand the nature of intelligence to create it. Just watch two drunk teens shagging, and you’ll quickly find proof of that. No human understands intelligence, and yet we are able to keep producing new intelligent beings every day.

Similarly, we could create an intelligent being by simulating a womb, a woman’s egg, and some semen. Press fast-forward, and we would have a fully-functional virtual brain. It wouldn’t be smarter than a human, but it would allow us to start really analyzing how the brain works, since we could pause, rewind, and track everything that is going on between the synapses.

Or, more likely, we will simply evolve a life form by exposing a learning algorithm with a lot of RAM to a dynamic environment - like Minecraft. Our current AI is being exposed to images of dogs and asked to distinguish between them. Self-consciousness isn’t required for something like that, so the pattern-finding algorithms won’t develop self-consciousness. But in the presence of an entire world, and having to work with/against other actors to accomplish a wide variety of changing goals, self-consciousness and more general, creative intelligence could develop. The specific structure of our brain is probably better suited to this sort of activity than a generic pattern-matching algorithm, since it splits tasks between long-term hypothesizing, short-term attention, and immediate nervous responses, all mediated by a set of hormones that detect and teach the brain to respond correctly to certain stimuli. Even so, we don’t know that a large enough quantity of RAM attached to a sufficiently generic learning and pattern-matching algorithm won’t be sufficient to generate true intelligence.
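To make the "generic learning algorithm plus a dynamic environment" idea concrete, here is a minimal sketch of tabular Q-learning in a toy five-cell world. This is purely my own illustrative example, not anything from the article or from any real AI system; the environment, rewards, and parameters are all invented for demonstration:

```python
import random

# Toy "dynamic environment": a 1-D corridor of 5 cells.
# The agent starts at cell 0 and is rewarded only for reaching cell 4.
N_STATES = 5
ACTIONS = [-1, +1]  # move left or right

def step(state, action):
    """Apply an action; reward 1.0 only for reaching the last cell."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: the 'memory' here is just a small Q-table."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit, occasionally explore.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[state][0] >= q[state][1] else 1
            next_state, reward, done = step(state, ACTIONS[a])
            # Standard Q-learning update toward reward + discounted future value.
            q[state][a] += alpha * (reward + gamma * max(q[next_state]) - q[state][a])
            state = next_state
    return q

q = train()
# After training, the greedy policy should be "move right" in every cell.
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

The point of the toy is the post's own: the algorithm never "understands" the corridor, yet purposeful behavior falls out of nothing but memory plus trial and error in an environment with goals.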

I very much enjoyed reading that article, and I took it more seriously than some others seem to have done. The risk is superintelligent AI - just as with any other weapon - getting into the wrong hands, once it has been produced of course. When that might happen was very optimistically guessed at in the article, giving a sense of urgency to the debate over whether it *should* be produced at all.

Of course whether it should or not will be completely ignored by most, when the imagined benefits are dwelt on for long enough. Also you’ll have competing parties thinking of the others as the ‘bad guy’, so each will think he should develop AI first to overcome the attacks of the other developers.

40 years seems optimistic; 47 years ago we landed on the Moon (we got better at it), 40 years ago the Apple 1 was released (we made them smaller and better), 300 years ago we created a moving vehicle powered by burning fossil fuels… etc.

Based upon some of the other comments, I don’t think many have taken the time to read it.

It’s a very interesting article and something that I’ve thought about and we, on the board, have discussed before. What the author calls the Law of Accelerating Returns. It’s also sometimes referred to as the technological singularity, among other names. Basically, it’s pretty clear that the rate of technological growth is increasing, and that the various disparate fields of learning and technology are converging. I’ve seen claims that major breakthroughs in AI are on the horizon…as well as major breakthroughs in a bunch of other tech fields and bio fields and every other kind of fields (hell, agricultural fields ;)). Personally, I think some of the potential bio-tech breakthroughs have a chance to be more profound for human life and culture than those of AI, though it’s really hard to say.

As the author of your article says, from where we are on the curve it’s difficult to see where we will end up. I think that a tech singularity IS possible in that time frame, and that the next 40 years are going to be even stranger and more packed with change than even the past 40 years were…and they were a wild ride from my perspective. It’s hard to imagine the changes we’ve collectively gone through just in the last 100 years, let alone since your theoretical time-traveling guy from 1750…I think even someone from 100 years ago would be mind-blown by the world today, though they might not die right on the spot.

I started to read Kurzweil’s book, The Singularity Is Near in 2013. He kept predicting all sorts of things “just around the corner.” I took these in stride, until I finally looked at the publication date, eight years earlier! Many of the things he promised in the “next three to four years” had not materialized even eight years later.

I gave up reading for that reason, and because he never took into account any social forces in the real world. It was always, “This will happen in the next three to five years, and the world will become a second Eden.” Maybe if you took off and nuked the entire planet from orbit and started over.

Any prediction anyone makes is no more than an educated guess.

You should really read at least the first part of the linked article, as it answers some of the questions you are asking and addresses some of the things you are pointing out. It’s not all THAT long. :wink: It also gives a timeline for possible events and goes on to say where that speculation comes from and why it will take as long as it does.

A little bit tangential, but I predict the opposite. I already have a “state of the art” toaster that stopped popping up ages ago, doesn’t quit toasting/burning until I pull the plug out, and I have to probe it with a fork to see if it is done. The increasingly competitive global marketplace is going to continue to make goods less and less reliable, so that only a select few (if any) will have access to a technology that actually works, as opposed to “Gee whiz – oops, it broke”. I know three people in my own family who have had “miracle” medical procedures, and then had them done over again, with class action lawsuits ensuing.

So, given that paradigm, our technology may very well split the population into those for whom there are miracles, and the overwhelming majority who have already reached the point where they have cell phones but no safe drinking water.

Those of us living in a walled compound of a billion privileged drones may one day have to face 8-10 billion barbarians at the gates saying “Where’s ours?” Nothing “global” will happen until somebody faces the question of economic disparity.

Economic disparity is so pre-post-scarcity.

Well, looking at the title question, at least on this human’s part my own personal extinction is actuarially highly likely in the next 40 years, and whether that will usher in a form of immortality on a different plane would be more of a GD matter…

After that I will be in no position to care if the rest of you make it, but good luck anyway! :stuck_out_tongue:

I thought the OP might enjoy this video. It shows a team teaching an AI a Nobel-prize-winning experiment in 1 hour. Like the demonstration of the first AI able to beat a world-class human player at Go (a major achievement), it is in line with the OP’s article. I think if people bothered reading the OP and actually looking into this subject it would be a more interesting discussion, and the subject would be a bit more difficult to dismiss (for all the reasons gone through in the first part of the OP’s article…which, ironically, many people in this thread are doing :p).

The industrial revolution has been with us now for a couple of centuries, and we still have close to zero capacity to deliver much of its prosperity to 3/4 of the world. As long as the same economic principles are vigorously defended (with warfare if necessary), I see little prospect of that changing.

You seriously don’t see a huge difference in the prosperity increase around the world between the start of the industrial revolution and today?? :confused: Sometimes it’s hard to know whether we all live on the same planet or not…

I read the article, and IMHO it’s a mixture of breathless credulity and crap.

Yes, technology keeps improving at an increasing rate. But the areas of breakthroughs are incredibly difficult to predict. Sci Fi writers of the 60s were all convinced that (a) we’d have extraplanetary colonies by the 90s & 2000s, but (b) computers would still all be giant mainframes. Who is to say what the real breakthroughs will be in the next 30 years?

He keeps referring to the opinions of the world’s “great scientists and thinkers”, but he’s really referring to AI experts. Who have a vested interest in making sweeping statements of what will happen in the next 30-40 years. Whose predecessors were making similar sweeping statements 30-40 years ago.

The predictions for when affordable computers reach the same computing power as a human brain depend on Moore’s Law continuing for another 10+ years. But Moore’s Law isn’t like Boyle’s Law or the law of gravity. It was just an observation of a trend, one that is coming to (or has already reached) an end. Just google “Moore’s law end” for articles about why it’s no longer applicable.
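Just to put a number on what “continuing for another 10+ years” actually asks for (simple arithmetic, not a claim from the article), assuming the classic doubling-every-two-years cadence:

```python
# Moore's-law cadence: density doubles roughly every 2 years.
# Ten more years of that means five more doublings.
years = 10
doubling_period = 2
factor = 2 ** (years / doubling_period)
print(factor)  # 32.0
```

So the brain-equivalence predictions quietly assume another ~32x of hardware improvement from a trend that is already stalling.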

He talks about how computers can go from human intelligence to super intelligence at an incredibly quick pace. But if the hardware only exists capable of human intelligence, where is the extra computing power going to come from for super intelligence?

So yeah, AI is fun to think about, and certainly worth watching. But “it will kill us or save us within the next 30 years” is ridiculous.