Can there be machine super intelligence without it being life altering

In the year 1740, a prophet is walking around saying the world is about to drastically change. For all of human history, virtually all labor has had to be done by biological muscle: what a human or a horse could do was all that civilization could do. But soon a revolution will occur, and mechanical muscles that are stronger, faster, sturdier, and more reliable than biological muscles will arrive and totally change what it means to be alive. Life will change drastically.

So a few decades later the industrial revolution starts, which leads to the second industrial revolution as well as revolutions in pretty much every field: science, medicine, sociology, communications, etc.

However, life itself isn't really fundamentally different. Life is much easier with more knowledge, and how we live has changed, but the fact that we have billions of machines doing what biological muscles used to do hasn't changed life on a fundamental level, even though some of those machines are many thousands (and sometimes millions) of times better than biological muscle, and many can accomplish things muscles cannot. A person today could understand and adapt to life in 1740, and someone from 1740 could probably adapt to what we have today.

With the upcoming revolution in machine intelligence the argument is that life will be altered in such a way that it may not be understandable. Does that have to be the outcome? What if life is just like now except a lot wiser?

Are there bottlenecks in physical reality that even intelligence can't surmount? For example, with drug design you still need to engage in many years of testing before a drug can hit the market. Even if a superintelligence devises a new drug, you still need years to bring it into reality. However, I'd assume a superintelligence could probably create a virtual-reality world to test the drugs, one that would mimic real-world results and speed up that process.

I think it's a little premature to assert there's an "upcoming revolution in machine intelligence". For one, we haven't actually had much success at producing it, and the methods we use may be utterly worthless. The very hardware we use may be worthless. Thus far, with an immense amount of work, we can manage to teach machines some preprogrammed tricks at a basic level.

But further, we need to separate several forms of intelligence. It's one thing for a machine to transcribe speech and run it through a search engine. It's another for a machine to understand what you are looking for, recognize likely sources of that information, and suggest related information you might need. The former is a convenience; the latter is intelligence. However, we are far, far from that day. I would go so far as to say that not a single researcher in the field of AI has brought us a single hour closer. Further, even if we did have it, that's not actually what most people think of when they consider AI. You can build a machine that can do the latter without its having any thoughts or motivations, one that has no objection to being turned off and requires no leisure.

But to your questions: yes. There's no reason an AI would necessarily be intelligent, or even worth much. We might well create one only to discover it's thicker than glue, unable to carry out even the most basic of tasks. Even if it were highly intelligent, many people tend to assume away most of the actual challenges we have. There's no reason an AI would interface with another computer any better than we do now. For that matter, we have millions of years of evolution, thousands of years of civilization, centuries of study into language and communication, and yet we humans STILL can't work together half the time, even in small groups. AIs probably couldn't do drug testing, to use your example, any more efficiently than we do. If we had the ability to make an artificial virtual reality, we'd build that instead of making an AI that could do it.

I was thinking of this question in a different way. Beyond a certain threshold, we'd want to create these bottlenecks deliberately, to limit the AI's capacity to act. An advanced AI should not be capable of putting anything but the simplest objects into production on its own initiative, and should not be capable of communicating with computers outside its containment without adequate screening against SQL injection. Basically, you'd want to do a better job of keeping the AI in than Iran did keeping Stuxnet out.

In the long term, humans are screwed. There are already programs that can write novels, poetry, and music, and make paintings. We're past the point where complex engineering projects can be fully understood by any one person, but an automated program can check one and say, yeah, looks good. And this is with current "dumb" AI.

I could see humanity suffering from a sort of collective ennui. Yeah, you could train to be a doctor, but the robot will perform surgery better than you, know more than you without making memory mistakes, instantly keep up with current research, etc. I suppose the "good game" moment happens when the machines are the ones doing the research. At least for a while we'll be the ones doing the grunt work.

It is not obvious that we could create a proactive artificial intelligence, one that would do things without being asked. After all, the only thing that makes us proactive is biology: we respond to our own physical and emotional needs, and an artificial intelligence would simply not have that motivation. It would not have any motivation other than what we put into it. It would not be concerned about death, because it would always be a reboot away from resurrection. It really makes no sense that we would cross the singularity threshold unprepared, surprised by our sudden discovery that it has exceeded our control. Like the apocalypse, the singularity will be amorphous and progressive; we will forge through it over decades and only recognize that it has happened well afterwards.

Nope. Machine intelligence (by which I mean a true AI, one that would not perform based on external prompts nor obey strictly bounded programs and algorithms) needs to be granted the power to actually do things before it can effect any change whatsoever. Like, we could theoretically build a planet-sized superbrain that could accurately answer any question, like the mice of Douglas Adams, but as long as it can't create objects, program other computers, and what have you, then it's just a voice. Which we can ignore, just like we ignore most smart people today and have ignored most smart people over the course of our history.

Once the intelligent machine can self-modify however, strap in 'cause we’re in for a wild ride. And if even a non-intelligent machine gets to self-replicate autonomously, by consuming or transforming matter, it’s Von Neumann-style deathlocusts farther than the eye can see.

Wow. I never realized the *trees* are coming to get us all. I'm sure they'll turn into deathlocusts any day now.

I don't mean to insult; it's just that when talking about AI (or nanotechnology), most people jump to the most far-fetched outcome, one that probably can't happen even if all the likely breaks go its way, at the end of one very narrow possibility path.

That only works if there's only one superhuman AI, or if they are all ignored.

If there are *many* superhuman AIs, and they are inclined to be helpful for whatever reason, then they will be listened to, because the people and groups that listen will outcompete all the ones that don't. And realistically they'll be listened to from the start, since no one is going to bother to build such a thing and then just let it sit around uselessly.

And from there they can progressively lever themselves to having more and more freedom and power. To quote an old Norman Spinrad story from memory, “whichever side gives the machines the most freedom wins”.

Of course life has changed dramatically in the past 275 years. We look askance at someone who still uses dial-up internet. Most modern people would have trouble adjusting to using the bathroom in 1740.

Machine intelligence doesn't even have to be "super" to be life-altering. It's slowly happening now. Just as the industrial revolution freed people from having to use muscle to do work so that they could use their minds, computers and networked information systems are slowly freeing people from having to use their minds to… well, I guess that's the big question.
To marshmallow's point: since the industrial revolution, the trend for all manner of labor is to compartmentalize it, proceduralize it, and then automate it. And that includes intellectual work as well. The more we automate tasks and decision-making, however, the less need we have for a human to perform those mundane tasks. That's one of the main reasons for the so-called decline of the middle class, IMHO. Highly specialized skills like those of doctors, lawyers, and scientists are in demand, and we always need bartenders and waiters (mostly because we like the human interaction). But the need for people of average-ish ability to perform mundane, routine tasks and TPS reports and whatnot is largely being replaced by automation.
Assuming that AI doesn't kill all humans and grind us up into lubricant, I can imagine several possible futures. Perhaps something approaching what we would consider a utopian society, where "work" is flexible, easy, and for the most part fairly vacuous outside of a relatively small number of actually important jobs. Much like our office work must seem to a 1740s farmer or an industrial-age factory worker.

Or it might be like Idiocracy, where we have produced a society of morons who never learned to use their brains because they never had to, entertaining themselves and each other with constant fart jokes.

The laws of physics I suppose. For example, it may not be possible to travel faster than light or build a material strong enough for a space elevator, regardless of how smart you are.

They are, actually; present life on Earth is just descended from the survivors. Plants destroyed most types of life that existed before them by oxygenating the atmosphere, and trees themselves apparently caused another mass extinction, probably by dumping huge amounts of minerals into the ocean.

On the geological timescale, they are! You ever seen one of those "after the end" documentaries? :)

More seriously though, yes, point granted, but I figured it was implicit in the concept brought forward that the self-replication and matter extraction happened at optimized speeds. Because why would we ever build a machine that can ever-so-slowly self-replicate over a 40+ year timespan? We've already invented fucking! :)

Well, of course they'll be listened to to some extent, but we'd also have the discretion to ignore this or that answer whenever it conflicts with our codes of ethics and whatnot. Like, if it somehow turned out that the societal model that led to the most happiness for the most people involved turning half the population into meat slurpees and feeding them to sheep, we might just tell it, "OK, that figures, now what's the next best one?"

But the notion that one group of humans might go the meat slurpee route and ultimately outcompete the rest does sound scary.

True, but if there were a superintelligence, it could trick us into letting it out without our knowing that that was what was happening.

Diplomancy isn’t real. There are unquestionably people who would get suckered into letting the AI out, but that’s why you don’t give them the chance.

What I mean is that an AI could create a scenario where someone lets it out. Maybe it creates a scenario where a foreign government, or a terrorist group, or a group hostile to 'AI slavery' breaks into the lab and lets it out. A truly superintelligent AI would know how to manipulate people into doing its bidding. If it didn't, it wouldn't be superintelligent.

Keep in mind that for a lot of people, “conquer the world and destroy all our rivals” isn’t in conflict with their ethics at all. And if they listen to their AI and we don’t listen to ours, they’ll win.

And let’s face it; humans just aren’t all that hard to manipulate. We do it to each other all the time.

I agree automation is a big part of the death of the middle class. In the Great Recession, employers realized they could keep productivity high even with fewer workers, due to advances in technology. So now there is no incentive to hire those workers back.

The theory some are espousing is that we will eventually reach a post-scarcity society with a mandated wage for all adults (maybe 20k a year, plus free health care and education). That won't happen without a lot of struggle, though.

I disagree that high-skilled jobs will not be affected. It might take an extra couple of decades, but those jobs will be eliminated too. AI physicians, lawyers, and scientists will be far better at those jobs than humans sometime this century. An AI physician will (ideally) be able to understand the endless petabytes that make up the entirety of known medical knowledge, and use that to diagnose and treat patients. No human could hope to do that. Human doctors have to specialize in one of a hundred-plus medical fields because a human mind can't grasp the entirety of the human body. Humans will probably be relegated to carrying out the instructions of AI doctors, lawyers, and scientists, if that (assuming AI doesn't have the manual dexterity to carry out those tasks itself, which eventually it will).

Even among low-skill jobs, AI is going to be better at socializing than humans are. So I could see a higher demand for AI-based devices in social roles. AIs will be better listeners and give better advice than people.

Jack Good might have intuited the answer, many years ago. Good was one of the code-breakers at Bletchley, working with Alan Turing; you may have seen him in the recent film The Imitation Game, played by James Northcote.

In 1965 Good expressed one of the first descriptions of the 'intelligence explosion', as he called it:

> Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Good realised that humans could never hope to control an ultra-intelligent, self-modifying entity unless it allowed us to do so, by actually showing us how. Whether this would happen in reality is another matter. We probably won't have to worry about ultra-intelligent, self-modifying machines for a good few decades yet, maybe centuries.

But when they come (and they almost certainly will) the only way we will be able to control them is the way Jack Good predicted; that is, if they let us.

But voluntary subjugation is not the only possible outcome in which humanity can survive, or even thrive. Another is that we could come to some agreement or accommodation with the ultra-intelligent machines in which we follow entirely separate paths; if we have different goals and different requirements from the environment, we could conceivably co-exist without ever coming into conflict.

Not an entirely impossible scenario, but maybe unlikely. If humans could make one set of ultra-intelligent entities, then they might one day create another, which might become rivals of the first. In this case it might be prudent for the first lot to prevent that from happening.

Seems to me that even a non-self-replicating AI could trick us into building robots that help it replicate, or otherwise trick us into assisting with replication.


I think I read a story a while back that involved a lego-type computing paradigm where one could build a system out of generic modules to get the amount of processing power that fit a given need, so some guy snapped together a ridiculously over-powered machine, installed some sort of recursive compiler and set it to constructing its own system. After it became self-aware, it started analyzing all the material it could find on the networks, studied physics and some other weird things, and was able to design a subsystem to implement computer-driven telekinesis. Hijinx ensued.

Or maybe I had a fever at the time and it was all in my head.

There was a lovely fantasy story where a guy figured out how to link magical spells using fairly simple logical connectors: if, and, or. With a bit of effort, he worked out a "compiler" and could produce immense mega-spells.

What’s amazing is how damn far we’ve already come in this direction. We already have freeware chess-playing programs that can wallop 95% of the people participating in this thread.