Can there be machine superintelligence without it being life-altering?

I believe the plot of William Gibson’s famous SF novel “Neuromancer” was about an AI’s subtle plan to escape the limitations imposed on it.

As to whether super-intelligence is possible, I would guess that we could in theory emulate all the best human intelligences combined to create a polymath savant, and that its physical matrix could operate faster than neurons. It could presumably have nearly unlimited memory, even if it was only the built-in equivalent of a database search engine and 24/7 audiovisual archiving, rather than directly incorporated into its mind. So you could get the machine equivalent of Buckaroo Banzai, or Ozymandias from Watchmen.

As to truly transhuman intelligence, we’d have to reverse Forrest Gump’s advice and say “smart is as smart does”. At some point we might be too stupid to understand why the AI did seemingly senseless things, in which case we’d have to go by its track record. As long as it continued to be proven right in the end, we’d trust its judgment.

Along these lines, there was an interesting side note in a depiction of the Aliens movie universe which I once read. It stated that when the robotic AI ‘Synthetics’ were introduced into a military setting, they were, for whatever reason, not allowed to participate in combat directly. However, their presence markedly raised the efficiency of human combat units. Physically built to resemble average-looking human males and females in their mid-40s, with passive, calm, and mild type-B personalities, they tended to be viewed by the often much younger human soldiers as kindly aunt or uncle authority figures. They were trusted implicitly as neutral arbiters of disputes, confidants, and advisers, and they helped smooth the social interactions and group dynamics of the human teams.

I thought that was quite an interesting, and somewhat unexpected, depiction of the use of AIs.

A couple of things. What does “superintelligence” mean? Is a machine that can find the square root of 1345629 in a millisecond superintelligent? Most modern AI work exploits abilities like these, manipulating giant datasets to find patterns that human brains couldn’t.
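For scale, a millisecond is actually generous: on ordinary hardware a square root is a sub-microsecond operation. A quick Python sanity check (timings will obviously vary by machine):

    import math
    import timeit

    # The example number from the post.
    print(math.sqrt(1345629))  # ~1160.01

    # Total seconds for 1,000,000 calls equals microseconds per call.
    total = timeit.timeit(lambda: math.sqrt(1345629), number=1_000_000)
    print(f"{total:.2f} microseconds per call")  # typically well under 1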

But then there are the things we would consider normal human intelligence, just more of it. I think this is what most people mean by “superintelligence” when they talk about AI: an AI that could take existing physics experiments and propose new laws of physics based on them.

But there’s no indication that we’re anywhere near any kind of breakthrough of this sort. We’re learning how to really exploit the ways that computers can do things that human brains can’t, but we’re no closer to any sort of “general intelligence” like a Commander Data that could pass for hu-mon if only it could speak with contractions.

And even if we could construct a system that would operate “superintelligently” that doesn’t mean it would operate faster than a human brain. If it runs on regular old computer hardware then it might be orders of magnitude slower than a human brain, even if it could operate in some way “better” than a human brain. Of course, a slow superAI can be easily bootstrapped into a fast one just by tossing more and more hardware at the problem. But throwing more hardware at the problem only works to make things faster if you have something that works in the first place.
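For what it’s worth, there’s a standard formula for what “tossing more hardware at it” actually buys you: Amdahl’s law, where the speedup from N processors is capped by whatever fraction of the work is inherently serial. A minimal Python sketch, with the 95% parallel fraction picked purely for illustration:

    def amdahl_speedup(parallel_fraction, n_processors):
        # Amdahl's law: overall speedup when only part of a workload
        # can be spread across more hardware.
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_processors)

    # Assume 95% of the workload parallelizes (illustrative guess).
    for n in (2, 16, 1024):
        print(f"{n:5d} processors -> {amdahl_speedup(0.95, n):5.1f}x")
    # 2 -> 1.9x, 16 -> 9.1x, 1024 -> 19.6x: the 5% serial part caps
    # the gain near 20x no matter how much hardware you add.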

How about algorithms that can solve exponential-scale problems in polynomial time? A system that could solve a bin-packing or travelling-salesman problem fast enough to be useful would be a serious breakthrough in computation theory.

(I’ve heard it said that quantum computing will do this…but I’ve also heard it said it won’t.)

This sort of breakthrough could imply perceiving huge amounts of data as a “gestalt,” just as easily as we can look at a bar graph and perceive an upward trend.

The other possibility is that “superintelligence” is something so revolutionary that none of us can perceive it, predict it, conceive of it, or describe it. Superintelligent AI systems might be able to platt the twishers!

I don’t know if there is an agreed-upon definition of intelligence. I would assume it is something along the lines of ‘the ability to engage in goal-oriented behavior based on an understanding of goals, self and environment, and an ability to understand and manipulate physical reality to achieve those goals’.

So an AI would understand the goals, understand physical reality, and understand how to manipulate physical reality to achieve them. People don’t want to die of prostate cancer, so it devises a therapy to prevent and reverse it. People want to be wealthy with little work, etc. I’d assume a superintelligence is one you can just plop into an environment and it will autonomously devise a list of goals (ones that overlap with human goals) and then innovate plans to achieve them.

I know there is a correlation between working memory and IQ, and humans are limited to about 7 pieces of info they can hold in working memory. I’d assume an AI could hold orders of magnitude more pieces of info in working memory.

We can do that now. We don’t need the hand-wavy nonsense of “quantum computing” to optimize routing; it’s a common task in logistics software. In fact, packing boxes is also a common feature of logistics software for warehousing tasks. Granted, in both cases it’s often preferable to use heuristic optimization techniques instead of exact solutions, but that’s also because in real life the variables are inherently unknown and often unknowable.
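To make that concrete, here’s a sketch of roughly the kind of heuristic a routing module might use. Nearest-neighbor is about the simplest one there is, and the stop names and coordinates below are made up for illustration:

    import math

    # Hypothetical depot and stop coordinates (made up for illustration).
    stops = {
        "depot": (0.0, 0.0),
        "A": (2.0, 3.0),
        "B": (5.0, 1.0),
        "C": (1.0, 7.0),
        "D": (6.0, 5.0),
    }

    def dist(p, q):
        # Euclidean distance between two (x, y) points.
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def nearest_neighbor_route(stops, start="depot"):
        # Greedy heuristic: always visit the closest unvisited stop next.
        # O(n^2) and not guaranteed optimal -- exactly the
        # "optimize instead of solve" trade-off described above.
        unvisited = set(stops) - {start}
        route, here = [start], start
        while unvisited:
            nearest = min(unvisited, key=lambda s: dist(stops[here], stops[s]))
            route.append(nearest)
            unvisited.remove(nearest)
            here = nearest
        return route

    print(nearest_neighbor_route(stops))
    # ['depot', 'A', 'B', 'D', 'C'] -- a usable route in microseconds,
    # with no exponential search required.

Real logistics packages layer much better heuristics on top (2-opt, savings algorithms, metaheuristics), but the principle is the same: a good-enough route right now beats a provably optimal route next year.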

The idea of superintelligence is that the machine can devote its entire capacity to exploring concepts. Pattern matching is kind of trivial because, quite frankly, humans, and for that matter animals, are extraordinarily good at it, perhaps even better than machines. When it comes to speech, I know that, in my dialect, I can make sense of an uttered sentence that would likely be all but unintelligible to a machine, which I think relates in part to the area of the brain that handles music. Should this be a requirement for the definition of superAI? Does it need to have these kinds of abilities in order to accomplish what we expect of it?

Consider, also, going down the street. Take a few minutes to fully pay attention. Notice everything that you typically ignore – ads, vehicles that are not moving toward your path, the glitter of the rain on the pavement, the birds singing in the trees, how cold it is, and on, and on. A huge fraction of your own mental processing capacity is devoted to perceiving everything around you, determining what matters and blocking out the rest. The core operation of a superintelligent machine, on its own, is not burdened with this level of processing. If it is a mobile device (android), some part of its system will need to deal to some extent with its physical surroundings, but the high-power logic that is doing the amazing things can be completely isolated from the environment subsystem.

Which is to say, what is “more intelligent than humans”? We are already pretty amazing, and so are some of our computers. I cannot generate a realistic model of what happens in a supernova, but I can recognize Joe from across a parking lot faster than any computer is likely to. So far, it looks like we will simply continue to use computers as tools, to do the things that are too much effort for us (math) and to explore ideas in specific ways. It is not clear that we should need or want them to reach or surpass parity with us. Superintelligence may not even be that big a deal.

What you say may be true for now, but machines will continue to get better while humans will not change.

The first locomotives sucked. They were slow and broke down constantly. I read a story about a man on one of the first locomotives getting so disgusted at the constant breakdowns that he got off and walked; he reached his destination several days before the train. But our walking speed isn’t improving, while trains kept getting better. Within a few decades nobody could walk faster than a train.

Since we don’t have any strong AI right now, the machines can undergo infinite improvement without ever surpassing the intelligence of humans.

I don’t have much to add other than to point out that the slow rollout of new drugs is a man-made bottleneck. It can easily be “unbottlenecked”. However, this comes with certain risks, risks that may or may not be worth taking. I believe some of the recent anti-Ebola drugs were put into use sooner than is normal.

Debatable. If we know how to build intelligence, we probably could apply that knowledge to humans. In fact we may concentrate on using cybernetics to improve human cognition rather than trusting a pure machine intelligence. The super intelligences of the future may be humans jacked into brain-augmentation devices.

True, but in theory fast biological-machine interfaces will be harder to build than pure machine devices, since the latter are not limited by the rules of biology. Our biology would rapidly become the bottleneck through which faster/better cognition is unable to pass.

Plus when people realize that pure machine intelligence is faster and better than augmented humans, that will create a massive incentive to invest in it.

If nothing else brain augmentation could narrow the gap between humans and AIs, enough that we’d have a better idea of how trustworthy or dangerous they could be.

Not for long. Biological intelligence is always going to be finite; the only way to keep up is to replace more and more biological systems with machine systems.

It is like military AI. The US military says ‘there will always be a human pulling the trigger’. Yeah, right. The time it takes for an image to be transmitted to the human in charge, for his eye to send the info to his brain, for his brain to process ‘that is an enemy, open fire’, for his finger to pull the trigger, and for that command to get back to the robot will make that robot a sitting duck against any truly autonomous AI that isn’t limited by biology and doesn’t have to wait on a human operator’s eye, brain, and finger. Milliseconds count in those scenarios.

In military tech, countries like the US seem to be at the forefront, with China and Russia not far behind, along with other OECD nations. Third-world nations like Iran are only a few decades behind (if that). If in 2040 Iran has truly autonomous military AI and the US still has a human in the mix, the Iranian AI will win 99% of the time. The same will happen with AI in general: human-augmented AI will be outperformed by pure machine AI 99%+ of the time due to the limitations of biology, seeing how signals in nerves only travel at about 2-200 mph, need time to reset, are affected by various things, etc.
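As a back-of-the-envelope comparison of the two control loops (every number below is an illustrative assumption, not a measurement):

    # Human-in-the-loop chain: transmit the image, perceive, decide,
    # then transmit the fire command back. Illustrative numbers only.
    comms_round_trip_s = 0.050  # assumed radio/video link latency
    human_reaction_s   = 0.250  # typical human visual reaction time

    # Autonomous chain: on-board perception and decision only.
    onboard_s = 0.010           # assumed on-board processing time

    human_loop = comms_round_trip_s + human_reaction_s
    print(f"human in the loop: {human_loop * 1000:.0f} ms")  # ~300 ms
    print(f"autonomous:        {onboard_s * 1000:.0f} ms")   # ~10 ms
    print(f"ratio:             {human_loop / onboard_s:.0f}x slower")

Even if the exact figures are off by a factor of a few, the ordering doesn’t change: the human link adds hundreds of milliseconds to every decision.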

No, there’s nothing even approaching strong AI running around out there. As was said before, it can get infinitely better, and still not match an unaided person. We don’t even have a good model of how strong AI might even work, much less have a path to get to it.

OTOH, we already have a pretty good idea of how a general brain/computer interface might work, and have plenty of people working on advancing and perfecting limited ones already. I think that I’ve got a better chance of being The Mighty Cybrotron than we have of developing strong AI in my lifetime. That may not be long in your estimation, but it’s further than most people can accurately predict.

Not for long = not long after strong AI arrives.

It doesn’t matter if the timeline is 5-10 years (Elon Musk has stated he thinks this is about the timeline of strong AI showing up) or 1,000 years.

Biology is going to be a massive bottleneck compared to pure machine intelligence. Augmented humans will be nice, but because they will require neurons, they will always be slower and arguably weaker than a pure machine version.

But the fact is, it’s possible that the event horizon for strong AI is never.

Since we don’t even have a theoretical framework for how it would work, it’s hard to say what its limits might be, or what controls we might put on it. It’s currently the computational equivalent of faster-than-light travel. It’s awesome, and certainly world-changing if we can figure it out, but we really haven’t even taken the first steps other than figuring out what doesn’t work. Along the way, we’re constantly realizing that it’s going to be harder than we thought.

Fair enough. Augmented humans being the best we can do would arguably be safer anyway. However, it still comes back to the OP: if you had an endless supply of millions (possibly billions) of augmented humans, each several times better than the best unaugmented geniuses who ever lived in all fields, would that change the world on a fundamental level, or would things just be ‘better’ without being fundamentally different?

Yes, these are the things that modern computers are good at. Crunching numbers to calculate routes and optimize tasks. What computers aren’t good at is coming up with ideas for what tasks they want to do.

It’s been 5-10 years coming for like 30 years now.

A super-intelligent machine could also notice EVERYTHING that comes within range of its sensors. And it would notice it in more detail, on a much broader spectrum, and potentially with a fraction of its processing capacity while it works on 50 other things. And it could keep a perfect record of it all in its archives.

If we ever invent self-replicating, self-aware AI (and keep in mind, it just has to have the ability to copy and edit its own code in the cloud somewhere), we would probably be the ones who appear as trees. Computers make decisions orders of magnitude faster than humans. They transmit information to each other at the speed of light. We would just be these weird slow-moving, slow-thinking blobs of protein.

Yes, but this time it’s true. Also they will be powered by solar energy and they will drive your car for you.

Self driving AI cars powered by solar energy in 5 years. You heard it here first folks.