Can there be machine superintelligence without it being life-altering?

No; much more importantly, we have no idea what it would even be. At present, we can program computers to solve some pretty complex problems, and the results they spit out are sometimes surprising to us, or at least unexpected.

So far, the AI bar seems to be set at UI, which Big Blue pushed pretty hard in that Jeopardy demo. At some point, your computer will be able to identify you specifically, set up and run simple or sophisticated jobs from natural-language queries (“HAL, extrapolate for me a picture of Emma Watson naked on a swing in a seductive pose”), and warn you when someone looks like they are about to break into your house (while discerning that the kid over there, crossing your property, is not a likely threat). But those kinds of things are still just UI and utility, and not really all that far off (if the software community shares its work instead of closely guarding trade secrets, progress can become geometric, or even exponential).

Strong AI, whatever that means, must be something beyond just really good tools that we can talk to. But really good tools can easily be really, really good; it is not altogether obvious that there is any particular value beyond that. After all, computers are nothing more than tools; they are not like a Percheron, which you can use for plowing but also form a real bond with. If we can use them easily, write programs without any programming skill, and get the kinds of astounding results we already hope for, the idea of this Strong AI seems superfluous and irrelevant. Or maybe just good fodder for sci-fi books.

I don’t see the disagreement here.

I can’t see how accurately identifying threatening and non-threatening people in anything but a highly restricted environment is going to be accomplished by something less than strong AI. Watson is amazingly good at the limited task it was programmed for, and it is apparently able to handle other text tasks, but it’s really just the pinnacle of natural language search engines.

Strong AI = a machine that actually thinks. We aren’t even sure what “thinks” is, but we’re almost sure humans and other animals do it. A machine that can do this with intelligence equivalent to the average human’s could theoretically out-think the average human, since it presumably wouldn’t sleep. If you could just get one to understand both English and C++, you’d never need to hire another programmer. Even that limited task is a fantasy.

As I was already saying, as interfaces get closer to being in our brains, and as we get better at making our brains less dependent on the meat wagon they’re in, the [del]need for[/del] utility of strong AI may be greatly diminished.

Certainly, improvements in machine/brain interfaces will blur the distinction, and will functionally postpone the arrival of true AI. We’re still using keyboards, for instance. Voice recognition is still a joke.

(Alas, so is OCR. It’s amusing to read an e-book and be able to recognize these kinds of errors. I read a book recently where the word “flag” was consistently spelled “fbg.”)

If we could work out a data-input interface that put down what we really meant to put down, it would cut a huge corner off the AI debate. For one thing, such a system might require some limited AI all by itself.

(e.g., a context-sensitive interface that knows when someone is speaking in irony or sarcasm.)

Yep, for actual productive computing today, there’s still no substitute for the physical keyboard + mouse combo. That’s an old interface.

Yeah, no argument there. OCR and translation are going to need a pretty solid, if still weak, linguistic AI before they’re really reliable.

And humans can’t do that reliably without a lot of external clues. That shows how good the AI would have to be before even that limited goal is reached.

There is absolutely no reason to think that faster-than-light travel is physically possible, and lots of reasons to think that it is not. On the other hand, we are all equipped with a device that can perform intelligent thinking, so we know that it is physically possible. Even if intelligent thinking requires some sort of magical Penrose/Hameroff-type quantum process, we know that it can be done, and that it can be done by billions of independently existing systems on our world alone.

The quest for FTL is almost certainly a quest for the impossible, but we know that thinking is possible, so replicating it does not seem to be an impossible goal.

There’s no reason to think that you can arrive at thinking via math. No one has done it yet, though many have tried. I’m not saying it’s impossible, but we seem far from figuring it out at the moment.

Yes, romance kicks math’s ass at producing thinking machines. If there were an FTL equivalent (natural objects that travel faster than light and that we hadn’t predicted), how would we observe them?

Nah, I think my analogy holds.

Your analogy seems to be that something that probably doesn’t exist, cannot be observed, and cannot have any effect in our universe is the same as something that definitely does exist and has remarkable effects in our universe. This is a very strange analogy.

Human intelligence has taken more than four billion years to evolve on our planet; it is vastly over-optimistic to expect that we will replicate the results of that process in a few decades. It is also over-pessimistic to say that it is as difficult to replicate as faster-than-light communication, which appears to be ruled out by the very structure of the cosmos.

I suspect that is more straightforward than you think. Right now, we have issues with some servers and other tools based on message parsers. Most recently, it was discovered that Bash, the common Unix shell, could be exploited through sloppy parsing of function definitions passed in environment variables (the Shellshock bug). Parsing needs to be more rigorous, be it text or spoken or visual or whatever. The kind of weak-ass coding that lets Captain Kirk crash a computer by feeding it a conundrum would simply not be acceptable in AI/UIs. Sarcasm would have to be dealt with as faulty information, logged and ignored, not even acknowledged with an “ARE YOU SURE YOU WANT TO BLOW THEM TO KINGDOM COME?”.
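As a rough illustration of what more rigorous parsing means, here is a minimal sketch in Python. The command grammar (verb("argument")) and the function names are made up for the example, not any real protocol; the point is only that anything the parser does not recognize exactly gets refused rather than guessed at:

[code]
import re

# Hypothetical command grammar: verb("argument") and nothing else.
COMMAND = re.compile(r'^(?P<verb>[a-z_]+)\("(?P<arg>[^"]*)"\)$')

def parse(line: str):
    """Return (verb, arg) for well-formed input; refuse everything else."""
    m = COMMAND.fullmatch(line.strip())
    if m is None:
        # No guessing, no partial execution: a stray quote mark or trailing
        # junk is treated as faulty information, logged and rejected.
        raise ValueError(f"rejected malformed command: {line!r}")
    return m.group("verb"), m.group("arg")

print(parse('open_pod_bay_doors("please")'))         # ('open_pod_bay_doors', 'please')
try:
    parse('open_pod_bay_doors("please"); rm -rf /')   # injection-style junk
except ValueError as err:
    print(err)
[/code]

The parser never tries to repair ambiguous input, and the same discipline would apply whether the input is typed, spoken, or visual.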

Yes, there are lots of reasons to think it is not. But there is at least one common natural process that propagates faster than light. We are not certain exactly how fast, but I have seen estimates that the speed of gravity may be ten million times the speed of light. So, yeah, the possibility of FTL should not just be summarily waved off.

Your refutation of the analogy equates machine intelligence with biologic intelligence. The two aren’t remotely the same. Currently, our computers at their core don’t do anything besides arithmetic. Everything else in the computing world is so far from practical application that I’m not willing to discuss it in relation to how close we are to achieving strong AI. No one has come anywhere near proving that you can express or define intelligence through math.

Biologic intelligence, on the other hand, is a process that we don’t properly understand, much less know how to model with a computer. The ability to imitate biologic intelligence with math is absolutely something that does not exist.

Yes, but the fact is, there does exist biological intelligence. Those biological intelligences are built out of ordinary matter arranged in very particular ways. It certainly is true that today’s computers don’t work very much like a frog’s brain, much less a human brain. It may be that we’d need a different sort of hardware to really make progress on real intelligence.

But we know that intelligence exists. We’re at the stage of Leonardo da Vinci looking at a bird and wondering if we could make a machine that could fly like a bird. We’re not wondering about something that contradicts physical law; we’re wondering about something that really happens every day right here on planet Earth. And while it turns out that we can indeed build machines that fly, they don’t usually work very much like birds. We’ve been trying to build ornithopters for a long time, and while we’ve actually built a few, even ones that can carry a person, almost all our actually existing aircraft fly using different methods.

So while it may be that we’ll never be able to build a strong AI out of computers and would instead have to use something that more closely replicates biological intelligence, it seems a bit premature to conclude that. It is true that the more we learn about intelligence, the more complicated it gets. Tasks we formerly thought would require true thinking turn out not to require thinking, just computation, while other tasks that we thought could just be computed turn out to be really, really hard to compute.

But the point is, birds fly and birds think; we see them do it all the time. Therefore, flying and thinking aren’t impossible. However, we’ve never observed any object or information travel faster than light, so it seems very likely that FTL is impossible. Comparing strong AI to FTL or antigravity is silly: we already have seven billion strong intelligences right here on planet Earth, and I use one or more of them every day. Oh sure, they’re not perfect and have a lot of bugs (sometimes literally), but we have actual working instances.

Do you have any cites for that? Because AFAIK, the speed of gravity is the same as the speed of light.

These estimates are almost certainly wrong.

To argue against myself (I do this all the time, as should we all): the non-classical channel in quantum entanglement and teleportation propagates much faster than light, but it does not carry any usable information.

Human intelligence evolved via a series of accidental mutations that each gave a minor survival benefit; that process is limited to biological tools and cannot take a short-term loss for a long-term gain. All gains must be positive to keep momentum. Man-made tech is not limited in those ways.

Plus, most of our intelligence evolved in the last two million years: our brains went from about 450 cc to 1,350 cc. Steven Pinker has pointed out that two million years is roughly 100,000 generations (at about 20 years per generation), and that if you were engaged in selective breeding you could have grown and shrunk the brain several times over in that span. So using billion-year biological time frames is misleading; most of our intelligence came in the last few million years.

Plus, machines are only a few hundred years old, and they are already stronger and more durable, and have more stamina, than any biological counterpart on Earth.

This.
Long before we make any machine that thinks like a human, we’ll make machines that think in a very different, non-human way that will nevertheless still be very useful.

I can’t, however, guarantee that they will be useful to us.

Actually, there are examples of airfoils in nature, so we at least should have known there was more than one way to skin that cat. We had toys that exploited the same principles for centuries before we could fly, but they remained toys and curiosities, much like spooky action at a distance.

And mind that I limited my analogy to computationally arriving at strong AI. You’d be boiling intelligence down to a (probably incredibly complex) equation at that point. We have no idea if this is possible. If we do find ways of constructing strong AI by other means, it wouldn’t make my analogy obsolete.

What about an evolutionary approach, where a system is devised having limited self-organizing capabilities, and then we reward/punish until we get what we want?
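For illustration, here is a minimal Python sketch of that reward/punish loop. The five-letter target stands in for “what we want,” and the population size and mutation rate are arbitrary choices for the example:

[code]
import random

TARGET = "THINK"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate: str) -> int:
    # "Reward": one point per character position that matches the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.2) -> str:
    # Copy a survivor, occasionally scrambling a character.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from pure noise.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"generation {generation}: {population[0]}")
        break
    # "Punish" is implicit: the low-fitness half simply doesn't get copied.
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]
[/code]

It typically homes in on the target within a few dozen generations, but of course nothing in it is thinking; the open question is whether that same kind of selection pressure scales up to anything that does.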

Neural nets are interesting in this regard (although, alas, they don’t seem to have lived up to the hype of their earlier days). Technically, yes, they’re computational and algorithmic, but the algorithm is so widely distributed (almost holographic) that it’s all but impossible to write it out explicitly.
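To make the “impossible to write it out explicitly” point concrete, here is a toy two-layer network learning XOR in Python with numpy; the layer sizes, seed, and learning rate are arbitrary. After training, the “program” is nothing but the weight matrices, and no single number in them means anything on its own:

[code]
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-4-1 sigmoid network learning XOR by gradient descent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (squared-error loss, learning rate 0.5).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))  # should settle near [[0], [1], [1], [0]]
print(W1)            # the learned "algorithm", as far as anyone can read it
[/code]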

Certainly the idea of an AI that learns how to think is a staple of SF – I’m thinking of John T. Sladek’s whimsical satire Roderick – but in real life, evolutionary approaches have produced some…ah…suggestive results.

Anyway, I join with eburacum45 in saying that the analogy with FTL is unfair. An analogy with creating a Space Elevator is, I think, much more in line. We can’t do it, but we can visualize how it might be done.

Well, I think that achieving strong AI through a neural network would certainly make my analogy obsolete. You’d at least have computed the basic building blocks of the AI. They are already better at some tasks than classical ways of solving problems with computers, and they might be one of the components of a future AI, but I don’t think that any developers of the current systems would pretend that their networks are doing even rudimentary thinking.

I’d say we’re a lot closer to a space elevator than to AI or FTL travel. It’s something that we can conceive a practical version of, but we will have to invent new materials tech to make it feasible to even try. There are theorized methods of FTL travel, such as the Alcubierre drive, that don’t locally break the speed of light, even though any possible solution may violate other natural laws. Similar to FTL travel, we don’t have any idea how we might realistically build a strong AI. Every theoretical solution seems to run up against a new limit, and the problem is worse because intelligence is poorly defined: the speed of light is at least already defined without any reasonable objection, and I don’t think you can say the same about intelligence. Strong AI is different from the other two because once we can actually imagine how it can be done practically, it should be a very short time before it’s proven to be possible.

Fundamentally, that is not too far off the sensible development path, though the external reinforcement process would be missing: the machine itself would debug its own output based on relevance and consistency. There is simply no realistic way that AI could be developed through direct human authorship; we would have to construct the factory and set it about building itself.

One of the things that makes programming challenging and interesting is that we really do speak and think in a different way from computers, and that is kind of the ideal. We want to create an intelligence that is different from ours; a carbon copy of the animal mind would be basically pointless. I could imagine that we might build an AI that we could not even understand, beyond the fact that it would be producing interesting and useful results.

I’d say this is wrong, too. A space elevator on Earth requires certain physical things to be true: that it is possible for a material to be strong enough to support its own weight and that of a payload while hanging from geostationary orbit, even with cosmic-ray or micrometeorite damage. At such extreme tensions, even pure, faultless carbon nanotube (the strongest material possible) would suffer catastrophic failure if any significant impact occurred. We couldn’t shield such a tube against dust impacts or cosmic rays; therefore the elevator, as currently imagined, is impossible.

On the other hand, we know brains are possible, so we are likely to achieve something like them first.