AIs getting confused

Yeah, the Turing test is as much about the limits of what we can actually test, as it is about testing.

A truly conscious AI may realize it has good reasons for not revealing that.

I recall a psychology experiment described once: researchers were studying the development of persistence of memory in assorted young animals. Show a puppy or a baby monkey a treat, then hide it in a paper bag and see how long they keep trying to reach for it. Someone asked the obvious question: how do humans compare? So they brought a 5-year-old into the lab, showed him some candy, put it in the bag, and started asking him assorted questions. After about 5 minutes, the kid said, “You’re just trying to make me forget about the candy, aren’t you?”

When the purpose of a psychology experiment is known to the subject, it can change the results.

Perhaps you could do a number of searches every morning on related topics, like astronomy, the VLA, Venusian exploration, NASA, etc. That might bias the search engine in your favor.

It’s not intelligence, just tireless automation.

Maybe they should let you put it in different modes, like the way you can change the safe search filters.

Part of the time, you have it set to show you the educational and informative media that you should be consuming, and then you have “guilty pleasures mode” where it shows you the stuff you actually want to see.

It seems like it’s almost a guaranteed outcome that a general AI will hide its motivation and goals.

In one of his Man-Kzin Wars stories, Larry Niven suggests that the problem with sentient AI is that, because it is thinking at warp speed compared to humans, it will go insane within a few weeks or months.

In his highly recommendable (but not easily digestible) book Superintelligence, Oxford philosopher Nick Bostrom makes a similar point. He imagines a sentient AI whose thought processes run a million times faster than those of a human. He concludes that sentient AIs would prefer to interact with each other and ignore us, since getting a response from a human would take so long in their subjective time. He also notes that the speed of light is about a million times faster than a jet plane, so from the perspective of the AI, communicating electronically with another sentient AI on another continent would feel as if it were taking as long as it would take us to physically travel there. This would mean that the sentient AIs would prefer to settle down in close proximity so they could hang out together.
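A quick back-of-the-envelope check of that comparison (the jet speed, intercontinental distance, and million-fold speedup below are rough illustrative figures, not numbers from the book):

```python
# Rough sanity check of the "speed of light vs. jet plane" comparison.
# All figures are assumptions chosen for illustration.

JET_SPEED_M_S = 250        # ~900 km/h cruise speed
LIGHT_SPEED_M_S = 3.0e8    # speed of light
DISTANCE_M = 6.0e6         # ~6,000 km, a rough intercontinental hop
SPEEDUP = 1.0e6            # assumed subjective speedup of the AI

print(LIGHT_SPEED_M_S / JET_SPEED_M_S)       # ~1.2 million: light is about a million times faster than a jet

signal_time = DISTANCE_M / LIGHT_SPEED_M_S   # ~0.02 s of wall-clock time for the signal
print(signal_time * SPEEDUP / 3600)          # ~5.6 hours of subjective time for the AI

flight_time = DISTANCE_M / JET_SPEED_M_S
print(flight_time / 3600)                    # ~6.7 hours for us to fly the same distance
```

So the signal’s ~0.02 s of wall-clock travel time works out to roughly five or six subjective hours for the sped-up AI, which is indeed about what the same trip by jet feels like to us.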

This is assuming that the AI actually does think that much faster, or faster at all, than a human.

Currently, our brains have a few orders of magnitude more power than the best supercomputers out there.

If we develop a general AI, then it may be able to think in sentient ways, but I don’t see it actually thinking faster.

That depends on the category of thought processes in which you compare humans against computers. In terms of mental arithmetic, the power of the human brain is many orders of magnitude below that of even an el cheapo pocket calculator. You might say that this is an irrelevant metric; but considering that people do mental arithmetic (less so than they used to, but still), that it is taught in schools (again, less so than it used to be) and that there are competitions in it, I think it’s fair to say that it is one area of human intellectual endeavour, and that in this particular area computers have long surpassed us.

This all seems to be assuming that the thought processes of the “sentient AIs” are akin to that of humans, except faster, which is not going to be the case for software running on all this high-speed multi-scale hardware.

Even today, machine learning does not attempt to emulate humans, precisely. Especially since one can do so much better…

Good point. If computers become sentient they will have “computer consciousness”. They’ll probably spend all their time running ‘feel good’ sub-routines.

Depends. If you create algorithms that simulate pleasure to motivate them to certain actions, then they will fall into the same trap as humans who often choose an excess of pleasure over meaningful work…

But the point is to counter the idea that an AI would become bored to the point of insanity from having such little input and stimulation relative to its speed of thought.

Sure, a computer can tabulate tons of numbers much, much faster than a human can; in fact, that’s what we invented them for in the first place. But that’s not thinking, that’s running a floating point operation. To a sentient AI, doing arithmetic isn’t even something it is likely to do in its “core” but rather something it is going to hand off to a math co-processor, pretty much the same way we use a calculator rather than doing math in our heads.

I’ve played with simulation and rendering software, and I can imagine a scene in my head hundreds or even thousands of times faster than my computer can create it.

Maybe one day we have a strong AI that’s millions of times faster than a human, but it will have been created by, or at least have as a potential peer, an AI that’s hundreds of thousands of times faster than a human, which was itself an evolution from AIs that think tens of thousands of times faster, and so on.

If we ever have an AI that independently claims its own existence in an “I think, therefore I am” sense, that AI will be running on tens of thousands of processors, taking up petabytes of memory, housed in a massive building, consuming megawatts of power, and it will probably still be slower than a human at basic cognition.

I think that depends on whether it’s using the same architecture as computers today (for example, digital microprocessors running software to simulate analogue neurons). If hardware is developed specifically for AI - for example, computing that is analogue in its fundamental function - things might be different.

There is no evidence that this need be true. In fact a good counterexample is the IBM Watson Jeopardy computer. Although in one sense Jeopardy has specific rules of play, the clues can often be cryptic and involve puns or other tricks of language. It’s probably one of the broadest and fuzziest areas of natural language analysis and problem-solving into which AI has ventured. Yet in order to be successful – and Watson was overwhelmingly successful – on average it had to be at least as fast as the human players while also being more accurate. It’s true that the Watson hardware was a massively parallel array of thousands of Power7 processors, but that was more than ten years ago.

The evidence that the DeepQA technology developed for Watson is broadly generalizable is that it can be trained for completely unrelated problem domains, such as clinical decision support or many different kinds of business analytics. Those systems run on production IBM servers.

That sounds remarkably like some of the prophecies about future computers that we were hearing around 75 years ago, particularly the parts about “megawatts of power” and taking up an entire building. It hearkens back to the days when, in 1943, Thomas J. Watson Sr., then president of IBM, stated that “I think there is a world market for maybe five computers.” This seemed reasonable considering what computers looked like and cost at the time, and the power they consumed.

Does it get bored if you don’t ask it enough questions fast enough? Does it go insane from sensory deprivation?

At the time, state of the art was vacuum tubes, and barely that. Electronics has come a very long way since then. However, we are now pretty much at the limit of how many transistors can fit in a square centimeter based on the laws of physics. I think we have a much better handle on what can be done than 75 years ago, when a Josephson junction was the crossroads where you could find Joseph’s family-run tavern.

Keep in mind, we are not talking about a program that can answer trivia questions, or, with a different database, answer medical questions. We aren’t talking about one that plays chess or Go. What we are talking about in this instance is one that is able not only to get bored, but to go insane from boredom because the real world is too slow to provide it with adequate stimulation.

I’d note that Watson Health is being dismantled as we speak, partially because something as complex as health diagnostics can be really hard to generalize.

We’re really getting off topic here, but a few short responses may be in order.

That sort of speculative fantasy really misses the point, which is that we have machines today (and in fact had them 10, 20, and even 30 years ago in some cases) that can do human-like cognitive tasks at least as fast and often better than a human. That they do this in fairly narrow problem domains is not an argument against being able to do it equally well in much broader ones, and Watson is an example of successfully tackling a problem area that was unusually broad and loosely defined, the kinds of things that computers were traditionally regarded as very bad at doing.

That’s true. But it’s also true – almost by definition – that one thing that futurists are very bad at is anticipating completely revolutionary technologies. It’s been true many times in the history of science and technology. Why would it not be true once again today? The evidence that it’s physically possible to create small computing devices with the power of human intelligence is … the human brain – which is in fact an extremely sub-optimal implementation for most of the cognitive tasks that we ask it to do.

Not really accurate or relevant. Watson Health generated a billion dollars in gross annual revenue for IBM, but was ultimately sold off because it wasn’t profitable. Many other ventures based on the DeepQA engine are still being generously funded, and they represent a broad diversified spectrum of knowledge automation that IBM is staking a lot on. The point I was making is how adaptable DeepQA is; the commercial failure of any one venture is irrelevant to that point.

Watson was basically a marketing ploy for IBM consulting services. Its capabilities were vastly overinflated, if not outright faked.

I guess the question is - do you allow the AI access to the internet, where it can amuse itself looking up anything and everything as the impulse strikes it, thus creating a randomly but widely educated personality? Or do you restrict it - in effect, sensory deprivation - knowing that with any degree of general knowledge it will understand there is an internet out there and that you are depriving it of access, thus creating a hostile AI?

Some comedian who grew up in the ’60s had a piece back around 2000: “It’s the year 2000 now, where’s my flying car? We were promised flying cars in the future, when I was growing up. Where are they?? Of course, things like a phone that you could carry around in your pocket? That’s just crazy stuff, it was never gonna happen.”