Reverse Turing Test and AI research?

I apologise if I’m not using terminology correctly, I’m very much just an interested observer regarding these things.

Although I haven’t seen the movie version, David Brin’s story ‘The Postman’ opens with the main character having a short discussion with a recently invented Artificial Intelligence (which has been opened to the general public for this purpose). My first question is: how would you reassure yourself that you really are talking to an AI and not just a hidden human answering through the monitor? One way, I figure, would be to ask it a very complex calculation and check the answer on a calculator; if it answers instantly and correctly, that’s a pretty good indication it’s not another human you’re talking to.
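To make that concrete, here’s a rough sketch of what I mean (Python; the `ask()` function is just a hypothetical stand-in for however you’d actually reach the thing being tested):

```python
import time

# Toy sketch of the "complex calculation" test. Assumes a hypothetical
# ask(question) function that sends text to the entity under test and
# returns its reply as a string.

def calculation_challenge(ask, a=982_451_653, b=57_885_161):
    """Ask for a product too big to do mentally and time the reply."""
    expected = a * b
    start = time.monotonic()
    reply = ask(f"What is {a} * {b}?")
    elapsed = time.monotonic() - start
    correct = str(expected) in reply.replace(",", "")
    # An instant, correct answer suggests a machine; a slow or wrong
    # answer proves nothing, since a machine could be built to stall
    # or to make deliberate mistakes.
    return correct, elapsed
```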

Secondly, I was wondering about the effects of popular depictions of AI on research in this area; it’s pretty much entirely depicted as negative, e.g. The Terminator. A recent news story I heard on the radio concerned Facebook or Google shutting down two of their computers because they were communicating with each other over the network and the systems analysts and controllers didn’t understand what they were ‘saying’ to each other. Could research be hindered by the fear that AI will turn out to be dangerous? Should or could it be restricted?

Then you have the problem encountered by, again I believe, Google’s attempt to release a public-facing learning AI program, which had to be shut down after it started making racist comments. What happens if an intelligent AI is created and, when asked, it makes non-politically-correct suggestions or proposals we would find unacceptable, even if it seems to be significantly smarter than us and unbiased?

Finally, if, like the character mentioned above, you were able to have a short private conversation with a super-intelligent AI, what would you ask it? :slight_smile:

As long as the human has access to a computer, something you may not be able to determine, there is no way for you to settle the question. The same applies to a regular Turing test: you have no way of knowing whether the machine has access to a human to help it answer questions.

As for what to ask…advice! “What should I do next?”

Even more circular, “What should I be asking you?”

“Ask me questions that will help you determine if I am really a machine.”

The racist twitterbot was Microsoft’s, not Google’s.

And back to the thread topic, you’d need to know the AI’s area of expertise, and would need at least some background in that area yourself.

A lot of the stories about the Facebook AI experiment were overblown and made it sound sinister, but it really wasn’t. Here’s a story about it: the bots basically developed a shorthand because the programmers didn’t tell them they had to communicate in standard English. It’s just like how a bunch of engineers could develop their own shorthand to talk about their projects; but if salespeople or managers needed to listen in and understand, the engineers would need to speak in more understandable language and not just in their own jargon. A toy illustration of that kind of shorthand follows.
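For a feel of what the “gibberish” looked like, here’s a toy sketch (mine, not Facebook’s actual code): the bots reportedly repeated words to stand for quantities, which reads as nonsense to us but is perfectly decodable between two agents sharing the same convention.

```python
# Toy illustration of the kind of shorthand the Facebook bots reportedly
# drifted into: nothing sinister, just quantities encoded by repetition,
# because no rule rewarded staying in grammatical English.

def encode_offer(offer):
    """Encode {'ball': 3, 'hat': 1} as 'ball ball ball hat'."""
    return " ".join(item for item, n in offer.items() for _ in range(n))

def decode_offer(message):
    """Recover the quantities by counting repeated words."""
    offer = {}
    for word in message.split():
        offer[word] = offer.get(word, 0) + 1
    return offer

msg = encode_offer({"ball": 3, "hat": 1})
print(msg)                # ball ball ball hat
print(decode_offer(msg))  # {'ball': 3, 'hat': 1}
```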

I’m not even sure how to answer that question. If we set up an AI and it comes out and says “it turns out that white people really are the superior race”, it won’t be because its superior intelligence led it to that conclusion and we have to accept the answer; it will be because its knowledge is based on what we feed it, and a lot of the data and culture we have is racist and sexist, in big and small ways. This article goes over this.
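Here’s a toy sketch of what “its knowledge is based on what we feed it” means mechanically (the tiny corpus is deliberately skewed; real systems do the same thing at vastly larger scale with web text):

```python
from collections import Counter
from itertools import combinations

# Count which pronoun co-occurs with each job title in a tiny, skewed
# corpus. The "model" faithfully reflects the bias in its training
# data; it hasn't reasoned its way to anything.

corpus = [
    "he is a doctor", "he is a doctor", "he is an engineer",
    "she is a nurse", "she is a nurse", "she is a doctor",
]

pairs = Counter()
for sentence in corpus:
    for a, b in combinations(sentence.split(), 2):
        if a in ("he", "she") and b in ("doctor", "nurse", "engineer"):
            pairs[(a, b)] += 1

for job in ("doctor", "nurse", "engineer"):
    print(job, {p: c for (p, j), c in pairs.items() if j == job})
    # doctor {'he': 2, 'she': 1} / nurse {'she': 2} / engineer {'he': 1}
```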

Maybe, but if it doesn’t answer instantly and/or correctly, that doesn’t mean it’s a human.

Depending on how the thing is constructed, an AI could hypothetically be genuinely fallible in the same sorts of ways as a human - not by design, but just because learning is hard.

Bumped.

Alan Turing will be on the British fifty-pound note: “Alan Turing will be the face of the £50 note” | CNN Business