What is so great about the Turing test?

LOL. Man, I wish that’s what I was doing … but I guess watching the live web broadcast of Tarol Hunt drawing his Goblins webcomic is just as good, right?

It’s not trivial, but it’s worth knowing: for a given set of axioms in first-order predicate calculus, it’s not generally possible to show that a given statement does not follow from those axioms. This is probably equivalent to the undecidability of the halting problem, but it’s been too long since I studied that sort of thing to be sure.

I think I might understand what Frylock is saying, tell me if I’m right or not:
The person in the CR is simply following instructions regarding how to proceed given each input.

The computer, on the other hand, even though at its lowest level it is certainly executing instructions (unless it’s an FPGA or similar circuit), at a higher level would be taking the input, relating it to previous experiences, building a predictive model based on goals and possible responses and the feedback it anticipates from its response, and ultimately producing some output.

All of this is much closer to “understanding” than the human who refers to the almost infinitely sized input->response manual (which as a practical matter can’t actually be created, and really, IMHO, is a very weak argument against AI).

I don’t think the Turing Test was meant to determine if the machine was intelligent. It was meant to demonstrate that a simulation of intelligence could be performed through purely mechanical means. I’m drawing from the Can Machines Think? chapter of Martin Gardner’s Mathematical Circus.
Of course, it’s entirely possible I completely missed Gardner’s point.

Or equally possible, that Gardner doesn’t like acknowledging Turing’s point :-). His preface to Penrose’s “Shadows of the Mind” was a paean to the uniqueness of human insight. Not very scientific, really.

But this all revolves around “theory of mind”. You say it’s a perfect simulation of conscious thought, but the computer says “What are you talking about? Of course I can think.” And similarly, the computer may (being the sensational debater that it is) say “You, Bryan, are no more than an excellent simulation of a thinking computer”. To which you respond “Well I KNOW I think”.

How, indeed, do you know that I am a person, typing at 5 AM in Brooklyn, NY? Maybe my real name is Ultronic and I’m a tremendously sophisticated computer. And how do you know that your own close friends are anything but beautifully programmed robots?

As you can see, where I’m going with this is that ALL WE HAVE in determining whether people (or machines) think is how they speak and behave. We gather from the empirical evidence in front of us that they have minds. The Turing Test provides a method of allowing a computer to convince us it has a mind, as effectively as I am now convincing you that I have a mind. Some may even want to claim that the only mind whose existence you can sufficiently prove is your own, if you do not accept the legitimacy of the Turing Test.

And as for the Searle objection, a summary for those who don’t know what all the hullabaloo above was about: a man in a room is given a supremely complicated set of rules, which he follows in order to draw seemingly random lines whenever he is given other seemingly random lines (I’m shortening it).

As it turns out, the lines he is being given are actually Chinese characters, and the lines which he is drawing are also Chinese characters; moreover, they are appropriate responses. As long as he follows the instructions (so he is like a programmed computer…) he can “speak Chinese”, and yet no one would say that he understands Chinese! Similarly, a computer following instructions does not understand what it is doing.

Now, the flaw in the argument is that we do not claim that the man understands Chinese, any more than we claim that my Broca’s area (or some other language-related part of the brain) understands English and Hebrew. What we claim, and according to the Turing Test we claim it correctly, is that THE SYSTEM, that is, man + instructions, understands Chinese. The components are irrelevant. Similarly, no part of my brain is conscious or “understands”, and no part of a computer would be conscious-- but consciousness or understanding emerges from the whole, in people, in robots with AI, or in Chinese rooms with men in them.

You have the historical development backwards. Turing used the undecidability of the halting problem to show that the Entscheidungsproblem was undecidable, relying on Goedel’s technique of encoding a first-order formula as a natural number (Goedel numbering). The Entscheidungsproblem was the problem of deciding whether a statement in a formal language followed from a given set of axioms. The undecidability of the halting problem can be shown by a relatively straightforward diagonalization proof, and for Turing it was merely a step on the way to settling the decision problem (the fact that this step also paved the way for the computer revolution is merely a nice side-effect).
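
For anyone who wants that diagonalization spelled out, here is a minimal sketch in Python; the “halts” oracle is the assumption being refuted, not a real function:

    # Suppose, for contradiction, that halts(f) correctly reports whether
    # the zero-argument program f eventually halts.
    def halts(f):
        ...  # assumed oracle; no total, correct implementation can exist

    def diagonal():
        # Ask the oracle about ourselves, then do the opposite.
        if halts(diagonal):
            while True:  # oracle said "halts", so loop forever
                pass
        # oracle said "does not halt", so halt immediately

    # Whatever halts(diagonal) answers, it is wrong about diagonal,
    # so no such halts can exist.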

Church’s method of solving the problem was a lot more complex than Turing’s, which is one reason why Turing’s work tends to be better known than Church’s, even though Church beat him to it (you know you’ve got problems when the pre-eminent logician of the time, Goedel, has doubts about your work).

Not all logics are undecidable. Propositional logic is decidable. Predicate logic (and any logic that contains predicate logic, for example second-order logic) is undecidable, for reasons sketched in my previous post.

Ultrafilter’s original statement:

My attempt at a restatement:

It appears ultrafilter (and others) think that’s an accurate restatement. (Anyway, no one has said it’s not.) And it appears that by the last sentence, ultrafilter is referring to the fact that most statements in many logics are undecidable, in the sense that it’s impossible to prove that they’re not theorems.

If I’ve got all that right, here’s what I don’t understand. How is it known that for our conjecture C, there is an S and an A such that it’s undecidable whether S is a theorem of A, and such that proving C is equivalent to deciding whether S is a theorem of A? Why might it not turn out that the particular S whose decision is equivalent to proving C is itself decidable? After all, it’s not the case that it’s always decidable whether a given S is a theorem of a given A.

Another question. What about my own argument that there is no program P for which no algorithm exists that decides whether P will halt or not? My argument was as follows. The program in fact does either halt or not, though we may not know which. But that means we can construct one algorithm that says “yes” when it takes the program as input, and another one which says “no” when it takes that program as input. We then know that one of those algorithms gives the right answer concerning that program. Since we know one of them does this, we know that an algorithm exists which decides whether the program halts or not. We don’t know which algorithm it is, but we know it exists, and it’s the existence claim which is my claim. I thought it was a claim so utterly trivially true as to be hardly worth formulating. I formulated it only because I thought someone else was contradicting it.
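
To make the triviality explicit, here is the whole thing in Python (a sketch of my point, nothing more):

    # Two candidate deciders for one fixed program P.
    def always_yes(P):
        return True   # answers "P halts"

    def always_no(P):
        return False  # answers "P does not halt"

    # P in fact either halts or it doesn't; whichever is the case, the
    # matching one-liner above answers correctly for that particular P.
    # The claim is only that a correct algorithm exists, not that we can
    # tell which one it is.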

-FrL-

Several people on the thread have objected to Searle by saying that while the man in the CR doesn’t understand Chinese, the Chinese Room system (including the man, the books, the input-output procedure, and so on) does understand Chinese, and it is the system which is relevantly like the computer.

Are you guys not aware that Searle himself answers this reply in a few of the papers in which he puts forth the CR thought experiment? He calls your reply the “systems reply,” and here is how he answers it. Think of a slightly different scenario, in which the man is not inside the room reading rulebooks and so on, but instead, has memorized the rules. The basic idea of the response is to say that you can construct a scenario in which the man is the system. People give him slips of paper, he manipulates them according to the rules he’s memorized, and produces output apparently in perfect Chinese–without himself understanding Chinese.

So, Searle argues, yet again, we can see that for a system to execute a program (whatever program you like) is not sufficient for the system to have understanding. For in the scenario, the system executes the program, but does not understand.

You might have problems with the response, but it’s good to at least know that Searle has a response, and know how you would answer it.

It is good here to reiterate what Searle’s overarching point in these debates has been. His point is not that you could never make a computer think–he thinks we can make a computer think, at least someday. His point is that you won’t get a computer to think simply by giving it the right program. He thinks computation is not sufficient for understanding. You need something more than computation to get understanding.

-FrL-

I think you are misunderstanding what ultrafilter is saying. We don’t first run program P and then try to find S and A. Rather, P both knows about the axioms and takes as input S (the statement that we wish to show is a theorem), halting only when it finds a proof of S from A. Given a fixed set of axioms A and a statement S, run P on S and see whether P halts. Deciding whether P halts on arbitrary input is undecidable.
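
A rough sketch of such a program in Python, with the proof checker left as a stand-in (any formal system with mechanically checkable proofs would do):

    from itertools import count, product

    ALPHABET = "abc()->~"  # toy alphabet for candidate proofs; illustrative only

    def proof_search(axioms, statement, is_valid_proof):
        # is_valid_proof(axioms, statement, candidate) is an assumed decidable
        # proof checker. Enumerate every finite string as a candidate proof,
        # shortest first.
        for n in count(1):
            for chars in product(ALPHABET, repeat=n):
                if is_valid_proof(axioms, statement, "".join(chars)):
                    return True  # halts exactly when the statement is provable
        # if the statement is not a theorem of the axioms, this runs forever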

Well, that sounds like a perfectly good strategy. The only problem is there’s a hole in your argument. How do we construct the programs that, given P, either output “yes” or “no”?

I thought so too, but neither he nor others who have responded took issue with my attempted restatement, so I wasn’t sure whether I was misunderstanding or not.

Wait a minute! P takes C–the conjecture–not S–the statement–as its input.

Again, what Ultrafilter said was that the program deals with a particular conjecture. He didn’t say anything about the program finding a proof of S. He said that constructing such a program is equivalent to deciding whether S is a theorem of A, but that is not the same as saying the program itself finds a proof of S.

I know that deciding whether P halts on arbitrary input is undecidable. I have said in this thread already that (quoting myself):

Again, let me reiterate what I’ve said:

  1. There is no algorithm A which is such that, given any program at all, A can tell you whether that program will halt or not.

  2. Given any particular program P, there is some algorithm which will tell you whether P will halt or not.

I think you guys may be reading my second claim and misunderstanding me to be saying the negation of the first claim. I’m not.
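
In logical notation (my own formalization of the two claims, for clarity), the difference is just the order of the quantifiers:

    % Claim 1: no single algorithm works for every program.
    \neg \exists A \, \forall P : \; A \text{ correctly decides whether } P \text{ halts}
    % Claim 2: every program has some algorithm that works for it.
    \forall P \, \exists A : \; A \text{ correctly decides whether } P \text{ halts}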

:confused:

I had in mind something completely arbitrary. The algorithm literally just says “When you are given P, output ‘yes.’” As far as I can tell, the point would even be satisfied by an algorithm that simply says “output ‘yes’” for any program whatsoever. Like I said, I was thinking of something completely and utterly trivial, because I thought it was a trivial point.

Perhaps I am confused as to what it means for an algorithm to decide something. I thought “decide” here was just a kind of term of art meaning, basically, “give the right answer.” In other words, I thought it didn’t matter how the algorithm arrives at the answer, just that it gives the right answer.

-FrL-

Yes, the input to the program, a statement in FOL, is treated as a conjecture. The statement and the conjecture are the same thing.

Isn’t this the same as the halting problem, though? If every program can be analyzed by another to determine whether it halts, what is the problem in having a program first take another as input, look at it, look up the correct program for analyzing whether it halts or not, and then run that program on the input? Haven’t I just described a solution to the halting problem?

Yes, it means that. I’m still confused about what you are talking about, though. Your suggestion only examines programs, not inputs. How do we know, by looking at P, that P halts on 5, 10, 400, 600, etc.? Isn’t this what ftg was talking about with the Goldbach conjecture?

The statement and the conjecture are the same thing here, as Capt. Ridley’s Shooting Party concluded.

I guess it really depends on what the definition of “understanding” is.

Okay, I’ll have to re-read Ultrafilter’s post. Thanks (and thanks to Ultrafilter) for that clarification.

That halting-problem-solver would be a program infinite in length. (Since it can look up any program at all in its database, that database would be of infinite length.) I don’t know whether the halting problem is supposed to be unsolvable in principle even by an infinitely long program. (Is it?)

I thought a program plus an input was just equivalent to some other program which takes null input.
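
That equivalence is easy to sketch (it is, informally, the s-m-n theorem); in Python:

    # Specializing a program on a fixed input yields an equivalent program
    # that takes no input at all.
    def specialize(program, fixed_input):
        def no_input_program():
            return program(fixed_input)
        return no_input_program

    # Asking "does program(5) halt?" is the same as asking
    # "does specialize(program, 5)() halt?"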

-FrL-

So on your understanding of understanding, does the man in the internalized version of the CR thought experiment understand Chinese?

-FrL-

It’s probably useful to ask whether it’s true that the man in the internalized CR is the same thing as “the system”, i.e., the system relevant to the question “Is there a system here which understands Chinese?”

Suitably formalizing “infinitely long programs”, you could make an infinitely long program which correctly solved the Halting Problem for inputs which are finitely long programs. It would be easy; as just shown, you just hardwire a database of solutions into your program.
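
As a sketch (the lookup table itself is the infinite part; the names here are hypothetical):

    # An infinite table from program text to halting behavior would "solve"
    # the Halting Problem for finite programs. No finite program can contain
    # such a table, of course.
    HALTING_TABLE = {
        "pass": True,               # halts immediately
        "while True: pass": False,  # never halts
        # ... one entry for each of the infinitely many finite programs ...
    }

    def halts(program_source):
        return HALTING_TABLE[program_source]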

You could not make an infinitely long program which correctly solved the Halting Problem for all inputs including infinitely long programs such as itself.

Investigating this in light of the approach via a hardwired database of solutions, we see that the proof of the undecidability of the Halting Problem ends up being just the same as the proof of Cantor’s Theorem, that (classically) there are more than K bitstrings of length K, for any cardinal K, including infinite ones.
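
Spelled out (my rendering of the standard diagonal argument):

    % Given any family (s_i)_{i \in K} of bitstrings s_i : K \to \{0,1\},
    % define the diagonal flip
    d(i) = 1 - s_i(i)
    % Then d differs from each s_i at position i, so the family omits at
    % least one bitstring: there are more than K bitstrings of length K.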

Of course, when we usually talk about the Halting Problem, we don’t mean to talk about any of this in the infinite context; we just restrict our discussion to finite strings/programs/etc. But it’s all the same.

This all seems correct to me, but I think people are talking past each other.

ultrafilter’s point seems (suitably rephrased) to be that, from any particular consistent system of axioms A (in a logical system capable of reasoning about computer programs in the usual ways), we can construct a program P(A) such that the axioms A neither prove nor disprove the statement “P(A) halts”. This point of ultrafilter’s is true. [The result is simply (Rosser’s sharpening of) Goedel’s Incompleteness Theorem; we can let P(A) be a program which enumerates proofs from A, halting only if it finds a proof of “P(A) does not halt” without having previously found a proof of “P(A) does halt”. Some mechanical (if potentially confusing) reasoning will show that A cannot consistently prove or disprove “P(A) does halt” in this case, and thus P(A) does what is claimed of it.]
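
A sketch of that construction in Python, with proofs_from_A and the self-referential statements standing in for the usual fixed-point machinery (all names here are assumptions):

    # P(A): search all proofs from the axioms A; halt exactly when a proof
    # of "P does not halt" shows up before any proof of "P does halt".
    def P():
        for theorem in proofs_from_A():  # assumed proof enumerator
            if theorem == "P does not halt":
                return            # halt: found this proof first
            if theorem == "P does halt":
                while True: pass  # loop forever from this point on
        # if A proves neither statement, the search itself never halts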

Frylock’s point seems (suitably rephrased) to be that, for any particular program P, there is some consistent system of axioms A(P) (in a logical system capable of blah de blah) such that the axioms A(P) either prove or disprove the statement “P halts”. This point of Frylock’s is also (classically) true, and for the exact trivial reason that he gives: we can take A(P) to be either {“P halts”} or {“P does not halt”}, according as to the behavior of P. Admittedly trivial, but it does what is claimed of it.

Being a bit more precise, for any fixed consistent “formal” metalogic M, M does not prove all true instances of “The statement S does not follow from the axioms A”.

Why? Well, we take “formal” logical systems to be those with algorithmically-checkable proofs. Thus, it comes down to the fact that there is no algorithm which, given as input a system of axioms A and statement S, correctly states in all cases whether S follows from A.

But why is this true? Well, the question “Does program P halt?” is equivalent to the question “Does the statement ‘P halts’ follow from the basic axioms [blah de blah] describing the behavior of Turing machines/Java programs/whatever?”. And, conversely, the question “Does statement S follow from the axioms A?” is equivalent to the question “Does the program which enumerates proofs from A searching for a proof of S eventually find one and thus halt?”. So, yes, it is simply equivalent to the undecidability of the halting problem.
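
The two directions of the equivalence, summarized in my own notation:

    % Halting reduces to provability:
    P \text{ halts} \iff T_{\text{machines}} \vdash \text{``} P \text{ halts''}
    % Provability reduces to halting:
    A \vdash S \iff \text{the proof-searching program for } (A, S) \text{ halts}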

(Which is in turn simply equivalent to Cantor’s Theorem, just interpreted in the category of computable functions rather than of arbitrary functions, as I love to harp on about; Lawvere’s “Diagonal Arguments and Cartesian Closed Categories”, etc., …).