Indeed. Ray Kurzweil very effectively refutes Searle’s points in their back-and-forth in Are We Spiritual Machines?, which is available on Kurzweil’s website at www.kurzweilai.com.
Searle basically argues that if none of the individual components of a system have anything approaching “consciousness,” then neither does the system as a whole. One objection to this is that the same logic is apparently refuted by the human brain: none of its components (i.e., individual neurons) exhibits anything that looks like consciousness, but the brain as a whole does.
It’s called the “Halting Problem” in the literature.
Nope. Unsolvable is unsolvable is unsolvable. All known general-purpose computing systems are Turing equivalent. None of them (or lesser beasts) can solve the Halting Problem. Period. No one in 70+ years has come up with a more powerful computing system, and no one is likely to. And if the Martians did show us a whole new way of computing, the Martians’ computers’ Halting Problem would still be unsolvable. From the Halting Problem, one can then prove that hundreds of other problems are not solvable. (My fave is that one can never build a perfect computer virus detector.)
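For anyone who hasn’t seen why, here’s a rough sketch of the classic argument in Python-ish form. The names halts and paradox are just illustrative; the whole point is that no real halts() can ever be written:

    # Suppose, for the sake of contradiction, that a perfect halting decider existed.
    def halts(program_source: str, program_input: str) -> bool:
        """Hypothetical: True if the program halts on the input, False otherwise.
        Assumed to exist only so we can derive a contradiction."""
        raise NotImplementedError("no such decider can actually be built")

    def paradox(program_source: str) -> None:
        # Do the opposite of whatever the decider predicts about running a
        # program on its own source code.
        if halts(program_source, program_source):
            while True:      # predicted to halt, so loop forever
                pass
        # predicted to loop forever, so halt immediately

    # Now ask whether paradox halts when fed its own source. Whatever halts()
    # answers, it is wrong -- so the assumed decider cannot exist.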
I recently read a book (or two) about Turing’s life. He believed quite strongly that computers would someday be intelligent and mused about the consequences of it. This caused a backlash from some more conservative elements of the British establishment. (We’re intelligent because God made us and all that.) But it was basically more a philosophical debate than a scientific one.
Various fields in AI have their own Holy Grail for marking a significant breakthrough in advancing the field. In Computer Vision, it’s “recognizing one’s grandmother.” I.e., given a picture with several faces in it, pick out one familiar face. People can easily pick out their grandmother, but computers cannot yet do the equivalent. Although how a computer got a grandmother is another question. (Note that the facial recognition software shown in TV shows and sold to gullible police agencies doesn’t work anywhere near as well as claimed.)
I don’t see how things like emotions should play a noteworthy role in defining AI.
Could it be at all instrumental in solving the Turing test? For instance, could you ask an invisible respondent how “he” would feel if you pulled a baby’s arm off to see what would happen? Is amusement connected at all to emotion? Could you make a witty remark and try to determine if the respondent detected the wit? Could a series of witty rejoinders be related to emotion, in the sense that we all feel something, some emotion, I’d propose, when we engage in witty banter - elation, excitement, joy, etc.?
I don’t think that the existence of unsolvable problems has anything to do with whether a machine intelligence can exist. The Halting Problem has interesting applications in a lot of computation theory, but I don’t think it has much of an influence on practical AI. After all, it’s generally accepted that humans can think, and we can’t solve the Halting Problem either.
The Turing Test doesn’t advance the notion that the machine has intelligence, rather it demonstrates that intelligence is not necessary to carry on a credible conversation. Similarly, a chess-playing computer doesn’t prove the machine understands chess - it demonstrates that playing chess does not require understanding. As I understand it, the Turing Test really asks about how many of our activities can be mechanized and what degree of technical sophistication is needed.
Right, there’s no program P which is such that, given any program at all, P can tell you whether that program will stop. But what I was saying is (or was meant to amount to saying) that given any finite set of programs, a program can be written that tells whether those programs stop or not. (This is trivially true of course.) I thought that the person I was responding to was saying that there is some program Q such that no program could determine whether Q ever stops or not. That was what I was disagreeing with.
Depending on exactly what you mean by emotion, one could argue that you can’t have intelligence at all without emotion. For intelligence is goal oriented, goals imply desires, and desires (again, depending on what you mean by emotion) are emotions.
A kind of artificial intelligence could be built which has no goals of its own but only those given it by those interacting with it. But I take it AI researchers would like something more than that.
That doesn’t even begin to resemble Searle’s argument. :dubious: You said there’s a back and forth between Searle and Kurzweil. Does Searle himself accept his summary of the Chinese Room argument?
Searle argues that since the man in the Chinese Room carries out all of the computational tasks that are supposed to be able to make a computer understand Chinese, and yet does not understand Chinese, it follows that running a program–any program–is not sufficient for understanding. You need something more, or something else, other than running the correct program.
There are problems with Searle’s argument.* But the argument you mentioned is not Searle’s argument.
-Kris
*I think the argument is a lot better than most people give it credit for, mostly due to misunderstandings he himself has not done enough to avoid. My own view is that people should not accept that the person in the room and a computer running a program are relevantly analogous. In other words, just because a person could run the program without understanding, this doesn’t mean a computer couldn’t run the program without understanding. A person “running a program” is doing something very different than a computer running a program. For example, a person is following instructions when he “runs a program” in the Chinese Room, while on the other hand, a computer is not following instructions at all. Programs are not sets of instructions, they are design specifications (at least when the thing running them is a computer and not a person).
This is fundamentally incorrect. You think you are restating the Halting Problem, but in reality you are not. Take a simple program that searches for counterexamples to Goldbach’s Conjecture (every even number > 2 is the sum of two primes). Simple program. No one knows whether it will halt or not. Ditto any other finitely refutable open problem in math. And those are just the easy examples. (Next stop: Gödel’s Theorems.)
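To make that concrete, here is a minimal sketch of such a program (brute force; efficiency is beside the point):

    # Search for a counterexample to Goldbach's Conjecture; halt only if one exists.
    def is_prime(n: int) -> bool:
        # Naive primality test; speed is irrelevant to the argument.
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    def is_sum_of_two_primes(n: int) -> bool:
        return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

    n = 4
    while True:
        if not is_sum_of_two_primes(n):
            print("Counterexample:", n)
            break            # this line is reached iff the conjecture is false
        n += 2

    # Whether this program halts is exactly the question of whether Goldbach's
    # Conjecture is false -- and nobody knows the answer.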
And I’m just sticking to programs with no input. If you allow inputs, then the Universal TM is the only program you need to consider. Ever.
I suspect you have conflated some other issue with the Halting Problem.
Is the Wikipedia article mistaken? Its understanding matches my own, but it’s possible both I and the article are mistaken.
According to both myself and the article, there is no single program (or algorithm in the article’s wording) which can determine, for all programs, whether that program will come to a stop or not. If there is such a thing as “The Halting Problem” (as opposed to simply a set of individual halting problems for individual programs) I don’t know what it is supposed to be other than just that–the problem of finding a general algorithm that solves the halting problem for all programs.
The second issue you discussed is this. Can there be a program whose halting problem cannot be solved by any algorithm? You said “no one knows” whether a program searching for counterexamples to the Goldbach Conjecture will ever come to a stop or not. But I’m not sure what your point was in pointing this out. Just because “no one knows” whether it will come to a stop or not does not mean there can be no algorithm for determining whether the program will come to a stop or not.
I am pretty sure that for any program P, there is some algorithm which will determine whether P will ever come to a stop or not. I think this is just trivially true, since the algorithm needs only be one that returns “yes” when given P as input, if P will come to a stop at some point, or returns “no” when given P as input, if P will never come to a stop. That we don’t know what that algorithm is for many such programs–such as your example concerning Goldbach’s conjecture–doesn’t imply that no such algorithm exists. The algorithm exists. For the GC example, the algorithm is either the one that yields “yes” when given the GC program as input, or else it’s the one that yields “no” when given the GC program as input. We don’t know which algorithm it is, but it’s one of those algorithms, and since it’s one of those algorithms, the algorithm exists. Similar reasoning applies for whatever program you’d like to discuss.
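Put in code terms (purely my own illustration), the point is that for any one fixed program, a correct decider is one of two trivial functions; we just may not know which:

    # For one fixed program -- say the Goldbach searcher above -- a correct
    # decider is one of these two trivial functions. We don't know which one,
    # but one of them is correct, so *a* correct algorithm exists.
    def decider_a(program_source: str) -> str:
        return "halts"

    def decider_b(program_source: str) -> str:
        return "never halts"

    # What the Halting Problem rules out is a single algorithm that gives the
    # right answer for *every* program, not a (possibly unknown) correct
    # answer for each program taken one at a time.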
Anyway, I suspect, for my own part, that you have conflated the Question Whether Programs Will Ever Come To A Stop with some other issue.
But I await some actual mathematician (or anyway, mathematically more knowledgeable person) to arbitrate between our respective suspicions.
That’s wrong. The Problem of Discovering Whether Programs Will Come To A Stop Or Not provides an example. There is no generalized algorithm for solving that Problem, and that means no computer could solve that Problem.
I didn’t really mean to say otherwise, but as I can now see, whatever I meant not to say, I did in fact say it. Subsequent posts in the thread have made it clear what I meant to say.
No, Frylock is right. Your example isn’t equivalent to the Halting Problem at all. The existence of a single open problem in no way implies that the outcome of such a program is unknowable. We don’t currently know a guaranteed-to-halt algorithm that will show that, but that doesn’t mean that none exists.
ftg was responding to Frylock’s assertion that there is no program Q whose termination/non-termination can never be decided. In fact, it’s not that hard to construct such a program: simply have it search for a proof to some conjecture and stop once it has something. Deciding whether such a program halts is equivalent to deciding whether some statement is a theorem of a set of axioms, and that’s known to be undecidable.
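Schematically, that construction looks something like this sketch (the proof checker is just a stub standing in for a real one; checking a completed proof, unlike finding one, is mechanical):

    # Enumerate candidate proofs and halt as soon as one derives the target
    # statement from the axioms. The checker below is a stand-in; verifying a
    # finished proof is a mechanical, decidable task.
    from itertools import count

    def is_valid_proof(candidate: int, axioms: tuple, statement: str) -> bool:
        """Stub: does candidate number `candidate` encode a valid derivation
        of `statement` from `axioms`?"""
        raise NotImplementedError("stand-in for a real proof checker")

    def search_for_proof(axioms: tuple, statement: str) -> int:
        for candidate in count():                 # enumerate all candidates
            if is_valid_proof(candidate, axioms, statement):
                return candidate                  # halts iff a proof exists

    # Deciding whether search_for_proof halts for a given statement is the same
    # as deciding whether that statement is a theorem of the axioms.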
Wrong. Computers run machine code, which is basically a list of instructions for the processor to execute. In fact, one common measure of processor performance is instructions per second (such as MIPS, Million Instructions Per Second).
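As a toy illustration of “a list of instructions” (an invented three-instruction machine, not any real instruction set):

    # A made-up three-instruction machine, just to show "execute the next
    # instruction in the list" -- not any real ISA.
    program = [
        ("LOAD", 5),     # put 5 in the accumulator
        ("ADD", 3),      # add 3 to the accumulator
        ("PRINT", None), # output the accumulator
    ]

    accumulator = 0
    for opcode, operand in program:
        if opcode == "LOAD":
            accumulator = operand
        elif opcode == "ADD":
            accumulator += operand
        elif opcode == "PRINT":
            print(accumulator)   # prints 8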
That’s not Kurzweil’s summary, that’s just my quick-and-dirty summary of one aspect of one variety of Searle’s argument (there are actually several versions of the Chinese Room). He argues that it’s possible to simulate consciousness (or “understanding Chinese”) without actually being conscious (or understanding Chinese), because no part of the Chinese Room system understands Chinese even though the system as a whole seems to be understanding Chinese. But the same thing can be applied to the human brain: I understand English (or Chinese, or whatever), but the individual components making up my brain don’t. Just because the basic units of the system don’t exhibit any conscious understanding doesn’t mean the system as a whole can’t.
I think the argument is much worse than people give it credit for, simply because Searle’s argument boils down to this:
Imagine a man in a room, receiving squiggles on paper through a slot in the door, looking those squiggles up in a rule book, writing down other squiggles that the book directs him to write in response, and passing them back out. Unbeknownst to the man, the “squiggles” are Chinese, and he is having a conversation in Chinese that is indistinguishable from a real human Chinese speaker. Now, since the man doesn’t have any understanding of Chinese, and a book can’t be said to “understand” anything, clearly it’s possible to simulate conscious understanding of something without actually possessing it.
But the whole notion that you could have a “rule book” that would allow you to convincingly answer questions in Chinese is ridiculous – an actual “rule book” that allowed you to answer questions in a manner indistinguishable from a conscious human Chinese speaker would have to be so complex that it would essentially have to understand Chinese itself.
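The naive picture of that “rule book” is just a giant lookup table, something like this toy sketch (the entries are invented, and the gulf between this and open-ended conversation is exactly the point):

    # The naive "rule book": a lookup table from input squiggles to output
    # squiggles. The entries are invented; a table that could sustain an
    # open-ended conversation would be astronomically large.
    rule_book = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
        "你叫什么名字？": "我叫小明。",    # "What's your name?" -> "My name is Xiao Ming."
    }

    def chinese_room(squiggles: str) -> str:
        # The man in the room just looks the squiggles up; he understands none of it.
        return rule_book.get(squiggles, "对不起，我不明白。")  # "Sorry, I don't understand."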
Anyway, the book Are We Spiritual Machines? is available online here. Most of the book is Kurzweil arguing with intelligent design proponents, but the chapters by Searle and Ray, and Kurzweil’s responses, are very interesting.
I would have to be exceptionally clueless not to know that the word “instruction” is used as jargon in Comp Sci in that and many other ways.
I won’t insist that you shouldn’t use the word to describe what the computer is doing. My point was that what a computer does when it “follows” the “instructions” in a computer program is different (in an important way) than what a human in a Chinese Room scenario would be doing when she follows instructions. A computer doesn’t interpret its instructions and deliberate about the best way to carry them out and so on. Rather, the computer’s “instructions” are (as I said) more like design specifications. The computer is a platform for the building of (virtual) machines, and program instructions are something more akin to lego blocks, used to build structures, than they are like the kinds of instructions I am following when I fulfill a series of commands written down on a list for me.
There are many things I can do–probably the majority of the things I can do–which don’t require that I follow some formulated imperative which I have interpreted and understood and deliberated about and so on. For example, when I visually perceive, what I am doing is not carrying out an instruction, but rather, is simply to proceed according to my design. One could formulate an algorithm for a simulation of my perceptual processes, and one could express that algorithm as a series of instructions. But what I’m doing when I carry out that algorithm is not following instructions in any interesting sense. I’m just doing that which my structure determines that I do.
I’m saying that a computer running the “Chinese Room” program is not like the person in the Chinese Room, because the person in the room is following instructions in an important sense in which the computer is not. To the computer, following the program is like what perception is to me. It’s not a set of instructions to carry out–it’s just what my (or the computer’s) design determines that I do.
And I think it’s because of this fact–that the computer isn’t following instructions in the same way the Chinese Room person is–that the computer can understand Chinese by carrying out that program, even though the human in the Room cannot…
I’ve typed this up while watching, out of the corner of my other eye, Waterlilies, a French coming-of-age film about young lesbians in a swimming education camp, or something like that. So if the above was scattered, my apologies!
I’m not sure I’m following you; let me try to restate what you just said so you can tell me if I’m reading you right.
You’re saying:
For a conjecture C, construct a program P which does the following. P searches for a proof of C, and stops once it has found such a proof. There is some statement S and set of axioms A which is as follows. Deciding whether P will halt is equivalent to deciding whether S is a theorem of A. Whether S is a theorem of A is known to be undecidable.
If that’s an accurate restatement, here’s what I don’t understand. How is it known to be undecidable “whether S is a theorem of A”? I think you were implying in your post that this is a somewhat trivial point, but I’m not following it unfortunately.
I really think you have that exactly backwards. When a computer follows instructions, it is well and truly following instructions. Quite literally. When a human “follows instructions” he is, as you note, interpreting them (often incorrectly). In that case, the instructions are really more like guidelines.
I can’t make heads or tails of what you’re trying to say. Computers are given a list of instructions and follow them exactly. What on earth are you talking about with virtual machines and lego blocks? The last phrase, where you describe how you “fulfill a series of commands written down on a list” is exactly what a computer does. It doesn’t seem you know what you’re talking about here.
Yeah, try again. What you wrote doesn’t make a lick of sense.
Like I said, I don’t really want the focus to be on what counts as an “instruction” and what doesn’t. I just want to make the distinction between what a computer does and what a person “running” the same “program” is doing in a CR-like scenario. Looks like you agree with the distinction, but think I’ve got the vocabulary reversed. That’s fine with me.
From your post I do have some evidence that for ease of communication it is possible I might need to revise the language I use. But it’s hard for me to see how. I don’t know how to plausibly say the human is not following instructions.
I guess what you’re saying is that the human is trying to follow instructions but only the computer really does so, or something?