Any reason we couldn't build a machine that "understands" everything?

I understand what Searle claimed, and if it were true, then Searle proved that he doesn’t understand anything. Also proven by my contention that it is false.

This is the “conflation of systems” I was talking about. The system of the human being running the program understands Chinese. It doesn’t matter that you could engage him in casual conversation and he would not be able to tell you anything about Chinese – because then he is not running the program. Searle is only playing with ambiguity and misdirection.

The rest seems like question-begging on a monumental scale. It doesn’t understand by executing the program, because executing a program is not sufficient for understanding? Whether executing a program is sufficient for understanding is the point under debate.

This discards the premise that the machine exhibits the external behavior of understanding. We know that something enjoys chocolate if and only if it exhibits the behaviors associated with enjoying chocolate – notably including a proclivity for future encounters with same. We are therefore entitled to make predictions about its behavior with regard to future chocolate.

(Future Chocolate – band name!)

And where do you get the idea that “it understands” implies “we are justified to make predictions about its future behavior,” unless you define understanding behaviorally? Personally, I would say that, when we say “he understands,” if we mean anything other than that he responds in the correct way, we are talking nonsense. We are not merely giving a wrong answer: we are answering a question that has no possible meaning. We are speculating about the color of invisible unicorns.

Could you explain, then, how the argument you have just outlined is not equally applicable to demonstrate that we are not justified in assuming that human beings understand? We have no reason, aside from past behavior, to think that a person who enjoys a chocolate bar today will pursue another one in the future. If we stipulate that human beings will consistently behave like understanders because they understand, we are begging the question.

But if we can produce the behavior of a human with a different inner structure, that is a concrete demonstration that inner structure is not relevant to external behavior. You suggest this yourself when you say “you can get any behavior you want out of the right program.”

So, why should a thing’s internal structure be considered a part of understanding, if the external behavior either way is the same? And, even if we admit the importance of internal structure, what reason do we have to say that our internal structure “understands” while some other structure does not? If you follow this through, eventually the only thing of any consequence you are saying is that you define “to understand” so that it means “to be human.”

Missed the edit deadline.

I understand what Searle claimed, and if it were true, then Searle proved that he doesn’t understand anything. Also proven that he doesn’t understand this subject by my contention that it is false. I don’t like Searle’s use of philosophical language to address real matters, which results in his self-defining conclusion. The Chinese Room understands Chinese in the same manner that any human does, through a process. The exact process or processing machine is irrelevant.

Oh, I don’t know, but you might find a few answers to that question here, or here, or even here. I am not saying I agree with any of those (I don’t, I am a physicalist), but they do give lots of reasons to believe that physicalism is false, many of which have convinced many very smart people, and some of which (in the first two instances, anyway) I really do not know how to decisively rebut.

It is, but it is not my position that no machine could ever enjoy chocolate. My position is that we have never built a machine capable of enjoying chocolate (I am pretty confident that I am right about that), and that, at present, we have essentially no idea about how we ever could build one. The interesting and scientifically fruitful problem in this vicinity would be to try to figure out how we could build one, but the first essential step down that long road is to face up to the difficulties. It does no good to say “Oh, it must be possible, otherwise there’d be ghosts,” and forget about it.

At present, there is a lot more evidence around to suggest that supernatural forces exist than there is to suggest that it is possible to build a machine that can enjoy chocolate. I share your skepticism about the supernatural and I believe that a chocolate-enjoying machine ought, somehow, to be possible, but let’s not pretend we are certain there are no supernatural forces, or that such a machine can definitely be built. A fortiori, let’s not deceive ourselves that we can prove the one on the basis of the other. These are things we hope to discover, not things that we know. We are not even entitled to say that the balance of the evidence points towards them. In the first case, the evidence, such as it is (not good, I admit), points the other way. In the second, we have no evidence at all.

A “theory of understanding” and a “definition of understanding” are the same thing.

I meant that all understanding is compression, not that all compression is understanding.

When we say we understand a thing, we mean that we possess a compact mental model that predicts important features of the thing without mimicking the thing.
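
As a rough illustration of the compression claim (a minimal sketch, with made-up data and a deliberately trivial “thing” being understood), consider a model that replaces a thousand observations with two fitted numbers and still predicts cases it has never seen:

```python
# A minimal sketch of "understanding as compression": instead of storing every
# observation, keep a compact model that predicts the important features.
# The data and the underlying rule (y = 3x + 2 plus noise) are invented.
import random

data = [(x, 3 * x + 2 + random.uniform(-1, 1)) for x in range(1000)]

# "Understanding" the data: a two-number model found by least squares.
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
         / sum((x - mean_x) ** 2 for x, _ in data))
intercept = mean_y - slope * mean_x

# Two numbers instead of a thousand pairs, yet it predicts inputs it has
# never seen -- without mimicking (storing) the original observations.
print(f"model: y = {slope:.2f}x + {intercept:.2f}")
print(f"prediction for x = 5000: {slope * 5000 + intercept:.1f}")
```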

I can tell you exactly how to build that. Examine and analyze the process of what my brain does when I enjoy chocolate. Write a program which symbolically represents the parts of that process. Have that program read data corresponding to the input from the other parts of my body when I eat chocolate (not even necessary if we are talking about the concept of enjoying chocolate). The program, or the computer, or whatever you want to call it, has enjoyed chocolate. I’m not being obtuse, but you are making a lot of assumptions if you think that something being difficult because of the volume of work involved has any bearing on its feasibility.
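
To make the shape of that recipe concrete (and only its shape: every stage, weight, and input below is invented, and nothing here is claimed to actually enjoy anything), a symbolic stand-in for “parts of that process” fed stand-in bodily data might look like:

```python
# A toy of the structure described above, not a claim of actual enjoyment.
# The stages and numbers are hypothetical placeholders for whatever an
# analysis of the real process would produce.
def enjoy_chocolate(sensory_input):
    """Run symbolic stand-ins for the stages of the analyzed process."""
    taste = {"sweetness": sensory_input["sugar"], "bitterness": sensory_input["cacao"]}
    reward = 2.0 * taste["sweetness"] + 0.5 * taste["bitterness"]  # arbitrary weights
    memory = {"event": "ate chocolate", "reward": reward}
    return reward, memory

# Data standing in for input from the rest of the body while eating chocolate.
reward, memory = enjoy_chocolate({"sugar": 0.8, "cacao": 0.6})
print(reward, memory)
```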

I am not certain that no supernatural force exists. Nor could any rational person be certain that anything does not exist. But there is not a shred of credible evidence anywhere that supernatural forces do exist.

Ah, the “I’m right, so anyone who says otherwise is wrong” argument. There is no answer to that.

Hey everybody, we all need to stop using words and arguments that ed doesn’t understand. If he doesn’t understand it, it’s wrong. OK?

To address myself, for a moment, to those who can follow a philosophical argument, I think that what both sides of the Chinese Room debate often miss is that to construct a Chinese Room as Searle envisages it (i.e. a computer with entirely symbolic inputs and outputs that can pass a rigorous Turing test, in Chinese or any other language) may simply be impossible. If I (and the many cognitive scientists who think likewise) am correct to think that understanding depends on the capacity to have rich interaction with the environment, then no such device (which is envisaged as having an extremely impoverished interaction with its environment) can understand anything, and will not even be able to fake an understanding very convincingly. Under those circumstances, nothing much follows from the fact that the man inside does not understand what is going on either. The paradoxes (or irreconcilable intuitions) that the argument seems to lead us to arise from the fact that we have accepted the incoherent premise that such a system could be built.

Irrelevant to what? It is very relevant to scientists who are trying to understand how we understand language.

Here is an example of why I don’t understand understanding.

It is clear that if you understand something you can predict its behavior under certain stimuli. That goes for machines and for people.

Say a physicist thinks he understands something, say star formation. He understands it in two ways. He can write down the equations for star formation, but he also thinks he understands the reaction of the system to stimuli, for instance a nearby supernova.

He builds a computer model using his equations, and lets it run - and discovers that his understanding of how the system reacts is all wrong. In this case, does the program understand star formation better than he does, since it predicts the results better? Does his getting the equations right count as understanding or not?
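
A toy analogue of that situation (not star formation, just the cheapest possible stand-in): the “equations” take one line to write down, yet letting the model run reveals behavior that is very hard to anticipate by staring at them.

```python
# Logistic map x -> r * x * (1 - x) with r in the chaotic regime.
# Writing the equation down is trivial; predicting its response to a tiny
# change in the "stimulus" (the starting value) is not.
def simulate(x0, r=3.9, steps=50):
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1 - x)  # the entire "theory", one line
        trajectory.append(x)
    return trajectory

# Two nearly identical starting conditions diverge completely after a few
# dozen steps, which is easy to miss if you only ever look at the equation.
a = simulate(0.200000)
b = simulate(0.200001)
print(a[-1], b[-1])
```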

Blow this up to everything, and you can see the problem. If we could write down the equations for everything, and build a machine that used those equations to simulate everything, would the machine understand it?

Your Nobel Prize is in the mail.

There are lots and lots of reports by people who claim to have seen ghosts, experienced miracles, and all sorts of other supernatural events. There have also been many carefully controlled experiments demonstrating effects such as telepathy, clairvoyance and psychokinesis, and which appear to be impossible to explain by any conceivable physical mechanism. You may not believe it (find it “credible”), I may not believe it, but to say there is no evidence is simply a lie. By contrast, there is currently no evidence whatsoever that anyone could ever build a machine that could enjoy chocolate, and to claim that you know that you could do it if only you had the time and resources is simply a lie.

Maybe you could do it if you had some brilliant insight into what it is to enjoy chocolate, but you show no signs whatsoever that you do. (And frankly, if you did, I think I would have to believe in miracles.)

No, there is no reason to believe that such a program understands anything. It is a tool that the scientist uses to enhance his own imperfect understanding. Any person’s understanding of anything but the simplest and most trivial issues is bound to be imperfect, and to remain so even after it has been improved and deepened by such (or other) means.

Note, please, that I am not saying that no machine can possibly ever understand anything. I am simply saying that there is no reason to think that the system you envisage understands the physical system it is simulating. Heck, it does not even understand (or need to understand) that the symbol it uses to represent, say, a star of a certain mass, represents a star (or a mass).

No, for the reasons given above.

OK, in trying to recreate an edit that missed the deadline, I mistakenly wrote:
‘Also proven that he doesn’t understand this subject by my contention that it is false’.

I intended to write:
‘Also proven that he doesn’t understand this subject if my contention that it is false, is true’.

You are free to interpret the meaning of my incorrect words any way you want.
Try addressing the corrected version.

It is your failure to understand logic and distinguish between opinion and provable fact that seems faulty.

That last paragraph is full of opinion and lacking facts. The paradox arises, like all others, from faulty premises. Show some evidence that a rich interaction with the environment is a necessary part of the process of understanding, or that a machine cannot have a rich interaction with the environment. Just because you believe it is true doesn’t make it so. A process can exist abstractly without actually operating or having any interaction. Machines routinely interact with the environment far beyond the ability of any human.


In what possible way? If understanding is definable as something other than ‘understanding only as is accomplished by a human’, what possible difference could the process or machine that achieves that make?

Since you feel the need to be snarky, I will add that I can write a computer program in only seconds, which achieves the level of understanding on this subject you have achieved.

Yes

Is that your best sarcasm? Or just an admission that a large volume of work involved in accomplishing something does not disprove its feasibility? If you understood that, you might at least have pointed out that a volume of work which exceeds any ability to perform it could affect its feasibility, but then you have no evidence that that condition exists either.

No carefully controlled scientific experiment has ever demonstrated such a thing. A carefully controlled non-scientific experiment could demonstrate anything.
I will return to the remainder of that paragraph later on.


Apparently to you, it requires brilliant insight to enjoy chocolate. I assume that is why you think so much of yourself.

Back to the previous paragraph and your claim that I am a liar. I will have to consult with a moderator before addressing that. I would suggest you refrain from calling anyone a liar as long as you claim that there is proof of supernatural forces.

Really? Why would that be? In most cases the definition of something is not the same as the theory of it. The definition of “evolution” (which would be something like, “the process by which current types of living organisms have developed from earlier living forms”) is not the same as the theory of evolution (which would take much longer to state).

Fair enough, but then you owe us an account of what it is that makes some instances of compression count as understanding when others don’t. (And I think you might find that that is the hard part.)

No, “we” do not mean that. Many people can say, truthfully, and knowing perfectly well what they mean, that they understand something or other without having any conception of what a mental model might be. If a tribesman tells you he understands the ways of the forest, he knows exactly what he means, but if you tell him that he means that he has a mental model of the forest, he will not know what you are talking about (even if your statement is true).

Once again, you are actually offering a theory, not a definition. In this case I am prepared to grant that it is quite a good theory (I remain open to being convinced that your “compression” theory is a good one, but I am not there yet), but even if it should prove to be the one true theory of understanding (if there can be such a thing) it will not thereby become a definition.

In general, confusing theories with definitions is a very bad idea, because it can mislead you into thinking that your theory must be true (it is true by definition!) and you will fail to pay attention to any relevant (positive or negative) evidence.

You may think you’re being obtuse and that this is a guaranteed method for making a computer enjoy chocolate, but actually there is no reason to assume in your hypothetical that the computer is enjoying chocolate.

This is because we’re talking subjective experience, and there are no objective facts that can tell you that this experience has occurred.
The idea that copying what the brain does, symbolically, is enough, has its detractors. They might argue that simulating a hurricane on a computer won’t actually make anyone’s house blow down, and by analogy simulating what a brain does won’t necessarily make a mind.

Um, first the ‘obtuse’ part. I believe I meant to say, ‘I don’t mean to be obtuse’. But I don’t recall exactly, because I didn’t intend that statement as part of any reasoning on this subject.

The reason to assume that the computer is enjoying chocolate is this: It can be demonstrated on a smaller scale that machines can simulate human thought without distinction. There is no evidence that it cannot be done on a larger scale that I know of. If there is such evidence, then I am wrong, and you are right. I do not contend that any of the theoretical means I have described have accomplished ‘understanding’, or that they ever will. I am refuting the assertions of another poster that they cannot.

You are correct about the hurricane. I did not claim that this could be applied in that case. But you have addressed part of understanding here. An accurate symbolic representation of a hurricane can be used to understand the effects of a hurricane, without ever damaging anything. A human can use that simulation, and we have done so, resulting in a better understanding than was accomplished through the study of actual hurricanes. I contend that no one has presented a rational case, or shown through evidence, that a machine could not accomplish that same thing.

And to be clear, you haven’t contended that a simulation could not result in machine ‘understanding’ either. And I cannot refute your assertion that it might not.

I’ve seen many arguments, here and elsewhere, where people do not seem to understand the meaning of the words ‘would’, ‘could’, ‘if’ and other similar qualifiers. Apparently you do understand them.

Thoughts on the Chinese Room:
I don’t think the Chinese Room understands Chinese in the way we think of when we say a human understands Chinese. When a human understands a language it means they are able to translate written or spoken words and phrases into abstract concepts in the brain which can then be manipulated further, etc.

But just because the Chinese Room doesn’t understand Chinese doesn’t really matter. It is the equivalent of taking a square peg, trying to fit it into a round hole, saying it won’t fit therefore no pegs will ever fit. It’s pretty short-sighted.

Our brains compress data, pattern match, incorporate into internal models, simulate multiple future paths with feedback, and act. Seems fair to assume that “understanding” involves most or all of these functions as opposed to a simple lookup.
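
To sketch that contrast (an invented toy world, with no claim that either side “understands”): a brute-force lookup table versus an agent that keeps an internal model and simulates candidate futures before acting.

```python
# A lookup table gets behavior with no model and no simulation.
LOOKUP = {"cold": "wear coat", "hot": "wear shorts"}

# A model-based agent: simulate each candidate action a few steps ahead
# under an internal model, then act on whichever future scores best.
def model_based_agent(state, model, score, actions, horizon=3):
    def rollout(s, a, depth):
        s = model(s, a)  # predict the next state
        if depth == 0:
            return score(s)
        return max(rollout(s, a2, depth - 1) for a2 in actions)
    return max(actions, key=lambda a: rollout(state, a, horizon))

# Toy world: the state is a room temperature, actions nudge it, and the
# score rewards being close to a comfortable 21 degrees.
def model(temp, action):
    return temp + {"heat": 2, "cool": -2, "wait": 0}[action]

def score(temp):
    return -abs(temp - 21)

print(LOOKUP["cold"])                                                 # no model involved
print(model_based_agent(15, model, score, ["heat", "cool", "wait"]))  # picks "heat"
```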

Although a machine could be built differently that achieves the same result (brute force), we wouldn’t call it understanding because the word “understanding” as we use and know it, is tied to our internal mental processes. It describes something specific about how we solve problems, not something general about how all problem solving machines operate.

There’s a difference between using the word effectively and understanding its meaning, isn’t there? :wink:

What I was trying to say is that the normal sense of the word, taken at face value, is not useful in this debate. Merriam-Webster defines “to understand” as “to grasp the meaning of.” Perhaps I’m missing something, but this seems to say nothing at all about the question of machine understanding. What is a meaning, and how is it grasped?

Maybe this is splitting hairs, but such an experiment would not answer that question. It would only answer the question: is this particular machine capable of understanding? A negative answer to that far narrower question would not imply that no machine is capable of understanding.

By my way of seeing things, the quantity of evidence for the supernatural is precisely zero. Is my perspective biased? Maybe. But I have yet to witness, experience, or even hear of any phenomenon that is not explainable by purely physical means. If these very smart people have been convinced that something supernatural exists purely by arguments from physical sensations, without any kind of articulable, repeatable experiment whose result is explainable only by the invocation of the supernatural, then I think they are not so very smart after all.

Perhaps we differ in our levels of skepticism. For me, to pretend that we are uncertain about the non-existence of the supernatural is about on the same level as going out of one’s way to observe that we’re not certain that Superman doesn’t really exist. In every technical sense, you’re correct, but that doesn’t make it possible to entertain the notion without losing credibility.

As it happens, I think you’re wrong about there being no evidence that it’s possible to build a machine capable of enjoying chocolate. The facetious (but still valid) answer is that we build them all the time, and there are billions of them crowding our planet right now. The more rigorous answer is that we can observe that if you put 60 kilograms or so of carbon, hydrogen, oxygen, calcium, and dozens of other elements together in a very exacting shape, you will have a thing that enjoys chocolate. We do not yet (and may never) have the ability to manipulate matter on so fine a scale, but if you think that arranging matter in the right shape would not produce a chocoholic, well, I think the burden rests on you to support that assertion.

I will also note that the idea that it is impossible for a computer program to have understanding, if combined with the assumption that human beings have understanding, has as a necessary consequence the conclusion that it is not possible for any program to simulate physical processes above a certain level of complexity.

Finally, in the hopes of avoiding too much crosstalk, I’ll try to make my own position clearer:

I’m not necessarily advocating for any one definition of “understanding,” nor am I making a case that machines can definitely understand things. I would just as readily accept the conclusion that humans don’t really understand things either. Basically, I believe that if machines cannot “understand,” then humans cannot “understand,” and that any argument that produces the conclusion “humans can understand, but machines cannot” is using the word to mean one thing when it applies to humans, and another when it applies to machines.

What I’m arguing for is an equivalence of “understanding” candidates within the same behavior class. In other words, if a thing demonstrates all the behaviors associated with understanding, there is no argument against its possession of “true” understanding that could not be, with equal validity, applied to any human being. Also, I would not consider it valid to argue that a machine doesn’t understand because its inner workings are wildly different from those that make up a human being: I think this is begging the question, or at best simply redefining “understand” so that it specifically implies human hardware. And I don’t think this is what you are trying to do.

Which, I suppose, leads me to ask: how do you define “understanding?”

To me, this says nothing about any deficiency on the part of the simulation, but instead demonstrates that the real thing is very much overrated. If the mind is the part of the brain that vanishes when the brain’s every behavior is simulated, the correct conclusion is that minds do not exist, and that if we think we have them, we’re just fooling ourselves. :smiley:

10 INPUT "Ask me any question"; A$
20 PRINT "Well, I know the answer because I understand everything but I'd never be able to explain it to a feeble human like you"
30 GOTO 10

A finite human brain can’t understand everything simultaneously about an infinite universe, and inventing a machine that does, even if that is possible, wouldn’t mean that it could explain it to us. The machine would ‘blow your mind’, either figuratively or with a really cool laser type device.

Think about how much humanity already knows - yet even the greatest genius only knows a tiny fraction of that. That explains why Cecil has assistants.