Any reason we couldn't build a machine that "understands" everything?

One thing I can add to this thread is the notion of emergent properties. A simple set of components (including “rules” for their behaviour) can combine to generate complexity very quickly. For certain classes of system, of which the universe is almost certainly a member, you cannot statically predict the behaviour - you have to actually run the machine to find out what happens.
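
To make that concrete, here is a toy sketch of my own (in Python, using Wolfram’s Rule 30 cellular automaton, which is a standard textbook example and not anything anyone in this thread has built). The update rule is trivial, yet there is no known shortcut for saying what the pattern looks like far downstream; you find out by running it.

```python
# Toy illustration of emergent complexity: a one-dimensional cellular
# automaton (Wolfram's Rule 30). The rule fits in one line, but the
# only practical way to learn what row N looks like is to run it.

RULE = 30  # each 3-cell neighbourhood maps to a new cell via this rule number

def step(cells):
    """Apply Rule 30 to one row of cells (a list of 0s and 1s, wrapped at the edges)."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right
        new.append((RULE >> index) & 1)
    return new

# Start from a single live cell and just run the machine.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

A handful of rules, and the output is already irregular enough that you have to run the machine to find out what happens.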

As others have pointed out, a complete model of the universe would have to be as complex as the universe, so to FULLY comprehend the universe, you need to create a model as complex as the universe and then run it.

BUT - you CAN understand all of the rules by which the universe runs, AND you can understand the exact starting conditions. This is enough to fully define the behaviour of the universe. You can then make useful predictions by just considering the subset of rules and conditions relevant to the practical problem you’re dealing with.

This is how humans survive - we know a subset of the rules of the universe, sufficient to make short-term forecasts from our knowledge of a very small subset of its current conditions.

Nope.

If as a matter of principle you could show that we will never know how, then yes.

This strikes me as a very good and relevant question. How on earth would your dog know whether you understand relativity?

Another good question, and the first I thought of. What does it mean to say a machine “understands” something? (Does the Chinese room understand Chinese?)

Well, OK, but your argument went straight from “A human can pass a Turing test” and “the brain is a machine” to the conclusion that “a machine can have understanding.” There is a suppressed premise there, namely that “a person’s brain is not only necessary, but entirely sufficient for their ability to understand things.” I can think of at least two theoretical perspectives from which that is not the case. One, that I happen to think is the right one, I already outlined: that understanding also depends on a being having the hardware (or wetware) to sustain a rich, two-way interface (i.e., sense receptors and effectors) with its environment. The other such perspective is that of dualism, whereby the ability to understand depends not only on a brain, but also on some non-material, non-mechanical or “spiritual” entity that interacts with it. It is not a view I favor, or that is very popular these days with either scientists or philosophers, but it does still have defenders, and I do not think that it has yet been decisively refuted. Thus it is question begging to base arguments about the nature of the mind and of understanding on the assumption that it cannot possibly be true. The view that it is not true is what I referred to as a “working hypothesis,” not an established fact.

On that definition, which I understand as implying that the machine is something separate from, and constructed by, a human or humans (“device” implies something that has been devised), I would say that a brain is not a machine at all. But this does not matter. Your first definition is the more relevant one.

No. Though I do not know what difference it would make if I was. I do think that, of all devices that people have actually constructed, computers are almost certainly by far the most closely analogous to brains, but I am open to the idea that, for all that, the analogy may not actually be very close.

Well, no doubt there are many reasons why people work on AI. Some may be just engineers, interested in building devices that will be useful, or make money. But I think it is clear that an important motivation of the founders of the field, and doubtless of many of those who continue to work in it, was to provide an existence proof for materialism: to show that immaterial spirits are not necessary to explain mind and understanding. A worthy aim, in my view.

True, but it would demonstrate that mind and understanding do not necessarily depend on immaterial spirits (or, for that matter, on a rich interface with the environment). This would not prove that the brain does not need to be supplemented by either of these things in order to produce mind and understanding, but it does remove the principal motivation for thinking that it might need them.

Really? Why don’t you go through a few dictionaries and see if you can find any definitions of “understanding” that mention brains at all? (I won’t be holding my breath.):rolleyes:

Anyway, it is not a matter of definition, it is an empirical matter. We have a huge amount of evidence that people (sometimes) understand things. (And, I fully concede, we also have plenty of evidence that, in order to be able to understand, it is necessary that people have brains.) We have no evidence whatsoever that a brain apart from the rest of a human being can understand anything at all. The relevant experiment has not been (and, very likely, cannot be) performed.

Maybe. Again, this is an empirical question and the relevant experiment has not been (and perhaps cannot be) performed. (Even if you can keep the brain alive and functioning normally, how are you going to find out if it understands anything?)

But anyway, even if some understanding persisted in these circumstances, that would not show that brains alone are sufficient for understanding. It would show that a brain plus, for at least part of the time of that brain’s existence, the organs (sense organs, muscles, etc.) providing a rich, bi-directional interface between the brain and its environment, are sufficient for understanding.

And there is a practical scientific consequence to this: you will never succeed in reaching a scientific understanding of understanding if you only study brains in isolation, without regard to how they hook up with the environment of the organism in which they are found.

Seems within the realm of empirical possibility to me. Every brain we’ve seen so far is in constant rich interaction with its environment. So, for all we know, take away that interaction, and the brain loses consciousness and so loses active understanding. Who knows, maybe it loses all psychological coherence to the extent that it can’t even have its understanding “revived” so to speak.

In other words, for all we know, understanding lies not in the brain but in the complex dynamic that joins the brain and world.

I don’t endorse this view, but I don’t think it can be dismissed out of hand either.

There are innumerable things done by human beings, ranging from shitting to understanding quantum mechanics, that cannot be done by any artificial machine that has actually been constructed. Whether there are any things done by human beings that could never, possibly, in principle be done by any possible artificial machine is precisely the point at issue.

Ah, I see, if physicalism is not false then physicalism is true. You are wise in the ways of tautology, Grasshopper.

See Cal Meacham’s article: http://www.teemings.net/series_2/issue_03/brain_2.html

Why not? I’m not saying you can definitely do so, but I don’t see that the mechanism of the hookup is necessarily relevant to understanding. Kind of similar to saying you can’t drive a car if you don’t understand how the engine works. Aside from the concept of the engine being on or off, what else do you need to know in order to drive? I will grant that if you don’t understand the concept of fuel, you won’t be driving for long.

It’s more similar to saying you can’t understand why a car turned left if you don’t understand how the steering mechanisms work.

I can turn the car without understanding that.

But someone watching me drive can’t understand how the car/me system managed to pull off the turn without understanding something about the steering system.

Even more to the point: Someone watching me and the car driving down the road can’t understand how the car/me system managed to pull off the turn without understanding how I and the car interface.

Understanding may be like that–we might not be able to understand understanding without understanding the brain/world interface. There may be no ongoing understanding without such an ongoing interface.

It occurs to me that shitting may not have been the best example, in the light of Vaucanson’s Duck (not that it really shits, of course, any more than ELIZA really understands conversation), but the underlying point remains true. I do not think any machine that has actually been constructed can dance the Hokey-Pokey, or enjoy a bar of chocolate. I am inclined to believe that it would be possible to construct a machine that could dance the Hokey-Pokey if we wanted to, but I am not at all sure whether we could ever construct a machine capable of enjoying chocolate.

I’m not making any assertion about whether or not we need an understanding of the connection between the brain and non-brain parts of the human body (assuming such a distinction can be made in the first place). I still do not see that understanding ‘understanding’ requires it. But it might. And that part about the distinction may be important to that.

I think the problem is that the “normal sense of the word,” in this context, is entirely vacuous.

Further, Hamster King didn’t say that compression algorithms are understanding. He said that understanding is a kind of compression. And really, can you possibly define it as anything else? When we say “Alice understands X,” we usually mean that Alice can answer questions or make predictions about X, or otherwise simply make correct statements about X based on a necessarily incomplete model of X. That’s just a set of behaviors exemplifying a compression of the thing understood.
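
A toy example of my own (the falling-body rule and the numbers are just stand-ins, not anything Hamster King actually proposed) may make the point clearer: a short rule can stand in for a large table of observations and still answer questions about cases it never stored, which is the sense in which a model “compresses” the thing it is about.

```python
# Illustrative sketch of "understanding as compression": a short rule
# replaces a long table of observations and can still answer questions
# about inputs that were never stored.

# A thousand stored observations (distance fallen after t seconds).
observations = {t: 9.8 * t * t / 2 for t in range(1000)}

# The "understanding": a two-symbol rule instead of the whole table.
def model(t, g=9.8):
    return g * t * t / 2

print(model(2.5))                  # answers a question not in the table
print(observations[7], model(7))   # and reproduces the stored cases
```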

Yes. In both cases of the thought experiment, the Chinese room understands Chinese. In the second case, a part of the machine is the human operator. The fact that the human operator, in isolation, does not understand Chinese, is utterly irrelevant. Searle’s entire argument, including his replies to interpretations such as the foregoing, seems to depend on the conflation of different systems and the insinuation of functionless definitions.

We seem to take propositions like “understands Chinese” implicitly and without a shred of critical thought when speaking of human agents – if a man converses easily in Chinese, we say he understands it. A failure to extend the same definition to artificial objects is an entirely irrational prejudice. We have as much reason to say that the Chinese Room understands Chinese as we have to say that a man born in Beijing understands Chinese. Contrariwise, we have perfectly good grounds for arguing that the man is only manipulating meaningless symbols according to a set of rules and that he therefore understands nothing.
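
For what it’s worth, here is a toy sketch of the kind of rule-following Searle attributes to the man in the room (the rule book and the phrases are invented for illustration): the operator matches shapes against entries and copies out the listed reply, and no step in the process consults a meaning.

```python
# Toy Chinese Room: the "operator" only matches incoming symbol strings
# against a rule book and emits whatever reply the book lists. Nothing
# in the lookup refers to what the symbols mean.

RULE_BOOK = {
    "你好吗": "我很好，谢谢",        # "How are you?" -> "I'm fine, thanks"
    "你叫什么名字": "我叫小明",      # "What's your name?" -> "My name is Xiao Ming"
}

def operator(symbols: str) -> str:
    # A fallback squiggle for anything not in the book ("Sorry, I don't understand").
    return RULE_BOOK.get(symbols, "对不起，我不明白")

print(operator("你好吗"))
```

The sketch only illustrates the last point above: considered alone, the operator (here, a dictionary lookup) is manipulating symbols according to a set of rules without any access to their meanings.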

Why is this point even being argued? The relevant question is whether a mechanistic system is capable of understanding. Most people talk about the brain here because it is the organ that seems chiefly concerned with the processing of information. I think it’s trivially evident that a brain in isolation can understand nothing: understanding is an external behavior, which requires an external interface. I do not see a reason to believe that any particular interface is required, but if you want to say that the only machine we have observed to demonstrate what we call “understanding” is the (at least mostly) complete human body, I can’t argue with you. It just seems to me that, with regard to the actual debate (machine understanding), this question of brain vs. brain + body changes nothing.

Might be better to say that, if you can’t ask the brain any questions, the question of whether it understands anything is no longer well-formed. It’s like asking the color of an invisible unicorn.

Our capabilities in constructing the machine may be limited, but unless there is some ‘supernatural’ mechanism or process not physically possible in anything but an animal that can enjoy, then in theory, a machine can enjoy chocolate in exactly the way I do.

Did you notice Cal Meacham’s article is concerned with science fiction? Just because some science fiction writer imagined something does not show that it could actually happen.

I did not say it is necessarily relevant; I said it might well be. I actually think there are excellent reasons to think it is very relevant, but they would take a long time to detail (I could recommend you some books, if you like), and they fall short of proving it is necessarily the case.

It is not like saying that at all: almost the opposite. It is more (though still only vaguely) like saying that you can’t drive a car if it has no wheels and all the windows are blacked out.

Again, you are correct to say that if physicalism is not false, it is true.

And what reason have we ever had to believe that physicalism is false?

I think ed malin’s point is that the argument that a machine cannot enjoy chocolate is necessarily an argument against physicalism: it supposes that our ability to enjoy chocolate is a result of supernatural influence. Therefore we should consider such arguments for what they are, and apply to them the same skepticism that we apply to other arguments for the supernatural.

An understanding may involve an expansion of data. Just look at all that has been written about some trivial subjects.

Thanks for mentioning that, I was trying to remember the Chinese room thing. I consider it absurd, and sort of like saying a machine can’t achieve human understanding because by definition only humans can achieve human understanding. I don’t recall a rational argument in its favor.

Can a human who has lost the ability to see, hear, smell, control muscular functions, etc., still ‘understand’? I think so. I think I understand things while I am asleep, and lacking those things to a great degree.

Please recommend the books.

Clarification:
The car examples aren’t clear.
Cal Meacham’s article was not science fiction; it addressed both science fiction and reality.

I think Searle’s argument is largely misunderstood and underappreciated. At the risk of GD-ifying the thread, I want to address the above.

Searle’s position is this: Computation is not sufficient for understanding. In other words, there is no computer program you could design whose execution is, in and of itself, enough to make the thing executing it understand something.

If we assert that the room itself, including the person, understands Chinese even though the person doesn’t, then here is Searle’s reply. Take the whole room and, so to speak, put it (the whole system) inside a person. Have a person memorize the rules rather than looking them up. Have all the inputs go directly to him and outputs come directly out of him, rather than all going through a slot in the wall. In other words, make the situation such that the human being is the entire system executing the program. He’s executing the right program–yet, he doesn’t understand Chinese. No matter what program he executes, he won’t thereby understand Chinese. Hence, executing the correct program isn’t sufficient for understanding.
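
A toy sketch may help pin down what “executing the right program” means here (the table below is an arbitrary stand-in of my own, not anything from Searle’s paper): a program is just a rule table plus a procedure for following it, so anything that can follow the table, a CPU or a person who has memorized it, is executing the same program.

```python
# A program as pure rule-following: (state, input symbol) -> (new state, output).
# A person who memorized this table and applied it by hand would be
# executing exactly the same program as the loop below.

PROGRAM = {
    ("start", "A"): ("start", "X"),
    ("start", "B"): ("halt", "Y"),
}

def execute(program, tape):
    """Mechanically follow the table; no step requires knowing what A, B, X or Y mean."""
    state, out = "start", []
    for symbol in tape:
        state, emitted = program[(state, symbol)]
        out.append(emitted)
        if state == "halt":
            break
    return "".join(out)

print(execute(PROGRAM, "AAB"))  # -> "XXY", whoever or whatever does the following
```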

Searle himself would deny that the Chinese Room itself understands. Speaking on Searle’s behalf but from my own point of view, what I think he should say is that even if the Chinese room does understand, this is beside the point. The question is, how does it understand? Does it understand by executing the right program? The answer is no–because as Searle has illustrated, no matter what computer program you think it takes to make a machine understand Chinese, you can have a machine perfectly execute that program–yet not understand Chinese. The program isn’t enough. It’s got to be something else.

(If you object that it just turns out the human being isn’t the right kind of machine to execute the program, the Searlean reply is just that all you need is a machine capable of executing any program that could be executed on a Turing machine. That is (or was at the time anyway) a standard assumption in the kind of AI Searle was arguing against. And human beings, of course, can execute these kinds of programs. That’s the Searlean reply. My own reply, on the other hand, is that this objection is just about right. A human being is not the right kind of machine, roughly because a human being has the power to decide whether or not to continue executing the program, meaning the explanation for the program’s results can’t be put in terms just of the program itself. I believe this means the execution of the program itself can’t constitute any kind of agency encoded by the program itself–and so can’t constitute an agency which understands the things the program appears to understand. That was far too brief; I probably just should have left it out.)

That’s the “other minds reply” and Searle answers it in his paper. The reply is this: There’s a difference between asking how we know something understands and what we mean when we say something understands. Sure, we only know a human being understands when we see he responds in the appropriate way to various stimuli. (If he doesn’t, he’s unconscious, or dead, or in some kind of psychologically faulty state, or something.) But this doesn’t mean that what we mean when we say he understands is just that he responds the right way.

Searle is arguing that no computer program is sufficient to instill in its executor that which we mean when we say something understands. It may turn out to act just like an understander, but this doesn’t mean it is one.

If it’s objected that there’s no use making distinctions between a non-understander that acts just like an understander, on the one hand, and on the other hand, an understander simpliciter, then here’s a reply that I am not sure is due directly to Searle. (I may have made this up–I don’t remember.) The use in making the distinction is this. If we’re actually encountering a candidate for understanding in the real world, then if the candidate does understand, we are licensed to make a lot of generalizations about its future behavior. We are licensed to predict, based on its current behavior around chocolate, what its attitude toward future chocolate will be. But if the candidate is not a genuine understander, then it doesn’t matter what we’ve seen it do around chocolate in the past–there’s no telling what it will do in the future. If we assume it’s understanding its encounters with chocolate, we’ll assume a lot of things about how it will treat chocolate in the future. This is because we know what understanding is, that understanding involves a certain kind of coherence in attitudes over time, and so on. But if we’re wrong–if in fact it doesn’t understand even though, so far, it’s acted like it understands, then our predictions about its future attitudes toward chocolate are (understandable but) completely without basis. There’s no telling what it will do, because it doesn’t have that characteristic we assumed it had and which we assumed was a reason it should continue on in an “understanding-like” fashion in its future encounters with chocolate.

So, there is a difference between an understander and a non-understander that acts just like an understander. The difference crucially depends on the fact that we’re licensed to make generalizations about the one based on the idea that it understands things, and we’re not licensed to make the same generalizations about the other.

Now, I can stipulate to you that the thing always will act exactly like an understander. Then you and I, as parties to a fictional tale about a fictional entity that looks just like an understander, can predict that it always will act like one and can say that, in a sense, this thing is “just like” an understander. But that’s you and me talking about a fictional entity. We have a kind of omniscience about it that we never have in the real world. In the real world, we can’t stipulate that a thing “will always act just like an understander.” In the real world, we can only say that it’s always acted like one before. We may therefore assume that it is an understander, but we may be wrong. And Searle is arguing that if we’re basing our assumption only on the inputs and outputs relevant to the system we’re talking about, then our assumption is at least unlicensed and probably wrong. Behavior doesn’t establish understanding–because you can get any behavior you want out of the right program, and programs aren’t sufficient for understanding. To know whether something understands or not we need to know something more than how it behaves. What is this? I guess it’s got to be something about its inner structure, but to be honest, I’m not sure.

TLDR I know!

That is the key question. And we don’t know the answer. Repeat it back in a different form in a test? It’s all a summary, isn’t it? And summaries are necessarily not the full data set; they are heuristics, a shorthand so that the whole data set isn’t spoken every time, and the data set isn’t the actual event or subject. All of it is an abstract. Now if a computer abstracts something, that isn’t the same thing as a person abstracting it. Or a dog: assuming a dog figures out how doors work, it probably doesn’t conceive of doors the way people do.

I don’t know why you should think that. Most people seem to be able to use and understand the word effectively enough.

Anyway, if you are allowed to ignore the normal senses of words, and define them as you please, you can trivially prove the truth of any statement whatsoever.

Perhaps you really mean that you can’t see how to accommodate the phenomenon of understanding within your physicalist world-view, but that is precisely the problem under discussion. The first step toward solving a problem is recognizing that it is a problem. Denying that it exists does not help.

See any dictionary of the English language. I am confident that you will find that none of their definitions of “understanding” allude to compression in any way. I do not think, anyway, that Hamster King was proposing a definition, he was proposing a theory of understanding. I pointed out that it was not a very good one, because it clearly covers many things that no-one believes amount to understanding.

You seem to be conflating evidence that Alice understands X with her actual understanding of X. Her statements may be a compression in the sense that they do not tell you everything that could possibly be known about X, but it does not follow that her understanding as such is some sort of compression. (Neither is it an incontrovertible fact that her understanding consists in her having some sort of model, incomplete or otherwise, in her mind. That is a plausible, but far from proven, hypothesis about the nature of understanding.)

Anyway, things that Alice says are not the only way that she can demonstrate her understanding of something. If she understands how to drive a car, she can demonstrate that by driving a car. I do not know what is getting “compressed” there.

It is great that you agree with me on this, but lots of people do not, and, historically, until about 20 years or so ago, nearly all cognitive scientists and AI researchers experimented and theorized upon the implicit assumption that it was not the case. Some still do. As a consequence, they wasted much of their time exploring scientific blind alleys. (At least, that is what I think. If you and I are both wrong about a brain in isolation being unable to understand anything, then perhaps they were not blind alleys after all. The game is not over.)

No, I don’t want to say that. I am quite prepared to believe that a suitably constructed robot might be able to have true understanding. It appears we are in agreement here.

Well, in my view (and, seemingly, yours) you will get a different answer to the question “Could a machine possibly understand anything?” if (as many have) you take the relevant machine to be one that has capacities analogous to those of a brain, from the answer you will get if you take the relevant machine to have capacities analogous to those of a brain in a body. In the first case, you will get the answer “no” (and you will have wasted a lot of effort on a dead end research program) and in the second it will be “yes.” If you and I (and the dualists) are all wrong, in the first case you will get the answer “yes,” and in the second case, you will still, presumably, eventually come to the same conclusion, but you will have put in a lot of wasted effort in getting there.

In more practical terms, if you are trying to create an artificial intelligence and you believe that the brain is sufficient for understanding, you will probably confine your efforts to programming computers (as AI research mostly did for its first few decades). If you believe that brain and aspects of body are both necessary for understanding, you will work on robotics.

Likewise, if you are a cognitive psychologist, you will, in many instances, do different experiments and consider different theories, and you will think the other guy is largely wasting his time (and one of you will be right).

We seem to be in agreement that the experiment can’t be done. (Well, I think it probably can’t be done. Maybe some clever experimenter might be able to figure out a way that hasn’t occurred to me. I, and others, have been surprised by ingenious experimenters before.) I do not agree, however, that the question is not well formed. Just because you have no way of finding the answer to a question, it does not follow that there is no answer. To believe otherwise is verificationism, a much explored but now generally discredited epistemological theory.

(Incidentally, the question, “What color is an invisible unicorn?” is perfectly coherent, and I even know the answer. So long as it remains invisible, it has no color, just like other invisible things, such as air. The fact that unicorns don’t exist does not make the question ill-formed or unanswerable either. Does it not make perfectly good sense to ask where Sherlock Holmes and Dr Watson lived? And the right answer is not, in most contexts, “nowhere,” it is “221B Baker Street.”)