The same could be said for following the interactions in our brain.
Narrowly isolating individual processes and following them through some chain would not, by itself, provide insight into the overall system being modeled. But the “trick” of understanding can still be taking place from the perspective of the whole system.
I think one problem is that people try to elevate programming and logic to something that matches the brain, and the two seem qualitatively different. I personally think the error is in having elevated the brain’s functioning to some almost magical level above other machines.
But, I will say, I am really talking about intelligence and understanding, not consciousness. I think (but am not sure) you can have intelligence and understanding without consciousness, and consciousness is a more difficult question to consider.
It sounds like I might agree with that - assuming “governed by those rules” means something along the lines of creating the model/simulation of the world/environment and producing results based on that.
I definitely do not agree with any discussion that focuses on the man. The man does not represent the model, the man is merely a cog in the wheel.
The question is whether following a program is sufficient for understanding. Both the man inside the room and a normal person follow the program you’re referring to, but we understand, while the man in the room doesn’t. That means following a program isn’t sufficient for understanding.
What I mean by the distinction between “following a rule” and “being governed by a rule” is roughly this:
If I figure out exactly how gravity would affect the motion of a ball, then grasp the ball and move it just as gravity would, holding on to it the entire time, the ball is following the law of gravity, but its motion isn’t at that time being governed by the law of gravity. But if I simply throw the ball, then after it leaves my hand, its movement is governed by the law of gravity.
The man in the Chinese room is following the rules that constitute understanding of Chinese, but he’s not (as native speakers of Chinese are) governed by them.
As I mentioned before, though, Searle answers this in the very works people are usually referring to when they talk about the Chinese Room. Just put the entire room inside the man: have him memorize the rules. Now he is the wheel, not just a cog in it, and he still doesn’t understand Chinese.
That’s an interesting idea. But if I learned to speak Chinese, would I be governed by its rules? One could concoct a scenario in which there are two languages, Chinese and Esenihc, with the peculiar characteristic that a valid utterance in one is also a valid utterance in the other, such that the same sentence may mean ‘I brought my mother flowers today’ in Chinese and ‘death unto all my enemies’ in Esenihc. It seems to me that a person governed by the rules of Chinese would invariably understand the former, while a person governed by the rules of Esenihc would equally necessarily arrive at the latter meaning.
Now what if I learned both languages? It seems that I should be able to choose between the two meanings; but then I would not be governed by the rules of either language (if I’ve understood the concept correctly), which would mean that I could not understand the sentence at all. That doesn’t seem like a good conclusion to me…
To answer the objection: if the person has to actively choose which language to interpret an utterance as an instance of, then context hasn’t made it obvious, and if context hasn’t made it obvious then he wouldn’t understand the utterance. It’d be ambiguous.
Meanwhile, if context has made it obvious, then he’ll be, in a sense, “compelled” to understand it one way or the other, and that “compellingness” of the interpretation constitutes his being governed by the relevant rules.
I thought along similar lines – that in general, there ought to be some ‘meta-rule’ governing which language-rule to use, but I can see two possible problems:
There might be a text (a story, a poem, something like that) that makes perfect sense in both languages. So if somebody familiar with both just found a sheet of paper with that text on it, absent any context, he could read it in either language, and it would make perfect sense to him, being essentially self-contained.
If one admits being governed by meta-rules, i.e. rules that contain the language rules plus some rule describing how to select between them, as sufficient for understanding, then it might be argued that the occupant of the Chinese Room is similarly governed by a meta-rule containing the rules for Chinese, provided his behaviour in total is essentially deterministic. So it’s less clear why there’s no understanding in that case.
I would agree that following a program or set of rules does not constitute understanding from the perspective of the person/agent/machine that is following the program.
Meaning the logical machinery that is carrying out the instructions cannot itself be said to “understand” any more than you would say the atoms in our brain “understand”.
Understanding is a property of the system, and by system I am referring to the processing of the model, the transformation of information over time - not the hardware, whether it is a computer or a brain.
The man is definitely not following the “rules” of understanding Chinese.
The “rules” of understanding any communication have more to do with building an internal model than anything else. The man and the room completely skip that one key rule of understanding.
Searle doesn’t answer it because he is off-base. A literal mapping of input to output explicitly skips the internal processing that creates understanding.
If you put the entire room inside the man you still have a system that does not follow the rules of understanding and therefore will never understand.
It almost sounds like you think we have “language rules” that govern our response to specific input in a specific language.
If so, I would disagree.
We may have rules that govern how the different portions of the language can be put together to convey ideas, but that in itself is just an aid for our brain when trying to build its internal model of what has been said.
Let’s walk through a simple example of processing some input and how it translates into understanding:
The input is “throw the ball”
The word “throw” triggers some sort of access to an internal model of throwing something, based on years of experience with one’s own body and with others using that term. Those experiences have been connected internally, and a time- and object-based model exists that can be applied to similar situations for the purpose of predicting the future.
The word “ball” triggers some sort of access to various connected/related internal models of, most likely, different balls, with all of their typical time- and object-based models that can also be used to predict the future.
The word “the” gives some context to the ball (again based on years of experience and interactions), triggering some sort of access to the person’s current state/environment.
The language has done its job, and now the person will use the models and other internal input to determine what to do.
The models are pieced together to construct a scenario that matches what was spoken; the person now “understands” the request. Next comes a quick simulation and identification of possible problems with the request (are they inside a glass house and the ball is a baseball?), and an understanding of what the result is expected to be.
This internal model processing represents much more of the meat of understanding than any kind of “language rules” that can be used to just map input to output.
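To make the contrast with a lookup table concrete, here is a deliberately toy Python sketch of the walkthrough above. Everything in it (the model store, the scenario format, the glass-house check) is hypothetical illustration, not a claim about how brains actually implement this:

```python
# Toy sketch of the "throw the ball" walkthrough. Every name and structure
# here is a hypothetical illustration, not a model of real cognition.

# Experience-derived internal models, triggered by words.
INTERNAL_MODELS = {
    "throw": {"kind": "action", "predicts": "object leaves hand and follows an arc"},
    "ball": {"kind": "object", "predicts": "round, graspable, can be thrown"},
    "the": {"kind": "context", "predicts": "a specific object in the current environment"},
}


def understand(utterance, environment):
    """Build a scenario from triggered models, then simulate it for problems."""
    # 1. Each word triggers access to its associated internal model.
    triggered = [INTERNAL_MODELS[w] for w in utterance.lower().split() if w in INTERNAL_MODELS]

    # 2. Piece the models together into a scenario matching what was spoken.
    #    (Hard-coded here; the real work would be general model composition.)
    scenario = {
        "action": next(m for m in triggered if m["kind"] == "action"),
        "object": environment.get("nearest_ball"),
    }

    # 3. Quick simulation: identify problems with the request.
    problems = []
    if environment.get("fragile_surroundings") and scenario["object"] == "baseball":
        problems.append("throwing a baseball here may break something")

    # "Understanding", on this view, is having built and tested the scenario,
    # not having looked the input up in a table of canned outputs.
    return scenario, problems


print(understand("throw the ball", {"nearest_ball": "baseball", "fragile_surroundings": True}))
```

The point of the sketch is only structural: the response is produced by composing and simulating models, not by matching the input string against enumerated input/output pairs.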
Pfttt to the Chinese Room. All it does is move the comprehension component out of one human brain (the sap inside the room) and into another (the guy, or guys, who compiled the rulebook). But the system as a whole (and we must include the book compiler in the system-as-a-whole, or it’s just a “magic” book) has a Chinese-comprehending element.
All of the things you mention, Searle explicitly stipulates that you are welcome to include in whatever program you wish to put into the Chinese Room. His argument (he thinks) still works. (And while I think his argument ultimately doesn’t work, I’m certain this is not the right way to show he’s wrong. He’s got answers lined up for all of this already.)
The program in the Chinese Room isn’t required to be a simple “to this input respond with that output” variety. Searle even says one way to program the room might be simply to give it a complete and detailed description of the way an actual brain works. Have the guy in the room follow that program, with all the information processing you mentioned included therein. And still, the guy won’t understand Chinese.
Have the guy internalize the rules, so that he is the wheel and not just a cog in it, and he still doesn’t understand Chinese.
As far as I can tell, you and Searle actually agree. You both believe that something more than merely following a program is required. The way the information is processed is important, and the kind of information processing that counts as “following a program” isn’t the kind of information processing that yields understanding. For that you need something more than following a program.
Following a program is just acting in a way which, were we to take the program to be a description of actions, would be accurately described by that program.
My own suggestion is that what you need is not just “following” a program but also “being governed” by its rules. (Where “being governed by rules” means not just acting in a manner accurately described by them, but rather, being the subject of certain natural laws constituted by those rules.) That’s something the man in the room doesn’t have.
Meanwhile, I think this gives us a means by which to answer Searle. He’s right that something more than following a program is needed. But he’s wrong if he thinks that by programming a computer, all you’re doing is having the computer follow a program. Rather, if you do it right, then you’re going to have the computer be governed by the rules of the program. Hence the computer might be able to understand by means of the right program after all.
If by “the rules” Searle means “the exact structure of the brain of someone who understands Chinese”, and Searle still thinks this person won’t understand Chinese, then I am confused.
If Searle means anything less than that, then I would argue that at one extreme (simple mapping) he is correct, but at some point along the continuum the processing crosses the line into what we would call understanding, because it is able to perform all of the tricks we are able to when we understand: general problem solving and pattern matching in situations that are not explicitly enumerated.
Continued discussion of the “man” focuses on the wrong thing - the thing is the system and its power of modeling. Even if Searle allows us to put the system in the man, I think it’s misleading and unnecessary to continue discussing the “man”. What is important is the processing power of the system - whether it’s running in a brain, a computer, a series of tubes, or on paper doesn’t matter; it’s just a mathematical model.
I appreciate that you are saying we might all agree, and the idea of “governed by” doesn’t sound incorrect - but I worry when, at the same time, discussion of “follow” continues. “Following” (or “executing”, or “running”, or whatever) a sequence of steps is merely the act of processing. It is not an attribute of the system/model. “Governed by” is more of an attribute of the system/model. They are not the same thing, and again, Searle’s focus on “the man” and “following a program” is focused on the wrong thing.
So maybe I do agree with Searle in that “following” a program, from the perspective of some piece of a system, does not create understanding. The system and model can have understanding, and the man does not represent that unless the system inside his head does understand Chinese, either by being a human brain that has been trained in Chinese or by being an equivalent system/model.
The thing here is, Searle was addressing people who explicitly argued that the act of processing is, in and of itself, all we need to worry about when we’re trying to create a machine that understands.
Enumerated mappings of input to output do not represent understanding. I’ve always felt the same way about chess-playing algorithms: humans use a different set of tricks that are more generally applicable to any problem, and that is what we usually consider intelligence.
I was under the impression that he used that extreme example to say that a computer is not capable of understanding under any circumstances in the way that humans understand, which I disagree with.
Regarding consciousness: even though my default position is that it’s an emergent property that can also be created on a computer, I’m not nearly as confident of that position.
I would disagree. If you’re moving the ball in such a way that its motion exactly mimics the motion it would have as a result of gravity, then you’re exerting exactly zero force on the ball. In other words, the ball’s motion is a result of gravity, and your hand is just going along for the ride.
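To spell out the Newtonian bookkeeping behind that claim (my own check, not part of the original post): taking downward as positive and letting the hand-guided ball accelerate at exactly $a = g$,

$$F_{\text{hand}} + mg = ma \quad\Longrightarrow\quad F_{\text{hand}} = m(a - g) = 0.$$

Gravity alone already supplies the entire force that trajectory requires, so the hand contributes nothing.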
You’re not just reprogramming the computer; you are changing its basic design at whichever level you wish. For instance, you can have a computer whose instruction set changes with the workload, and whose wiring changes with the workload. Just like the brain.
That’s a very common misconception. In fact he explicitly states that he believes we can build a computer that understands. His evidence for this: we ourselves are computers that understand, and there’s no reason to think we couldn’t build a human being someday, given sufficient tech breakthroughs.
As we were debating I was thinking I should actually read what he has to say. I’ve read summaries, etc. but I was always left with the impression he was arguing against computer understanding - probably my laziness in not following through to get enough detail.
Sorry for taking so long to respond, I was away. You can look up genetic algorithms for a bunch of examples, but here is how they work.
You have a goal, represented by some sort of fitness function, which lets you compare which of several alternative solutions is closer to the goal. You have a way of creating a new solution from an old one, often by representing the solution as a string and then mutating it. You also usually combine several solutions into a new one. (Why this is called “genetic” should now be pretty clear.)
So, say you want to use this to design a digital circuit. You’d start with a bunch of random circuits, and a fitness function measuring how far a circuit’s behaviour is from the desired one. You look at all your random circuits, pick the n closest to the solution, combine some of them, and then apply some mutation to create more. Keep doing this until you have an answer. The circuit thus created is not the result of any programming, and is often very different from what a person would come up with. It is basically a way of searching through a space of solutions, but all problem solving is exactly that. It is not limited to solutions the programmers ever thought of.
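Here’s a minimal Python sketch of that loop. The bit-string target is just a stand-in for a real fitness function (e.g. scoring a circuit’s behaviour against a desired truth table); every name and parameter here is illustrative, not from any particular library:

```python
import random

random.seed(0)

# Stand-in goal: evolve a bit string to match TARGET. In the circuit example,
# fitness would instead measure how close a circuit's behaviour is to the
# desired behaviour.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]


def fitness(candidate):
    """How close a candidate is to the goal: here, the number of matching bits."""
    return sum(c == t for c, t in zip(candidate, TARGET))


def crossover(a, b):
    """Combine two solutions: a prefix of one spliced to a suffix of the other."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]


def mutate(candidate, rate=0.1):
    """Randomly flip bits: the 'mutate the string' step."""
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]


# Start with a bunch of random candidate solutions.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for generation in range(200):
    # Pick the n candidates closest to the goal...
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    if fitness(parents[0]) == len(TARGET):
        break
    # ...combine some of them, then mutate to create more.
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]
    population = parents + children

best = max(population, key=fitness)
print(f"generation {generation}: {best} (fitness {fitness(best)})")
```

Note that nothing in the loop enumerates answers in advance; the search can land on strings (or circuits) the programmer never wrote down.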
Once you know a good way of solving a problem, genetic algorithms are not very interesting. And they are nowhere even close to being self-aware. But they do do things which would be called creative if people did them. And nature “creatively” made us long before there were any programmers.