Just to quibble, instructions don’t actually reconfigure anything, but set up new datapaths for data using control signals.
Just want to note that people cluster around local minima also, as anyone who has participated in a brainstorming session knows. You can consider the history of science as a lot of hill climbing until a genius like Einstein moves everyone to a new area in the search space.
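To make the hill-climbing picture a bit more concrete, here’s a toy sketch in Python (the landscape function and starting points are invented for illustration): greedy descent from one starting point settles into a shallow local minimum and stays there, while starting from a different region of the search space reaches the deeper one.

```python
# Toy picture of the hill-climbing metaphor. The landscape is made up:
# it has a shallow basin near x = +1 and a deeper one near x = -1.
def landscape(x):
    return (x**2 - 1)**2 + 0.3 * x

def hill_descend(x, step=0.01):
    """Greedy local search: keep moving to a lower neighbour until none exists."""
    while True:
        best = min((x, x - step, x + step), key=landscape)
        if best == x:
            return x
        x = best

stuck = hill_descend(2.0)     # slides into the shallow local minimum and stays
better = hill_descend(-2.0)   # a start in a new region reaches the deeper minimum
print(f"stuck  at x = {stuck:.2f}, value {landscape(stuck):.3f}")
print(f"better at x = {better:.2f}, value {landscape(better):.3f}")
```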
I think I would qualify GAs even a little more than you have. While the output does appear “creative” in one sense, and GAs are a pretty effective solution-search method for some problems, they still do not do what I believe humans do when they are creatively solving problems, so I don’t think they can be used as an example of performing that same function.
A human, it seems, can abstract the modeling and solving of problems and apply those models across a variety of situations, which appears to be the “creative” step. While there is clearly some form of parallel processing going on to resolve/apply many solutions/models at once to a problem and return a good fit, it doesn’t appear to be a matter of iteration but rather of abstraction/pattern matching and substitution.
What’s a datapath? What’s a signal?
It’s okay–even those who should have been reading him closely have tended to misinterpret him in the same way.
It’s not subtle–he just outright says, explicitly, the things I’ve said he says–but his opponents have expectations about what he would or should say and often let that color their readings (or lack thereof) of his works.
I’m no big Searle fan–I disagree with his conclusions–but I do think his Chinese Room stuff hasn’t been treated correctly yet and is very often dismissed via out-of-hand objections that don’t actually even apply. That’s among professional philosophers. When it comes to fora like this one, things are, of course, even worse.
The wiki article implies he is saying much more.
“Searle holds that the brain is, in fact, a machine, but the brain gives rise to consciousness and understanding using machinery that is non-computational.”
It implies Searle is saying you can only have understanding with a human brain, not with a computer.
I think that line from the wiki is accurate, but I think (here I’m not as sure as I’ve been so far) that when Searle says “computational” he means “that which can be completely specified in terms of what program it follows.” Searle doesn’t think that understanding (and hence brain activity) can be specified fully in terms of the program followed. (It can be specified completely in terms of the rules governing it, but not in terms of any old set of rules you could say it follows. Recall the earlier distinction.)
The passage you quoted doesn’t imply that only brains can understand–rather it implies that anything which can understand must have this non-computational extra something to it. What that something is Searle has not said much about (if anything) but I think the idea I’ve expressed here about governed by vs following might be a step in that direction.
I’ve just read someone’s description of his argument, and it seems much more focused on symbolic processing - and to that I would agree - merely manipulating symbols does not give understanding. The understanding is at the model level; symbols can be used by two entities to communicate, but they’re just a medium, not the message.
This is what Searle said (Searle 1984, p. 39):
1) Syntax is not sufficient for semantics.
2) Minds have content, specifically, they have semantic content.
3) Computer programs are entirely defined by their formal syntactical structure.
Conclusion: Instantiating a program by itself is never sufficient for having a mind.
It’s pretty clear he is saying you need a brain, can’t do it with a computer. I don’t agree with this.
No, he’s not saying that–like most philosophers, Searle doesn’t automatically mean “brain” when he says “mind”.
For Searle, computer programs are entirely defined by their syntactic structure, but that is not to say that computers are entirely so defined.
Well, then it’s unclear what he is saying. Does he think that the combination of hardware and software found in today’s computers has the possibility of realizing a “mind”, or does he think they need some other ingredient?
I think all ‘internal model’, or generally all representationalist, approaches face a common problem in the question of whose benefit the model is constructed for. I’ve alluded to it earlier – it’s the problem called the ‘Cartesian theater’ by Dennett, related to the ‘homunculus’ account of perception. In a nutshell, it’s this: if cogitation requires model building, then how can we perceive, think about, reason about the model? Because in order to do that, one would have to build another model, and so on ad infinitum. Perception provides the clearest example: in order to perceive the world (or some aspect of it), I gather, one would have to construct a model of it within the mind, presumably for the self to examine. But if that’s how perception works, then how does the self perceive the model? If it again needs a model, then the chain never terminates, and perception that way is impossible. But if it doesn’t need a model, then why would we need a model in the first place?
To me, the only possible way out seems to be getting rid of the rigid structure of model building completely – have things not make sense through their relation to some model, but rather, have them dynamically acquire meaning through their relations among one another. This is somewhat more abstract, but I think everything else leads to paradoxes of one sort or another (which then are typically resolved by assuming that minds need ‘something else’ to function, which I don’t think explains anything at all).
I think it still works, though: In the case where your hand guides the motion of the ball, while gravity acts on it, it is not gravity that compels it, as one could easily turn it off, and nothing about its motion would change. Gravity is optional, not necessary, which as I see it is somehow key to the notion of following vs. being governed by rules: in the Chinese room case, there is a sense of ‘could have done otherwise’ wrt applying the rules towards the Chinese symbols, whereas a native Chinese speaker can’t help but interpret a string of Chinese characters the way his rules dictate (…probably).
Oh do they ever! The history of civilization is just one long wandering from one local minimum to another, every time making the classical Humean mistake of insisting that this time, the way things are is how they ought to be.
I didn’t mean to point to that as a distinction between GAs and humans, but rather, to point to a general weakness of GAs as universal problem solvers; it just happens to be one we share.
I like that picture very much – the paradigm shift as breaking free from the confines of a local minimum. I’ll add that to my mental toolbox.
I don’t see a problem here. If we are reviewing our own thought processes, then we have consciously stepped back a level. I don’t think there is a need to go to infinity.
Sensory perception and written communication are not necessarily the exact same thing.
The context of the model was in translating written Chinese into something that represents understanding in the brain and responding. In this case, I think it would be difficult to argue that model building is not happening.
I think that there are multiple methods of processing happening simultaneously in the brain, some of which include modeling and some of which do not.
I just did a quick look at some websites regarding Dennett’s argument - so my impression is based on a 5-minute reading of wiki and others - but it appears his argument is about whether there is a special place in the brain in which consciousness arises.
Whether it’s localized to a subset of the brain (which doesn’t seem unreasonable) or distributed across the entire brain (which does seem unlikely), either way it doesn’t actually amount to an argument that model building does not happen.
Again, maybe because I have just skimmed the information I am getting the wrong impression, but it doesn’t feel compelling.
Why does it have to be rigid?
I believe this happens also and is part of the modeling.
Can you give an example of a paradox related to the proposed modeling/simulation that I think humans do?
I don’t think we really know how humans solve problems. I don’t know about you, but I often solve them with no self-awareness, since I give it to my subconscious and it eventually comes back with an answer, in a way totally hidden from me.
I agree that we are good about finding solutions from a wide range of fields, but that is kind of like recombining lots of possible GA solutions. I’m not saying that we work like GAs - only that GAs are creative in the sense that they produce answers that they are not directly programmed to produce.
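To illustrate that “answers they are not directly programmed to produce” point, here is a minimal toy GA in Python (the population size, mutation rate, and fitness function are all invented for the example): the code only scores candidates, and the alternating-bit pattern that tends to emerge is produced by selection, crossover, and mutation rather than written anywhere in the program.

```python
import random

random.seed(1)

LENGTH, POP, GENS = 20, 60, 80

def fitness(bits):
    # Reward bit strings whose neighbouring bits differ; the "answer"
    # (an alternating pattern) is never stated, only scored.
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

def crossover(p1, p2):
    cut = random.randrange(1, LENGTH)
    return p1[:cut] + p2[cut:]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]                       # selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]        # recombination + mutation
    population = parents + children

best = max(population, key=fitness)
print("best:", "".join(map(str, best)), "fitness:", fitness(best))
```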
Microprocessor design blocks come in four flavors. There is I/O, there is the embedded memory and the control for it, and then there are datapath and control blocks. Datapath blocks consist of fairly simple and straightforward processing of words or bytes. The floating point unit is a good example: two 64-bit words come in, they go through a pipeline where a certain arithmetic operation is performed, and the results pop out. Datapath logic is designed to be very fast.
Control logic makes all the decisions (for instance, what arithmetic operation is to be performed) and sends out signals saying, for instance, that the first operand needs to be inverted or blocked or something. Control blocks are typically big state machines (which produce an output given an input and their current state); they are very complex but don’t have to be very fast.
So, to revisit my comment, an instruction gets decoded by control logic, which figures out what it is to do, and sends signals saying that Register 5 is an input to this operation and that we need to start a read from memory, and then figures out that five clocks later the data is ready, so the operands can be sent to the FPU. No reconfiguration per se - the control signals block some inputs and let others through.
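Here is a rough software sketch of that picture (Python standing in for hardware; the register file, control ROM, and ALU below are invented for illustration): the datapath operations are all permanently wired in, and the decoded control signals just select which operands get through, whether one is inverted, and whether the result is written back; nothing gets reconfigured.

```python
# Hypothetical register file; R5 = 12, R2 = 3.
REGS = [0, 7, 3, 0, 0, 12]

def alu(op, a, b):
    # Fixed datapath: every operation is always wired in; control picks one.
    return {"ADD": a + b, "SUB": a - b, "AND": a & b}[op]

# "Control logic": decode an instruction into control signals.
CONTROL_ROM = {
    # opcode: (alu_op, invert_b, write_back)
    0x1: ("ADD", False, True),
    0x2: ("ADD", True, True),   # "subtract" by negating an operand and adding (a simplification)
    0x3: ("AND", False, True),
}

def execute(opcode, rd, rs1, rs2):
    alu_op, invert_b, write_back = CONTROL_ROM[opcode]  # control signals
    a, b = REGS[rs1], REGS[rs2]                         # operands gated into the datapath
    if invert_b:
        b = -b                                          # signal: invert second operand
    result = alu(alu_op, a, b)
    if write_back:                                      # signal: enable register write
        REGS[rd] = result
    return result

print(execute(0x2, rd=0, rs1=5, rs2=2))  # R5 - R2 = 12 - 3 = 9
```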
Holy moly. That wasn’t even correct in 1984! If it was, you could debug and understand a program just by looking at its structure without any sort of simulation. I wish.
In fact, the inadequacy of syntax was known over a decade before this. “If time flies like an arrow, what do fruit flies like?” It was well understood in the AI world that any language understanding program would have to understand semantics also.
The possibility - no, the certainty - that any program trying to exhibit understanding will work off of experiences stored somewhere pretty much means that syntax is not adequate to understand what a program does.
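To put the “time flies” example in concrete terms, here is a tiny hand-built sketch (no parser library, just nested tuples standing in for parse trees): the same word string admits two syntactically well-formed structures, and nothing in the syntax alone picks between them; that choice is semantic.

```python
sentence = "time flies like an arrow"

parses = [
    # Reading 1: "time" is the subject, "flies" the verb.
    ("S", ("NP", "time"), ("VP", "flies", ("PP", "like", ("NP", "an arrow")))),
    # Reading 2: "time flies" is a kind of fly, "like" the verb.
    ("S", ("NP", "time flies"), ("VP", "like", ("NP", "an arrow"))),
]

for p in parses:
    print(p)
```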
Thanks for finding the quote - I understand the reason he thinks the Chinese Room problem shows anything much better now.
BTW, I heard a very relevant segment on NPR’s On the Media on Sunday. A guy described his experience applying a Turing Test to a chatbot. It failed miserably, but it was clear that someone had put a lot of effort into supplying responses to common questions. When he asked it whether it was a chatbot, it responded by asking him to prove that he wasn’t one.
This sounds like the Chinese room card system, and shows why such a system would never pass a Turing test. If it stored the responses to its responses, and modified its rules to decide which to use, then we’d get closer, but we’d not be in a pure syntax zone.
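Something like the following toy sketch is roughly what I have in mind (all patterns, replies, and the scoring scheme are made up): a lookup-table responder of the chatbot sort, plus a crude memory of how its own responses were received, which it uses to decide which canned reply to prefer next time. Even that small step means the rules change with experience, so it’s no longer a fixed syntax-only rulebook.

```python
import random

# Toy lookup-table chatbot with a crude "store the responses to its responses"
# memory: replies that drew a hostile follow-up get down-weighted.
RULES = {
    "are you a chatbot": [
        "Can you prove that *you* aren't one?",
        "Why do you ask?",
    ],
    "hello": ["Hi there!", "Hello! How are you today?"],
}
scores = {reply: 0 for replies in RULES.values() for reply in replies}

def respond(user_text):
    """Pick a reply by pattern match, preferring the best-scoring one so far."""
    for pattern, replies in RULES.items():
        if pattern in user_text.lower():
            return max(replies, key=lambda r: (scores[r], random.random()))
    return "Tell me more."

def record_reaction(last_reply, user_followup):
    """Crude feedback: a hostile follow-up penalizes the reply that was used."""
    hostile = any(phrase in user_followup.lower()
                  for phrase in ("that's not an answer", "stupid"))
    scores[last_reply] += -1 if hostile else 1

reply = respond("Are you a chatbot?")
print("bot:", reply)
record_reaction(reply, "That's not an answer.")
print("scores:", scores)
```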
I’m not very well read in AI, much less 1980s AI, but Douglas Hofstadter for one certainly thought semantics could be reduced to syntax. I didn’t have the impression that he was considered crazy by other AI researchers for thinking this. Was he?
“Experiences stored somewhere” can still arguably be defined wholly syntactically, can’t they? Just list out what values are stored where in the mechanism. That’s pure syntax.
Okay okay but what I’m getting at is if you follow through on all this asking what it fundamentally is, you’re going to get down to physical objects which go into the makeup of the computer. In other words, components.
You said datapath blocks consist in “processing.” Fine, but what’s processing? I’m almost 100 percent certain that you’ll find that processing is the movement of components of the computer.
If the computer isn’t doing its thing by means of the movements of configured components, then computers are incredibly mysterious indeed!
Of course, I’m speculating. But I think it’s reasonable to doubt that we follow an intensely iterative GA approach to problem solving. It’s my intuition based on introspection.