Besides, it’s of course quite possible to write an algorithm that will spit out a list of halting TMs: just have it simulate every TM according to some enumeration, dovetailing the simulations so that no single non-halting machine blocks the rest, and output each one that eventually halts. This does not solve the halting problem: for any given TM, even if it has not been output yet, you can’t exclude that it will eventually be output, and thus you don’t know whether it halts, and will never know if it doesn’t.
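To make the dovetailing concrete, here’s a minimal Python sketch. The machine function is a stand-in of my own devising (in this toy version, machine n loops forever when n is divisible by 3 and otherwise halts after n steps); a real version would single-step an actual encoded TM from the enumeration.

```python
from itertools import count

def machine(n):
    # Stand-in for the n-th TM in some enumeration: loops forever
    # when n % 3 == 0, otherwise halts after n simulated steps.
    steps = count() if n % 3 == 0 else range(n)
    for _ in steps:
        yield  # one simulated step

def enumerate_halting(rounds):
    # Dovetail: in round k, start machine k, then advance every
    # machine started so far by one step; print those that halt.
    running = {}
    for k in range(rounds):
        running[k] = machine(k)
        for idx, sim in list(running.items()):
            try:
                next(sim)
            except StopIteration:
                print(f"machine {idx} halts")
                del running[idx]

enumerate_halting(20)
```

Note that a machine absent from the output so far tells you nothing: it may halt in a later round, or never.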
I don’t get what the OP doesn’t get.
A p-zombie is a machine / entity that maps external inputs to behaviours without the feeling/experiencing part in the middle. Even if you disagree that it’s possible to make such a thing in practice, that doesn’t make the concept invalid / nonsensical.
The disagreement is whether it’s impossible even in principle. My intuitive guess is that while an automaton could approach any level of verisimilitude, there would always be some point at which it would fail; and there’s still the question of where this programming would come from in the first place. If you say “just run a program that simulates a brain”, then the question becomes whether in fact a computer can simulate anything. We do know that there are complex phenomena that appear to be computationally irreducible: phenomena that can’t be reproduced by any calculation shorter than just running the experiment itself.
But what exactly do you mean by that? If you mean practically impossible, then it doesn’t affect the argument, and it’s no reason not to grok the argument either. If you mean that it is self-refuting, then you haven’t given any reason to think that because all your arguments have related to the practical side.
Yep, and that’s why I’d never use those words. I’d say run a program that responds to the same inputs with the same outputs.
It’s trivial to show that for a simple system there are multiple ways of achieving the same behaviour. I don’t see any reason to doubt that it’s possible in theory to make a system that responds to at least some stimuli in the same way that I do, but without the “inner experience” bit that I have in my mind.
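As a toy illustration of that triviality, here are two Python “systems” with identical input/output behaviour over a shared domain, one of which computes its answer while the other merely retrieves it from a table built in advance (everything here is illustrative, of course):

```python
def double_by_arithmetic(n: int) -> int:
    # One way: actually carry out the computation.
    return n * 2

# Another way: a pure lookup table, built ahead of time; at "run
# time" nothing happens inside but retrieval.
DOUBLE_TABLE = {n: n + n for n in range(1000)}

def double_by_lookup(n: int) -> int:
    return DOUBLE_TABLE[n]

# Externally indistinguishable on the shared domain:
assert all(double_by_arithmetic(n) == double_by_lookup(n)
           for n in range(1000))
```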
I would say that is disputed, not a known thing. But the position you are hinting towards is broadly where I stand on this. I don’t assume that all phenomena are computable and actually I doubt it.
This is generally the first response people give on hearing and understanding the thought experiment. It’s also the first objection Searle replies to in the very paper under discussion. His response is to point out that we can simply modify the thought experiment so that the entire process takes place inside the man, rather than spread out in the room he’s in. So he’s memorized all the books and rules etc. You give him Chinese writing, and he gives back Chinese writing that seems to make sense as a Chinese conversation. He’s acting for all the world like an entity who understands Chinese (writing, anyway… and the thought experiment could be further modified to include speech, albeit even more implausibly). Yet he doesn’t understand a word of it.
But he isn’t the entity who understands Chinese. The entity who understands Chinese is the “phrase book” he’s somehow memorized.
The concept of a philosophical zombie is nonsensical even in principle, because for inputs to result in sensible outputs, there must be some sort of processing in between, and that processing is what “consciousness” or “soul” or whatever you call it is.
As with the question of consciousness and determinism, it boils down to this: from the OUTSIDE you cannot distinguish between the aforementioned philosophical zombie and a real person with a real “soul”-or-whatever. But to the individuals themselves, it makes a difference. The philosophical zombie ain’t conscious; the other knows they are and knows they aren’t a philosophical zombie.
But of course, if you buy into the ‘systems reply’ in the first place, then that’s the exact behaviour you should have expected: effectively, the man carries out a simulation of a Chinese-understanding person, such that to hold that he himself should therefore understand Chinese would be a level confusion. If you asked the man, in Chinese, for his favourite colour, he could easily give a different answer than if he were asked in English, so the beliefs of the Chinese-speaking entity are not those of the English-speaking entity. So we might have a bit of schizophrenia here, but no indication that an algorithm does not suffice for understanding.
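A toy sketch of that level distinction, with entirely made-up phrases: one host process dispatches between two independent response tables, and the two “speakers” needn’t share any beliefs:

```python
# The "man" as a mere host: he dispatches between two response
# tables whose answers (illustrative only) can disagree.
english_speaker = {"favourite colour?": "blue"}
chinese_speaker = {"最喜欢的颜色？": "红色"}  # answers "red" in Chinese

def the_man(utterance: str) -> str:
    for speaker in (english_speaker, chinese_speaker):
        if utterance in speaker:
            return speaker[utterance]
    return "?"

print(the_man("favourite colour?"))  # "blue" -- the English entity
print(the_man("最喜欢的颜色？"))       # "红色" -- the Chinese entity
```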
The problem is that in the chain of cause-and-effect connecting stimulus and reaction, one never needs to talk about consciousness at all; thus, one can at least imagine that chain occurring without any conscious experience. Therefore, it’s possible for it to occur without giving rise to consciousness, and hence, there is no necessary entailment from it to the existence of consciousness. But then, there must be factors going beyond that causal description that are responsible for its being accompanied, in certain creatures (i.e. us), by subjective experience.
Anyone care to explain to me why the Chinese Room thought experiment is used to talk about consciousness? Look, anyone who wants to tell you that they understand how consciousness works (barring some massive leap forward in neuroscience that came out in the time between me last checking and the time of me posting this) is lying. What’s that got to do with the Chinese Room experiment? Well, assuming you’re using it to try to discount the idea of AI ever achieving consciousness… The Chinese Room thought experiment, like so many other thought experiments dealing with consciousness, implicitly assumes a certain definition of “consciousness”. It implies that our consciousness does not work via input->calculations->output; something which is utterly unfounded, yet taken so for granted that it can be smuggled into the syllogism without thought or oversight. I mean, maybe this isn’t the point, but isn’t the problem of “hard AI” defined as the difference between AI and consciousness?
What definition would that be?
Well, it’s an argument which, if it is sound, shows that understanding does not work in an algorithmic way—it does so by assuming that there is such an algorithm, and then showing that despite the algorithm being followed, no understanding is present in the system implementing it. Frankly, I’m not sure what you think is wrongly being taken for granted.
Because there’s no reason to believe that our own consciousness does not function algorithmically.
Well, that’s what the argument purports to demonstrate, so if there’s no rebuttal of the argument, then there is a reason to believe that…
What about the known limitations of computability?
How can there be any doubt of that? Computer models excel at simulating heat flow through electronic components, just as one example. Nobody would design a radio or TV set without a simulation of heat flow and heat build-up, and computers do this with wonderful precision.
There are absolutely gobs of things computers simulate. The question seems absurd. Modern weather forecasting is amazingly accurate, compared to what it was forty years ago. How come? Computers!
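For the flavour of it, here’s a minimal sketch of the kind of calculation such simulations rest on: an explicit finite-difference solution of 1-D heat flow along a rod (all material parameters and boundary values below are illustrative, not taken from any real design):

```python
import numpy as np

alpha = 1e-4        # thermal diffusivity (m^2/s), illustrative
dx, dt = 0.01, 0.1  # grid spacing (m) and timestep (s)
assert alpha * dt / dx**2 <= 0.5  # explicit-scheme stability condition

T = np.full(100, 20.0)  # rod initially at 20 degC throughout
T[0] = 100.0            # one end held at 100 degC (a hot component)

for _ in range(10_000):
    # Discrete Laplacian: heat flows toward the average of the neighbours.
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0], T[-1] = 100.0, 20.0  # fixed-temperature boundaries

print(T[::10].round(1))  # temperature profile along the rod
```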
Well, one limitation of conventional computers lies in randomness: no algorithmic way exists of producing true randomness. While it’s true that deterministic computers and computers with a source of randomness can compute exactly the same functions, it’s not true that the former can match the latter’s performance in all cases. What I mean by this can best be explained by example.
Consider the following cop-and-robber game: there are two houses, one of which a robber is in the process of, well, robbing. A cop is attempting to catch the robber. At any timestep, both cop and robber can either switch houses or stay. When the cop enters the same house as the robber, the robber is caught. Now, the police training manual lists an algorithm for searching both houses (the form of this algorithm is irrelevant). However, if you suppose that the robber is an ex-cop who himself knows the house-searching algorithm, he can elude the cop indefinitely, by simply staying one step ahead of her.
But now suppose the cop has access to a source of randomness, that is, she can at any timestep make the decision to either stay or switch randomly. Now, there is no algorithm the robber can follow such that he eludes the cop; in fact, if the robber is deterministic, he will be caught with certainty, and even if the robber himself has access to randomness, he will be caught with probability 1 in the limit of long sequences. So, the addition of randomness transforms the game from always-lose against certain kinds of robber to always-win.
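Here’s a small Python simulation of that game, with illustrative strategies of my own choosing: the deterministic “manual” cop is foreseen forever by the robber who knows her algorithm, while the coin-flipping cop catches him almost immediately:

```python
import random

def play(cop_strategy, robber_strategy, max_steps=10_000):
    # Both players move simultaneously each timestep; the robber is
    # caught as soon as cop and robber occupy the same house.
    cop, robber = 0, 1
    for t in range(max_steps):
        cop, robber = cop_strategy(t, cop), robber_strategy(t, robber)
        if cop == robber:
            return t
    return None  # robber eluded the cop for the whole run

manual_cop    = lambda t, pos: 1 - pos  # the manual: switch every step
ex_cop_robber = lambda t, pos: 1 - pos  # knows the manual, stays ahead
random_cop    = lambda t, pos: random.randrange(2)  # coin flip each step

print(play(manual_cop, ex_cop_robber))  # None: never caught
print(play(random_cop, ex_cop_robber))  # caught at some small step, w.p. 1
```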
Now, a deterministic computer can simulate that in the sense that it can just try all possible switches, effectively simulating every possible strategy for both the cop and the robber; eventually, it will also find the one they actually took. But this is clearly something different from ordinary simulation of a system: think of trying to do a weather forecast by simulating all possible weather conditions, without any possibility of choosing one over the others—you’d get out no information whatsoever.
When I responded to Lumpy’s post upthread, I took it as asking something like whether computers can ever really emulate a real-world phenomenon.
The fact that computers can make predictions about real-world phenomena is a subset of the fact that mathematics can make useful predictions. It’s taken by some as proof that maths is somehow “out there”, independent of humans, and we discover it as much as invent it.
But I see maths as just a set of tools for manipulating information. Like an extension pack for the human mind. These tools can be usefully applied to the real world only because we insist that mathematics be self-consistent and logical, and the real world also appears to meet these criteria.
It’s rather like how the inference works that you are mortal, given the known information that all men are mortal and you are a man.
So, say I write a program to simulate a storm on my computer. What my program is arguably doing is simulating me, working with a pen and paper, manipulating the known information to find out things novel to me.
But my method for working out this information, and what’s happening inside a storm, are not necessarily the same thing.
And we know we have access to true randomness how?
Quantum mechanics—in fact, almost all probabilities in everyday life have a uniquely quantum origin. Even in the highly constrained setting of a billiard table, the usual paragon of Newtonian mechanistic behaviour, after about 6-8 collisions (IIRC), the dominant contribution to the path uncertainties comes from quantum uncertainty.