The concept of a philosophical zombie makes no sense to me

To me it always raises the question of what’s the point of qualia in the first place. A naturally evolved p-zombie seems better than a lot of regular animals in many respects. It brings up the old joke question of why pain has to hurt so much. Why be frozen in fear or wracked by hunger pangs? Why can’t the brain just coldly calculate that it has to put more weight on the other foot to prevent further damage? Like Arnie says in T2, “I sense injuries. The data could be called ‘pain.’”

Searle and the Chinese Room Argument don’t argue against the possibility of conscious AI in general, nor do they have anything to do with meat-over-silicon chauvinism. The argument is about how computers work at a fundamental level versus how the brain does it. It says Siri and chatbots don’t understand you, and you might already suspect that because of how crude they are; but it also says Moore’s law won’t make them understand, no matter how convincing they seem, because that’s just giving the guy in the room more papers and filing cabinets. It’s syntactic turtles all the way down.

The argument might be wrong, but pointing out the room’s physical improbabilities still misses the point. It’s like telling Einstein that you can’t ride beams of light and that objects with mass can’t travel at c. And the whole bit with half-dead cats is just silly. We still can’t keep brains alive in jars, or give them inputs that produce a coherent experience mimicking reality. We can make something like Maxwell’s demons now, but they don’t violate the laws of thermodynamics.

But people and computers play chess with exactly opposite strategies. The number of possible moves within, say, fifteen turns is beyond human mental capacity, so humans use what experience and knowledge they do have to form intuitive guesses, based on a complex, non-linear sense we call “elegance” or “beauty”.

By contrast, computers simply use their speed and memory to try millions of candidate lines in the time allotted and use statistical analysis to pick the one with the best overall chance of winning the game. We don’t have a “game tree” for chess (a list of every possible chess game that could be played), but if we did, and a computer could process that much information quickly enough, it could simply follow the tree mechanically to either victory or a draw. It’s a bit like how the Four-Color Map Theorem was “proven” not by a conventional logical argument but by reducing all possible maps to a manageable number of equivalent configurations, every one of which a computer then checked. Tremendous number-crunching power is extremely useful, but it’s not intelligence.
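To make concrete what “mechanically following the tree” would mean, here is a minimal sketch of plain minimax in Python. The `game` object and its `is_over`, `score`, `apply`, and `legal_moves` methods are hypothetical stand-ins, not any real chess library; a real engine adds pruning and heuristic evaluation precisely because the full tree can’t actually be walked.

```python
# A bare-bones minimax over a hypothetical "game" object; purely illustrative.
# It blindly walks the whole tree, which is exactly why real engines need
# pruning and heuristic evaluation instead of exhaustive search.

def minimax(game, maximizing=True):
    """Return the best score reachable from this position by exhaustive search."""
    if game.is_over():
        return game.score()                      # e.g. +1 win, 0 draw, -1 loss
    child_scores = [minimax(game.apply(move), not maximizing)
                    for move in game.legal_moves()]
    return max(child_scores) if maximizing else min(child_scores)
```

Given impossible amounts of time and memory, this blind walk alone would play perfectly, which is the point: nothing in it looks like understanding.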

True. But who’s to say we won’t have computer programs that do it that way, if not today, then in fifty years?

I would hesitate gravely to declare that “This can’t be done.” That was the point I was trying to make with computer chess. People did declare it couldn’t be done…and, not long after that, it happened.

I know you’re not pulling a “No True Scotsman” argument here, but I think that some people essentially are. They define “understanding” as something human brains do, and then crow, “Computers can never understand.”

The two points I think are interesting are: how do we know that computers will never truly understand? And…if a computer emulates understanding to such a high degree that the results are indistinguishable from true understanding…then what is the difference?

It keeps coming back to the Turing Test. If I can’t tell (if no one can tell) whether the correspondent is an AI system or a real person…then how is the AI system not a real person? What, exactly, is missing that is necessarily definitional of personhood?

I couldn’t disagree more.

In this case, “understanding” is an internal attribute; it’s not even available to external evaluation except by indirect and imprecise methods.

As stated previously, there is a random number sequence somewhere out there that would produce outputs that appear completely intelligent and full of understanding, but it’s just coincidence.

Isn’t there some way to leverage “I think therefore I am” to draw some sort of line?

If it is absolutely unavailable to external evaluation, then it is a nonsensical proposition.

If it exists solely as an internal attribute, then I can deny it exists and this denial cannot be refuted; likewise, I can claim that a given computer program possesses it, and that claim, also, cannot be refuted.

My problem with this is that when you define “understanding” as an internal attribute, you immediately start running into trouble, and have to undergo more and more elaborate contortions of logic to maintain the illusion that the phenomenon exists at all. For starters, you paint yourself into the corner of having to define that internal attribute in increasingly irrefutable or inaccessible ways, as Trinopus points out. Without some externally visible correlation, the whole idea of “understanding” is incoherent. I’m sure someone could work in a comparison to Wittgenstein’s private language argument here, but I’m too lazy to do that.

I am not at all sure that we can be obliged to consider strictly non-deterministic phenomena in the same category. Nonetheless, you’ve begged the question by presupposing a determined output for this generator (eternally, one must suppose, or else it would be possible to distinguish it from an ordered system!) while at the same time calling it “random.”

Well, of course **I** *understand* English, and I am conscious and self-aware. But **RaftPeople** lacks all internal attributes; it’s simply a random number sequence generating appropriate output given certain inputs.

Prove me wrong.

There was an article in Scientific American, some time back, which suggested that understanding is necessary for certain kinds of associational thinking.

If you can get from a race car to an ice cream cone in three steps, that requires comprehension and context.

e.g. race car, race course, audience, ice cream cone vendor.

You can’t do that kind of thing on a regular basis without having a solid idea of large “clouds” of connotations. This is another argument for the kind of AI that has to learn about its world, slowly building up such a catalog of associations. It can’t be hard-programmed, only learned – the way all of us learned such things.
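As a rough sketch of what such a catalog of associations might look like computationally, here is a toy graph search in Python. The specific links are invented for illustration, and nothing here is claimed to be how brains actually store connotations; the point is just that the race-car-to-ice-cream chain only works if a large web of learned associations already exists to search through.

```python
# Toy association "cloud" and a breadth-first search over it.
# The links are invented placeholders, not real learned data.

from collections import deque

associations = {
    "race car": ["race course"],
    "race course": ["audience"],
    "audience": ["ice cream cone vendor"],
    "ice cream cone vendor": ["ice cream cone"],
}

def association_chain(start, goal):
    """Return a path of associations from start to goal, if one exists."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in associations.get(path[-1], []):
            queue.append(path + [nxt])
    return None

print(association_chain("race car", "ice cream cone"))
```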

In their strongest version, yes, the possibility of p-zombies existing entails that physicalism is wrong, because such a p-zombie is an exact physical duplicate of a conscious person that nonetheless lacks any conscious experience. Thus, the physical facts themselves do not determine whether an entity is conscious; there must be ‘further facts’ that do, and hence the physical facts don’t exhaust all facts about the world, and physicalism is false.

This is usually made plausible by telling a story such as the following: we can imagine a humongous lookup table that replicates the behaviour of any conscious being (for any finite, but unbounded length of time) by simply pairing the outputs appropriate to a conscious being with the inputs it receives (where the input is always the entire history of the interaction with the lookup-table entity up to the present point; otherwise, you could defeat the entity by simply asking ‘what was the last thing I said?’, or something similar). Most people would agree that such a being isn’t conscious, even though it produces all behaviour that we would expect from a conscious being.
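A toy version of that lookup-table entity, just to make the setup concrete: its only “mechanism” is retrieval from a table keyed on the entire interaction history so far, which is why “what was the last thing I said?” can’t trip it up. The table entries below are invented placeholders, and a real table of this kind would of course be impossibly large.

```python
# Toy lookup-table entity: nothing but retrieval from a table keyed on the
# whole history of inputs. Entries are invented placeholders.

responses = {
    ("Hello.",): "Hi, nice to meet you.",
    ("Hello.", "How are you?"): "Fine, thanks. And you?",
    ("Hello.", "How are you?", "What was the last thing I said?"):
        "You asked how I was.",
}

def reply(history):
    """Return the canned output paired with this exact history of inputs."""
    return responses.get(tuple(history), "<no entry for this history>")

print(reply(["Hello.", "How are you?", "What was the last thing I said?"]))
```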

But fine, you say, that’s just one way to set up a conscious being. So all we’ve shown is that in this way, you can produce consciousness-appropriate behaviour without the attendant conscious experience; but there may be other ways to produce this behaviour that are accompanied by subjective experience. After all, we’re certainly not lookup-table entities. (Note, however, that this already throws up some interesting questions: if consciousness is not instrumental in generating behaviour, then why do we have it? Evolutionary selection only acts on behaviour, so it could not select for consciousness. One way to argue is perhaps that while consciousness is not necessary to produce the right behaviours, it is sufficient, and perhaps the processes giving rise to consciousness (and thus to consciousness-appropriate behaviour) are simpler than those producing the behaviour without the conscious experience; certainly, if the lookup table were the only way to produce the right reactions without consciousness, it couldn’t be selected for, because it’s simply impossible for such a lookup table to exist in our universe.)

However, what we have established is that how an entity—how some region of space-time—reacts to our prodding it, i.e. what behaviours it produces if we, say, ask it questions (or point a gun at it), does not tell us anything about whether or not that entity is conscious, or whether that region of space-time contains a conscious entity. That is, mapping the causal behaviour of something does not establish whether that something is conscious—there always exists a way to give rise to the same causal reactions without any attendant experience. But, all we know about the physical is essentially its causal structure—any material, physical object we know only by prodding it and observing how it reacts to this prodding. Physical facts, or at least, our knowledge of them, are thus reduced to causal facts—say, we do experiments, observe their outcomes (prod some region of spacetime), and create theories to explain the data.

Thus, if we replaced any region of space-time, any physical object, with something that reacts identically to causal probing, we could not tell any difference. Anything we do to that replacement would yield the exact same results as before. So we can take our lookup-table pseudoconscious entity, which we have treated as a ‘black box’ so far, and carve it up into smaller black boxes, each of which has the same causal dispositions as some small part of a human being, and each of which has them without any attendant conscious experience. So we know that the large black box has no conscious experience, and neither has one of the smaller ones.

And we can continue playing this game, making the small black boxes ever smaller: first, they may be regions of the brain (frontal lobe black box, hypothalamus black box, and so on) that have the property of reacting to any causal probe in just the same way as the analogous regions in a human brain do, but doing it without conscious experience, say, by consulting a lookup table. Then just carry on, to more fine-grained brain structures, to cortical columns, to individual neurons, hell, if you insist, all the way down to molecules and atoms. At any point, you can just replace the structure by one that is identical regarding causal dispositions, and hence, physically identical; but if the lookup table at the beginning wasn’t conscious, then neither will any of the smaller lookup tables be.

Now, the advantage is that we slowly get down to lookup tables of a much more manageable size—an individual neuron has a quite limited causal structure, compared to a whole human being. Ultimately, we then have a being that appears to all probes physically identical to a human—say, a specific human being, such as you yourself—but that seems to lack any and all conscious experience, being composed merely of lookup tables (at whatever resolution you think suffices).

We can then, to drive home the point, imagine going back up: a single neuron is, in this picture, nothing but a list of conditions under which it fires. We can join two neurons, to get a more complicated list; join those with a bunch of others, increasing the complexity yet more; ultimately, recreate a great master list sufficient to emulate all your behaviour. But crucially, at no point did something ‘extra’ appear: we started out with a lot of small lookup tables, and proved that it is equivalent to a big lookup table. So that whatever produces consciousness is not in all those small lookup tables—but neither is it somehow in their interplay, at least not necessarily, since that can be replaced itself by a big lookup table. So then, it seems at least logically possible that a being physically identical to you, but with ‘lookup tables’ dictating the behaviour of its neurons (molecules, atoms…)—i.e. with whatever scale you think is sufficient replaced by little black boxes causally identical to the stuff in your head—could exist, without possessing any conscious experience.
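Here is a small sketch of the “join the lookup tables” move in Python. The two toy neuron tables are invented (real neurons are nothing like this simple), but the step being illustrated is the one in the argument: wiring two tables together can itself be tabulated in advance, so nothing extra appears at the combined level.

```python
# Two toy "neurons", each nothing but a table from inputs to fire/don't-fire,
# and a demonstration that their combination collapses into one bigger table.

from itertools import product

neuron_a = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}   # behaves like OR
neuron_b = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}   # behaves like AND

def two_neuron_circuit(x, y, z):
    """Feed x, y into neuron A, then feed A's output and z into neuron B."""
    return neuron_b[(neuron_a[(x, y)], z)]

# Precompute the combined behaviour: the two small tables become one big one.
big_table = {(x, y, z): two_neuron_circuit(x, y, z)
             for x, y, z in product((0, 1), repeat=3)}

assert all(big_table[k] == two_neuron_circuit(*k) for k in big_table)
print(big_table[(1, 0, 1)])   # 1, exactly as if the two small tables ran in turn
```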

But if that’s the case, then physicalism is false: physics only accounts for the facts regarding the causal behaviour of an object (a lifeform, a material object, or, most generally, a region of space-time), but consciousness is not captured by these causal facts, as there exist entities with the same causal dispositions but no conscious experience. Thus, the physical facts underdetermine the question of whether or not there is conscious experience.

Well, it gets down to what you call a “computer”. To mention Penrose again, he didn’t claim that there was something magical about evolved brains made of cells made of protein and DNA; he freely admitted that you could have an AI. What he’s extremely dubious about is the claim that intelligence can be reduced to algorithms: a blind, automatic process of following step-by-step instructions. The reason this makes a difference is that we know precisely, in a mechanistic way, how computers work (we built them, after all). So if you presume that a computer can pass the Turing test, you can say, “Voilà! See, no ghost in the machine required!” But that is a presumption. You could make a machine that does everything a human brain can do, but it might not meet the definition of a computer as we currently understand the term.

Yes, I see your points, and I can see that there can be difficulties with calling it an internal attribute, but I think it is still accurate. If we rely on external output only, the problem is even worse, because scenarios that I think we would all agree do not constitute understanding (rolling dice that happen to produce the right answer) get included in that group.

In addition, we frequently run into situations with our own brains in which there is understanding but no output. For example, lurkers in this thread may understand various points, yet that understanding generates no relevant output.

If we go with your definition, then suddenly the lurkers of this thread are considered not to understand what is going on, when in fact some of them (most likely) understand it even better than any of us posters.

This is a tangent, but I love reading about research that keeps showing the neuron is much more complex than previously understood. Dendrites spike locally, both forward and backward, with different signaling characteristics resulting in different functions being performed on the data as it moves down the tree; the net result is that each neuron is really a “neural” network all by itself.

Note: when you say “neither will any of the smaller…”, I assume you mean “the system won’t be conscious even if the smaller lookup tables are used”, as opposed to “the smaller lookup tables themselves are not conscious”, which would be a different argument and one more easily countered.

The quoted argument seems to be a leap of logic that makes some assumptions.

If we replaced every neuron with a lookup table, we really don’t know what the result would be. My opinion is that it would result in the same level of consciousness as long as every physical attribute was identical (electrical and chemical changes to the area the neuron occupies).

Half Man Half Wit, just to be a little clearer about my previous post: I understand the notion of moving the dial along the continuum of lookup tables, from the entire system down to a neuron or an atom, and everywhere in between.

It may seem like my response assumes that at X number of lookup tables consciousness suddenly exists, and that the logical question is “what is X?” But really my response is more a statement about our lack of knowledge of what’s going on; we can’t just assume that neuron-level replacement won’t result in the same experience (with consciousness).

Well, but any conglomeration of lookup tables itself can be replaced by a lookup table. Therefore, there is a way to combine the neuron (molecule, atom, quark…) lookup tables in such a way as to not lead to consciousness. Therefore, it’s logically possible for the combination of lookup tables as they are in our heads to not produce conscious experience. But then, there exists a possible world (in the modal logic-sense) which is physically identical to ours, but which lacks conscious experience. And if that’s the case, then physicalism is false.

For this conclusion to follow, one need not show that it isn’t possible for there to be conscious combinations of lookup tables, one merely needs to show that there is such a combination that isn’t conscious—which, strictly speaking, I haven’t done, of course, since the task is way too humongous to actually complete; but the idea as such contains no obvious inconsistencies, and can be coherently entertained. Thus, we can imagine a world in which the same sort of things go on, physically, but which lacks conscious experience; this world isn’t ours, and in our world, it probably isn’t possible to create zombies. But all the argument needs to establish is the lack of a necessary connection between physical facts and consciousness, which only necessitates the establishment of things being possibly different. A proponent of the zombie argument claims that it shows this possibility.

We don’t know that. We truly do not know what the result would be.

But a p-zombie built from lookup tables is not physically the same as our system. Whether it is conscious or not really doesn’t speak to whether there is some physical attribute of our system that is required for consciousness.

The big problem with the “lookup table” intuition pump is that it doesn’t explain where the lookup tables come from. Did they appear by chance, at random? In an infinite universe such random lookup tables could emerge spontaneously.

Or were they generated by a massive, competent AI super-intelligence? I’m in the early stages of writing an SF story about this possibility, based in part on the discussions on this board. A group of astronauts arrive at an asteroid where a super-competent AI has simulated an entire society: a small town filled with apparently real people. But these people are generated by the computer, using algorithms that reproduce the reactions of real people, like hyper-realistic chatbots.

Over time the astronauts are integrated into this simulated society, and even have children with its inhabitants thanks to the magic of artificial gestation. In a few decades or centuries the asteroid is filled with a population, half of which is simulated and half real human. The great problem is that no one (except the competent AI) is capable of determining which is which.

The thing about this scenario is that the simulating AI is effectively creating entities which are conscious, according to any and all tests; are they philosophical zombies, or is the phenomenon of consciousness actually manifest in the AI rather than the simulant? I tend towards the latter possibility.

eburacum45: Nifty idea for a story! Write it and publish it! I’ll buy a copy!

However the mind works, at base it has to be composed of some kind of “blind automatic process”. Even if there were some sort of magic vitalistic force, it would still have to be a “blind automatic process”. At some point, when you break down the structure of a mind enough, you are going to find something that isn’t a mind; you can’t have some kind of infinite regress of minds within minds within minds forever.

I don’t think you could, given how broad and abstract the definition of a computer is. And you’d also have to somehow handwave away all the things we already know the brain does as not actually mattering. You’d have to show that all those firing neurons, all the things we think we understand about the brain, are both irrelevant and yet, at the same time, look exactly like an information-processing system.

There was a lot of early interest in neural nets, although that doesn’t seem to have borne a whole lot of fruit.

In theory, a fully-sensitized neural net is, still, algorithmic; it’s just that the algorithm is dispersed so broadly that it defies practical description.
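For what it’s worth, here is a sketch of why even a trained net is “still algorithmic”: once the weights are frozen, evaluating it is just a fixed sequence of arithmetic. The weights below are made-up numbers, not taken from any real network.

```python
# A tiny two-layer network with frozen, invented weights. Once training is
# done, "running the net" is nothing but this fixed sequence of multiplies,
# adds, and a squashing function, i.e. an ordinary algorithm.

import math

W1 = [[0.5, -1.2], [0.8, 0.3]]    # input -> hidden weights (invented)
W2 = [1.5, -0.7]                  # hidden -> output weights (invented)

def forward(x):
    """Deterministic forward pass: weighted sums plus tanh nonlinearity."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

print(forward([1.0, 0.0]))
```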

It is entirely possible that some machine of this sort (or even of some sort not yet devised) could be made to do all the things a brain can do, and it might well bear little or no resemblance to the computers we know today.

But at the heart of it all, I hold with what Der Trihs said: at some point, our own brains are made up of the “machines” that are atoms. So either other machines can do the same thing via a process of emulation, or else there’s a supernatural or metaphysical “spark” that is not machine-emulatable. Roger Penrose tried to argue that it involved some hoodoo of quantum physics, but I think he did not defend this notion sufficiently.

(I think the poor bloke was off his flipping nut, to be honest.)