The concept of a philosophical zombie makes no sense to me

And a quantum computer would still be a computer, anyway.

“I eat brains, but I do not think. Therefore I am not.”

Well, we know that there is one way to combine all those lookup tables that doesn’t result in conscious experience: simply concatenate them to form a bigger lookup table. This demonstrates the logical possibility of combining lookup tables in such a way as to not lead to conscious experience, which is all the argument needs. And contrariwise, we don’t know of any way to combine the lookup tables that does lead to conscious experience.
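
For what it’s worth, here’s a toy Python sketch of what ‘simply concatenate them’ could amount to; the neuron names and input labels are invented purely for illustration:

[code]
# Toy sketch: two lookup tables for two hypothetical neurons, each mapping
# an input condition straight to an output, with no processing in between.
neuron_a = {"spike_from_x": "fire", "silence": "rest"}
neuron_b = {"spike_from_a": "fire", "silence": "rest"}

# 'Combining' them by concatenation just yields a bigger table; nothing new
# happens beyond more rows of condition -> response.
combined = {("a", key): out for key, out in neuron_a.items()}
combined.update({("b", key): out for key, out in neuron_b.items()})

print(combined[("a", "spike_from_x")])  # -> "fire"
[/code]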

It is, if you fine-grain the resolution enough—as I pointed out, any physical object is known to us only by its causal interactions, so replacing any chunk of the physical world by something that has the same causal dispositions, even if they are mediated by a lookup table, does not make a detectable physical difference.

Maybe for a different tack, consider what happens if somebody kicks you in the shin. Certain nociceptors will light up, sending signals to the brain via the spinal cord, which are then processed in such a way as to lead to, e.g., an avoidance behaviour, and some vocal exclamations—an ‘ouch’ and maybe some choice string of expletives directed at the shin-kicker. The thing is, at no point during this chain do we have to appeal to your subjectively experiencing any pain at all—all your behaviour is completely and gaplessly explained without it, just as a string of causal interactions.

But then, it seems at least logically possible that the same string of causal interactions could occur without any attendant conscious experience. Hence, there does not seem to be a necessary identity between the causal story and the subjective experience—talking about the physiological processes does not necessitate talking about the attendant experience, the way, say, talking about H[sub]2[/sub]O necessitates talking about water. But without such a necessary identity, there must be some further fact that needs to be fixed in order to determine whether there is attendant conscious experience.

That’s a possibility, but really, it doesn’t matter for argument’s sake; all we need to agree on is that such a lookup table is logically possible.

If you are talking about solving the problem of consciousness, that is certainly true (although IMO they will play a strong role when we finally get to that type of thing), but if you are talking about general problem solving in various domains then the statement is really incorrect. Neural nets are being used all over the place and their usage is increasing, rapidly.

1 - We don’t know that a single lookup table doesn’t result in consciousness. It seems odd to say it but we don’t truly know.

2 - You say we don’t know any way that “does”, but in reality we don’t know for sure if it does or does not. If the neurons are swapped, we honestly do not know what the result would be. We can’t conclude anything from that.

I see the point, but I think it fails due to assumptions that cannot be considered facts.

You could make the same argument about when consciousness disappears: when the atoms get replaced? When the neurons get replaced?

If you can’t state precisely when and why the consciousness disappears then it’s all just guessing, and those guesses could be wrong.

Quote="eburacum45 "]The big problem with the ‘lookup table’ intuition pump is that it doesn’t explain where the lookup tables come from. Did they appear by chance, at random?

[/Quote]

Well, that is the point of my little fable about the asteroid. Although it is possible that such a lookup table might emerge at random, like a Boltzmann Brain, it is far more likely that the lookup table would be the product of processing by a very competent entity, perhaps an AI that actually reacts to the environment (both real and hypothetical) by modelling the behavior of a conscious being.

To write such a lookup table the entity would need to be capable of asking itself constantly ‘what would a conscious being do in this situation’, and of getting the answer right. To do this it would be necessary to model a conscious being so exactly that it would have instantiated one.

I’m glad to hear that; there has seemed to me to be a complete silence on the subject in the popular literature (Scientific American, etc.)

Well, for high enough values of skepticism, we don’t know anything. But I’m more certain of lookup tables not being conscious than of the current location of my keys, and I’d readily say that I know where my keys are. Besides, consciousness seems at minimum to necessitate some sort of inner configuration to mirror the outside world, to have the sufficient richness to give rise to a representation of it; a lookup-table automaton lacks this.

:confused: If we don’t know if it does or doesn’t, then we don’t know that it does.

Could you make that more precise? What do you think is falsely assumed?

I’m not following. What do you mean by consciousness disappearing? In the passage you quote, I merely pointed out that the fact of you crying ‘ouch’ upon being kicked in the shin can be explained completely without reference to subjective experience, and hence, it’s possible to imagine all of that happening without conscious experience. I don’t see your point regarding the disappearance of consciousness. Or did you have in mind something like Chalmers’ absent/fading/dancing qualia? If so, I still don’t see the relevance, I’m afraid.

But if you then want to argue that you thus need consciousness in order to create such a table, you’re being circular, since you’re assuming that the machine acting in such a way as to create consciousness-appropriate behaviour entails it being conscious; but that’s exactly the question. Also, note that the machine, in producing the lookup table, is not in the same conscious state as a being with which I am conversing would be—rather than having the experience of answering my question ‘How do you feel?’, e.g., it has the experience of pondering the question ‘What would the conscious machine say if it were asked ‘how do you feel’?’. It would stand to the machine implementing the lookup table in a relation rather like the author of a novel to one of her characters.

Thus, while there might be conscious experience associated, in a somewhat roundabout way, with the lookup table, it would not be that consciousness that I, as the interrogator of the lookup table machine, would expect to be associated with a conscious being producing the responses the machine produces—thus actually furnishing another argument for the underdetermination of experience by the behavioural/functional/causal.

However, as I said, how the lookup table came into being is rather immaterial—attempting to fight the hypothetical on these grounds is like telling Einstein that you can’t actually ride on a light beam: it misses the thrust of the argument.

I like to think that it would be something like trying to talk to Harry Potter by talking to J K Rowling - or Hamlet by talking to Shakespeare. At this moment in time the only entities that can create believable fictional characters are conscious themselves - this may change in the future.

As with the Chinese Room, the hypothetical is so unlikely that the thought experiment becomes a nonsense - although I wouldn’t be all that surprised if an artificially intelligent authorbot might be able to create believable characters in real-time in the not-too-distant future.

There are many commercial applications using neural networks, and the number is increasing. Neural chips have been created, including one introduced by IBM last year.

Google recently hired Geoffrey Hinton (and some of his people) due to his success with deep belief networks and unsupervised learning.

And finally, the best image recognition algorithm to date (using the standard set of character images) is still an evolved 6 layer neural network.

The likelihood of a hypothetical has no bearing at all on its success. Thought experiments of this particular kind merely ask ‘what would happen if…’. Riding a ray of light is impossible—not just unlikely—but asking ‘what would happen if one rode a ray of light’ led Einstein to valuable insights. Producing a lookup table encoding responses appropriate for a conscious being is unlikely, but asking ‘what would happen if we had such a table’ may nevertheless likewise yield interesting results—and whether it does is not determined by the likelihood of the scenario, but by the cogency of reasoning used in attempting to answer the counterfactual. The same goes for the Chinese Room.

I’m nowhere near a philosopher, but these are the thoughts I’ve settled on regarding this subject.

The reason I know understanding and consciousness exist is because I experience them. It connects with cogito ergo sum. I know I exist, and am not a simulation. That “I” means I am conscious. And the process of interpreting inputs through said consciousness is what I call understanding.

A problem with reducing human interactions to lookup tables is that there is no randomness, which I believe is a huge portion of the system. A lookup table is a many-to-one (or possibly one-to-one) entity. A lookup table is entirely deterministic, while I believe consciousness has some randomness thrown in, as the same inputs do not always produce the same response. A computer as defined now is also deterministic, and for the same reason cannot be conscious.
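
To illustrate the kind of determinism I mean, versus a response with some randomness mixed in, here’s a little Python sketch (the stimuli and responses are just made-up placeholders):

[code]
import random

# Deterministic lookup table: the same input always yields the same output.
responses = {"kick in the shin": "ouch", "greeting": "hello"}

def table_respond(stimulus):
    return responses[stimulus]

# A response rule with randomness thrown in: identical inputs need not
# produce identical outputs.
def noisy_respond(stimulus):
    if stimulus == "kick in the shin":
        return random.choice(["ouch", "a choice expletive"])
    return "hello"

print(table_respond("kick in the shin"))  # always "ouch"
print(noisy_respond("kick in the shin"))  # "ouch" or an expletive, varies by run
[/code]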

That doesn’t necessarily mean there’s some supernatural or otherwise undiscovered phenomenon involved. I think it may be possible that the randomness at the quantum level interferes. The human brain is largely made up of the movement of subatomic particles, after all. Yet our current computers force these same subatomic particles to work in a deterministic way.

There’s also another problem with the Chinese Room, though. Just because all existing humans think the responses are completely indistinguishable from understanding and consciousness, does it necessarily hold as true? We know our knowledge currently has limits. What if there’s a difference we don’t currently know about?

Finally, I think consciousness developed before logical thought, yet all AI attempts to use logic to function. That’s why we have the leftover vestiges of our more base emotions, which seem to correlate with other animals that we do not believe think logically. I think it is very likely that all of these things are necessary precursors, to the point that this is why a child goes through these illogical periods before becoming logical.

And, even if it isn’t, would we recognize a conscious entity that doesn’t respond like we do? If the human version of consciousness isn’t the only version, what conceptual methods do we have to recognize other versions?

One thing I know about is neuquant, a color reducing algorithm that uses neural nets. It’s incorporated into pngnq, which was, for a while, the best freely available image quantizer in many situations. I still use it as part of my image library to reduce the file size of images (although I pick between it and three other quantization methods).

Granted, neuquant was made back in 1994, and might be able to be improved upon today.
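
If anyone’s curious about the general idea, it’s roughly a self-organising-map-style network whose nodes get pulled toward sampled pixel colours. The Python below is only a loose sketch of that idea, not NeuQuant’s actual algorithm, and all of the parameter values are invented:

[code]
import numpy as np

# Loose sketch of SOM-style palette reduction (NOT NeuQuant's actual code;
# parameters are invented for illustration).
def som_palette(pixels, n_colors=16, epochs=5, lr=0.3, radius=2):
    """pixels: (N, 3) float array of RGB values in [0, 255]."""
    rng = np.random.default_rng(0)
    # Start the "network" as n_colors nodes spread along the grey axis.
    net = np.linspace(0, 255, n_colors)[:, None].repeat(3, axis=1).astype(float)
    for _ in range(epochs):
        for p in pixels[rng.permutation(len(pixels))]:
            best = int(np.argmin(((net - p) ** 2).sum(axis=1)))  # winning node
            lo, hi = max(0, best - radius), min(n_colors, best + radius + 1)
            net[lo:hi] += lr * (p - net[lo:hi])  # pull neighbourhood toward pixel
        lr *= 0.5                    # decay the learning rate...
        radius = max(0, radius - 1)  # ...and shrink the neighbourhood
    return net.round().clip(0, 255).astype(np.uint8)

# Usage: build a 16-colour palette and map each pixel to its nearest entry.
pixels = np.random.default_rng(1).integers(0, 256, size=(1000, 3)).astype(float)
palette = som_palette(pixels)
nearest = np.argmin(((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2), axis=1)
[/code]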

Lookup Table
This term probably needs some clarification, because as I was thinking it through I realized this table is really an N-dimensional mapping that is just duplicating physics. The input would be a set of vectors encoding the physical state at the time of input and the output would be a set of vectors encoding the physical state at the time of output. It’s possible some reduction of irrelevant data could take place, but it’s possible it couldn’t.
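
As a toy picture of what I mean, something like the following, where the state tuples are just invented placeholders and nothing like the real encoding:

[code]
# Toy sketch: the "physics 2.0" table as a map from the encoded physical
# state at the time of input to the encoded state at the time of output.
state_in = ("membrane_potential=-70mV", "neurotransmitter=low", "field=baseline")
state_out = ("membrane_potential=-55mV", "neurotransmitter=high", "field=perturbed")

physics_2_0 = {state_in: state_out}

def step(state):
    # No dynamics are computed; the next state is simply looked up.
    return physics_2_0[state]

print(step(state_in))
[/code]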

So, now we have a lookup table that encodes the laws of physics, and the neuron, surrounding fluid/chemicals and electromagnetic state are set according to the lookup table calcs. The end result of the brain at this point is identical to the situation if the neuron just used physics instead of our table called physics 2.0.

Because the physical state is the same at this point (and must be the same, otherwise we’ve altered the calcs) for everything on the surface/external to the neuron, my intuition is that consciousness still exists, but it may not. (Note: this is a sloppy assumption for ease of discussion, as the internal workings of the neuron determine future neuron state, so all of that would need to be included and encoded, which means you pretty much need to keep going down to the lowest level possible.)

If consciousness does not exist at this point, it means something internal to the neuron is required, but again we have no way of even coming close to knowing whether it does or does not exist at this point.

If we can’t say whether it does or does not, I think we can’t really follow the chain up or down to reach a conclusion.
Let’s pretend we did start at the lowest level possible and worked all the way up to the complete brain state in one lookup table: we would still end up with a physical mapping from brain state to brain state that exactly matches reality, because that is the only way to be sure we didn’t alter an important calc (primarily because we don’t know what is important).

We don’t know if the same inputs produce the same response or not - you can’t exactly reproduce the same conditions. What we see is compatible with a completely deterministic brain, but there may be some quantum randomness; we just don’t know how much of an effect that has.

Does not follow.

Not really. I’m not sure what you count as the dimensionality of the mapping—presumably, you’re thinking of something like encoding the physical state into a vector of N dimensions, which is then mapped to an output—but really, it’s simpler to think of the lookup table as connecting various causes with their effects, whatever way either might be represented. That is, again, a neuron is a set of conditions under which it fires, and so on—this suffices to functionally represent any physical system, while also explicitly lacking any internal processing, such as the physical processes occurring within the neuron (or whatever other level you think appropriate).
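
To make that concrete, here’s a minimal sketch of a neuron reduced to a table of firing conditions (the input patterns are invented for illustration):

[code]
# Sketch: a neuron as nothing but "a set of conditions under which it fires".
# Each entry maps an incoming pattern of synaptic inputs straight to an
# output; no internal membrane dynamics are modelled at all.
firing_conditions = {
    ("excite", "excite", "rest"):    "fire",
    ("excite", "rest", "rest"):      "silent",
    ("excite", "excite", "inhibit"): "silent",
}

def black_box_neuron(inputs):
    # Every causal probe gets the answer the real neuron would give,
    # but nothing inside is being simulated.
    return firing_conditions.get(inputs, "silent")

print(black_box_neuron(("excite", "excite", "rest")))  # -> "fire"
[/code]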

The gist of the argument now is that any processing captured solely along functional terms can be perfectly well imagined to take place without any conscious experience. But physicalism posits an identity between this processing and phenomenology—a necessary identity, moreover. However, in any other case in which there is a necessary identity, we have a very different situation: you can’t imagine having one without the other, because they’re both the same. Take the example of H[sub]2[/sub]O and water: as soon as you know what water is, and what H[sub]2[/sub]O is, you know that they’re the same. Knowing all the properties of either entails knowing that they’re identical—you can’t imagine one without the other.

But in the case of consciousness and functional physical processing, the situation is very different. Arguably, we know all the relevant properties of the physical side—the scale at which we have any significant ignorance of the physics is orders of magnitude beyond the scales that are plausibly significant for consciousness. And we definitely know all the relevant properties of conscious experience, since subjective states are defined by us knowing them. Nevertheless, both could not be more different, and, far from realizing their identity because of knowing their properties, nobody has as yet proposed any convincing way to even make both compatible with one another.

So, the only reason for postulating a necessary identity between both must be an a priori belief in physicalism—we believe in the identity despite the apparently irreconcilable differences, rather than because of their similarity, as is the case in every other example, such as the H[sub]2[/sub]O/water one. But then, I think, we must at least ask ourselves whether we are in fact sufficiently justified in this a priori belief to uphold it against all evidence, or if there are not, perhaps, other options (without degenerating into other logically inconsistent options, such as postulating the existence of souls, or other ‘answers’ that in fact fail to answer anything).

There is no “under which it fires” - there is a 3-dimensional state in flux, all of it important to the next state.

The more I think about this the less clear it is what is meant by “lookup” table.

Can you help me understand what this lookup table would look like for the case of a neuron?

I understand the point of the argument but I’m having a hard time seeing this particular example (replacing physical items with lookup table) as being valid.

The argument assumes that we replace items with a lookup table, rather than merely describing them with one, somehow removing the underlying physical attributes, and then draws conclusions about whether there is or is not consciousness.
This is what you stated previously:
“because such a p-zombie is an exact physical duplicate of a conscious person”

But nowhere in the argument do we end up with an exact physical duplicate that does not have consciousness. The lookup tables do not produce an exact physical duplicate; they alter the physical structure.

I know I know, you will point out that at no point in this replacement was there anything that we could identify as being the magic physical ingredient that should not have been removed.

But just because we are not able to identify where or why consciousness could be impacted, we equally cannot state that the altering of the brain doesn’t alter consciousness.

I think you’re getting lost in details that, ultimately, neither make nor break the argument. The idea of the lookup table is that we can replace any physical object by something that reacts to every causal probe—every way we prod it—in exactly the same way, even if on the inside it’s just a little green man looking up the appropriate effects using a big ol’ rulebook. That’s because we only know physical objects according to the ways they react to our prodding. So we can just replace the physical constituents of the brain by little black boxes implementing the appropriate behaviour in this, or some such, way, and get something that’s physically identical (at the relevant level) to the original brain.

True, you could crack open the box and catch the homunculus in the act, but there’s some level beyond which the physical details further ‘down’ don’t matter to the functioning of the brain, and it’s only at that level that we need to create a physical isomorph. That such a level exists is guaranteed by the idea of effective theories, meaning that at some point, dynamics effectively decouple from the details of the more fundamental levels—this is the only reason we can describe anything physically, since what things look like on the fundamental level, we have no idea, so if we had to take that level into account, then we couldn’t actually produce a description of any phenomenon at all.

Anyway, if you find the lookup table account to be troubling, then let’s just throw it out; I merely intended it as a means to make clearer the point that any sort of processing at all can be imagined to take place decoupled from conscious experience, so if it has the opposite effect of hiding the forest behind the trees, then good riddance.

The point of the Chinese Room thought experiment is that *if* we had a computer program that could pass a Turing test, then we could reduce it to a Turing Machine, and we could replace the CPU of the Turing Machine with a person. We can prove that this reduction can be done.

In the case of a lookup table, I know of no proof that we can reduce an arbitrary Turing Machine to a lookup table, and in fact I suspect that we can easily prove the reverse. If the Turing Machine never halts then no lookup table is possible, and if the algorithm generates a lookup table for every halting Turing Machine then you have solved the Halting Problem, which is known to be unsolvable.

If this proof is valid, then even if we had a Turing-Test-Compliant program, we could not reduce it to a lookup table.

True, but it’s irrelevant to actual physical computation, since all we ever have are only finite state machines anyway (there’s no infinite tape in reality), which can be cast into a lookup table form. As long as a computation only uses finite resources, the lookup table will likewise be finite (and of course the halting problem for all FSMs is perfectly decidable).
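
As a minimal sketch of what I mean (the machine below is a toy parity checker, invented for illustration): an FSM’s transition function already is a finite lookup table, and since there are only finitely many configurations, the halting question can be settled by exhaustive checking.

[code]
# Sketch: a finite state machine's transition function is itself a finite
# lookup table. This toy machine tracks the parity of 1s in its input.
transitions = {
    ("start", "0"): "even",
    ("start", "1"): "odd",
    ("even", "0"):  "even",
    ("even", "1"):  "odd",
    ("odd", "0"):   "odd",
    ("odd", "1"):   "even",
}

def run(tape):
    state = "start"
    for symbol in tape:
        state = transitions[(state, symbol)]  # one table lookup per step
    return state

# With finitely many (state, remaining input) configurations, the machine
# either consumes its input or revisits a configuration, so 'does it halt?'
# can always be answered by exhaustive checking.
print(run("10110"))  # -> "odd"
[/code]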