Okay. And what prevents a hypothetical computer from tapping into this randomness?
Nothing. If it’s that important, you can just use a random number generator based on radioactive decay and you’re set.
I can’t think of any argument against this point, which is that consciousness doesn’t seem necessary to arrive at our behavior.
When you stated previously that the p-zombie was identical physically, I thought you meant it was identical down to the lowest level of physical attribute (atom or whatever), but it sounds like the p-zombie argument really means a zombie that is externally identical while, inside the brain, it could be radically different from our current brain.
Is it correct that the argument is really just stating that it seems possible to arrive at the same behaviors without using the same physical brain structure?
I don’t see how this relates to the p-zombie idea that we’re talking about.
Yes, if the robber knew the exact details of the cop’s decision-making process, he could avoid capture, but in reality there’s no way to access the details at that level, whether there’s randomness or not.
Well, possibly, but a Caltech graduate I know (doctorate in computer science) said that modern pseudo-random number generators are so good, God himself couldn’t tell the difference.
They’re certainly good enough for purposes of modeling heat-flow through a circuit board – or a cops and robbers sim.
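Just to make “good enough” concrete, here is a minimal Python sketch of the kind of thing I mean (estimating pi is only a stand-in for any sim driven by pseudo-random draws; the seed is arbitrary):
[code]
import random

# Minimal Monte Carlo sketch: a seeded (hence fully deterministic) PRNG
# still gives statistically useful results. Estimating pi stands in for
# any simulation driven by pseudo-random draws.
rng = random.Random(42)

def estimate_pi(samples: int) -> float:
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0  # point lands in the quarter circle
    )
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))  # about 3.14, from a completely deterministic process
[/code]
Different seeds move the answer around only in the third decimal place or so, which is exactly how genuinely random sampling would behave.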
But we can calibrate the model, by comparing it to reality. When the results are similar enough, enough times, we have a pretty good sense that the model is a valid simulation.
Otherwise, you fall into the solipsist trap. You aren’t talking to me; you’re only talking to yourself.
In practice, we are two separate minds, each of which is able to model the other sufficiently for a conversation to take place. That we can comprehend how we differ in opinion on a matter gives a ton of confirmation to the model!
i.e., if computers don’t really “model” reality – then how can our brains do so? Either the two kinds of modeling are just about equally good…or else there must be some specific element missing from one – and since we all reject that it’s a spiritual element, then what the dickens could it be?
Sure, but that doesn’t contradict what I’m saying. A simulation need not actually be doing exactly the same thing “under the hood” as what it’s simulating. For simple systems there is more than one implementation that can provide the required outputs for given inputs.
For more complex systems, this is harder to verify, but we have no reason to think multiple implementations are impossible. That’s all I’m saying.
I don’t get the relevance of this.
Right, but that model needn’t be anything like what’s actually happening in your mind.
Humans could anticipate other humans’ behaviour long before we knew anything of neuroscience. And clearly one human cannot emulate the entirety of another human’s mind while preserving his/her own mind.
I didn’t say anything did, did I? Of course, a computer can tap into the randomness and use it as a resource; but in so doing, it can’t be framed in terms of a ‘black box’ machine anymore, since it needs to use external input. The randomness is not produced by the computer, it’s merely manipulated by it, in the same sense that water is manipulated by a control loop circuit, or steam is manipulated by a steam engine. Consciousness may be, logically, of the same kind: a computer can manipulate, but not produce it, like randomness.
No, the p-zombie is a physical isomorph. My version is one, too, at least down to whatever level is physically relevant to the production of consciousness. The whole point of the argument is that the same things can go on physically, without being accompanied by conscious experience. Certainly, when we imagine the production of a reaction by a stimulus, it’s not necessary to think about consciousness at all, and hence, it’s conceivable that it might be absent. Do you agree?
It doesn’t, not directly, anyways; I was merely replying to Trinopus’ question regarding the limits of computation.
He only needs to know the algorithm in the cop’s manual (say, by possessing a copy). Besides, this is a question of whether it is in principle possible, not whether you could actually do it.
It’s true that such numbers would appear random to statistical tests, and you would have to observe the generator for a very long (but finite) time to figure out that it doesn’t produce truly random numbers. But with a copy of the PRNG, you can easily duplicate the string of random numbers generated; you can’t do that in the case of a genuine random source.
As stipulated, the robber knows the exact algorithm the cop follows; if that is the case, then he can predict the pseudo-random numbers the cop generates, and continue eluding her. Only genuine randomness suffices to overcome this problem.
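To put the same point in code (Python, purely for concreteness; the seed and the four possible moves are invented for the illustration, and the secrets module stands in for a genuinely unpredictable source):
[code]
import random
import secrets

# The cop's pseudo-random choices, and the robber's copy of her "manual":
# same algorithm, same seed.
cop    = random.Random(1234)
robber = random.Random(1234)

cop_moves    = [cop.randrange(4) for _ in range(10)]   # four possible directions
robber_guess = [robber.randrange(4) for _ in range(10)]
assert cop_moves == robber_guess  # the robber predicts every move

# A stand-in for a genuinely unpredictable source (the OS entropy pool):
# there is no seed or algorithm to copy, so the trick above fails.
print(secrets.randbelow(4))
[/code]
With the seeded generator, holding the “manual” is enough to reproduce every choice; with the entropy-backed source there is no seed to copy in the first place.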
I don’t get it - why can’t that randomness source be inside the black box that the computer’s in? Then it’s all there together, inside a nice convenient black box.
All this is exactly the same whether it’s an algorithm (p-zombie) or actual person.
If we knew the exact configuration of all the particles in the cop’s brain, then in principle we could model his future behavior exactly, except for some bit of quantum randomness thrown in. Our computer simulates this, even the randomness. They’re the same thing.
Ok, but the thing is, we don’t know what is important, so we can’t just alter things and assume we do or don’t have consciousness.
So I’m working from the assumption that any physical change whatsoever results in an unknown condition regarding consciousness. Now, that’s pretty extreme, so we could probably identify some situations in which it’s not altered; maybe if we randomly moved around one atom, we would all agree that consciousness is not substantially altered.
But as you move up the chain, it becomes less clear whether we can trust our intuition or not, so we are really just guessing.
Ultimately, I still don’t understand how this p-zombie that is physically identical ended up without consciousness.
If the lookup table encompasses the entire system, then the only things that are identical are the connections to the output/motor neurons, or just the activity of those neurons.
If the lookup table(s) encompass the functioning of each individual neuron, then we still end up with the surface of each neuron being identical and the complete 3D environment around it being identical, and there is no way to accurately gauge whether that has impacted consciousness or not.
1 - I don’t think I believe you could have the same things go on physically without being accompanied by conscious experience. I can’t prove it, but I don’t think it.
2 - Depends on what you mean by “necessary”.
I think you could theoretically create an input/output map that duplicates a person’s behavior (I think). But my assumption is that consciousness is an attribute that adds value to our system during decision making, from a survival perspective (end result) and an energy-efficiency perspective (effort required to arrive at good decisions), which means it’s tied to our current physical structure.
What, exactly, is allowed to be in the black box, pray tell? Nothing but logic components? The computer draws input from the outside world on a near-constant basis. Algorithms based on how hot the components are require sensors. Time-based algorithms require a clock. Am I missing the point somehow? How is a simple randomizing sensor which applies quantum mechanical effects somehow no longer allowed as a component?
So now we’re positing consciousness as some other “thing” that exists outside of ourselves?
I don’t mean black box in the sense of some actual physical container, but in a functional sense. No matter where you put the source of randomness, it would still be external to the computation.
The computer can’t simulate the randomness. It can harness it, use it as a resource, but can’t produce it itself.
But we do know that there is a scale beyond which changes don’t affect consciousness, because it’s not probed by the dynamics that give rise to consciousness (small lengths need high energies to be probed, and the processes occurring in our brains are very low-energy, for one). So we can just replace the dynamics at a scale lower than this cutoff, and we’ll have a system that is physically isomorphic to a conscious being in the sense that at the relevant scale for brain processes (whatever that may be), both systems are indistinguishable to physical probes.
Or, really, if it disturbs you that much, just forget I ever said anything about lookup tables.
I find it far harder to imagine how a non-zombie ends up with consciousness—that is, I can’t imagine that at all, while I readily can imagine the physical processes that occur in our bodies when we are, say, kicked in the shin, occurring without there being any sort of ineffable subjective experience associated with it. And since it seems that it is consistent to imagine things that way, it is logically possible for them to be; but then, again, the physical does not seem sufficient to fix all the facts about conscious experience.
Look at it this way: I could build a robot that, if kicked in the shin, behaved in exactly the same way you do. It would be just a simple elaboration of a sensor being triggered, which acts as a satisfying condition of some ‘IF…THEN’ construction, which then produces the appropriate behaviour. This, it seems to me, will quite clearly not be a conscious entity. The same thing holds for all other instances of behaviour.
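In code, the whole ‘inner life’ of such a robot could be as brief as the following Python sketch (the sensor value and the reactions are made up purely for illustration):
[code]
# Toy version of the shin-kick robot: one sensor, one conditional,
# one scripted reaction. All names and values are invented for the example.

def read_shin_sensor() -> float:
    return 0.9  # pretend impact reading, normalised to 0..1

def react(impact: float) -> list[str]:
    if impact > 0.5:  # this conditional is the robot's entire "mind"
        return ["yelp", "clutch leg", "hop on other foot"]
    return []

print(react(read_shin_sensor()))
[/code]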
But what is different if instead of a sensor, I use nociceptors, instead of some simple conditional, I use a neural network, and instead of servomotors, I use muscles to produce movement? There still seems to be as much possibility for an account of all that physically goes on, without ever having to say a single word about subjective experience.
But still, physicalism posits a necessary identity between the physics and the mental experience accompanying it; yet, as I have pointed out, in other cases of such an identity, we can’t speak about one part without speaking about the other. A proposition about water is a proposition about H[sub]2[/sub]O, but a proposition about physical processes does not seem to be a proposition about subjective experiences. Saying “H[sub]2[/sub]O is liquid (within a certain range of temperature and pressure)” is the same thing as saying “water is liquid (likewise)”, but saying “pain hurts” is very different from saying “c-fibers firing produces avoidance reactions”, or something to that effect. So in what sense, if any, can we say that pain is c-fibers firing?
Nothing but that which works algorithmically. So any sort of sensors, outside input, etc., are in fact part of the interface with the external world.
Panpsychism, panexperientialism, and related approaches to the problem of consciousness do. I’m just pointing out that there is nothing that logically forces us to accept the proposition that computation suffices to give rise to consciousness, in the same sense that it does not suffice to give rise to true randomness (and yet, true randomness exists in the physical world).
We’ve touched on this issue in another recent consciousness thread: basically, certain kinds of properties do carry over into a simulation, while others don’t. A simulation of something is not identical to the thing itself. The simulation of a black hole won’t warp spacetime around the computer, for instance, because it has no mass; the simulation of a current running through a coil won’t attract the paperclips on your desk. The simulations possess neither mass, nor charge—those are physical properties that the simulations of the systems lack; only structural properties—things like complexity, organisation, etc.—are preserved in simulations.
So in the end, it’s at the very least logically possible for consciousness to be the same kind of property (or a different, perhaps unique kind altogether, that does not carry over into a simulation); and in fact, I think the various arguments against functionalism/computationalism strongly indicate that to be the case.
It can’t produce randomness only because you’ve arbitrarily defined “computer” to exclude a component that is based on quantum randomness. It would certainly be possible to integrate the quantum randomness generator on the CPU chip itself; are you saying that this randomness source still wouldn’t be part of the computer? I don’t get why you would define it that way, especially in light of this example where we’re comparing it to a brain, and you’ve explicitly said that a brain is largely deterministic but may have some component of randomness due to quantum effects.
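For what it’s worth, ordinary machines already work roughly this way. In the Python sketch below, os.urandom reads the operating system’s entropy pool, which on modern hardware is typically fed in part by noise sources on the processor itself; take it as a stand-in for whatever quantum-noise circuit you like:
[code]
import os

def next_move() -> int:
    # One byte from the OS entropy pool; on recent CPUs this pool is partly
    # fed by on-chip hardware noise, so there is no seed or algorithm a
    # robber could copy to reproduce it.
    noise = os.urandom(1)[0]
    return noise % 4  # pick one of four directions (256 % 4 == 0, so no bias)

print(next_move())
[/code]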
To me, this seems like faulty reasoning.
The physical state of the system at every level influences the level above it (assuming we can talk about levels instead of a continuum). Every physical entity impacts its neighbor and influences the structure of the things that entity is a part of.
If we try to replace the inner workings of a neuron with a lookup table or computation (anything other than what it is made up of) and then attempt to make the surface and external area physically identical, you will end up with physical inconsistencies that will not operate the same.
Ion channels certainly wouldn’t operate the same unless you created the same physical conditions on the inside of the neuron, and you could only do that by duplicating everything internal to the neuron.
I challenge you to describe a legitimate spot where that cutoff could be.
Maybe you are thinking the hypothetical allows for physical capabilities that would let us, for example, make the ion channel work properly (e.g. have all of the exact same physical attributes regarding chemistry and physics) despite the fact that the interior of the neuron is a non-existent void in spacetime (with a lookup table, or a green man, or whatever, determining what the external conditions are at this point in time).
Well, okay, but that’s been the deal with science for centuries now. We don’t really know what atoms, molecules, protons, photons, etc. are doing.
We make up terms like “spin” because we don’t have a clue in hell what’s actually happening.
Yet…the models are accurate enough to allow us to build supercolliders (and neutron bombs.)
You’re emphasizing the “black box” aspect of science – and that’s fair. But that’s also an intrinsic limitation of reality. There are things – such as the insides of other people’s minds – that we cannot ever (ever?) really know.
Pragmatism is the view that if the sim helps you predict weather, design a circuit board, or catch the robbers, then it’s good enough. It bears a strong similarity to reality.
It’s a little like the Star Trek Transporter debate: “Is it the same?” It becomes a philosophical question, not a scientific one. What does “same” mean to you? It might not be the same as it means to me. We have to wait until someone builds one, and then we can be scientific.
True…but so what? If I have a tap on your phone, I can learn what your plans will be. If the robber knows the algorithm in advance, the cops lose. That’s what security is for.
Pseudo-random numbers serve to answer E.A. Poe’s complaint about chess-playing machines, and they do sterling service in real-life sim work. Billions of people play online games, and it is rare (has it ever happened at all?) for them to hack into the algorithm and give themselves a huge unfair in-play advantage.
“One can imagine” does not entail “it is possible.” Nor do I see how it’s relevant that stimulus and reaction can be connected without discussing consciousness. Nor, for that matter, do I see that the truth of that statement is established. Consider a system of stimulus and reaction so utterly complex and so arranged that it cannot be modeled without accounting for certain kinds of feedback – detailed modeling within the system of its own inputs and outputs, to some degree. That feedback would be your “subjective experience,” your “consciousness,” and the overall system would not be accurately described without it. Yet it is, still, just some (possibly even deterministic) way of mapping stimulus to response.
Why are you certain of that? Because you have a preconception of “consciousness” as something incompatible with something that feels “lookup-table”-ish to you? A sufficiently large lookup table combined with an automaton capable of traversing it is quite capable of encoding any internal state you care to name. Whole vast wodges of those states could correspond with subjective experience, and have their due effect on the behavior of the whole.
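Here is a toy Python illustration of what I mean (the states, stimuli, and responses are invented for the example): the table maps (current state, stimulus) to (next state, response), and a dumb loop traverses it.
[code]
# Toy sketch: a lookup table plus an automaton that traverses it.
# States, stimuli, and responses are invented for illustration only.

# (current_state, stimulus) -> (next_state, response)
TABLE = {
    ("calm",    "kick"): ("annoyed", "ow"),
    ("annoyed", "kick"): ("angry",   "stop that"),
    ("angry",   "kick"): ("angry",   "that's it, I'm leaving"),
    ("calm",    "pat"):  ("calm",    "thanks"),
    ("annoyed", "pat"):  ("calm",    "okay, we're fine"),
    ("angry",   "pat"):  ("annoyed", "hmph"),
}

state = "calm"
for stimulus in ["kick", "kick", "pat", "kick"]:
    state, response = TABLE[(state, stimulus)]
    print(stimulus, "->", response)
[/code]
The same stimulus gets different responses on different passes because the table carries state forward; nothing stops whole swathes of such states from playing the role of the system’s inner life.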
This is just an equivocation – wrongly equating a part with the whole it inhabits, when that distinction is quite essential to the problem. When our fellow (however improbably) memorizes this entire set of rules, his operation in accordance with those rules constitutes a system that Understands Chinese. The fact that this system is not open to inspection by the other parts of his mind is immaterial. “He” (the man) is not the same thing as “The System” (the operation of the rules that he is capable of carrying out), therefore saying “He doesn’t understand Chinese” is a strictly irrelevant objection.
As an analogy, the kernel of the operating system in my computer doesn’t know the first thing about manipulating spreadsheets – but it knows how to run programs that may understand spreadsheets very well.
This would not be true if the black hole being simulated, or the paperclips being magnetised, were themselves simulations. If we do, in fact, live in a universal simulation, then there are no real black holes that we can observe, or real paperclips, and a simulation of such entities could be identical to the original.