Maybe the height example is confusing, because you can still take things away from an object that is that tall, although I think the case works. But if you don’t like the height case, use one involving light instead: the very same thing can be greater than one wavelength and less than another, where these are distinct wavelengths.
(Also, disjunctive cases are worth mentioning, e.g. where (A v B) and (B v C) both supervene on B; and we should be able to cook up cases where A and C are logically independent of B.)
Because a mind in state |A> wants to remember being in state |A>, and hence sets its memory to state |a>. (Recall how I set up this whole memory thing in the first place.)
That’s why I’m trying to show it to you: the problem is that in this case, there is no evolution (at least none that follows from the physical evolution) from one mind into another (i.e. there’s no successor relationship).
I haven’t claimed it’s inconsistent, only that it leads to undesirable consequences: either the assumption of an extra-physical evolution, or the impossibility of doing science because of the absence of reliable experience (measurement records/memories).
I don’t see any argument in that paragraph? It was just intended as a summary.
Well, the separability of the classical case is not really given quantum-mechanically: I would say what you describe is analogous to the mixed state p[sub]A[/sub]|A><A| + p[sub]B[/sub]|B><B| rather than to the superposition (curiously, I can tolerate non-normalized pure states much more easily than non-normalized mixed states…), since you can’t really neatly partition the superposition, as interference experiments show. To me, there’s really only one consciousness existing in a pure state |A> + |B>, whose awareness is that of |A> and |B> separately; but that’s really only my own philosophical prejudice.
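To make the distinction concrete, here is a toy numpy sketch (the choice of |A>, |B> as computational basis vectors is just mine for illustration): the superposition and the mixture agree on the A/B measurement statistics, but differ in the off-diagonal coherence terms of the density matrix, which is exactly what interference experiments pick up.

```python
import numpy as np

# Basis states |A> and |B> of a two-level system.
A = np.array([1.0, 0.0])
B = np.array([0.0, 1.0])

# Equal-weight superposition (|A> + |B>)/sqrt(2), as a pure state.
psi = (A + B) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# Equal-weight classical mixture (1/2)|A><A| + (1/2)|B><B|.
rho_mixed = 0.5 * np.outer(A, A) + 0.5 * np.outer(B, B)

# The diagonals (probabilities of measuring A or B) are identical...
print(np.diag(rho_pure))   # ≈ [0.5, 0.5]
print(np.diag(rho_mixed))  # ≈ [0.5, 0.5]

# ...but the off-diagonal coherences differ: these are what interference
# experiments detect, so the two states are not physically equivalent.
print(rho_pure[0, 1])   # ≈ 0.5
print(rho_mixed[0, 1])  # 0.0
```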
As for whether two consciousnesses can, in principle, supervene on one physical state, while it seems distinctly odd to me, I don’t think I can exclude it in principle. I mean, it’s certainly the case that if a set of properties X supervenes on properties Y, then the Y-properties fix all the X-properties completely—that’s just the definition of supervenience. So if Y is mental, and X is physical, then there’s no wiggle room, it seems. But there’s the possibility—though I couldn’t claim I’d see how to instantiate it—that there is another set of properties, Z, that also supervene on the physical, and that also could be called ‘mental’ in some sense.
I mean, let’s talk about a different supervenience relation: certainly, biological properties supervene on the physical properties. And it would seem distinctly odd to claim that there could be two different sets of biological properties supervening on the same physical properties! If a certain physical state gives rise to a certain biological state of affairs, say some living cell, could that same physical state give rise to a different biological state of affairs?
Or even, think about statistical physical quantities: those supervene on fundamental physical quantities. So, could the same fundamental physical state give rise to different statistical quantities—to a different temperature, or pressure? Moreover, could it simultaneously give rise to two different sets of statistical quantities?
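As a toy illustration of that supervenience (the particle mass, momentum scale, and equipartition formula here are just my illustrative choices): kinetic temperature is simply a function of the microstate’s momenta, so the same fundamental state cannot yield two different temperatures.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 6.63e-26         # particle mass (roughly an argon atom), kg

rng = np.random.default_rng(0)
momenta = rng.normal(scale=1e-24, size=(1000, 3))  # one microstate, kg*m/s

def temperature(p):
    # Kinetic temperature via equipartition: <|p|^2 / 2m> = (3/2) k_B T
    return np.mean(np.sum(p**2, axis=1) / (2 * m)) / (1.5 * k_B)

# One microstate, one temperature: the statistical quantity is a
# function of the fundamental state, so it cannot take two values.
T1 = temperature(momenta)
T2 = temperature(momenta.copy())
print(T1 == T2)  # True
```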
Both this and the previous example seem to me to be utterly impossible. But then, what makes mind different in such a way that this ‘multi-supervenience’ becomes a live option?
However, while these arguments certainly make me doubt this possibility, I don’t think they’re quite sufficient to rule it out. So I’ve tried my best to confront it as openly as possible…
Shouldn’t X be mental, and Y physical? (And then there is ample wiggle room.)
But that hasn’t been the sort of claim at issue; rather, the analogy would be that, amongst the set of biological properties, there are two distinct properties which supervene on the same physical property.
No. A mind is not just a property, it’s as much a collection of properties as a physical state is; and for physicalism, all mental properties must supervene.
So long as it isn’t all the properties, then the argument doesn’t work. (E.g. Consciousness-A is one set of mental properties, Consciousness-B some other set; the mental supervenes on the physical, and both consciousnesses can supervene on the same physical.)
ETA: ugh, I’m being confusing; so long as it isn’t the case that, whatever the set is, all the properties are needed to determine consciousness, then iamnotbatman’s position looks safe to me.
I am still not seeing why a superposed state cannot be said to have memories corresponding to those of each of the states separately. All the information is there. Consider a consciousness that is in superposition, watching sideways-spin electrons go through an up-down magnet.
Before the electron reaches the magnet the state is:
|consciousness watching electron travel towards the magnet>|e spin up> + |consciousness watching electron travel towards the magnet>|e spin down>
After the electron reaches the magnet, the state has evolved to:
|consciousness having seen electron travel upwards>|e spin up> + |consciousness having seen electron travel downwards>|e spin down>
If another electron comes through, we have next the state, after branching:
(|consciousness having seen electron travel upwards and now watching electron travel towards the magnet>|e[sub]1[/sub] spin up> + |consciousness having seen electron travel downwards and now watching electron travel towards the magnet>|e[sub]1[/sub] spin down>)(|e[sub]2[/sub] spin up> + |e[sub]2[/sub] spin down>)
and so on. The memory is there just fine, written out above explicitly. As you can see, after the second electron goes through, there is a superposition of 4 states, corresponding to memories of {up up, up down, down up, down down}.
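The branch bookkeeping in this example can be sketched as a little no-collapse simulation (my own toy model, with memories as classical outcome strings and an amplitude tracked per branch):

```python
import numpy as np

# Toy no-collapse model: a 'branch' is a classical memory string plus an
# amplitude. Each sideways-spin electron splits every existing branch
# into an 'up' and a 'down' continuation, each weighted by 1/sqrt(2).
def watch_electron(branches):
    new = {}
    for memory, amp in branches.items():
        new[memory + ('up',)] = amp / np.sqrt(2)
        new[memory + ('down',)] = amp / np.sqrt(2)
    return new

state = {(): 1.0}  # initial observer, no memories yet
for _ in range(2):
    state = watch_electron(state)

# After two electrons: 4 branches with memories
# {up up, up down, down up, down down}, each of amplitude 1/2.
for memory, amp in sorted(state.items()):
    print(memory, round(amp, 3))
```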
I think that in general I would expect such possibilities to be fairly rare and special, although I can’t rule out that they are common. This is why early on I remarked that the existence of dualities proves (to my taste at least) the existence of these kinds of possibilities. In fact, ironically, the MWI itself is a case in point, since it purports to be a dual description of QM. If you believe the MWI is a valid interpretation, then its mathematical equivalence with other interpretations of QM shows that if a supervenience relation holds in one interpretation, and another supervenience relation holds in an equivalent interpretation, then both supervenience relations are true, despite arising from the same physical phenomena. The difference is due to the fact that two different descriptions can both be correct.

I can provide some examples: S-dualities, the holographic principle (i.e. AdS/CFT), changes of basis in QM, E&M fields, Newtonian mechanics vs. the principle of least action, and so on. To take a concrete example, black holes in quantum gravity seem to be a case of a physical state with multiple supervenience: on the one hand they have thermodynamic properties, but on the other hand they have primitive properties, and may even be viewed as fundamental. The holographic dualities being discovered in quantum gravity seem to point to both being true simultaneously. The same dualities also show that there may be a map between physics in N dimensions and a theory with different dynamics in N+1 dimensions. The two theories would ostensibly give rise to totally different dynamics and completely different potential conscious life forms, and yet they are mathematically equivalent descriptions. I could go on, but I think this gives you an idea of where I am coming from: the point is that different descriptions can supervene on the same physical state, for which there may be wildly different (but ultimately equivalent) mathematical descriptions.
I think you misunderstood the memory example. A conscious mind, in some state, wants to be able to remember being in that state; to do so, it has to prepare some physical system in a way that can be read out later to tell the being what state it was in (memory is physical!). So originally, the mind and the memory start out in some nondescript state |X>|x>. Then, the mind evolves to some specific state |A>|x>. It wants to save the information of being in that state on its memory. Thus, it makes a measurement on the memory, thereby preparing it in a certain state. If all goes well, maybe the memory is already in an eigenstate of the observable used to store the data, and the mind and its memory end up in the state |A>|a>. Whatever now happens to the mind, it can re-measure the memory in the same basis, and again obtain the information that it has once been in state |A>: it remembers.
But in a more general case, the memory will not be in an eigenstate of the measurement; thus, after the measurement, without collapse dynamics, the state will be |A>(|a> + |b>). With the collapse dynamics, again we have no problem: if the outcome of the measurement post-collapse is b, indicating that the memory is in state |b>, the mind can just apply a unitary rotation (or, even more simply, measure in an orthogonal basis, then re-measure in the original basis, and repeat until the desired state is reached) in order to change the state of the memory to |a>. But a memory that, without the collapse dynamics, stays in the state (|a> + |b>) is of no use: it can’t be measured to reveal whether the mind was in the state |A> or |B>.
What I think you’re missing is that the mind itself, with respect to the relevant degrees of freedom, is also just a qubit. In the evolution you proposed, we’d have something like |A>(|a> + |b>) --> |A>|a> + |B>|b> for the first step. But that’s not what we want: we want the mind to remember correctly that it was in state |A>, whereas the second term will falsely remember having been in state |B>, into which it however only evolved thanks to the mis-set memory!
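Here is a numpy sketch of the evolution I mean (modeling “the mind reads the mis-set memory” as a CNOT with the memory qubit as control is my own choice; the basis labels are just illustrative):

```python
import numpy as np

# Basis order for |mind, memory>: |A a>, |A b>, |B a>, |B b>.
Aa, Ab, Ba, Bb = np.eye(4)

# A CNOT with the *memory* as control and the mind as target implements
# |A>|a> -> |A>|a> and |A>|b> -> |B>|b>: the mind's state is driven by
# the memory bit, which is the 'mis-set memory' worry in the text.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]], dtype=float)

# Start: mind definitely in |A>, memory in superposition (|a> + |b>)/sqrt(2).
start = (Aa + Ab) / np.sqrt(2)
after = CNOT @ start

# Result: (|A>|a> + |B>|b>)/sqrt(2) -- the second term 'remembers' |B>
# even though the mind was never prepared in state |B>.
print(after)  # ≈ [0.707, 0, 0, 0.707]
```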
EDIT: Oh, and I’m sorry, but dualities are definitely not examples of supervenience relations; they’re just an isomorphic description of the same fundamental facts. Similarly to how I can describe the same set of pixels using jpeg or bmp coding—or even encode it as a vector graphic on some vector space or its dual.
Huh? We were discussing the case where the wave function does not collapse at all. We were discussing a consciousness supervening on the superposed state. I gave an explicit description of the state as it evolves with time, showing clearly that the memory of the superposed state is carried forward in time. But now you are complaining that we want the mind to remember states |A> or |B>, as though it had collapsed, which as far as I can tell has nothing to do with the example I gave.
It shows that there can be multiple valid descriptions of the same physical phenomena, each of which can have different supervening properties. There is nothing to prevent there from being a dual description of consciousness.
This is indeed partly what I’m saying. I would also say that “properties” are description dependent, and that since there can be dual descriptions of the same physical phenomena, it can never be ruled out that there might be additional properties on which to supervene.
Let’s back up. The case I was discussing was that of a mind in a definite state |A>, that wanted to remember being in state |A>. If the mind now evolves to some state |B>, the information about having been in state |A> is erased. Hence, I proposed the adjunction of a memory composed of a set of qubits. But now, the problem arises that if the mind in state |A> measures a qubit in order to set the memory, then, if the qubit afterwards is in superposition, the mind may evolve into a superposed mind, supervening on the state |A>(|a> + |b>). But then, it will not be able to remember being in the definite state |A> once, say, it has evolved into |B>(|a> + |b>). Thus, the memory doesn’t work without at least an apparent collapse.
Properties don’t supervene on descriptions. What properties something has is not changed by changing its description.
Not in general. It depends on what the state |B> is. In general, state |B>, being a successor, will have information about having been in state |A>. Go back and look at the electron example I gave, to see an explicit example. (Maybe you are assuming |A> and |B> are very non-complex states? But then how can they have memories?)
Well, they don’t. That’s why I added a memory. Think of the states |A> and |B> as the momentary content of the mind, as what it is aware of. This will have the same dimensionality as the state space of whatever the mind is being aware of (so it will be a qubit in the simple cases we’ve been discussing). You keep assuming that somehow a memory is additionally associated with this; but this is just not the case. Hence my addition of one, in order to make the physical nature of memory more clear to you.
Come on. I mean, how could it? If I change the basis I describe a quantum system in, do I then change its properties?
This has obvious pitfalls: it creates confusingly ill-defined and artificial states, with an artificial separation between the physical origin of memory and the content of the mind. I’ll have to think about it some more later when I have the time (real life intervenes). But I would urge you to just step back for a moment and look at the time evolution of a real human state. Consider the electron example I gave and tell me where it is going wrong. I think that you are unnecessarily complicating matters in your attempt to simplify them.
YES! If a particle has a definite momentum, then in the p-basis it has the property that it is localized, while in the x-basis it has the property of being spread out.
Oh, c’mon, really? It has two properties: a sharply defined momentum, and a spread-out location.
As for the ‘pitfalls’ you see, I think this is the only way to sort out our trouble, by using an explicit model. You fall into the trap of believing your intuition that the memory somehow is preserved in a split, and so on; the model shows that it’s not so simple. You take too much for granted without examining the physics behind it.
Yes, exactly. You think this is silly, but it is not. If consciousness were to supervene on the projection in momentum space, it would be different from the consciousness supervening on the projection in position space.
But in order to prove your point, it seems that you are assuming that the state |A> and |a> are the same; otherwise |a> would not be a faithful memory. Therefore without collapse, when you assume that |x> will evolve to |a> + |b>, you must also assume that |X> will evolve to |A> + |B>. Therefore without collapse dynamics the state will not be |A>(|a> + |b>), but rather (|A> + |B>)(|a> + |b>).
There’s a set of physical properties, in principle given by the linear subspaces of Hilbert space, upon which supervenes a set of mental properties, given by who knows what. I’m talking about the collection of properties in the supervenience relation, not about any properties individually.
No. |a> must only be one of a set of two orthogonal states, and thus, capable of representing one bit of information, 1 or 0, |A> or |B>.
Even if both were the same state, that wouldn’t be true. I can make a measurement of a subsystem by just measuring I x P, where I is the identity on the one subsystem (whose state is |X>), and P is a projector on the second subsystem (x denotes the tensor product). I can implement a unitary evolution U on the memory subsystem in the same way, by acting with I x U on the whole system. So the evolution |A>|x> --> |A>(|a> + |b>) is perfectly well implementable.
But there could be supervenience on properties individually, in which case there could be multiple supervenience relations in multiple subspaces, and therefore multiple supervenience relations on what is ultimately the same physical phenomena.
Then again, aren’t they the same? You say that |a> or |b> can represent |A> or |B>. Then in this model |A> or |B> can also represent |a> or |b>. Therefore they are equivalent.
You are assuming collapse, and ignoring the fact that if |a> and |b> can distinguish |A> from |B>, then if attempting to measure |x> yields a superposition |a> + |b>, it also yields a superposition |A> + |B>. This is the problem when you design something so physically removed: it is extremely easy to make silly errors. Also, not having a background in quantum computation or these kinds of manipulations myself, it is not transparent to me that, without collapse, you can “just measure I x P” without affecting the measurer, and without there being, by the usual definition of measurement, a spectrum of superposed eigenvalues.
This is why – again – I think it would be much clearer to think in the terms I have already explicitly delineated. That is, you have a conscious state |A>, equipped with certain memories, who passively observes and evolves as an electron goes past a magnet. The electron is deflected upward and downward (having been in superposition) and in each part of the superposition hits a phosphor screen, which emits a photon, which hits the observer’s eye. There are now two observers, each hit by a different photon, whose internal memory of the event is therefore different. The only places where I have assumed collapse so far are trivial ones. For example, I have assumed that the observer sees a photon, when in reality he will be hit by a spread-out wave front. But nonetheless the two wave fronts coming from the phosphor screen, depending on where the electron hit, are distinct. Therefore, without any collapse, there is still, in the whole ensemble of superpositions, a distinction in memory between the physical observers: one part of the superposition having been subject to one spread-out photon wave front, the other part having been subject to another. So now we have the state:
|A up>|up>+|A down>|down>
Continuing the experiment, the same thing happens again, this time once to each of the two parts of the above superposition, leading to:
|A up down>|up>|down>+|A down down>|down>|down>+|A up up>|up>|up>+|A down up>|down>|up>
Well, take again the examples I presented. For statistical physical properties, for instance, temperature supervenes on the momenta, volume on the positions of the particles making up some gas. If you consider these individually, do you have two statistical physical states supervening on one fundamental state? No, you have simply one incompletely described physical state in each case. The same goes for biology: you don’t have two different biologies, but maybe two biological phenomena. (This I would say is analogous to one mind supervening on a superposed state, giving rise to two separate awarenesses.) So why should consciousness be different? I mean, of course, it might well be, but there does not seem to be any reason to assume it is.
In any case, this whole side thread is a little irrelevant, seeing how I already said that I’m willing to assume that it’s possible for two consciousnesses to supervene on one physical state for the sake of this discussion (nevertheless I still think the possibility is inconsistent with quantum mechanics for reasons of linearity).
|a> and |b> can represent having been in the states |A> and |B>, the same as 1 and 0 can represent ‘having won the lottery’ and ‘having not won the lottery’; to say that therefore 1 is equivalent to having won the lottery does not strike me as very sensible. But of course, they are equivalent in that both are two-level systems; but they may be wholly different states (one could be x-eigenstates, the other y-eigenstates).
No. No evolution of the observer takes place at all. Just picture two qubits; I can perfectly well measure/evolve one without affecting the other.
This is what I’ve been saying. Hence, rather than getting lost in abstract reasoning about complex systems with gazillions of hidden assumptions, I propose moving to a simple and explicitly describable model in which no ambiguities remain.
You just need to write it down explicitly, and it’ll become clear. Take as a system the tensor product of two qubits, say in the state (1,0) x (0,1), and effect the evolution given by ((1,0),(0,1)) x (1/√2)((1,1),(1,-1)) (the 1/√2 factor making the second matrix unitary); it’ll leave the first qubit unchanged, and the second in a superposition.
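Written out in numpy (with the 1/√2 normalization included so the Hadamard-like matrix is actually unitary):

```python
import numpy as np

# Two qubits: the first (the 'mind') in (1,0), the second (the 'memory')
# in (0,1); the joint state is their tensor (Kronecker) product.
mind = np.array([1.0, 0.0])
memory = np.array([0.0, 1.0])
state = np.kron(mind, memory)

# Evolution I x H: identity on the first qubit, Hadamard-like rotation
# on the second (the text's ((1,1),(1,-1)), normalized by 1/sqrt(2)).
I = np.eye(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
U = np.kron(I, H)

after = U @ state
# The first qubit is untouched; the second ends up in the
# superposition (|0> - |1>)/sqrt(2).
print(after)  # ≈ [0.707, -0.707, 0, 0]
```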
Again, this is the point in which you make unwarranted assumptions—namely that memory works as usual in your scheme. If you take that for granted, then certainly, you will get a valid successor relation (that given by memory), and hence, mental evolution, and you can convince yourself that that’ll do. But memory isn’t an unanalyzable primitive; it is simply physical, and has to be treated on par with all other physical facts, i.e. in particular, your theory must account for it. As it stands, it does not.
This already assumes collapse, because you neglect the third observer, who saw both photons and is in superposition.
In the assumption that a classical view of memory can be carried through unchanged into the quantum world.
Fine, but my point was that if both represent two-level systems, then both will, if starting in an eigenstate of one level, evolve with time into a superposition of both levels.
One qubit cannot cause a second qubit to evolve without itself being caused to evolve in response.
Yes, there is also the third observer who saw both photons and is in superposition. How have I assumed collapse? Nowhere in my argument did I rely on the nonexistence of the third observer who saw both photons. In fact, its existence was not meant to be hidden; it was implicit, and has been described explicitly in my other descriptions of the example.
Then you are still not understanding my example, and still not understanding anthropic selection and its relation to the wave function. If you have a wave function evolving, it contains a set of uncountably infinite classical worlds evolving. From this view alone it is obvious that memory is moved forward in time, because the wave function is evolved forward in time, and therefore the collection of classically-encoded memories is moved forward in time. Because we nowhere assume actual collapse of the wave function, even though the collection of classical memories is moved forward in time, they are still in superposition, and the question of the number of potential minds supervening on all the possible superposed subsets of classical memories becomes just a combinatoric problem.
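For what it’s worth, the counting that the last sentence alludes to can be sketched as follows (the “minds as nonempty subsets of branches” model is just my reading of the claim, not something established above):

```python
# After n binary measurements the wave function contains 2**n branches,
# each carrying one classical memory sequence. If a candidate mind could
# supervene on any nonempty subset of those branches, the number of
# candidate minds is 2**(2**n) - 1 -- the 'combinatoric problem' above.
def branch_count(n):
    return 2**n

def candidate_minds(n):
    return 2**branch_count(n) - 1

for n in range(1, 4):
    print(n, branch_count(n), candidate_minds(n))
# n=1: 2 branches, 3 subsets; n=2: 4 branches, 15; n=3: 8 branches, 255.
```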