It doesn’t have any effect on his conscious experience, but it does have an effect on his experience in the sense of the observations he has made. If a clone makes observation A, and another makes observation B, there is an experience AB only if the second clone is in some way the future version of the first. If that weren’t so, consider the case in which both clones simply exist concurrently: they are related to one another just as much as two clones that exist in temporal sequence without a successor relation. But here, I take it, you would not hold that there is in some sense the experience AB, never mind that the clone observing A might have the memory (implanted by an evil scientist, say) of having observed B. Or would you?
Well, again, it has no effect on his conscious experience (still curious where you got that notion from). But it has an effect on his history: in the case where only c1 and c2 exist, presumably supervening on physical states s1 and s2, then the physical evolution gives an unambiguous successor relation: c2 has a history that includes, in some sense, having been c1. But if c3 is added, there is no longer an unambiguous successor relation defined by the physical evolution: c2 may have been c1, and c3 may have ‘popped up’ out of nowhere; or c3 may be c1’s successor; or both may be. In any case, there is now ample possibility for the history of c2 not to include having been c1.
The problem is, again, that the clone example misleads your intuition. Since there’s always a fact of the matter about this clone having been split into this one and that one, you believe that the same holds true in the many-worlds model. But obviously this is not the case: the branches in many worlds are individuated solely by the part of the superposition that gives rise to them; in other words, by the ‘card’ they receive. This is the only individuating event. So, in talking about the clone example, you keep convincing yourself that the structure added to the clone example by its concrete implementation also holds for the many-worlds scenario, and you draw the wrong conclusions from it. Since you evidently don’t see the disanalogy, it’s better to focus directly on the quantum case, rather than continue to distort it with classical intuition.
Yes, and I don’t think I messed something up. A fraction |a|[sup]2[/sup] of the minds supervening on a|A>+b|B> must experience being in the state |A> and have one predecessor supervening on |x> each, and a fraction |b|[sup]2[/sup] of those minds must experience being in state |B> and have a successor on |x> each. That way, if you are one of the minds supervening on |x>, your chance of ending up experiencing the state |A> is |a|[sup]2[/sup], and your chance of experiencing |B> is |b|[sup]2[/sup]. No? (EDIT: Or are you just asking about the meaning of |x>? If so, it’s just arbitrary.)
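(For concreteness, with numbers I’m just making up: take a = 0.6 and b = 0.8, so |a|[sup]2[/sup] = 0.36 and |b|[sup]2[/sup] = 0.64. If 100 minds supervene on |x>, then 36 of the minds supervening on 0.6|A> + 0.8|B> experience |A> and 64 experience |B>, so a mind picked at random from those on |x> ends up experiencing |A> with probability 0.36.)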
Memory is physical. Whatever memory is present in the first picture is also present in the second. However, if the (upper) mind having a history BB in the second picture has a memory of having observed AB, his memory is simply wrong.
But memory is a complete red herring anyway. If we’re talking about elementary many-worlds splittings, there is no memory in this simple sense. If a mind is in the state |A>, and then in the state |B>, it can’t have memory of having been in state |A>—this is exactly the part of it that has been rewritten, so to speak. A qubit first in state |0>, then in state |1> has no memory of being in state |0>: there’s no room in the state space.
And neither is it possible to just ‘keep’ the state in some memory, and then just evolve the rest through a split: this would amount to an evolution |A> -> |A>|A>, which violates the no-cloning theorem. Memory is a very high-level property, compared to what we’re talking about here (hence its introduction via the clones leading to so much confusion). You seem to be thinking that a mind in the state |A> can just recognize that ‘Ah! I am in state |A>!’, save that information somewhere, and then get on with things; in particular, then split and enter a new state, maybe |B>, and still have access to that information, in order to say to itself ‘I used to be in state |A>, and now I am in state |B>’.
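To spell the violation out (this is just the standard linearity argument, written in the notation used here): suppose some evolution U could copy an arbitrary state into a blank register |x>, i.e. U|A>|x> = |A>|A> and U|B>|x> = |B>|B>. Linearity then forces

U(a|A> + b|B>)|x> = a|A>|A> + b|B>|B>,

whereas genuine cloning would require

(a|A> + b|B>)(a|A> + b|B>) = a[sup]2[/sup]|A>|A> + ab|A>|B> + ab|B>|A> + b[sup]2[/sup]|B>|B>.

The two agree only if a = 0 or b = 0, so no single linear evolution can clone arbitrary superpositions.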
In fact, in order to explain memory in quantum mechanics, we first need the notion of collapse, real or apparent, since only after the collapse do we have classical information that can be copied and stored as we need it: meaning, states that are orthogonal in the preferred basis (the basis singled out by the collapse), which can then be cloned. But the collapse, or the appearance thereof, is exactly what we’re trying to get clear about here (and in order to do this, we need the successor relation). To appeal to memory at this stage, misled by classical intuitions, is thus to commit a gross level confusion.
But in the example he is very explicitly the future version of the first. The first makes observation A, and is cloned. The second, the clone, is identical to the first, and has the memory of having made observation A. He then makes observation B. Therefore, he has experience AB. This should not be surprising, since, by definition of cloning, he is literally the same as the original.
As I said above, I find this to be a non sequitur, since it seems clear to me that in the example we are discussing there is a clear successor relation. The clone is, by definition, the same as the original. The original observes A at t1, and A at t2, and therefore has a memory of having observed AA. Or the original observes A at t1, and then, having been cloned, observes B at t2, and therefore has a memory of having observed AB. In the second case the original was a clone at t2, but this should have no impact on our thought experiment, because by definition of cloning the clone is the same as the original.
Nevertheless, if we were discussing the case in which an evil scientist placed the memory ‘AB’ into an unsuspecting clone (which again, I want to emphasize, I do not think is at all like what is happening in the clone example), I would still think that this should have no effect on the thought experiment or on the consideration of the probabilities.
Look, when we conduct science, we have no idea if an evil scientist is inserting memories into our minds. It could very well be that this happens. And if it does, we will just have to live with it. It would affect our calculation of probabilities if, for example, when we flipped a fair coin, a mad scientist inserted memories that made us think that the coin was in fact unfair. It would affect science. It would affect our determination of physical law. The same is true in the clone example. It doesn’t matter whether a mad scientist is inserting memories, or whether there is a successor relation or not. The fact is that regardless, at a given time step, a given clone will be equipped with a certain set of memories, and those memories will have to make do. Given those memories, the clone will come to certain conclusions. For instance, given the binomial distribution in the card example, most clones will have memories consistent with p=0.5. Most clones will have arrived at empirical data about the world around them. They can come up with a physical law for their universe: that p=0.5 for cards. To them, it would look strange, because the card they receive is random, even though other aspects of their world are deterministic. That’s just the way it is.
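To make the binomial point concrete, here is a little simulation sketch (the branching depth and the tolerance for “consistent with p=0.5” are arbitrary choices of mine): at every step each clone is duplicated, one copy receiving card A and the other card B, and we then count how many of the resulting memory strings look like a fair deal.

[code]
from itertools import product

# Each clone's memory is a string of cards. At every step, every
# existing clone is duplicated; one copy receives 'A', the other 'B'.
n = 10  # number of cloning-plus-card steps (arbitrary)
memories = [''.join(cards) for cards in product('AB', repeat=n)]

# Fraction of clones whose observed frequency of 'A' is within 0.2 of 0.5:
near_half = [m for m in memories if abs(m.count('A') / n - 0.5) <= 0.2]
print(len(memories))                   # 1024 clones after 10 steps
print(len(near_half) / len(memories))  # ~0.89: most memories look like p = 0.5
[/code]

The counts over memory strings follow exactly the binomial distribution, which is the point: the vast majority of clones end up with records consistent with a fair card deal.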
And what you are leaving out is that this leads you to conclude that if c1 has card A, and c2 gets card A, and c3 gets card B, then this lack of a successor relation implies that there is somehow “no meaning” in assigning a probability for c1 to, at time t2, receive AA versus AB. And yet c2 is by definition the same state as c1, and c3 is by definition the same state as c1. Therefore, by definition, the probability for c1 to find himself as c2 (AA) is 50% and the probability to find himself as c3 (AB) is 50%. This is why I must conclude that you are somehow denying the conscious experience of c2 and c3 as clones of c1. By the definition of cloning, their conscious experience is each identical to the case in which, considered separately, there was no other clone, and therefore a valid successor relation. Therefore if c2 receives card A (and therefore has a memory of AA), from the viewpoint of conscious experience, it is the same as if there had never been any cloning, in which case it was c1 who received card A (and therefore has a memory of AA). And similarly for c3 receiving card B (and therefore having the memory of AB). Since both cases therefore respectively represent the valid conscious experience of “c1 receiving AA” and “c1 receiving AB”, it follows that there is a 50% chance of c1 receiving AA or AB. The only way out is to deny the conscious experience of c2 or c3 on the basis of there being no valid successor relation, and, as I have pointed out, this leads to a logical contradiction.
I don’t see the difference. In the cloning example, there are two steps: at time t[sub]1[/sub] the state is cloned, and at time t[sub]1[/sub]’ the card is received. In the MWI case the two steps are combined into one: the difference between t[sub]1[/sub] and t[sub]1[/sub]’ is taken to zero.
You say “But obviously this is not the case: the branches in many worlds are individuated solely by the part of the superposition that gives rise to them”. But where do you think the superposition comes from in the first place? The process of the wave function going into superposition is what is in analogy to the cloning process. As I said, in the cloning example there are two steps, cloning at time t[sub]1[/sub] and receiving the card at time t[sub]1[/sub]’, and in the MWI case the two are combined into one: the difference between t[sub]1[/sub] and t[sub]1[/sub]’ is taken to zero. An initial state goes into superposition (i.e. is cloned), and at the same time the state is perturbed (the clone is dealt a card). This is just a re-statement of what the Schrödinger equation does (for simplicity backing off from the continuum limit and considering it as a difference equation) when it, for example, causes a wave function to diffuse. Say the wave function is initially highly peaked at zero (in the position basis, say) at time t[sub]1[/sub]. Corresponding to this wave function are only worlds with a particle at 0. At the next time step t[sub]2[/sub], neighboring amplitudes are cloned and shifted, producing worlds in which a particle is at 0, and also at -0.1, and 0.1. And so on. I can’t make all of this mathematically precise (how many worlds were there initially?), because then the problem of finding the Born rule in the MWI would be solved, and I don’t claim to have done so (yet :)). The point is that the principle by which it works is in good analogy with the clone example.
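Here is a minimal finite-difference sketch of that diffusion picture (the grid spacing, time step and units are arbitrary choices of mine, and a single Euler step is only meant to be illustrative, not a careful integrator):

[code]
import numpy as np

# Free-particle Schroedinger equation as a difference equation,
# psi(t+dt) ~ psi(t) + (i*dt/2) * d^2 psi/dx^2, in units with hbar = m = 1.
dx, dt = 0.1, 0.001                    # arbitrary grid spacing and time step
x = np.arange(-2.0, 2.0, dx)
psi = np.where(np.abs(x) < dx / 2, 1.0 + 0j, 0.0)  # sharply peaked at x = 0

lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
psi_next = psi + 0.5j * dt * lap

# After one step, amplitude has 'leaked' to the neighboring grid points:
for target in (-0.1, 0.0, 0.1):
    idx = np.argmin(np.abs(x - target))
    print(x[idx], abs(psi_next[idx]))  # nonzero at all three points now
[/code]

At each step the peaked amplitude is, loosely, “cloned and shifted” into its neighbors, which is the discrete analogue of a clone being dealt a card.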
I don’t follow the “and have one predecessor supervening on |x>” and “and have a successor on |x>”. You are talking about both predecessor and successor states? I don’t follow, but even so, if you are talking about predecessor and successor states, the number of minds supervening is not constant in time.
When I have been thinking of a state |A> having memory, I have indeed been thinking of something more high-level: a human. A human is a state that has the capacity for memory. If we consider some kind of quantum branching, for example the measurement of an electron in a sideways spin state as spin up or spin down, then the human, memory and all, branches into a human who has measured spin up and a human who has measured spin down. This, to me, is in good analogy to the clone example. Now, let the human perform the experiment again. If each copy does, then we end up with four humans, having measured {upup, updown, downup, downdown}. This again seems to be in good analogy to the clone example. Just as in the clone example, the humans have memory, and can find that p=0.5.
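(Written out in the state language, for the equal-amplitude case: starting from |sideways>|ready>, one measurement gives (1/√2)(|up>|saw up> + |down>|saw down>); running the experiment again on each branch gives (1/2)(|saw up,up> + |saw up,down> + |saw down,up> + |saw down,down>), i.e. four equally weighted humans. After n runs, the overwhelming majority of the 2[sup]n[/sup] records show a relative frequency of “up” near 0.5.)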
OK, here I feel compelled to again try to explain anthropic selection, and why successor relations don’t matter, because I think at the end of the day we are hung up here.
You, HMHW, may ask yourself: “why am I always me? Why am I not iamnotbatman sometimes, or frylock?” From what I gather, you do not find the answer as obvious as I do. You would say: “well, there is a valid successor relation between HMHW at time t[sub]1[/sub] and HMHW at time t[sub]2[/sub], and there is not a valid successor relation between HMHW at t[sub]1[/sub] and iamnotbatman or frylock at t[sub]2[/sub].” But, dear HMHW, I would tell you: there is no need for such an extra-physical and unnecessary rule. Such an unnecessary rule is born of a naive attachment to “primitive this-ness” of the ego, the idea that an ego has some “this-ness” that allows you to track it, even after it passes behind a curtain while a physically identical ego passes through from the opposite direction. No, this is a solution in search of a problem. Just as we have discarded primitive this-ness of indistinguishable particles or of waves, there is no primitive this-ness to consciousness. Why are you always you and not iamnotbatman? Because if you were iamnotbatman, then you would not be asking that question. You would not be aware of having asked the question. You would just be me, iamnotbatman, and you would have the memories of iamnotbatman, and you therefore would have no conscious experience of “HMHW being iamnotbatman.” iamnotbatman just experiences being iamnotbatman, and HMHW experiences being HMHW, by definition. There need be no successor relation to rein in chaos.
This principle is called “anthropic selection.” If you were to cut HMHW’s and iamnotbatman’s lives into two pieces, performing the slice at time t, and then interchange the successor relations, so that at time t iamnotbatman became the physical successor to HMHW, and HMHW became the physical successor to iamnotbatman, then it is trivially obvious that, for the same reason as above, iamnotbatman doesn’t “experience being HMHW” or vice versa. Despite the criss-crossed successor relation, each one, by definition, given his physical state, his memories and so on, continues to be himself, entirely unaffected. The conscious experience of each is anthropically selected, or “self-selected”, rather than selected by an extra-physical principle. If the logic works for one time slice, you can of course extend it to the extreme, and remove (or criss-cross) every successor relation between every time slice. The consciousnesses are oblivious, because they are self-selected rather than selected by an extra-physical rule.
You’re still not even seeing the problem. I don’t believe in some extra-physical haecceity of consciousness; I said that physical evolution is enough to provide identification. I agree with everything in the example in your last post, but of course, the example misses the crucial ingredient: that there is, in your model, the possibility of me being in the physical state of HMHW, while being in the mental state of Frylock, HMHW, iamnotbatman or the wicked witch of the west. Of course, I will always perceive myself to be none other than myself—that is, if I end up as Frylock, I will not wonder that I, HMHW, have suddenly become Frylock. But knowing that in fact minds typically just jumble in and out of existence in your model, the validity of my experience (read: the question of whether my memory etc. refers to anything that has been the case) becomes something in need of explanation; in the situation in which there is only one consciousness supervening on the physical state, this explanation is unnecessary.
As for memory, it is true that we can’t be sure, on an ordinary reading, whether or not our memory is truthful, but we have no valid reason to believe otherwise. The possibility of doing science is predicated on this assumption. In your model, should one believe it, one would have to believe otherwise: that memory may in general simply be false. Then the possibility of validly assuming that memory is truthful is simply not given.
And regarding the physical nature of memory, you still haven’t seen the point: memory needs an explanation of the measurement process; your bringing it into a situation where that is exactly what’s missing makes your statements simply nonsensical. Consider an implementation of some consciousness with a memory, given, say, by a string of n qubits in some blank state |x>. The total state after having gotten, say, A, is: |A>|xxxx…>. Now, if the observer wants to remember having gotten A, she needs to somehow set one of her qubits to an appropriate state; but this always involves measurement. One possible protocol would be to measure the first qubit in the x-basis; if she receives |1>, she considers the qubit to have memorized the information ‘have seen A’, so let’s call this state |a>. If she receives |0>, she considers that to be the information ‘have seen B’; consequently the state is |b>. If she receives a state different from what she wants, she can simply apply a unitary rotation to flip |1> <-> |0>. After some series of observations, then, the observer should be in a state |A>|aababb…>; she can then use measurements in the x-basis to read out her qubits (not engendering any collapse), and compare the observed statistics with the quantum predictions.
But note that this analysis is predicated upon the existence of a collapse, and thus, needs the resolution of the splitting dynamics before it can even be applied! Otherwise, the generic outcome of the first measurement would be something like |A>(|a> + |b>)|xxx…>, which is useless as a memory. Without the resolution of the measurement problem, without the successor relation, we can’t talk about memory! Thus, your attempt to use it to shoehorn a successor relation back in through the back door is simply fallacious.
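To put a toy numerical version next to this (all modelling choices are mine; I use a plain CNOT copying in the preferred basis, rather than the x-basis protocol above): for a definite branch state, the memory qubit ends up holding a definite record, but for the uncollapsed superposition it only ends up entangled with the system, with no definite record anywhere.

[code]
import numpy as np

# Two qubits (system, memory), basis ordering |00>, |01>, |10>, |11>.
# 'Recording' = CNOT: copy the system's preferred-basis value into the memory.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

def record(system):
    """Write the system's state into a blank memory qubit via CNOT."""
    return CNOT @ np.kron(system, ket0)

# Collapsed branch: a definite system state yields a definite record.
print(record(ket1))             # [0 0 0 1] = |11>: memory reliably reads '1'

# Uncollapsed superposition (|0> + |1>)/sqrt(2):
sup = (ket0 + ket1) / np.sqrt(2)
print(record(sup))              # (|00> + |11>)/sqrt(2): system and memory
                                # are entangled; the memory qubit alone holds
                                # no definite record of what was 'seen'.
[/code]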
Actually, this point is perhaps made better and more succinctly by just noting that, in order for there to be a HMHW state in the future, there must already exist some sort of causal pathway, some means of information transfer, in short, some successor relation that transfers the HMHW-memory from now to then, or then to later; but this is of course precisely what’s at issue.
In order for what? What happens if there is no successor relation? How is the experience of HMHW any different? Previously you have asserted that his consciousness is unaffected. But then what are you saying? What is the content, the purpose, of saying “there must be a causal pathway”? There must, in order to… what?
I don’t see why it must be the case that there must be some means of information transfer (itself, I think, ill-defined; I would define information transfer to be the fact that information was in the past and now is in the future; the word ‘transfer’ is an empty extra-physical notion). You are making unnecessary assumptions rooted in something like a haecceity of consciousness (though you deny it).
Why must there be a pathway? You are presuming upon what can very well be physically realized in spite of you: a HMHW arising spontaneously out of a statistically unlikely arrangement of atoms. You cannot wish away the potential physical reality of such situations.
You seem to assert that there is some “essence” that must be passed from past to future that is somehow beyond the physical. For if we only considered the physical, all that matters is the state at t1 and the state at t2. Let me put it this way. As I would understand your position, it would imply that a physical simulation could never produce consciousness, if the physical simulation consisted of time slices each corresponding to a different physical state. This is because, though you have a series of versions of HMHW, where information moves from prior states to future states, where information appears to transfer, there isn’t actually a valid successor relation, because there is no causal relationship between the state at each time slice other than the fact that one precedes the other.
In order for the information contained in my memory to get to the next moment in time. For a case in which a single consciousness supervenes on the physical state, this causal pathway is provided by the physical evolution; in the computer case, it is provided by the computation. But in the case of multiple consciousnesses supervening on the same physical state, it is simply missing. Think of the minds as nodes in a network: you assert that two nodes share information (i.e. memory), and that this suffices to create a connection between them; but what you fail to appreciate is that without a connection existing in the first place, there is no way for both nodes to share that information. You just assume that they do, that by some miracle the correct memory pops out of the woodwork where it’s needed. But this isn’t the case, and as I’ve tried to show you, the notion of memory transfer is simply still ill-defined in your framework.
But you are still not seeing the problem. You do, without realizing it apparently, believe in extra-physical haecceity of consciousness. You refer to the validity of your experience, as though that has some extra-physical meaning, as though, if a memory “really” happened to another copy of the same physical state, that that wouldn’t have been you. But of course it was you if you don’t believe in the haecceity of consciousness. They are both by definition, by physicalism, you. You nonetheless assert that there is some meaning in a successor relation being necessary to extra-physically distinguish between two states that are physically identical.
Honestly, this is making me feel a little helpless at this point. I fully accept that what happens to a copy of me whose memories I then inherit, or whatever other permutation you might dream up, happens to me; I’ve said as much again and again. The physical state, on my view, is the conscious state. The problem is that you have no mechanism for the memory inheritance. There’s no way, on your construal, for any particular consciousness in the future to inherit the memory of me now, because there is no way to point to any of these consciousnesses as my successor. Just as in the physical world the transference of information requires a physical evolution, in the mental world the transference of information requires a mental evolution; on my construal, these are just one and the same. But they aren’t on yours; hence you claimed there is no evolution. But then, you still claim that there could be an inheritance of memory. This is, quite simply, contradictory.
If we are talking about physical memory, what does multiple consciousnesses supervening have to do with anything??? Whether or not physical memory is present at time t1 or time t2 has nothing whatsoever to do with the existence or non-existence of consciousness. The anthropic selection happens after the memories have been moved forward in time by the Schrödinger equation; it is not itself the memory selector.
The Schrödinger equation provides a deterministic time evolution that pushes physical states forward in time. Nothing need miraculously “pop out of the woodwork where it’s needed” (although if it did, that would be fine; if it happened, then anthropic selection would define the relationship).
Which construal of mine are you referring to? My construal of the MWI? Or are you making a general statement about any physical situation in which there are clones and ambiguous successor relations? (In general I think you don’t realize that your statements can be very confusing in this respect.) If you are making a more general statement, I have repeatedly provided examples in which the inheritance is physically manifest. For example, during the cloning process the memory is physically transferred, by the definition of cloning. If you are referring specifically to the MWI, as I have repeatedly pointed out, the Schrödinger evolution is a deterministic physical evolution. It transfers states forward in time. I am completely at a loss as to how your argument circumvents the fact that physical states are evolved forward in time, and with them, of course, memories. You point out that this presupposes collapse in order to explain collapse. It does presuppose collapse (though in this conception trivially, as there is no such thing as something ever not being collapsed), but not in order to explain collapse. I am not explaining collapse here; collapse is already explained; I am explaining how a memory can be a successor to another.
Nooooo… In order to explain the (experience of) collapse, you first need to explain experience; and for this, you first need to explain the transference of memory. If that’s predicated on the existence of the collapse, then the cat just chases its own tail…
And about this part:
This confuses me. Are there, or aren’t there minds aware of the uncollapsed superposition? Because if you now hold there aren’t, after all, then much of this discussion has suddenly become rather pointless…
The physical transformation that generates a set of universes, each with different memories that have been transferred from the past, is not predicated on the existence of collapse. The “collapse” is by anthropic selection, trivially, in the sense that nothing is dynamically happening: there is merely the anthropic self-selection by a conscious subset of the wave function. I had worded that badly. The point is that the overall wave function is the same both before and after the “collapse”.
You are right I worded that badly (see above). Although it may be worth revisiting the fact that from the very beginning (particularly at the beginning) I made clear that the question of whether superposed minds exist is a tiny and IMO irrelevant side issue. I even stated that I thought they probably don’t exist. What I said was that I don’t see any reason why a superposed mind cannot exist. A superposition is part of physical reality, and I see no reason why a mind cannot supervene on such a part of physical reality. If it does exist, then there is no problem with it coexisting with other minds that supervene, in the same way that us humans could be part of a larger galactic organism which itself is conscious.
Here I made a set of 4 slides similar to your earlier ones, but which explain anthropic selection.
The first shows 3 minds, each at three time slices. There is no (physical) successor relation in any of the slides. You can imagine that each mind magically appears by coincidence at each time slice. The caption on each mind represents a succession of memories (with time moving from left to right).
The second slide shows, in blue, all of the valid “anthropic successor relations”. In other words, conscious experiences that are anthropically self-selected in a way that I claim is indistinguishable from your requirement of a “physical successor relation”. In red I show one (of many) examples of an invalid anthropic successor relation. (Do I need to explain why? I will assume for now that you follow).
The third slide shows a different set of minds and memories. Buried in there are the minds from the cloning example.
Slide four again shows, in blue, the valid anthropic successor relations. In red is one (of many) examples of an invalid successor relation.
Perfectly (nice drawings! Did you use inkscape, or what?). If you presuppose memory, then you can have your ‘anthropic selection’ do the work. But the anthropic selection can’t at the same time also yield the prerequisites for the memory transference.
In other words, what is the memory of an observer with memory in the state |A>(|a> + |b>)? You’ll say that anthropic selection implies that there’s an observer in the state |A>|a> and an observer in the state |A>|b>, the latter of whom will then use a unitary transformation to change her memory to the correct |A>|a> before any next splitting event. But why is the observer not one of the consciousnesses that supervene on the total superposed state? This is where the collapse needs to be explained.
You might think the existence of these minds matters little and can somehow be neglected, but (as I’ve pointed out before) it’s actually the sole point at which your account diverges from the standard one and becomes incoherent: simply because (ultimately) there are two different ways to be conscious of a (superposed) state (and neither anthropic selection nor any physical principle can choose between them).
Powerpoint. Not a huge fan of it, but I use it a lot.
But the conscious superposed state does not collapse. It is a superposed consciousness, after all. The collapse is only apparent to non-superposed consciousnesses, by definition. The memory of the observer in state |A>(|a> + |b>) would be of both |A>|a> and |A>|b> simultaneously. If such a thing supervened it would, after all, be a completely unfamiliar and unintuitive type of consciousness.
First of all, as I’ve pointed out, I’m not sure how the existence of these minds is any more perplexing in my account than in classical mechanics. For example, consciousness can supervene on my mind, and simultaneously supervene on a galactic-sized mind in which I am but an atom. Is this incoherent? If there are two different ways to be conscious, why must one choose between them? Why cannot they both exist simultaneously?
The problem is precisely its lack of collapse: as long as this possibility exists, you have not explained memory. Again: the only way to have a description of memory in quantum mechanics is to have a description of measurement (as it’s always required to set a memory to a given value), and hence, an explanation of the collapse.
The problem is that, as far as quantum mechanics is concerned, your proposal is not analogous to a consciousness supervening on one part of the physical state, and a consciousness supervening on another part (that may include the other one fully or partially). This, I don’t see a problem with—it would after all be the same as one picture supervening on one set of pixels, and another picture supervening on another set of pixels (of which the first set might be a part). Like the picture of an eye, and the picture of a face.
But on your account, both different kinds of consciousness supervene on the same physical state; so the classical analogue would be, for example, one and the same brain state leading to two distinct conscious awarenesses. After all, the state in quantum mechanics is only one object: |A> + |B>. I can buy (and do, to the extent that I accept many worlds) that this state corresponds to the awareness of A, and the awareness of B: one kind of conscious awareness of the state, ultimately, that merely happens to partition into disjoint experiences. But to simultaneously postulate that there could also be an awareness of the superposition in total simply stretches things beyond their breaking point, at least insofar as a naturalistic account of consciousness is desired. It’d be like one brain state leading to two minds, or one set of pixels leading to two pictures (say, a portrait of me, and one of you); as I’ve said, I’m willing to entertain the possibility, but I think it leads to insurmountable difficulties.
I don’t follow this. Perhaps what you say is true if you define memory in such a way that it is true. You say that collapse is required to set a memory to a given value. But if a mind supervenes on a superposed state |A>(|a>+|b>), then it would follow that the mind simultaneously has both memories: it simultaneously remembers both the memory that would obtain if it collapsed to |A>|a> and the memory that would obtain if it collapsed to |A>|b>. It is not clear to me why it is necessary that you assert that memory must be defined to exclude this possibility.
I do not see the problem with this. The conscious awarenesses that supervene on the same brain state have access to the same memories and so on, so they are going to be very similar; however, I see no reason to exclude the possibility that a spectrum of nearly identical conscious awarenesses can supervene, each with a different experience of qualia, for example. I do not see what is inconsistent about this. The physical still completely determines the mental (i.e. fixes a collection of mental states).
The argument you provide in this last paragraph seems to just be asserting that you find it somehow absurd. But is there an actual contradiction? Could it be, perhaps, a lack of imagination on your part? Personally, I am not die-hard set on there being such a superposed consciousness. I just don’t at all see any contradiction in accepting its possible existence. Again, a classical example seems apropos. I don’t see anything contradictory about assuming that our left and right brains are separately conscious, and also, in addition, that their combination is conscious. Do you? Or is this still an issue of the MWI picture being different from the analogy I am trying to make? At the moment the analogy seems fine.
For example, the left brain and right brain don’t have to agree. The left brain might remember being sad while the right brain remembers being happy (rare, perhaps, but certainly possible). The whole brain, when firing those neurons simultaneously, feels some combination, perhaps one with a quality all its own, the same way we see “yellow” when we see red and green simultaneously. I don’t see the superposed state as being different. It could, perhaps, remember an electron going both left and right when put through a Stern-Gerlach magnet. Cool. If going left would have made it happy and going right would have made it sad, perhaps the memory of both happy and sad existing simultaneously gives it some new qualia, happy-sad.
Maybe the distinction you need is (something like) supervening on all of a thing, where to do so means that any less of the thing would take the supervenience away. (Although “any less of a thing” is problematic.) I don’t think this is enough, however. For example, “being taller than 6ft” and “being shorter than 7ft” both supervene on exactly the height of an object. Or “being non-identical to Fred” and “being non-identical to Bob” both supervene on non-Fred, non-Bob objects. Identity would get the job done, i.e. if x supervenes on y, then x is identical to y. But that seems too strong.