This explicitly presupposes that HMHW_AA is the (or one) future version of HMHW_A. But, as I’m trying to make clear, your model simply does not support this.
You keep trying to have your cake and eat it, too: first, you assert that there is no identifiability among minds across time, in order to avoid the conclusion that you need an extraphysical evolution rule; but second, you presuppose that one mind can validly be called the successor of another, in order to avoid the conclusion that otherwise, no scientific reasoning is possible. These two claims are flatly and blatantly in contradiction.
Furthermore, there simply is no room in the state to serve as a physical carrier for the memory you are proposing. The two clones created at any split are distinguished only by receiving card A or B; this must be the case in order to have this be a valid model of the (simplest kind of) many-worlds split. But then, they are each isomorphic to a two-level system: all other degrees of freedom just don’t matter, and we can stay in the relevant subspace of the Hilbert space. Then, of course, my quantum version of this thought experiment using beam splitters is an exact analogy.
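To make the two-level isomorphism concrete, here is a minimal numerical sketch, assuming a standard Hadamard-like convention for the 50/50 beam-splitter unitary (my choice of convention, not anything fixed by the discussion):

```python
import numpy as np

# Two-level system: the basis states stand in for "received card A" / "received card B".
ket_A = np.array([1.0, 0.0])

# A 50/50 beam splitter acts on this qubit like a Hadamard gate.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

state = H @ ket_A            # state after the split: equal superposition of the two branches
probs = np.abs(state) ** 2   # Born-rule weights of the two branches

print(probs)                 # each branch carries weight 0.5
```

All other degrees of freedom being irrelevant, this one unitary on a two-dimensional subspace is the whole physics of the split.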
There is just no physical way to implement a memory in the way you want to. If you disagree, the onus is on you to demonstrate it.
As for your last post, as I have now said too often for me to count: if your state at the later time/different point in space is connected to your prior state via physical evolution (i.e. in this case, a unitary map, or, because I’m just that generous, one that is at least completely positive and trace-preserving), then that map provides an identity relation in a completely simple, physical way. So yes, in this case, identity obtains.
Then I would be led to conclude that you believe the following: if you are cloned at (x1,y1,z1,t1) and emerge at (x2,y1,z1,t1), you would not think that you are now somehow a zombie, or that you are somehow missing some je ne sais quoi?
If there is a one-to-one correlation between the mental and the physical, then no, obviously not (though I’d say that the only physical way for this is if the ‘cloned me’ is really at some time t[sub]1[/sub] + x).
You keep avoiding addressing the specifics of the example I have very explicitly written out for you. I have asserted very little in the example. It is just a simple thought experiment showing what memories would be stored in each person if you cloned them and showed them cards and then repeated the experiment once more. You had asked how such memories were possible. I showed exactly how it was possible. How, exactly, is it that the memories are erased? Where, specifically, in the example, are the memories no longer there? At t1? t2? Where?
Once the two clones receive a card, they have a memory of having received that card. I am extremely confused at how you keep ignoring this trivial fact. Is it on purpose!?
I can’t assume in a thought experiment that humans have the capacity for memory? The onus is on me to show that a human is able to remember whether they received card A or B?
Just to be clear: you do or do not believe that, having been cloned at (x1,y1,z1,t1) and emerged at (x2,y1,z1,t1), something is “missing”?
Your last answer implied that there was nothing missing if you are erased at (x1,y1,z1,t1) and reappear in the exact same physical state at (x2,y1,z1,t1).
I am not seeing the flat blatant contradiction. It is easy to imagine one thing being a successor to another, without the two being identical. It is also easy to imagine many natural senses of the word “identifiable” according to which there might be one thing which succeeds another, but which is not “identifiable” with the other.
Can you explain more explicitly what exactly the contradiction is that you’re seeing?
No, what you did was to assert that there were memories, without giving any thought to the question of what it actually means to have memories in the thought experiment.
There are no memories being erased; however, you will (hopefully) agree with me that the forming of a memory requires a state change, no? And that if some future version of a clone shares the memory of that clone, he must at least share that part of the clone’s state that ‘carries’ the memory, right? But then, we must have an evolution on the mental level: the memory must have somehow been transferred, or re-created, in the future clone. There must be some information flow from the previous clone to the later one. And in your model, lacking as it does mental evolution, there is no channel for such an information flow.
Yes, of course! You seem to accept it as a triviality, but it presupposes a thing you supposedly deny: transtemporal identifiability. Your unanalyzed stance with respect to the physical nature of memory is exactly what gets you into hot water here! You assert it is somehow transferable from an earlier mind to a later one, without apparently realizing that this is synonymous to assuming that the earlier mind evolves into the later one!
I’m not sure what you’re trying to goad me into here, but, as has been my stated position at all times, if the physical is identical with the mental, if all mental facts are determined by the physical ones, then a physical evolution—a CPTP map, in QM—is a sufficient condition for identity of two physical systems, and hence, the accompanying minds. If there are several minds accompanied with the same physical state—well then, I just don’t know how to define an identity of minds; clearly it can’t be the physical evolution. But luckily, I don’t have to have an answer for such a situation, since I don’t think it obtains anyway!
Well, it’s a direct consequence of the way I’ve used ‘identifiability’ in this thread: as being related by a physical (or otherwise) evolution, i.e. a map taking past states as input, and outputting future states. Concretely, the model is such that there is a set of minds associated to a physical state at t[sub]1[/sub], and a set of minds at t[sub]2[/sub]. Identifiability, in the sense I’m using it, means that there’s a fact of the matter that some particular mind at t[sub]2[/sub] is the successor of some particular mind at t[sub]1[/sub]. Clearly, this isn’t determined by the physical evolution: this is compatible with all these identifiability-relations. The answer by iamnotbatman then was that there is no such identifiability/successor relation, to avoid having to postulate any extra-physical rule.
So basically, on this construal, a successor relation of any kind is an identifiability relation: if you say that c[sub]2[/sub] at t[sub]2[/sub] is the successor mind of c[sub]1[/sub] at t[sub]1[/sub], and not of, say, c[sub]3[/sub] at t[sub]1[/sub], then you have given an identifiability relation in the relevant sense. But this is the sort of extraphysical rule iamnotbatman wants to avoid.
See, this is very confusing. I’m not sure how many times in the past, in an exchange ostensibly about the clone example, you have said “your model”, where I interpreted that to mean “the clone example”, where you are meaning the MWI. Please, please be more specific. And please, let’s focus on the clone example first. Otherwise the threads will diverge unrelentingly.
OK, at this point I can only imagine that you just completely don’t understand the clone example, or that our visions of the experiment have somehow diverged, even though I tried to make it very explicit in my previous post. I have thought carefully about it. I am not merely asserting there are memories; the memories are being formed and retained just as they usually would be, by ordinary Newtonian evolution, or whatever is your fancy. We aren’t talking about the MWI here, just a simple classical world where we are able to clone things. Then, the memories during the cloning are retained by definition of cloning. You are a physicalist, no? Memories are stored in a physical configuration in our brains, no? That is cloned too, no?
Yes.
Yes, this is why he is called a “clone.” The memories, being physical, are cloned and therefore transferred.
The channel is the cloning device.
Wow, I really wish you would just answer my question directly, rather than trying to provide a general answer. My interpretation is that you are saying that there is nothing at all “missing” about a clone. They are identified. This is confusing because it seems like a reversal of your previous position regarding the clones in our example.
OK, what do you suppose the clone example is about, if not your interpretation of the MWI? If it’s about something unrelated, I don’t really see the interest in discussing it…?
In that case, I would think that the cloning example is in severe disanalogy to your MWI concept, which I was under the impression was what we were discussing here…
I’m sorry, but I’m answering as directly and simply as I can. If the physical state is copied, and there is an identification between the physical and the mental, then yes, nothing’s missing. This is not at variance with my position on the clones, where there is explicitly no identification between the physical and the mental.
Obviously you understand the point of analyzing a simplified example, which is why you brought up the clone example in the first place. I just don’t see the point of jumping back into the MWI if the clone example is left unresolved, because understanding the MWI is contingent upon the simpler clone example.
I don’t see how. In the MWI there is a cloning taking place, exactly in analogy with the clone example.
But I had just asked you about the clone example. You are flatly contradicting yourself here. I am just trying to get a clear understanding of your position, and it seems like you are being obfuscatory.
But there’s no point in drawing conclusions if the example becomes disanalogous.
But you’re using the channel provided by the cloning machine to individuate the clones beyond their simply getting card A or B: this is not given in the MWI. We had established there is no transtemporal identity in your model, so your using the cloning process as a channel to transfer information from one clone to another—meaning that there is a clear relation between the previous and later clone: the successor clone is the one that received information from the predecessor—is a disanalogy at exactly the most crucial point.
Not on purpose, I assure you. But then you underspecified the hypothetical: how many minds are associated with the original me? How many are associated with the cloned me?
I can’t follow the distinction. In the cloning example, someone is cloned at t1, evolves, cloned at t2, evolves, and so on. By the definition of cloning, the memories are preserved, since memories are encoded in the physical state. In the MWI the universe branches at t1, evolves, branches at t2, evolves, and so on. By the definition of branching, the memories are preserved, since memories are encoded in the physical state. Where is the difference?
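This branching-with-memory picture can be made executable as a toy simulation; a minimal sketch, where the card labels and the two rounds of splitting are just the example’s own assumptions written out:

```python
def split(person):
    """Clone a person: each copy gets the parent's full memory plus its own new card.
    Copying the memory list models 'memories are encoded in the physical state'."""
    return [person + [card] for card in ("A", "B")]

generation = [[]]                      # t0: one person, no card memories yet
for _ in range(2):                     # cloning at t1, then again at t2
    generation = [clone for person in generation for clone in split(person)]

print(generation)                      # [['A', 'A'], ['A', 'B'], ['B', 'A'], ['B', 'B']]
```

After two rounds there are four clones, and each one retains the complete history of the cards shown along its own branch, simply because the memory record is copied with everything else.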
In the MWI information is of course passed forward. The MWI does not posit that each time slice is completely random. The MWI posits that the wave function of the universe evolves unitarily. This evolution involves branching, in which information is copied, and in so doing passes information forward.
I was trying to get to considering the case in which there are multiple clones. For the moment the question is, if you are cloned at (x1,y1,z1,t1) and emerge at (x2,y1,z1,t1), the original copy having been destroyed (“transferred”), then are they identified?
Then, by virtue of this information being passed on, you create a successor relation: at some time t[sub]1[/sub], there were minds c[sub]1[/sub] and c[sub]2[/sub], say. At time t[sub]2[/sub], there are now minds c[sub]3[/sub], c[sub]4[/sub], c[sub]5[/sub], and c[sub]6[/sub]. Those that received information from c[sub]1[/sub]—say, c[sub]5[/sub] and c[sub]6[/sub]—are the successors of c[sub]1[/sub]. That’s all fine and dandy; except, as we had stipulated, there cannot be a successor relation, since it’s not present in the physical evolution, and there are no extraphysical rules.
As a reminder, there can’t be a physical successor relation, because the evolution of the physical state at t[sub]1[/sub], s[sub]1[/sub], to the physical state at t[sub]2[/sub], s[sub]2[/sub], is clearly compatible with every possible successor relation: the one given above just as well as any other.
You’re saying both things are true: that there is a successor relation, according to which a mind that receives information from a prior mind is its successor, and that there is no successor relation, because the physical evolution is insufficient to determine it. Both can’t be right.
How is this in contradiction with the definition you have in your first paragraph?
The physical evolution does determine it, along with a collection of successors. Because there are a collection of successors, whether there is a successor relation is a matter of definition. I do not see any problem with there being a collection of successors, each one related to its parent.
Why do you think that, after the purely random Copenhagen collapse, a state is a successor of the previous state? You want “physical evolution” to determine what counts as a successor. How is pure randomness “physical evolution”? I do not see any distinction between determination through anthropic selection and pure randomness, when it comes to being hung up on needing “physical evolution” to determine what is and what isn’t a successor.
Now, I am trying to provide a concrete example (cloning) that makes these issues manifest, and you seem to keep avoiding it.
It’s not, and I didn’t claim it is. But it is in contradiction with wanting a purely physicalist account: the physical evolution is compatible with any possible successor relation; you require a specific one to make your scheme work. This then cannot be singled out by the physics.
But there are obviously several inequivalent possible successor relations, and the physical evolution does not define which one is the ‘right’ one; nevertheless, you want to postulate that there is some ‘right’ one that gives the appropriate quantum mechanical frequencies (or at least, some definite frequencies). But for any such relation you could name, I can find one just as much in agreement with the physical evolution which produces completely different statistics.
Take my above example again: you require a relation in which c3 and c4 are successors of c1, and c5 and c6 are successors of c2 (say). But there’s just as well a relation such that c3, c4 and c5 are successors of c1, and only c6 is a successor of c2, or any other possible combination; and the physical evolution does not fix which one is the right one.
You’ll now say that memory provides the successor relation; but in fact, it presupposes it: for c5 to remember c1’s card (say), it is required that c5 must be c1’s successor in some way. You also continue to miss the fact that memory is simply a part of the physical state, such that all memory relations are contained in the physical evolution; but again, the physical evolution does not, in your scenario, determine the mental one.
Perhaps one last try to make this simple point as clear as possible. Take a physical state, s1, obtaining at t1, and another state, s2, obtaining at t2. The two are linked by a physical evolution, a (CPTP) map s1 -> s2. At state s1, there exist minds c1i. At state s2, there exist minds c2j. As I have said, we have one physical evolution. But evidently, we have many possibilities for the mental evolution: any map m: c1 -> c2 will do (where by c1 and c2, I mean the sets of minds existing at s1 and s2, respectively). Clearly, there are many maps m: any way of associating members of c1 to members of c2 will do. And just as clearly, the physical evolution does not determine any particular one of these maps to be the ‘right’ one. Thus, any attempt to claim some map to be the ‘right’ one—in particular, one that realizes the correct quantum frequencies—means adjoining a rule to that of the physical evolution: an extra-physical rule.
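The counting step in this argument can be spelled out explicitly. A minimal sketch, where the sizes of the two mind-sets are illustrative assumptions matching the earlier c1…c6 example:

```python
from itertools import product

minds_t1 = ["c1", "c2"]                  # minds existing at s1 (illustrative)
minds_t2 = ["c3", "c4", "c5", "c6"]      # minds existing at s2 (illustrative)

# A candidate 'mental evolution': assign each later mind a predecessor.
# There is one physical map s1 -> s2, but every one of these assignments
# is equally compatible with it.
successor_maps = list(product(minds_t1, repeat=len(minds_t2)))

print(len(successor_maps))               # 2**4 = 16 distinct assignments
```

The assignment where c3 and c4 succeed c1 while c5 and c6 succeed c2 is just one of the sixteen; nothing in the single physical map singles it out.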
Maybe we could proceed by you telling me what still isn’t clear to you about the above argument?
It associates a past state with a future state for the system.
The problem is that analogy only takes you so far, especially when it comes to the subtler points of quantum mechanics. You manage to confuse yourself using the clone example by letting your classical intuition get the better of you: certainly, the clones just remember what happened before! But the problem at hand concerns exactly what terms like ‘remember’ and ‘before’ are supposed to mean: an issue of misplaced concreteness, which is always the chief danger with such analogies. By assuming that their meaning just carries over unchanged, you draw unwarranted conclusions; hence, it is better to steer clear of this example until we have managed to come to an understanding of those contentious terms.
So now you are backtracking and agreeing with me on the clone example, but saying you want to avoid it because it is not a good analogy? Please, you could at least be clear about this. Do we agree or disagree that in the clone example memories are passed on, and that a participant will observe a purely random variable (the card) with p=0.5, as a consequence of cloning and anthropic selection? If you still disagree, then it seems that understanding the clone example is an obvious prerequisite to the QM question.
I originally thought we were clear on the nonexistence of a successor relation; since it’s now obvious that the point has passed you by, it seems only appropriate to backtrack, as otherwise any answer to your question would be open to ambiguous interpretation.
So, before I answer yours, could you extend me the courtesy of answering my question from the previous post?
But then you completely don’t understand anything I have been saying! The whole point of the MWI is that there is no ‘right’ one. There are an infinite number of equally ‘right’ universes. The whole point of the anthropic selection I have been describing is that from the point of view of each observer within the system, he is the ‘right’ one, but that there are an infinite number of other observers who think the same thing. This of course does not mean that there is no causal relationship at all between a state at t1 and a state at t2. There is a clear causal relationship: a function that maps a state at t1 to a collection of states at t2. Each such relation, from t1 to one out of the collection at t2, is valid, or ‘right’.
Not necessarily (it could be a quantum fluctuation), but certainly in practice. This is why, if c5 is not a descendant of the wave function at c1, then it generally will not have a memory of c1!
Again a question of definitions. It certainly does determine the mental one: the physical evolution uniquely determines the set of mental states.
Yes, that rule (which I take to be a trivial consequence of logic rather than “extra-physical”) is “anthropic selection”. The fact that all maps are ‘right’, but that only some maps are consistent with consciousness. Those that are consistent with consciousness are “anthropically selected.” This anthropic selection is not any more an “extra-physical” rule than Darwinian selection is an extra-physical rule.
This is why I keep pointing to the clone example. By adding a clone, you would have that the newly added ambiguity of identification would subtract from the conscious reality of the original. This makes no sense. It is a good analogy because it is exactly what is happening in both the clone example and in the MWI. If you pruned away the tree such that there was only one copy at any given time, then there would be identification in the sense that pleases you. This is because pruning away the tree randomly is exactly equivalent to what is done in random collapse models. The implication is that by adding back the branches, something is lost.
Not in a way that has any causal relation to the state that preceded it. Not, at least, a causal relation that is any different from the one I have put forward.