Does the Schrodinger's Cat experiment say/mean what this personal trainer/life coach says it means?

Hence, I’m calling your attention to the fact that you’re saying it implicitly: by tagging one as the copy who walked out of the machine the original walked into, and the other as the copy that walked out somewhere else. This individuates the two; but then, you lose the analogy to the many worlds scenario, where both branches are not individuated beyond the fact that in one of them, spin up is observed (‘the clone receives card A’), and in the other, spin down is observed (‘the clone receives card B’). There is no sense, in the MWI, in which one of the branches is ‘the one the original walked into’.

Then how do you define physicalism? The definition I gave—that the physical facts fix the mental ones—seems to me to leave little room for disagreement, at least if one still wants to call the resulting position ‘physicalism’ in any meaningful way.

But I’m never in the same physical state as a fish. If I were, then physicalism would entail that I do indeed experience being a fish, and any theory disagreeing on this point would not be a physicalist one. But your theory disagrees on exactly this point: I can be in the physical state |"0"0> + |"1"1>, apparently without having the experience of a superposed consciousness. According to you, I can be in the same state as a fish, without feeling like a fish.

In no sense is |"0"0> + |"1"1> ‘both of the physical states together’. In the absence of decoherence, I could perform an interference experiment that can only be explained by considering the complete state.
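To make that concrete, here is a minimal sketch (my own toy example in Python with numpy; the choice of interference-sensitive observable, X on both parts, is mine, just to have something explicit) of a measurement that separates the complete superposition from a mere ‘both states together’ mixture:

[code]
import numpy as np

# One-qubit computational basis states
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# The superposition |"0"0> + |"1"1>, normalized
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_super = np.outer(psi, psi)

# The 'both states together' reading: a 50/50 classical mixture of |"0"0> and |"1"1>
p00 = np.kron(ket0, ket0)
p11 = np.kron(ket1, ket1)
rho_mix = 0.5 * (np.outer(p00, p00) + np.outer(p11, p11))

# An interference-sensitive observable: X measured on both parts
X = np.array([[0.0, 1.0], [1.0, 0.0]])
XX = np.kron(X, X)

print(np.trace(rho_super @ XX))  # 1.0 -- explained only by the complete superposition
print(np.trace(rho_mix @ XX))    # 0.0 -- the mixture shows no interference term
[/code]

In the absence of decoherence, the first expectation value is what you would actually measure, and no assignment of ‘either |"0"0> or |"1"1>, we just don’t know which’ reproduces it.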

I’m saying exactly the opposite: that there is no magical you-ness needed if the mental supervenes on the physical (whether one-to-one or not, I’ve already said that I’m not sure about my position there).

What? No. Let’s say you have a wavefunction composed of n copies of the superposition |0> + |1>. At t[sub]1[/sub], you measure the first one; at t[sub]2[/sub], the second, and so on. At every point, the probabilities for you to get |1> or |0> will be independent of whatever went on before; this is all I require.
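If it helps, here is a quick simulation of what I mean (just a sketch, assuming an unbiased superposition and plain Born-rule sampling; the 50/50 weights are not essential to the point):

[code]
import random
from collections import defaultdict

random.seed(1)
n, trials = 4, 200_000

# n independent copies of |0> + |1>, measured one after another at t_1, ..., t_n.
# Tally each measurement's outcome conditioned on the full prior history.
tally = defaultdict(lambda: [0, 0])
for _ in range(trials):
    history = ""
    for step in range(n):
        outcome = random.choice("01")   # Born rule for the unbiased superposition
        tally[(step, history)][int(outcome)] += 1
        history += outcome

# Whatever was observed before, the conditional frequency of |1> stays ~0.5
for (step, history), (n0, n1) in sorted(tally.items()):
    print(step, history or "-", round(n1 / (n0 + n1), 2))
[/code]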

You asked for a definition: I provided one.

See my previous post.

Again, see my last post: without some rule to identify minds, talk of some clone having made some set of observations is meaningless. There is such a rule, however, in a physicalist construal, where at each point the mental is completely specified by the physical: the identification rule is just given by the physical evolution, and thus, having made an observation, or a set thereof, is perfectly well defined.

From which one of the minds at t[sub]1[/sub] am I a descendant?

No clone ever has sufficient reason to propose or accept any scientific hypothesis, simply because the assertion ‘I have made the observation that…’ is always false: he has not made that observation; he wasn’t even around to make it. This is just a brute fact of your model.

Why would that memory cause the current information processor to believe that? Especially if, as is the case in your model, the current information processor would have to assume that his memory is simply in error…

That’s just it! In the Copenhagen interpretation, you would have to be unlucky—the typical observer would observe probabilities in accordance with the Born rule. This is just not the case in your MWI construal: there is no way to select the histories with the correct frequencies over those with the wrong ones!
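To put a number on the worry, here is a rough sketch (my own toy calculation; the 0.9 Born weight and N = 100 trials are illustrative choices, and ‘naive branch counting’ is the assumption being tested, not anybody’s stated position):

[code]
from math import comb

N = 100        # repeated spin measurements
# The Born rule would assign probability 0.9 to 'up' on each trial, but if every
# one of the 2^N branches counts equally, the fraction of branches with k 'up'
# outcomes is just C(N, k) / 2^N, independent of the amplitudes.
total = 2 ** N
near_born = sum(comb(N, k) for k in range(85, 96)) / total   # frequency of 'up' ~0.9
near_half = sum(comb(N, k) for k in range(45, 56)) / total   # frequency of 'up' ~0.5

print(f"branches showing roughly Born-rule (0.9) frequencies: {near_born:.1e}")
print(f"branches showing roughly 50/50 frequencies:           {near_half:.1e}")
[/code]

Counted that way, the typical branch sees frequencies near 50/50 no matter what the amplitudes are; something over and above branch counting has to do the selecting.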

Wanting to be able to do science (and the knowledge of better alternatives that do not force me to such extravagant conclusions) is the reason I resist that conclusion…

You are deviating substantially from my thought experiment. There should now be 4 minds. At time 1 there are minds A and B. At time 2 there are minds AA, AB, BA, and BB. Let’s take one of the minds, say BA. This mind has just received card B, and has a memory of receiving card A previously. Each clone has such a narrative. To each clone, such a narrative is meaningful.

So? This is vague. “Grounded in anything”? Again, it seems you want some extra-physical rule to be true that somehow guards against Omphalos, even if physicalism by itself turns out to predict that such possibilities must be considered.

This does not follow. For example consider the first splitting. Let’s say that only one splitting ever happens. Before the split there is an experience. You seem to be saying either:

  1. After the split, neither of the clones any longer has any experience.
    or
  2. After the split, only one of the clones (the “original”) continues to have any experience.

but somehow neglect:
  3. After the split, both of the clones continue to have an experience. Each one “had” the experience associated with the pre-split subject, and each one continues to have an experience associated with its continued existence.

The fact that you neglect 3) is bizarre to me because 1) and 2) are so obviously untenable.

He can. This can be seen trivially by working through such an experiment and seeing what is observed from the standpoint of each clone at any given time.

That’s flatly false: in classical mechanics, together with standard physicalist assumptions, upon some physical state, a mental one supervenes. The physical state evolves, and so does the mental one—because ultimately, they are one and the same. To deny the evolution of the mental state would be to deny the evolution of the physical one; to deny the transtemporal identifiability of minds would be to deny the transtemporal identifiability of physical systems.

What is added in your model that makes it problematic is that there is more than one mind associated with a physical state, and moreover, that the minds do not stay separated: consciousness of the whole state can evolve into consciousness of a part of a superposition; since this is not determined by the physical rules, you are forced to deny the transtemporal identifiability, and hence, to deny the existence of any evolution at the level of the mental at all. This may arguably save you physicalism, but as I said, the costs you incur just do not appear worth it.

This is causality between the physical states, which I haven’t denied; it’s causality between minds that is at issue here. And this is just not given: no mind’s state influences another mind’s state, at least not without some extraphysical rule. This is for the same reason that there can’t be evolution of minds in your scenario: the physical rules cannot determine whether a mind at t[sub]1[/sub] influences a mind at t[sub]2[/sub], because they can’t tell which mind is to be influenced. At best, all are, or none is; but since two minds at t[sub]2[/sub] may differ only in seeing spin up or spin down, it can’t be the case that both are influenced, and thus none can be influenced.

But to the clone BA, there is just no matter of fact that he ‘had’ card A previously. He hasn’t had any card previously: he didn’t exist previously. Saying that there is a clone BA is saying that there is transtemporal identifiability. That’s the problem in a nutshell.

OK, but this doesn’t seem problematic to me, because “as far as it is concerned” is what I take to be the usual problem that minds have. But the world has its concerns too, and so we can still succeed in referring (i.e. at least one style of response to a classic BIV problem seems to me to carry over to this special case).

This move I don’t understand. Say there is some state, at a moment, call it “s[sub]1[/sub]”. s[sub]1[/sub] only exists at a moment, but that doesn’t seem to exclude it from being causally connected with other moments. But we have a bunch of distinct states {s[sub]1[/sub], s[sub]2[/sub], …} at the same moment (e.g. corresponding to minds in the examples given); does that do the work (in getting causation to fail)? I could see how that could work if we exhausted all possible states (where “possible state” is interpreted as broadly as possible), but that doesn’t seem to be the case here.

ETA- You guys move fast.

They are both the same, but that does not mean that they are both one. See: duality.

That mental states evolve in the same sense that physical states evolve is not denied.

You are imposing a mystical meaning on the word “evolve”, as though minds and matter distributions “evolve” of their own volition. Matter distributions change. The consciousnesses that result from matter distributions therefore change. That these consciousnesses therefore change within my description is trivially true. But you are attempting to differentiate “evolve” from “change” in a way that I think is metaphysical. You would say that while the consciousnesses “change” they do not “evolve”. I think this is nonsense.

Just saying that he didn’t exist previously does not erase his obvious conscious experience! He has memories of having had card A. At any given moment, that is the best any of us can say about anything. It also happens that he did exist previously, and got card A. Just because there are now two versions of him does not erase the fact that he previously existed!

From the point of view of each clone, science works: each clone has experiences, memories, and can make predictions. Future clones will have memories of those experiences and predictions, and can test previous predictions, and can store memories that will be carried forward by future clones. Science marches on…

It looks like this is the argument (that every possible state is caused, which means there is no interesting sense in which there is causation).

This was the sort of thing that I didn’t think was a wide enough instance of “every possible state” to get failure of causation. Reason being that “can only be different in seeing spin up or spin down” is a physical law (there are physically impossible cases that are possible too, aren’t there?), so the effect could be exactly “seeing spin up or spin down” (a non-trivial effect). One might worry about a disjunctive effect. And this possible vs physically possible is a non-trivial issue. And I could just be misunderstanding the case, and thus what is possible.

I can’t see how the temporal identity matters here though, so, grant the argument works, it seems to work because everything is caused (or “caused”). Maybe you need failure of identity to get every possibility to be possibly “caused”; and then on top of that, moreover, every possibility is actually “caused”.

Even if every possible state were to be caused, there can still be an interesting sense in which there is causation, because some states are “more caused” than others (caused more often). And even if every possible state were to be caused equally, there would still be an interesting sense in which there is causation, provided by anthropic selection. This is going a bit further than has thus far been discussed here, but if only certain causal pathways can be mapped to consciousness, then an anthropic argument can be used to derive physical laws. I find that to be an interesting sense in which there is causation.

Wait, in what sense did he exist previously? I thought you said there was no way to identify minds across time?

Anyway, all this quote mincing isn’t going to get us any further, I’m afraid. My position is simple: say I were to accept your position, and consequently believe that I am not connected to any mind that came before this moment in such a way that both minds could be said to be ‘me’. I draw card A. What reason do I have to believe that previously, I drew card B? Or that I drew any particular card? None at all, obviously, since I believe that I am not connected to any mind that came before in such a way that that mind could be said to have been my past version. What I remember or not does not enter into it: I have no reason to believe my memories; in fact, I have ample reason to believe them to be false, since they seem to insinuate that there was some other mind before me such that that mind was also me; but according to your proposal, that is not true.

But then, of course, I can’t validly believe that I have observed any particular sequence of outcomes; in particular, not one in accord with quantum mechanical frequencies. Hence, I cannot validly believe in quantum mechanics, and consequently, also not in your interpretation, which is thus as empirically incoherent as the bare theory.

But anyway, I still think the problem does not arise in the first place: from quantum mechanical linearity, it follows that there is no possibility of being aware of a superposition; each observer will always take themselves to be in a definite state, and thus, there is just no need for the exotica you posit.

TATG, maybe you’re right: the question of causation between minds is not as clear cut as I originally thought it might be. So when is one mind causally influenced by another? Evidently if its state depends on the other’s. So is such a thing possible in the present scenario?

The difference between the minds at any given point in time comes down to one elementary alternative: A or B. All larger differences can be constructed out of such elementary ones. So suppose that at point t[sub]1[/sub] two minds exist that differ in the decision of this alternative, and in nothing else. We can thus index these minds by A or B. Now let there be a split, the time now being t[sub]2[/sub]: four new minds will be created, each of them not the future version of any mind at t[sub]1[/sub]. Of these minds, two will have received card A, and two card B; this is their sole distinguishing characteristic, and their state is thus completely described. Now does this state in any way depend on the state of any mind at t[sub]1[/sub]? I don’t see how: it is simply either A or B.

If, now, there were a way of identifying minds across time, then things would look different: from two minds A and B at t[sub]1[/sub], we would get two minds descendant from A, each getting either card A or B; and we would have two minds descendant from B, each again getting either card A or B. We can thus call these minds AA, AB, BA, and BB: the mind BB is clearly different from the mind AB, and thus, the state of a mind at t[sub]1[/sub] has influence on the state of a mind at t[sub]2[/sub]. So I would say there is mental causality here.

But if the difference between BB and AB vanishes, as it does in the case where there is no way of seeing minds at t[sub]2[/sub] as descendant from minds at t[sub]1[/sub], I don’t see any room for causation: there is no difference between minds other than their instantaneous state, and that is independent from anything that went before.
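Just to spell out the bookkeeping of the two cases (a toy sketch; the labels and variable names are my own):

[code]
from itertools import product

cards = ("A", "B")

# With transtemporal identification: a t2 mind carries its t1 ancestor's label,
# so its state genuinely depends on what obtained at t1.
with_identification = [t1 + t2 for t1, t2 in product(cards, cards)]
print(with_identification)      # ['AA', 'AB', 'BA', 'BB'] -- four distinct states

# Without it: a t2 mind is exhausted by its instantaneous state, the card it now holds.
without_identification = sorted({t2 for _, t2 in product(cards, cards)})
print(without_identification)   # ['A', 'B'] -- the same two states whatever happened at t1
[/code]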

I did not say that. I said that minds do not have (implied: intrinsic) trans-temporal identity. But empirically and practically, yes, we identify minds across time. In what sense did he exist previously? In every usual sense: his body and mind existed previously. His previous and continued existence should not at all be contingent upon the existence or non-existence of another copy of himself. You seem to think that by adding a copy we somehow erase his previous existence, which is bizarre.

You have a memory of having drawn card B. This is all anybody ever has.

If you choose to believe that, then you can choose to be a nihilist. That doesn’t negate the fact that you can continue to live life the way you always have, making predictions, trusting your memories, and validating or falsifying previous predictions. You seem to be making an emotional appeal here, rather than a logical one.

Yes you can. If the whole Born rule thing works out (leaving that to another discussion), then the prediction is that you will have memories of experimental outcomes that are consistent with QM. This is what we observe. Science still works. Technology moves forward. Our lives move forward just as before. Of course you can choose to believe that everything is meaningless, that somehow even though QM can be used to correctly predict the probability of certain memories, that somehow deep down it is ultimately empty… but that does not negate the brute fact that you can, if you choose, continue to live your life and advance science in the exact same way you always have.

Why not? They are just as much the future versions of the minds at t[sub]1[/sub] as the minds at t[sub]1[/sub] were future versions of the person who started the experiment.

No. They have memories. Memories can be used to identify minds across time, if that is your fancy.

BTW, you have not replied to what I think is a critical point I made earlier, which I will repeat:

Let’s say that only one cloning ever happens at time t[sub]1[/sub]. Before the split the test subject has a conscious experience up until t[sub]1[/sub]. You seem to be saying either:

  1. After the split, neither of the clones any longer has any conscious experience.
    or
  2. After the split, only one of the clones (the “original”) continues to have conscious experience.

but you somehow reject:
  3. After the split, both of the clones continue to have a conscious experience. Each one “had” the experience associated with the pre-split subject, and each one continues to have an experience associated with its continued existence.

Since the MWI argument simply comes down to a repeated application of the above, I’m hoping to get an answer out of you. Why do you reject 3)? Do you really accept 1) or 2)?

This alternative is non-trivial in the sense that it is not a choice between A and ~A. Is it stipulated that all other physical facts entail that it is A or B? I guess so, but it is non-obvious to me; maybe the physical facts determine every fact except spin, and spin, as a natural law, may be only one of two states; but this would still leave room for something non-trivial to be caused (namely that it must be one of the two states, rather than some physically impossible state).

This is interesting. I’m wondering if identity over time entails causation (or if there is some argument that it does). It might be an odd causation (because it is a causation about identity). It does seem that the causation is about identity in this case, because the difference between AB and BB is their identities. (And if it wasn’t the difference, then identity wouldn’t be doing the work.)

I think when Half Man Half Wit uses “experience” he isn’t thinking in terms of things like qualia or what-not; it isn’t internal to the subject. We experience things in the world, and we can only do that if we are connected to those things in the right way. Thus if we get breaking of causal connections, we don’t have the right sort of connection to the world to have experiences. I think Davidson’s swamp-man reflects how Half Man Half Wit is thinking about the issue.

I should think that the swamp-man argument is anti-physicalist (and thus unlikely to reflect HMHW’s position), since the swamp man is physically identical to the original man. I’d be curious, however, in any case, to understand his position more clearly on this issue. Any way I try to look at it (meaning “experience” as qualia and what-not or as a purely pragmatic definition such as through the existence of memories), 3) seems obviously correct to me, and variants of 1) and 2) gravely flawed.

The world isn’t physically identical, and they are only identical at a time (not over time), and so we still have physical facts to work with. In particular, the causal histories of swamp-man and original man are not identical. And that physical difference is supposed to make all the difference, so I don’t think it is anti-physicalist at all.

This is right. The people who advocate for the thesis that swampman doesn’t have biological functions are all (all I know of anyway) physicalists. They just think that what it is to have a biological function involves, not just the present configuration of particles, but their history as well.

OK, you’re shifting your position here. Recall that you denied the identifiability of minds over time because I pointed out that in your model, there is no physical way to account for mental evolution: since the evolution of the physical state is consistent with the evolution of any mind into any other, it is obviously not physically fixed. Your reply was that then there is no mental evolution: no mind is the successor of any other mind.

But now you’re shoehorning it back in, trying to have it both ways; and this doesn’t work. If the memory were determined by the physical state in the way that you claim it is, then the physical state would also determine the evolution of the mental by virtue of the same determination. But clearly, this simply isn’t the case: it’s equally consistent with the physical evolution for a mind at a given time to end up with every possible set of memories. This is in fact the same argument I gave above re mental causality: if the minds at t[sub]2[/sub] were to be individuated by their memory, their states would have to be distinguishable in four ways. But there are only two: getting A or getting B.

If you disagree with this, I’d ask you to provide an explicit model by which the wavefunction in some way accounts for a true memory, without simply enlarging the state space (because if you do that, you simply have two minds supervening on two different states, not two minds supervening on the same state, which is what your model requires).

I can’t choose to believe that—it’s the only conclusion from your model.

Can you give an explicit example of how this works?

Well, if we’re going to go on about outstanding replies… But oh well:

As TATG observed, I did not mean experience in the sense of qualia. Each of the clones will have conscious experience in the sense that there’s something it’s like to be that clone. They will, however, not have experience in the sense of accumulated sense data: they will not have made a series of observations ABAABABBAB…, since they weren’t around for it, and their state at the moment (say B) is equally consistent with any possible history of observations.

I could have just said A and not-A; B is just a more convenient label. It’s meant to be an elementary alternative in this sense.

Well, it does not seem so strange to me: consider physical causation. If there is no way to identify a physical state s as the successor to some physical state s’, then what could be meant by causation?

Yes, I think that’s a fair summary. But I think that there’s a factor that makes things even worse in this case, namely the fact that just the state of a clone at any given time is consistent with any possible history of observations; if it weren’t, then there would be a straightforward implication towards some ‘actual’ history. Swampman at least has this implication: I don’t think the clones do. Swampman can validly (though falsely) hold the belief of sharing Davidson’s history; the clones, if they believe iamnotbatman’s interpretation of quantum mechanics, cannot validly hold any such belief at all.

OK. This doesn’t make a lick of sense to me. I reject it as anti-physicalist and incoherent to boot. You are making them sound like homeopaths!

As far as physics is concerned, if you build a machine at (x1,y1,z1,t1), take it apart, and re-build it at (x2,y2,z2,t2), it should function identically. Similarly, giving the machine a velocity relative to its current position, or taking it through various causal pathways that lead it to (x2,y2,z2,t2), should be equivalent (and is, experimentally). So this position seems clearly anti-physicalist to me.

I’m not sure I see you as having a coherent definition of the identifiability of minds over time. I reject your definition, insofar as I can tell what it is. But of course I can trace out some string of minds and identify them using my powers of pattern recognition. This is different from what I interpret to be the “trans-temporal identification” that you are making. Similarly, anthropic selection self-identifies strings of minds that have coherent conscious narratives. For whatever reason, you fail to see the equivalence between this concept, in which there is no objective anti-physicalist magic identification of minds over time, and yours. But they are equivalent in terms of what is seen by a given observer. The difference is that, as I see it, your position is not physicalist, and mine is.

The states are distinguishable in four ways. Their memories are AA, AB, BA, and BB.

But this is going overboard. The clone example has been provided. If we can’t agree on that, then we are not going to agree about the wave function.

Similarly I think the only conclusion of physicalism and determinism is that consciousness is not special. I am just an automaton, and the concept of free will is incoherent. Nonetheless (perhaps because I don’t have free will) I can choose to continue enjoying life and nothing about my philosophical qualms prevents science from continuing to work and having results to show for it.

The clone example.

This thread is completely out of control. I don’t have the time to reply to every sub-thread. It would be nice to focus on the clone example, breaking it down completely.

When you say “they will not have experience in the sense of accumulated sense data” it is difficult to pin down exactly how you mean for this to matter if it does not amount to some qualia. I will interpret it to mean that in some way which you cannot define coherently, there is a vague difference between the experience of clone A and the experience of the person had they not been cloned. I find this position incoherent from a physicalist perspective. It amounts to the swamp-man argument, or the argument that the star-trek transporter would not work. This is equivalent to the argument that if we remove every atom and field from your body for a fraction of a second (which can be made arbitrarily small), and then replace them all back in exactly the same state, that somehow something would be “lost”. Of course for ΔT → 0 this is equivalent to doing nothing! It would be physically impossible for something to be “lost”, by the definition of “same state”. Therefore we must conclude that nothing is lost in a star-trek transporter, if it can replicate the same state, and therefore that nothing is lost if we clone the experimental participant.

Consider the experience of an experiment participant who has yet to be cloned. He walks into the clone chamber at time t1 and is cloned. Suppose the original copy of himself were destroyed. Then this is equivalent to the transporter, in which, by physicalism, nothing is lost. Now let’s consider the case that the original copy was not destroyed. The existence of the original should not affect the fact that nothing is lost in the clone. As far as I can tell, your thesis requires that you assert that the existence of the original clone in some way subtracts from the experience of the copy. In fact, you have essentially said as much, when discussing the lack of identification between causal successors. If we don’t destroy the original, there is now a lack of identification, which bothers you. And yet, physically, the destruction of the original should have no bearing on the experience of the clone.