Does the Schrodinger's Cat experiment say/mean what this personal trainer/life coach says it means?

Nope, it is correct. The basic problem is that the notions of an orthonormal basis and a vector space basis (i.e. Hamel basis) diverge once you get into the infinite-dimensional case, and hence an orthonormal basis is not necessarily a vector space basis. Explicitly constructing a vector space basis for an infinite-dimensional vector space is, I believe, impossible, since in general the existence of such a basis can only be established using the axiom of choice.
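
To make the divergence concrete, take the standard example of ℓ², the space of square-summable sequences: the unit vectors e[sub]1[/sub] = (1, 0, 0, …), e[sub]2[/sub] = (0, 1, 0, …), … form an orthonormal basis, since every element is a convergent infinite sum Σ c[sub]n[/sub] e[sub]n[/sub]. But a vector like (1, 1/2, 1/4, 1/8, …) is not a finite linear combination of them, so they are not a Hamel basis; a Hamel basis of ℓ² would have to be uncountable, and its existence is only guaranteed by the axiom of choice.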

The eigenfunctions of an operator with a continuous spectrum are in fact ‘generalized eigenfunctions’ and they live in a ‘rigged Hilbert space’, of which the Hilbert space in question is only a subset. The Dirac delta function is not square-integrable and so is not a member of the Hilbert space of square-integrable functions, which is the state space of quantum mechanics. Like I said, it’s often a convenient fiction to pretend they are, but as with all fictions it is important to recognize that it is a fiction only.

I think you are departing from MWI here; the worlds should roughly correspond with classical worlds. A world in which, for example (and I only picked position as an example), position is sharply defined but momentum is completely undefined is not remotely like a classical world.
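
In numbers: a sharp position means Δx = 0, and the uncertainty relation Δx·Δp ≥ ħ/2 then forces Δp to be infinite; a quasi-classical world would instead correspond to wave packets in which both spreads are negligibly small on macroscopic scales.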

Regarding your previous post, the problem with delta functions as physical states (or one of them, at any rate) is that if, say, you have a state that’s a delta function in the position basis, it’s a plane wave in the momentum basis, which is not normalizable and hence not a wave function. You can get around this using Gelfand triples (rigged Hilbert spaces), but it’s not as straightforward as you claim it is.
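
Spelled out (with the usual Fourier conventions): if ψ(x) = δ(x − x[sub]0[/sub]), its momentum-space representation is
φ(p) = (2πħ)^(−1/2) ∫ dx e^(−ipx/ħ) δ(x − x[sub]0[/sub]) = (2πħ)^(−1/2) e^(−ipx[sub]0[/sub]/ħ),
so |φ(p)|² is constant and ∫ dp |φ(p)|² diverges; there is no way to normalize it.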

One could just as well argue that whenever there’s two options, you should expect to find either realized with a probability of 50%; to me, this seems at least as natural.

And regarding the branches with non-quantum frequencies, it’s really simple: for any sequence of observations, there will be a branch in which it occurs. But then, in most of the branches, the observations don’t match the Born probabilities. Take an ensemble of states a|0> + b|1>, and repeatedly measure them in the {|0>,|1>} basis. There will be worlds in which you get |0> |a|² of the time, and |1> |b|² of the time; but there will also be worlds in which you get either with equal probability, or only |1>, or anything else. Why we should observe only the ‘correct’ statistics is then not explained.
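
To see how lopsided things get if you just count branches: after N runs there are 2^N branches, one for each outcome string, and the number of branches containing exactly k results of |0> is the binomial coefficient N!/(k!(N−k)!), which peaks at k ≈ N/2 regardless of the values of a and b. Naive branch counting thus always predicts 50/50 statistics, and matches the Born rule only in the special case |a|² = |b|² = 1/2.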

A good, critical article about probability and many worlds is the one by Hemmo and Pitowsky (pdf link), or the discussion over at the Stanford Encyclopedia of Philosophy in the many worlds article (though overall, their entry under ‘relative state interpretation’ is the superior resource regarding Everett and allied interpretations IMHO).

You only need the notion of an observable A, which upon measurement yields possible outcomes a[sub]i[/sub]. A proposition about the system is then ‘upon measuring A, you will find outcome a[sub]i[/sub]’. By Solèr’s theorem, the only suitable mathematical structure representing the appropriate notions (those of quantum logic, i.e. a non-Boolean orthomodular lattice) is that of the lattice of closed subspaces of (real, complex, or quaternionic) Hilbert space. And in this setting, Gleason’s theorem tells you that the probability of a certain proposition being true is given by Tr(PW), if P is the projection onto the linear subspace representing the proposition, and W is the density operator of the system.
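
For a pure state this reduces to the familiar Born rule: with W = |ψ><ψ| and P = |a[sub]i[/sub]><a[sub]i[/sub]| the projector onto the subspace for outcome a[sub]i[/sub],
Tr(PW) = <a[sub]i[/sub]|ψ><ψ|a[sub]i[/sub]> = |<a[sub]i[/sub]|ψ>|².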

Why 50%? Why not any other percentage? Also, if you have a look at the discussion in the many worlds article at SEP above, you’ll see that apparently not many people share your intuition here.

As can consciousnesses!

No, there is no simultaneous awareness of contradictory observations. For this to be the case, it would be necessary for there to be some way to end up in an eigenstate of ‘observing a superposition’, |super!>. But you can easily convince yourself that if you were to end up in such a state upon observing a superposition, you would do so also if you didn’t observe a superposition. Also, have a look at the discussion of the ‘bare theory’ in the relative state article on SEP.
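
A sketch of the argument, writing |ready> for the observer before the measurement and |“0”>, |“1”> for the observer having recorded the respective outcome: if the measurement interaction takes |0>|ready> to |0>|“0”> and |1>|ready> to |1>|“1”>, then linearity forces it to take (a|0> + b|1>)|ready> to a|0>|“0”> + b|1>|“1”>. No |super!> component ever appears; and if you ask the observer in that final state ‘did you get one definite outcome?’, both components answer ‘yes’, so the answer comes out ‘yes’ with certainty. That is the ‘bare theory’ point.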

True, and there’s a chance that Einstein did some of his thinking in the bathtub too.

But there’s quite a bit of difference between thinking in the bathtub, and doing experiments in the bathtub. Archimedes was the only person famous for the latter.

‘Your’ here meaning iamnotbatman, and ‘previous post’ consequently this one. Asymptotically Fat snuck in when I wasn’t looking!

This side argument seems completely beside the point. While I am aware that the actual use of rigged Hilbert spaces is not completely trivial, I see this as a nit-pick that has nothing to do with the larger argument.

Each world corresponds to the real world, where macroscopic things are classical, and quantum phenomena can be seen in careful experiments. The wave function is non-classical, and each trace of the delta function is classical. In this sense you can envision the wave function as “guiding” the various classical traces, à la Bohm, if it helps you. Each classical subset of the wave function is indeed well-defined, and corresponds roughly to the assumed classical measuring apparatus in the Copenhagen interpretation, and has nothing to do with the fact that the HUP applies to measurement.

We disagree, obviously, but I will delay replying until I have had the time to do you the service of reading and absorbing the various links you keep putting up! It is true that I may not share the canonical interpretation of MWI, because I have developed/learned it almost entirely independently. Everett is not really the best resource for pointing to a canonical MWI methinks, since he really never developed it (he quit physics after writing his PhD thesis), leaving it rather incomplete and somewhat open to people like me coming along and thinking it is “obvious” while having developed an intuition for it that may have strayed rather far beyond the original line of thinking. But in the end, from what I have studied of consistent histories and other everettian approaches, they all seem completely equivalent to me.

It’s not a nitpick, there’s so much wrong with trying to interpret the Dirac delta function that way:

  1. There’s simply no need to interpret it at all. The Dirac delta distribution is useful, but it’s no mystery that the spectrum of an operator contains values which are not eigenvalues of the operator.

  2. To interpret it as a quantum state goes against the postulates of quantum mechanics, which say the physical state of a system is completely specified by equivalence classes of solutions to the Schrodinger equation (or other suitable wave equations).

  3. Even if, despite that, you decide they need to be interpreted, you will run into problems. They don’t obey the Schrodinger equation, which needn’t be terminal, but it’s not clear to me at all how such a state would evolve.

Again I think you’re trying to do things with the Dirac delta that you just can’t do. I’m not sure I fully understand what a ‘world’ is in many worlds (but I’m not sure its proponents have defined that yet either), but it seems clear to me one thing it is not is a Dirac delta function.

Asymptotically Fat, you don’t remotely seem to be following me. The delta function I mentioned is just a shorthand for “break up the wave function into infinitesimal slices.” They are not actually delta functions. And even if they were, that is completely beside the point. You are not evolving them individually. I’m very confused about why you are being so pedantic here; either you are setting up a straw man, or you genuinely don’t understand the concept of linearity allowing you to reinterpret a continuous wave function as the sum of an infinite number of more localized distributions. Finally, you are completely wrong about the delta function. Of course it obeys the Schrodinger equation; its time evolution gives you the propagator.
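
For the free particle this is completely explicit: starting from ψ(x,0) = δ(x − x[sub]0[/sub]), the Schrodinger equation gives
ψ(x,t) = K(x,t; x[sub]0[/sub],0) = (m/(2πiħt))^(1/2) exp(i m (x − x[sub]0[/sub])²/(2ħt)),
which is just the free propagator.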

Are we seriously arguing about the relative merits of theoretical vs. experimental bathtub scientists? :smack:

Without the work of experimental bathtub scientists, the theoretical work on things like water temperature relativity and hot/cold tap symmetry breaking would be meaningless - and they know it. :cool:

Really?

You mean we can’t, for example, take something like the double slit experiment with single electrons and introduce some other particles/interactions at some point and then check if we still get an interference pattern? Doesn’t the existence or non-existence of the interference pattern then tell us whether we introduced something that caused a collapse?

I’m not sure I am following you.

Yes we can break the wave function into infinitesimal slices, though this is not actually the same as breaking down the wave function into the sum of other wave functions (you can see this from the fact that it’s expressed as an integral rather than a series). This is very useful as you can use it to determine the state of the wave function after a measurement, but as I say the infinitesimal slices are clearly not solutions to the Schrodinger equation; they’re not even complex-valued functions.
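
Written out, the contrast is between
ψ(x) = ∫ dx′ ψ(x′) δ(x − x′),
where each ‘slice’ ψ(x′) δ(x − x′) is a distribution rather than a square-integrable function, and a genuine expansion in normalizable basis functions,
ψ(x) = Σ[sub]n[/sub] c[sub]n[/sub] φ[sub]n[/sub](x), with c[sub]n[/sub] = ∫ dx φ[sub]n[/sub]*(x) ψ(x).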

I’m fairly certain that the exact evolution of the Dirac delta function as the limit of a sequence of functions depends on the particular sequence of functions you choose to take it as a limit of (besides which it will only evolve into another unphysical state).

I’m not trying to be pedantic, you brought up the Dirac delta distribution and, at least from how it appeared, implied that it was a physical state.

I don’t follow you. A series? Where are you getting a series from? The wave equation is linear. You can decompose the wave function into a sum of other wavefunctions, such as plane wave solutions, until you get any shape you please.
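
Concretely, the plane-wave version of this is just the Fourier decomposition
ψ(x) = (2π)^(−1/2) ∫ dk φ(k) e^(ikx),
i.e. a (continuous) superposition of plane waves e^(ikx) weighted by φ(k).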

But this is beside the point. The point is interpretational. You can interpret each point along the wave function as ontic, as equivalent to a population of point particles. As the wave function changes, the distribution of the population (or “branch density”) of such “universes” changes. Each such universe is “classical” in the sense that it is not in superposition, however that does not mean that the dynamics along any given history are in any sense Newtonian.

Why not? The wave function is linear. Round the edges of each slice if that comforts you.

It is a physical state in applied QFT. I’m not sure what level of rigor you are aiming at, or whether your argument essentially boils down to a complaint that QFT is not mathematically well-defined. But in practice we work with plane-wave solutions (and correspondingly delta functions) in almost every practical QFT calculation there is. In basic QM plane wave solutions exist (they are generally the first solutions discussed when you learn QM), although yes, they can’t be trivially normalized.
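
In practice the bookkeeping is handled either by delta-normalization, <p|p′> = δ(p − p′), or by box normalization, where everything is put in a finite volume V so that the plane waves e^(ipx/ħ)/√V are honestly normalizable, with V → ∞ taken at the end of the calculation.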

I think this point is more problematic than I originally believed, so I’d like to come back to it. If the splitting of worlds is actually due to the fact that consciousness is special in the sense that it cannot be in superposition (which I believe is the opposite of what most other many-worlds approaches would hold), then we are actually back to square one: there are two separate, dynamical processes, one the unitary evolution of the wave function, and the other the splitting of worlds that occurs whenever otherwise a conscious mind would enter into a superposition; instead of consciousness causing the collapse, we now have consciousness causing the split, which lacks the charm of alliteration, but otherwise seems completely equivalent.

Really, for most Everettian approaches to quantum mechanics, I think about all they agree upon are that they trace their origin somehow to Everett, so it seems a natural starting point. I’ll be happy though to discuss any resource that you believe encapsulates your view on Everettian QM, if you can point me to one.

Consciousness does not cause the split. The split happens whether consciousness exists or not, as the wave function naturally diffuses and entangles with itself. Consciousness is merely the mechanism by which localized parts of the larger wave function recognize their own independent reality. Standard decoherence is still the mechanism behind the appearance of wave function collapse. The point is that it is only applicable to an observer that is not in superposition. If the observer is itself capable of entertaining simultaneously contradictory quantum events because the observer’s consciousness operates in a macroscopic superposition, then entanglement between the observer and the measured and the environment is not anymore relevant, and neither is decoherence.
[Edit] Also, you start off saying “I think this point is more problematic than I originally believed”, but then end with “seems completely equivalent”, so I’m not sure what the problem is you are referring to.

Completely equivalent to a collapse formulation, hence not solving the problem it set out to solve.

And I’m still not clear about what you’re trying to say here. On the one hand, you maintain that consciousness does not cause the split; on the other, that the observer can’t be in a superposition. If the latter is true, then it would seem that consciousness causes the split after all; if the former holds, then the observer can enter into a superposition just like any other physical system.

To put it more bluntly, is there any difference between the electron state |0> + |1> and the electron + observer state |“0”>|0> + |“1”>|1> (where the quotes again denote ‘being consciously aware of the observed electron state’)? Does the former represent two distinct worlds, or not? In the latter, is the observer in a superposition or not?

The way I see it, either the unitary dynamics holds all the way through—then, the observer can enter into a superposition. Or, the observer can’t enter a superposition—then, consciousness causes the split.

A series is a sum.

This is my real issue: I don’t understand why you need to interpret these unphysical states or what you gain by doing so. I also don’t think this is what the standard MWI says either; rather it would say that each ‘world’ is defined by the branching of the wave function due to decoherence. In the classical world both position and momentum are approximately well-defined, which is not the case with these states. Also, Everettian interpretations interpret the Hilbert space as the actual reality, so why then would you choose to interpret objects from outside the Hilbert space as real too?

It isn’t, because it isn’t: it’s not even a function (with the correct range).

I’m sure such states are used in QFT, but that doesn’t mean they’re interpreted as physical.

A sum of a sequence. I don’t know how a sum of a sequence relates to what I was saying.

If you believe the wave function is ontic, then these are not “unphysical states”. They are ontic parts of an ontic thing, no different than the fact that the atoms in my body have their own independent existence. Ontic parts can interact with ontic parts of other wave functions. Since these ontic parts can be made as finely spaced as one chooses, and the finely-spaced ontic parts vastly outnumber the coarse-grained ones, it stands to reason that these more classical interacting objects are indeed realized with the greatest statistical population. And on a separate line of reasoning (that HMHW objects to), even if the coarse-grained or superposed ontic parts were realized with the greatest density, then experimentally it would seem that consciousness does not avail itself of superposition, or is incapable of it (which is more my belief). What you gain by reinterpreting the wave function in this way is a mathematical equivalence between a single wave function and a field of “universes.” The great explanatory power of this reinterpretation is that anthropic self-selection out of these universes (combined with decoherence) explains the appearance of wave function collapse.

Decoherence is an effect, not a cause. The wave function is constantly branching as it diffuses and interacts. Decoherence is merely the observation that the conservation laws of physics must still apply, which therefore restricts the phase space of how one piece of the wave function relates to another. This is just entanglement – the enforcement of conservation laws across the wave function.

Yes it is the case with these states. You have to look at a “history” of such states at different time slices to see this. You end up with a distribution of histories. If, for example, your x-space wave function is a bell curve spreading out with time, then the histories are populated in such a way that the probability of randomly picking a history ending at a given position is the same as the probability the wave function assigns to that position. The predictions from both viewpoints are the same.
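
Take a free Gaussian packet as the standard example: if the position spread starts at σ[sub]0[/sub], it grows as
σ(t) = σ[sub]0[/sub] (1 + (ħt/(2mσ[sub]0[/sub]²))²)^(1/2),
and in the histories picture the final positions are populated with weight |ψ(x,t)|², so drawing a history at random reproduces exactly that spreading bell curve.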

I am not. As I said, the wave function is interpreted as ontic. Its parts are no less ontic than its whole.

Again, you don’t seem to accept the linearity of the wave equation, or what that means.

Every scattering amplitude that is calculated and found to agree with experiment uses such “unphysical” states as incoming and outgoing states. Positivistically, this seems pretty physical to me!

I don’t understand how you have shown that it is equivalent to the collapse formalism. The collapse formalism, first of all, is logically inconsistent, which is not the case here. Further, I am not assuming that consciousness must be classical. This is just an empirical fact.

The observer can be in superposition. It just turns out that our consciousnesses do not operate in superposition. The brute fact that they do not operate in superposition is what allows entanglement-based decoherence to do its magic.

It represents both two distinct worlds and one combined world, simultaneously. Mathematically it can be interpreted either way. If it turns out that the “combined world” is vacuous because the separate parts of the superposed wave function do not communicate in any meaningful way, then the state is effectively just two worlds. Now linearity implies that in order for the combined world to not be vacuous, the two parts of the superposition can only “communicate” through interference effects, but AFAIK it has not been ruled out that a conscious being could be self-aware of such interference effects. Therefore it is possible that there could co-exist the combined and separate worlds. It just happens that from the point of view of our own conscious experience, we anthropically self-select from the set of separate worlds. It is possible there is another group of consciousnesses that self-select from the set of combined worlds. For such consciousnesses there would be no wave function collapse, because they could be consciously aware of interference effects between what would otherwise be mutually incompatible events, which would otherwise be ruled out by entanglement/decoherence.

The observer can enter a superposition. The question is whether the observer can be aware of that. I am simply saying that experimentally, the observer is not aware of it. Therefore when the observer goes into superposition, each part of the superposition is independently aware, and represents a separate universe. Unitary dynamics still holds all the way through.

Well, there’s of course the pretty major difference that there’s only one way to partition your body into its constituent atoms, while there’s lots of inequivalent ways to do that with any given wavefunction. But going there will get us head-long into the preferred-basis problem, which I think we should best postpone for the moment, seeing as how we’ve not even cleared up the probability thing yet…

Just as it is an empirical fact that measurement results are classical. Going from there, proponents of the collapse approach—let’s use Wigner’s consciousness-based one for definiteness—claim that there’s a need for a different dynamics in order to account for this empirical fact. That’s exactly the same logic you follow. Otherwise, if there were no difference in dynamics when it comes to consciousness, why would the empirical fact of its classicality be at all relevant?

This I don’t get at all—decoherence works perfectly well in systems in which there is no consciousness present at all. Or do you mean something different by ‘entanglement-based’ decoherence?

This is much more in line with the way the many-worlds interpretation is typically phrased. But note that this is much different from what you said in previous posts, and is not in any way a special property of consciousness—the electron state |0> will interact in the same way whether or not it’s part of a superposition; that’s just the linearity of the dynamics (the same linearity which of course implies that an observer can’t be aware of being in a superposition re the ‘bare theory’ argument, which you seem for some reason to have quarrels with).