Spintronics and Quantum Computers

Science has recently been uncovering hitherto unimagined paths of technological evolution, some of which have to do with particle physics.

First you need to understand that particles – specifically the electrons and nuclei that make up atoms – have an intrinsic spin. This spin can’t be measured as left or right, or clockwise or counterclockwise; instead, scientists measure it along an axis as ‘spin up’ or ‘spin down’. These two states can be read as the 0s and 1s of binary code. The difference is that where electronic solid-state data storage can be measured in gigabytes, spintronic solid-state storage can potentially be measured in terabytes, even petabytes. This opens up the door to quantum computation.

As to what quantum computation is: it is computing that could, theoretically, take place across multiple universes. Say you asked your computer to guess a number from 1 to 100: a quantum computer would give you the correct number on the first try, every single time. This doesn’t begin to explain how it works, but it gives you an idea. Anybody else want to offer an explanation, or possibly some news on the development of these devices?

Spintronics, or magnetoelectronics, doesn’t necessarily have anything to do with quantum computing. It does exploit a characteristic of electrons that’s of a quantum nature, namely their spin (but then, so do ordinary computers – charge is as much a quantum number as spin is), but it does so entirely within a classical computational paradigm: an electron in a spintronic system is always treated as having a definite ‘spin up’ or ‘spin down’ state, encoding a 0 or 1 bit.

Quantum computation takes its power from exploiting the (purely quantum) possibility of having a system not residing in a definite state, but rather in a superposition of states – i.e. the electron is not either in state ‘spin up’ or ‘spin down’, but rather in a state that in some sense contains both possibilities, and only (or so most quantum ontologies would have you believe) upon measurement is forced to settle on one or the other possibility.

The popular gloss is that the quantum computer uses this capacity to ‘explore’ all possible computational paths in arriving at a solution (i.e. given the problem of factoring a large number, the computer ‘tries out’ all possible factors simultaneously, rather than one by one), and then ‘decides’ on the correct one.

It’s easy to see the difference between a quantum and a classical computer: a classical bit can be in either the state 0 or 1; two bits can be in one of 2[sup]2[/sup] = 4 states (00, 01, 10, 11), three bits in one of 2[sup]3[/sup] = 8 states, and so on – in general, a classical computer with n bits occupies exactly one of 2[sup]n[/sup] states at any one time. A quantum bit (‘qubit’), however, is not so limited: rather than existing either in state 0 or 1, it can be in both states simultaneously; this means that an n-qubit quantum computer can be in any arbitrary superposition of 2[sup]n[/sup] states.
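
To make the counting concrete, here’s a minimal sketch in Python/NumPy (the 3-bit example and the variable names are mine, purely for illustration): a classical n-bit register is fully described by a single integer, while an n-qubit register takes a vector of 2[sup]n[/sup] complex amplitudes, one per basis state.

```python
import numpy as np

n = 3

# Classical 3-bit register: exactly one of 2**3 = 8 states,
# fully described by a single integer.
classical_state = 0b101

# 3-qubit register: a vector of 2**3 = 8 complex amplitudes,
# one for each basis state |000>, |001>, ..., |111>.
definite = np.zeros(2**n, dtype=complex)
definite[0b101] = 1.0  # a definite state, like the classical register above

# An equal superposition of all 8 basis states at once --
# something no classical 3-bit register can represent.
uniform = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)

# Amplitudes are normalised: the squared magnitudes sum to 1.
assert np.isclose(np.sum(np.abs(uniform) ** 2), 1.0)
```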

This is how David Deutsch, one of the most well-known researchers in the field, likes to frame it. However, this depends on the truth of one particular interpretation of quantum mechanics, the many-worlds interpretation due to Hugh Everett. There is controversy about which of the known interpretations of QM, if any, is the correct one, and at present, I believe it is still the case that most practitioners subscribe to the so-called Copenhagen interpretation; consequently, there is considerable disagreement about whether or not there are in fact multiple universes, and it is possible to give a coherent account of quantum computation without referring to them.

That’s not really right. In principle, every quantum computation can be reduced to just measuring a prepared system in the right basis to obtain the correct result; in general, however, this basis isn’t known. So effectively, most known quantum algorithms give the right answer only with a certain probability, and one has to repeat the computation in order to get appreciable statistics on which result is the correct one. Also, I’m not sure what your example is meant to show – if I just thought up a number between 1 and 100, a quantum computer couldn’t do any better than a classical one at guessing the right one.
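
To make the ‘right answer only with a certain probability’ part concrete, here’s a toy sketch in Python/NumPy (not a real quantum simulator – the 30/70 split is an invented example): each run of the algorithm yields one outcome, sampled according to the squared magnitudes of the amplitudes (the Born rule), so you repeat and look at the statistics.

```python
import numpy as np

rng = np.random.default_rng(42)

# Suppose an algorithm leaves a single qubit with these amplitudes;
# outcome 1 (the "correct" answer here) then has probability 0.7.
amplitudes = np.array([np.sqrt(0.3), np.sqrt(0.7)])
probs = np.abs(amplitudes) ** 2  # Born rule: outcome probabilities

# Each run of the computation ends in one measured outcome.
outcomes = rng.choice(len(probs), size=1000, p=probs)

# Repeating the computation and taking the most frequent result
# identifies the correct answer with high confidence.
counts = np.bincount(outcomes)
print(counts)             # roughly [300, 700]
print(np.argmax(counts))  # 1
```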

Why’s this in Cafe Society, by the way?

Haven’t they devised experiments that show this conclusively? My (admittedly limited) understanding is that much of the modern world wouldn’t be possible if complementary particle states were defined-but-unmeasurable; they have to be literally undefined for mumble mumble to work. Computer chips? CRT screens? I forget what it was, and now I’m wondering if I just made that up or if I really did read it somewhere.

I’ve always found the many-worlds theory distasteful. It effectively eliminates the possibility of humans being moral creatures, since for every moral decision you make in your life, there is a universe where you made the “evil” decision.

I think we’re talking about slightly different, but related, things here. I merely meant that one can interpret quantum mechanics in different ways depending on whether or not one accepts the reality of wave-function collapse – i.e. that there is some ‘actual’ superposed state that, upon measurement, jumps to a single definite realisation of the possibilities within the superposition. This is how things work in the Copenhagen interpretation; in the Many Worlds interpretation, by contrast, there is no collapse as such, as all possibilities are in fact realised, and measurement merely tells you which one you inhabit. In this sense, MWI is deterministic, while Copenhagen isn’t.

However, I believe you may be referring to what’s more commonly known as ‘hidden variable’ models – basically, the naive belief that underneath it all, some quantum particle actually has some definite state, and we just don’t have access to it, our description merely reflecting this ignorance. Such models have indeed been ruled out experimentally: they predict a certain bound on the correlations in a special kind of measurement, while the formalism of quantum mechanics predicts something different; experimentally, QM wins out: no local hidden-variable theory can fully account for quantum mechanical phenomena. (The ‘local’ here essentially leaves open a loophole: it is possible to create a non-local, ‘definite’ theory that underlies and is compatible with quantum mechanics, and this has been done in the form of the so-called de Broglie–Bohm theory; however, those models have other problems that would take us too far afield.)
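
For the curious, the ‘special kind of measurement’ is a Bell/CHSH test: any local hidden-variable theory must satisfy |S| ≤ 2, while quantum mechanics predicts E(a, b) = −cos(a − b) for a spin singlet. A quick sketch of the numbers (plain Python; the angles are the usual textbook choice, nothing specific to this thread):

```python
import math

def E(a, b):
    # QM prediction for the spin correlation of a singlet pair,
    # measured along directions at angles a and b.
    return -math.cos(a - b)

# Standard CHSH measurement settings.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))       # 2.828..., i.e. 2*sqrt(2)
print(abs(S) <= 2)  # False: the local hidden-variable bound is violated
```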

I merely added the parenthetical caveat to somewhat de-emphasise the special role of measurement: you often encounter very strong, but misleading, claims that in some sense, measurement ‘creates’ physical reality, from which it is a slippery slope to bad quantum mysticism of the ‘What the Bleep Do We Know?!’ kind, where you create your own reality through observation, and other nonsense like that.

Interesting, I’ve actually never thought about the moral dimension of multiverse theories. You’re right, it makes an individual’s morality look a lot like a random walk over possible decisions, so that ultimately, you’re good just because you’re lucky. Might make for an interesting court defense, though: “It’s not my fault reality branched such that I stabbed him thirteen times in the chest!”

BTW, there is a big difference between multiple universes and the many-worlds interpretation of quantum mechanics. The two concepts are quite distinct.

Also note that while, technically, what you say about morality is true in the many-worlds interpretation, you are leaving out the fact that the worlds containing your moral decisions may vastly outnumber those containing your immoral ones. The small set of “worlds” in the many-worlds interpretation in which you are immoral is equivalent to the small probability, in the Copenhagen interpretation, of you doing something immoral due to the normal probabilistic aspects of quantum uncertainty. So there is no more reason to worry about morality in the many-worlds interpretation than there is in the Copenhagen interpretation. But in any case I don’t see how your distaste is relevant to the truth-value of the many-worlds interpretation. :stuck_out_tongue:

As far as I can tell, the only truth-value to the many-worlds interpretation is that many people feel emotionally uncomfortable with uncertainty. It’s not as if there is a shred of evidence supporting it, or even a scientific need that the theory satisfies. Thus my objection to it is exactly as relevant as the theory itself.

It is on equal footing with the Copenhagen interpretation. They are both valid, fully equivalent interpretations of QM. Obviously you have some emotional investment in one over the other. But I also suspect you don’t really understand the many-worlds interpretation very well (even most grad students in physics don’t unless they study it specifically).

Except for where the many worlds theory creates a ridiculously large and wholly unnecessary multiverse. It takes “cumbersome” to a whole new level. Where was Occam’s razor when we needed it most?

Occam’s razor applies to explanatory entities – of which the many worlds interpretation has fewer than, for instance, Bohmian mechanics or any objective collapse theory (and, depending on whom you ask, the Copenhagen interpretation as well).

In a sense, many worlds is just accepting what quantum mechanics tells you at face value: that there’s a quantity that characterises the evolution of a quantum mechanical system, and that this quantity can (and typically, does) exist in a superposition of what one classically thinks of as a state. Most other interpretations have to, in some way, incorporate additional structure to ‘get rid’ of those superpositions.

**Ellis Dee** – Wrong. The many-worlds interpretation doesn’t do that at all. That is why my first comment in this thread was:

*BTW, there is a big difference between multiple universes and the many-worlds interpretation of quantum mechanics. The two concepts are quite distinct.*

Christ, I guess I have to educate you. You see, the Copenhagen interpretation of quantum mechanics has long been an embarrassment in physics. Why? No one has solved the “measurement problem.” The problem is this: there is the Schrödinger equation, which evolves wave functions. Fine and dandy. But whenever we want to measure anything, the Schrödinger equation seems to be no longer valid. Instead of continuing to evolve the wave function during the measurement process, we forget about the Schrödinger equation entirely and just use a rule of thumb: the probability of a measurement outcome is given by the squared magnitude of the wave function, and the wave function “collapses” after measurement. We don’t know how the measurement works, or why the Schrödinger equation ceases to be valid, or what the hell is going on when the wave function collapses, or what causes it to collapse whenever we try to measure something. The problem has nothing to do with feeling “emotionally uncomfortable with uncertainty.” The problem is that the process is logically inconsistent. It works, but it is viewed as a fudge. This problem has vexed and really annoyed many of the greatest minds in physics.
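
In symbols, the two incompatible rules look like this (standard textbook stuff, nothing beyond what I just described):

```latex
% Rule 1: between measurements, the state evolves continuously and
% deterministically under the Schrödinger equation:
i\hbar \,\frac{\partial}{\partial t}\,\lvert\psi(t)\rangle
    = \hat{H}\,\lvert\psi(t)\rangle

% Rule 2: at a measurement, Rule 1 is suspended; outcome k occurs with
% probability given by the Born rule,
P(k) = \lvert\langle k \vert \psi \rangle\rvert^{2}
% after which the state discontinuously "collapses":
\lvert\psi\rangle \;\longrightarrow\; \lvert k \rangle
```

The measurement problem is precisely that nothing in Rule 1 tells you when, or why, Rule 2 takes over.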

The Copenhagen interpretation is simply agnostic: the idea is that while there may be something we don’t understand, and the process may not make sense, we have a set of rules that work, so let’s just use those rules and not worry about where they come from, or about what is going on at a deeper level.

Now, the many-worlds interpretation is simply the observation that the linearity of the wave functions implies that they are equivalent to sums of infinitely many Dirac delta functions (you know what I’m talking about if you’ve studied Green’s functions or Fourier transforms). If you let the wave functions evolve forever and never “fudge it”, then keeping track of each little delta function is informationally equivalent to an infinite number of individual world-lines existing simultaneously. It is also fully equivalent to the Copenhagen interpretation, both mathematically and predictively, but with the added quality of explaining the Copenhagen interpretation’s rules of thumb, and of doing away with the poorly understood notion of physical wave-function collapse.
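
For those who haven’t seen the decomposition I’m alluding to: any wave function can be written as a continuous superposition of delta functions, in exact analogy with a Fourier decomposition into plane waves:

```latex
\psi(x) = \int \psi(x')\,\delta(x - x')\,dx'
\qquad \text{vs.} \qquad
\psi(x) = \frac{1}{\sqrt{2\pi}} \int \tilde{\psi}(k)\,e^{ikx}\,dk
```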

Moved Cafe Society --> GQ.

Since I have no idea what y’all are talking about, that may or may not be the correct destination.

I must confess I never heard it put like that. Could you elaborate?

That is because the interpretation has been popularly dumbed down into a sound bite. To illustrate, consider a wave function consisting of two delta functions, at spin +1/2 and spin −1/2 (this wave function describes the spin of an electron). The electron passes through a magnet (à la Stern–Gerlach) and we see it deflect and end up with spin +1/2 (its wave function “collapsed” to a single delta function at +1/2). In the many-worlds interpretation, you take the wave function at face value. It describes two electrons, and both before and after the magnet there are still two electrons, one with spin +1/2 and one with spin −1/2. Why do we only see one electron? This is the difficult part to explain, and the reason why explainers typically give up and say “there are two universes, one with an electron with spin +1/2 and one with spin −1/2, and we live in the universe with the spin +1/2 electron.” But it is not so simple. For one thing, the two electrons described by the wave function can interfere with one another (i.e. the two universes could not be considered separate). The actual reason why we only see one electron is called quantum decoherence. Basically, we only see the +1/2 electron because we are entangled with it. As a vast oversimplification, think of us as just another delta function (a component of the universe’s one smoothly evolving wave function) that could interact either with the electron with spin +1/2 or with the one with spin −1/2. Since we are just a delta function, we can only interact with one of the two electrons: we can’t see them both at the same time.
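
Here’s a toy numerical version of that entanglement story, in Python/NumPy (a bare-bones model of my own – the “observer” is shrunk down to a single two-state system, so take it as illustration only): before the interaction, the electron’s density matrix has off-diagonal interference terms; after the electron entangles with the observer, tracing the observer out leaves a diagonal matrix, and the two components can no longer interfere.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Electron alone, in superposition: the density matrix has
# off-diagonal (interference) terms.
psi = (up + down) / np.sqrt(2)
print(np.outer(psi, psi))  # [[0.5, 0.5], [0.5, 0.5]]

# After the measurement interaction, electron and observer are entangled:
# |up>|saw-up> + |down>|saw-down>, suitably normalised.
joint = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
rho_joint = np.outer(joint, joint)

# Reduced density matrix of the electron: trace out the observer.
rho_electron = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_electron)  # [[0.5, 0.0], [0.0, 0.5]] -- interference terms gone
```

Each branch of the entangled state only “sees” its own electron, which is the decoherence story in miniature.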

Hmm, that’s pretty much the story I know, or at least part of it, just phrased a little differently. However, if I’m not mistaken, decoherence doesn’t explain the appearance of one single macroscopic world – you still have (ultimately) a universal wavefunction in superposition, containing states in which we observe and become entangled with a spin-up electron as well as states in which we observe a spin-down electron. Accepting both possibilities as being on equal ontological footing, and hence as being equally ‘real’, is then what leads to the postulation of ‘many worlds’; this isn’t necessary (for instance, Bohmian mechanics essentially claims that only one of these possibilities is actually populated with real particles etc.), and as far as I know, Everett himself was silent on how to interpret the different ‘parts’ of his universal wavefunction, but it’s a consistent stance to take.

As for the interfering electrons, I’m not sure that’s all that much of a problem for many worlds: after all, whatever your favourite interpretation deems to happen in order to create the classical macroscopic reality we all know and love – wave function collapse, decoherence, branching of universes etc. – happens only after the electrons can no longer interfere with each other (in any measurable way). Think about how the interference pattern in the double slit experiment vanishes if one tries to catch the electron red-handed in the act of interfering with itself: in a Copenhagen-like ontology, the observation causes the wave function to collapse into a definite state; in the decoherence approach, the interaction (and hence, entanglement) with the environment enlarges the state space such that the overlap of both states effectively vanishes; and in many worlds (at least, the flavour of many worlds that takes every possibility as equally real), the universes branch (because of decoherence).

I think you are mistaken – that is exactly what decoherence explains.

As you indicate in your last parenthetical, decoherence is the mechanism by which “universes branch” (quotes because I am not advocating that language). But what is actually happening is just decoherence. The “branching” is just one effective description enabled by the anthropic principle. There is still only one universal wave function.

If I can take a WAG at where your reasoning is held up – in your descriptions, you seem to be assuming a fixed observer-state, whereas the electrons (for example) to be observed are in superposition. In reality, the observer is in superposition as well as the to-be-observed. However, both the observer and the to-be-observed are collections of delta functions, i.e. collections of collapsed states. There are an infinite number of combinations of such observer–observed states (i.e. delta-function pairs) inside the universal wave function (the classical combinations far outnumber the non-classical, btw). “Dividing universes” is just a semantically loaded way of saying that each combination is equally valid, though (in the spirit of the anthropic principle) each will appear to itself to be part of a unique history. This is in contrast with:

Which is clearly being confused by the semantically loaded “many worlds,” and by the similar-sounding “multiverse” ideas in the field of quantum cosmology, which actually do posit multiple universes, and have nothing to do with the many worlds interpretation. Many-worlds doesn’t “create” or “add” “unnecessary” universes – it merely makes logical deductions from the standard theory of wave function evolution (that can be interpreted ontologically in a number of ways). The only thing “added” is an explanation for wave function collapse (again: without adding anything to the current theory).

Perhaps I should be more clear – decoherence does get rid of quantum interference by quickly reducing the overlap of the state vectors effectively to zero; in this regard, it ensures that only a classical reality will ever be observed. However, it does not lead to the existence of a single classical reality the way, for instance, the Copenhagen interpretation or any objective collapse scheme does: one still has a superposition of the wave function of the universe, and hence, is still left with the question: “Why do we observe this part of the wavefunction (say, the electron-has-spin-down part) rather than that (electron-spin-up) part?”

Copenhagen, objective collapse, and many worlds answer this question, respectively: the wave function, representing our knowledge of the system, probabilistically collapsed due to the update of our information through measurement; something caused the wave function to collapse in an ontologically ‘real’ way; and: every outcome does, in fact, happen. Decoherence alone does not address this question, which is why it is typically taken as a jumping off point for several distinct interpretations.

I don’t think there’s anything particularly anthropic about such a description. Any environment ultimately interacts only with one element in the superposition, so a branching description can be given in the complete absence of conscious observers – i.e. there is no need for the universe to be the way it is in order for us to ask the question of why it is that way, removing the anthropic element.

Hmm, so you mainly object to the distinct decoherent parts of the universal wavefunction being referred to as ‘distinct universes’, because, after all, there’s just one wavefunction describing all of them? If so, I can get behind that, to a certain extent; however, as far as the experience of any given observer is concerned, those decoherent histories will appear as distinct from each other as anything.

You’re right to point out the difference between the parallel universes posited by, for instance, eternal inflation or some interpretations of the string landscape, and the many worlds of Everett, but I think it is not entirely inappropriate to use the term ‘universe’ in either case, if one takes an observational stance towards the definition of the term ‘universe’ as loosely being ‘anything I can observe’ (of course, in a sense this incorporates multiple universes into classical theory already, as each observer strictly speaking exists within his own ‘Hubble bubble’, but this quickly becomes a merely semantic issue).

You’re also right that one can take the stance that many worlds in this sense is the most conservative reading of quantum mechanics (which I alluded to earlier as well), but it does become a bit of an issue of aesthetics. Many see this enormous, entangled, superimposed universal wave function as a horribly bloated thing and would like to get rid of it, and there are some ways to do this without doing excessive violence to parsimony; and there also is some leeway to the interpretation of Occam’s razor in this regard – after all, one can argue that a theory that is able to explain why I experience precisely this history has strictly greater explanatory power than a theory that treats every possible history on equal grounds, including all those histories that I don’t perceive.

It sounds to me like we see eye-to-eye, despite some minor communication problems. Personally, I find the many-worlds interpretation to indeed be the most conservative and parsimonious – but also the most aesthetically and philosophically pleasing. It has a certain symmetry about it (you know, that thing so celebrated in theoretical physics ;)); one arbitrary history seems… arbitrary, and what you call “this enormous, entangled, [horribly bloated] superimposed universal wave function”, I call “round”. The nail that sticks up gets pounded down, I say. :slight_smile:

In any case, other interpretations get into some pretty ugly tangles, and the Copenhagen interpretation is not an interpretation so much as a set of blinders and a rule of thumb. But IMHO, of course…

Yes, we probably do. I thought as much as I was writing my last post, but I didn’t want to throw it in the garbage bin… Anyway, as long as the physics works out the same in the end, the rest is really just terminological quibbles. :wink:

I find myself doing an awful lot of flip-flopping on the issue – though it’s hard for me to pinpoint exactly where my unease lies. Basically, I’m not sure I find any current interpretation wholly convincing, and I’m actually starting to believe that maybe the problem runs deeper than quantum mechanics – that we need to revisit the fundamentals of what, actually, a physical theory is wrt what’s really ‘out there’ before we can really understand what quantum mechanics is trying to tell us. But that’s a whole 'nother debate. :slight_smile:

Agreed. I sometimes think that Copenhagen is the ‘mainstream’ interpretation solely because it’s the favourite interpretation of those that don’t think much about the interpretation of quantum mechanics…

Btw, I wonder why the OP never revisited the thread?

He/she did in a different universe.

No debate here. I completely agree. Just too bad there isn’t more mainstream support of such work. The Strand Model, anyone? :stuck_out_tongue: (just kidding)

The entropic gravity and some of the spin network stuff might be a step in the right direction. Also Max Tegmark’s stuff. Just to shotgun a few illustrations of “outside the box” thinking out there.