I'm not really understanding what "quantum computing" is all about - Please help

Here is an article on quantum computing. The article seems to be saying quantum machines do not use standard binary logic architecture, and that they can handle indeterminate logic states.

I am totally not getting how you can have a working computer that does not use binary logic circuits and still arrives at useful answers, or why this would be faster than normal computer logic architecture. It sounds like science fiction jibber jabber.

It’s well established that there are certain algorithms that work on quantum architectures in such a way as to solve a certain problem much faster than their (known) classical analogues on conventional computers. The best-known example is Shor’s algorithm, which is capable of factoring arbitrary numbers into their prime factors much faster than any known classical algorithm—something which is very relevant, because conventional cryptography schemes rely on exactly the fact that prime factoring is computationally very expensive.

As to the question what exactly causes this ‘quantum speedup’, opinions are divided. You will often hear talk about ‘quantum parallelism’, which is roughly the idea that all possible computations are carried out in parallel on a system that’s in a massive superposition, and then somehow only the one that yields the right result is kept. This metaphor is useful only to a point: if there were nothing else but ‘exponential parallelism’ going on, then how could you figure out which result to keep?

So far, almost every feature of quantum mechanics has been proposed to explain the speedup; a popular candidate was entanglement, but it’s since been discovered that there are quantum algorithms that yield a speedup even when there is little to no entanglement present. This has shifted attention to a property called ‘quantum discord’, which is roughly the difference between the information stored in a quantum system and the information that can be extracted via measurement. But there, too, opinions on whether this is the right track (or even whether it’s meaningful at all) are divided.

Incidentally, regarding this ‘D-Wave’ stuff, take everything you read with a large grain of salt: most physicists in the field are skeptical regarding whether what they do is anything like proper ‘quantum’ computing at all.

The abilities of their computer seem to be quite real. This is just a grubby article from the popular press, but the paper will be presented anon.

The paper has been out (as preprint) since last month, and well, in order to not turn this into a hijack, I’ll just say that I side largely with Scott Aaronson’s view on the issue; in particular, while they’re doing something conceivably quantum, what they’re not doing is something that has (so far, at least) any advantage over classical computers: they’ve actually found an implementation on a classical computer that solves the problem the D-Wave One is supposed to solve about fifteen times faster. So while I won’t say that they don’t have anything, I’ll at least reserve judgment—and caution everybody to do the same—until they produce something that’s certifiably better than classical.

How is that the same paper when the author, McGeoch, isn’t even listed? You’re referring to an earlier paper that did indeed cast some doubt on what was going on with the D-Wave machine. This paper wasn’t available or presented until the 15th. Still sloppy as usual, I see.

Actually, according to Aaronson: (Shtetl-Optimized » Blog Archive » D-Wave: Truth finally starts to emerge)

Yes. Aaronson got it right and yet somehow YOU still managed to cite the WRONG paper. I thought I needed to point that out.

Oh, and I suspect that NASA and Google might be a tad smarter than Aaronson, since they just announced yesterday that they’ll be buying one of the $10M D-Wave Two rigs.

I cited the paper more relevant to assessing the D-Wave One’s capacities, whose analysis (according to Aaronson) supersedes that of the more recent one. In any case, this hijack is pointless: if there is something to the D-Wave One, which I would welcome with open arms, then we’ll know soon enough.

Now, you can post one more time to tell me how stupid I am, and then we can maybe return this thread to its original topic. Deal?

No, I’ll just highlight the fact that you never admit your mistakes. You said “The paper has been out (as preprint) since last month” (with a link pointing to the WRONG paper - see the Aaronson link for the correct one) - clearly indicating the paper I had referred to. But hey, feel free to use your normal sophistic techniques for trying to convince people otherwise. I won’t interfere any further. :stuck_out_tongue:

Moderator Note

deltasigma, knock off the personal remarks in GQ. No warning issued, but dial it back.

Colibri
General Questions Moderator

Thank you for the reminder :slight_smile:

Can I get back to the OP?

Suppose you want to factor a 1000-bit number. Unless it is prime, it has a factor of 500 bits or fewer. So you put your number into a register and divide it by a 500-qubit “number”. This has 2[sup]500[/sup] states. You get a 500-qubit quotient and a 500-qubit remainder. Somewhere in the latter there is a 0. You figure out where (how? Damned if I know) and the corresponding state of the divisor is your actual divisor. Obviously there is more I don’t understand about this process than what I do, but what I do understand is above. My understanding is that the process has actually found a divisor of the binary number 1111. The divisor they found is 11 and the quotient 101. Curiously, 11*101 = 1111 in any base.

This is actually the problem I alluded to with the ‘quantum parallelism’ picture: there isn’t really a way to find out which of the superposed numbers is a divisor in this way. What you do instead is something different: you reduce the problem to finding the period of a sequence of numbers. These are then what you put into a superposition; but the period now is a ‘global’ property of that superposition, that is, you can make the appropriate measurement (more accurately, apply a quantum Fourier transform) to find it. If it were just a matter of picking out the single ‘right’ answer from a superposition of all possible answers, then finding it would be exactly as hard as doing it classically.
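To make that reduction a little more concrete, here’s a toy sketch in Python of the classical half of Shor’s algorithm (the function names and presentation are mine, purely for illustration). Factoring N gets reduced to finding the period r of the sequence a, a[sup]2[/sup], a[sup]3[/sup], … (mod N); below the period is found by brute force, which is exactly the exponentially expensive step that the quantum Fourier transform replaces:

```python
from math import gcd
import random

def find_period(a, N):
    """Brute-force the period r of a^x mod N, i.e. the smallest
    r > 0 with a^r = 1 (mod N). This is the step a quantum
    computer does efficiently via the quantum Fourier transform."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def shor_factor(N):
    """Classical wrapper: turn a period into a factor of N."""
    while True:
        a = random.randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d            # lucky guess already shares a factor
        r = find_period(a, N)
        if r % 2 == 1:
            continue            # need an even period; try another a
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue            # trivial square root; try another a
        return gcd(y - 1, N)    # a nontrivial factor of N

print(shor_factor(15))          # 3 or 5
print(shor_factor(21))          # 3 or 7
```

The whole trick is that find_period, done naively like this, takes time exponential in the number of bits of N, while the quantum Fourier transform over a superposition of all the x values gets at the period directly.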

I don’t understand. If we don’t have a decent theory of how quantum computing works, and we don’t have actual quantum computers, what reason do we have for thinking that any such thing is possible? (And if we do have actual quantum computers - I am not quite clear on whether we yet do, or not - how did we figure out how to build them without having a theory?)

I can’t contribute anything to the discussion about whether D-Wave’s quantum computer is actually using quantum effects or not, but I do know that Lockheed Martin bought the first purported quantum computer from D-Wave Systems two years ago, and Lockheed Martin recently produced this video discussing quantum computing and the D-Wave machine.

We do have certain algorithms for which we can prove that they take less time (up to exponentially less time) to run than their classical counterparts, Shor’s algorithm being one example. The algorithm has also already been implemented, albeit for numbers that are not really all that interesting; the current record is 21 (shattering the prior record of 15 mentioned by Hari Seldon). So we know that, at least in principle, the algorithm works.

The question is now merely what is physically responsible for this speedup—what is it about quantum theory that makes such faster-than-classical computation possible? We know a couple of necessary conditions—superposition perhaps being the most obvious. But what is sufficient?

So in a sense we know that quantum mechanics fulfills the conditions for enabling a computational speedup, we just don’t know precisely what those conditions are.

Perhaps this becomes more clear by taking a look at how quantum algorithms work.

The most widespread model is the so-called ‘quantum circuit’ (though there are others, such as measurement-based quantum computation). This is essentially the quantum analog to a classical logical circuit. Its elements are wires and gates; each wire carries logical signals (e.g. 1 or 0 in the classical case), which are manipulated by the gates, performing logical operations on the signals.

In the classical case, typical gates are the NOT-gate, which acts on a single input and takes every signal to its negation (1 to 0 and 0 to 1), the AND-gate and the OR-gate, which implement the eponymous logical transformation on their (two) inputs; the classical logic gates are described in this wiki article. An important concept is that of a universal set of logic gates, meaning that with such a set, any conceivable logical transformation can be implemented, and thus, any kind of computation is possible. In the classical case, it turns out that you only need a single gate for universality, the NAND- (or alternatively NOR-) gate, which is just the AND-gate with its output reversed (i.e. it yields a 0 only if both inputs are 1).
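To illustrate the universality claim with something concrete, here’s a minimal sketch in Python (software stand-ins for hardware gates, obviously) showing NOT, AND and OR wired up from nothing but NANDs:

```python
def nand(a, b):
    """The single primitive gate we allow ourselves."""
    return 0 if (a and b) else 1

# Everything else is just NANDs wired together.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))        # NAND with its output reversed again
def or_(a, b):  return nand(not_(a), not_(b))  # De Morgan: a OR b = NOT(NOT a AND NOT b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> AND:", and_(a, b), " OR:", or_(a, b))
```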

The quantum circuit model now generalizes this to the quantum world, where the logical units are no longer classical bits, but quantum bits or qubits. A qubit is a two-level quantum system whose two levels are conventionally denoted |0> or |1> (the funny brackets meaning just ‘this is a quantum object’ for now). The most striking difference from the classical case is that a qubit can not only be in either of the states |0> and |1>, but also in arbitrary linear superpositions a|0> + b|1>. Operationally, this means that there is, upon measurement, an |a|[sup]2[/sup] chance to find it in the state |0>, and a |b|[sup]2[/sup] chance to find it in the state |1> (which implies |a|[sup]2[/sup] + |b|[sup]2[/sup] = 1, since you’ll find it in some state with certainty).
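Numerically, a qubit is just a two-component complex vector, and the measurement rule above is easy to play with; a quick sketch (the particular amplitudes here are an arbitrary choice of mine):

```python
import numpy as np

# The state a|0> + b|1> as a length-2 complex vector;
# here a = 1/sqrt(2), b = i/sqrt(2), chosen arbitrarily.
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = np.array([a, b])

probs = np.abs(psi) ** 2        # the |a|^2 and |b|^2 from above
print(probs)                    # [0.5 0.5]
print(probs.sum())              # 1.0 -- the normalization condition

# Simulate 10,000 measurements in the |0>, |1> basis.
outcomes = np.random.choice([0, 1], size=10_000, p=probs)
print(np.bincount(outcomes))    # roughly 5000 of each
```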

One hitch I should mention, but won’t explain further, is that contrary to the classical case, the operations in the quantum case must be implemented in a reversible fashion—this is just a feature of the evolution of quantum systems (in the classical case, if you only know the output of an AND-gate, you can’t reconstruct its input; in the quantum case, knowing the output—and the mechanism—of the gate always allows you to ‘undo’ its operation and recover the input).
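The standard way around the irreversibility of the AND-gate, by the way, is simply to keep the inputs around alongside the result; the so-called Toffoli (controlled-controlled-NOT) gate does exactly that. A quick sketch of its truth-table behaviour:

```python
def toffoli(a, b, c):
    """Controlled-controlled-NOT: flip c exactly when a = b = 1."""
    return a, b, c ^ (a & b)

# With the third wire initialized to 0, the third output is AND(a, b),
# but the inputs survive next to it, so nothing is lost:
for a in (0, 1):
    for b in (0, 1):
        print((a, b, 0), "->", toffoli(a, b, 0))

# Applying the gate twice undoes it; that's the reversibility:
print(toffoli(*toffoli(1, 1, 0)))   # (1, 1, 0) again
```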

So, a quantum circuit is again given by wires and gates; the wires carry the qubits, and the gates effect transformations upon them (which are mathematically unitary transformations). Because of the reversibility requirement, any quantum gate has as many outputs as there are inputs. A quantum circuit thus typically looks like a set of parallel ‘rails’, some of which enter certain boxes. Despite the reversibility requirement, it turns out that you can find a set of universal gates, and thus, compute anything you want to compute (although often, due to the probabilistic nature of quantum mechanics, your result will only be correct with a certain probability). This is not unique to quantum mechanics: reversible universal computing is also possible in the classical world.
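Here’s what that looks like numerically, as a small sketch: gates are unitary matrices, the wires are the slots of a state vector, and a circuit is just a product of matrices. Below, a Hadamard gate on the first of two qubits followed by a CNOT (the standard recipe for a Bell state):

```python
import numpy as np

# Gates as unitary matrices.
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)     # Hadamard: puts a qubit into superposition
CNOT = np.array([[1, 0, 0, 0],           # flips the second qubit
                 [0, 1, 0, 0],           # when the first one is |1>
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# The circuit: H on qubit 1 (identity on qubit 2), then CNOT.
U = CNOT @ np.kron(H, np.eye(2))

# Reversibility: the conjugate transpose undoes the whole circuit.
print(np.allclose(U.conj().T @ U, np.eye(4)))   # True

# Run it on the input |00>.
psi = U @ np.array([1, 0, 0, 0])
print(np.abs(psi) ** 2)   # [0.5 0. 0. 0.5]: measure |00> or |11>, 50/50
```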

We now have a paradigm of computation, and can see what it can do. Any algorithm in this setting corresponds essentially to a set of gates and wires; the complexity of the algorithm can be measured by the number of transformations you need to implement on the input to produce the desired output. This complexity will typically depend on the size of the input, i.e. the instance of the problem: if you have some input of N bits, then processing that input might require N steps, or N[sup]2[/sup] steps, or log(N) steps, or, in particularly bad cases, e[sup]N[/sup] steps.
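Plugging in a few numbers shows how dramatically these step counts come apart; a quick sketch:

```python
import math

# How the various step counts grow with the input size N.
for N in (10, 20, 40, 80):
    print(f"N={N:3d}   log(N)={math.log(N):5.2f}   "
          f"N^2={N**2:5d}   e^N={math.exp(N):10.3e}")
```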

It turns out that factoring a number, in the classical case, is one of the ‘bad’ cases: no known classical algorithm can do it efficiently (strictly speaking, the fastest known classical algorithm takes more than polynomial, but less than exponential, time). But it turns out that in the paradigm for quantum computation described above, you can create an algorithm, i.e. a quantum circuit, which takes significantly less time (polynomial in the input size, which is generally considered to be ‘efficient’).
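For reference (heuristically, and with due caution about the constant): the fastest known classical method, the general number field sieve, factors N in roughly

$$\exp\!\left(\left(\sqrt[3]{64/9} + o(1)\right)(\ln N)^{1/3}(\ln\ln N)^{2/3}\right)$$

steps, which grows faster than any polynomial in the number of digits but slower than exponentially, while Shor’s algorithm needs a number of steps only roughly cubic in the number of digits.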

So in this sense, we don’t need to know precisely why quantum computation is faster than classical: we know how to do universal computation in both cases, and we can simply compare how long each takes.

There’s a caveat to be attached to this, though: it’s merely the case that the fastest known classical algorithm for prime factoring takes super-polynomial time; in principle, there could be some as yet undiscovered classical algorithm that equals the quantum algorithm’s performance. This is generally considered unlikely for various indirect reasons having to do with a number of extremely deep problems in complexity theory, but a definite proof is not available at the moment.

Are the D-Wave computers called quantum computers simply because of the use of Josephson junctions in the design? I know this is the case with quantum voltage standards.

There’s also a problem of math vs. ordinary language. There are plenty of things in quantum mechanics for which we can do the math just fine, but which we struggle to describe in words.