Quantum computing

I’ll admit that I have little idea what that means, but ABC is pitching it as ‘a holy grail in the arcane world of supercomputers’. It is supposed to be unveiled …yesterday. Here’s an old board post discussing the subject, and here is the link to the article.

Sounds like a really big deal, wish I knew more about it.

That would be a big deal. Specifically, the NSA would be very happy to have it (and for no one else to.) I guess we’ll see what the result of the showings is.

Here’s my attempt at a simple explanation.

In mathematics and computer science, there are different complexity classes of problems. Basically, a problem is classified by how the resources it needs (like time and storage space) grow as the size of the problem grows.

Consider the (simple) problem of adding a string of numbers. 2 + 3 = 5 takes one operation. 2 + 4 + 7 = 13 takes two operations (adding 2 and 4, then adding that to 7). In general, adding up n numbers takes n-1 operations, so we could say that adding up numbers is of linear complexity. Now consider the problem of finding two things that match a given characteristic out of some set of things. To do so, you’ll have to look at each pair of things until you find a pair that matches. When you add one more thing to your set, you’ll have to compare it with each of the n things you already have, so this problem grows quadratically: the time it takes to solve grows like n[sup]2[/sup]. And so on for higher-order polynomials. In computer science, all problems that have running time proportional to some polynomial function are considered part of the same complexity class, P (which stands for Polynomial).
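
Here’s a minimal sketch of those two examples in Python (the function names are just made up for illustration), counting the operations as they go:

[code]
def sum_numbers(numbers):
    """Add up a list of n numbers: n - 1 additions, i.e. linear growth."""
    total, ops = numbers[0], 0
    for x in numbers[1:]:
        total += x
        ops += 1
    return total, ops


def find_matching_pair(items):
    """Compare pairs until two items match: up to n*(n-1)/2 comparisons,
    i.e. quadratic growth."""
    ops = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            ops += 1
            if items[i] == items[j]:
                return (i, j), ops
    return None, ops


print(sum_numbers([2, 4, 7]))            # (13, 2): two additions
print(find_matching_pair([5, 1, 9, 1]))  # ((1, 3), 5): found after 5 comparisons
[/code]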

Current computers are quite good at solving problems in P, even for fairly large n.

But there are other, much harder problems, problems where the only way to find the solution is to just try every possible solution, and see if it works. Oh, and the number of possible solutions grows exponentially with the input size. Current computers are really bad at solving these problems of any reasonable size. The possible solution set is just too big to test before the universe burns itself out.
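
To put that in concrete terms, here’s a toy brute-force search in Python (the is_solution test is just a stand-in for whatever check the real problem defines); in the worst case it has to try every one of the 2[sup]n[/sup] possible bit strings:

[code]
from itertools import product


def brute_force(n, is_solution):
    """Try every n-bit assignment until one passes the test."""
    tried = 0
    for bits in product([0, 1], repeat=n):   # 2**n candidates
        tried += 1
        if is_solution(bits):
            return bits, tried
    return None, tried


# Toy test: find the bit string whose bits are all 1s.
solution, tried = brute_force(20, lambda bits: sum(bits) == 20)
print(tried)   # 1,048,576 candidates tried -- and that was only n = 20
[/code]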

Some (not all) of these problems that take a really long time to solve with conventional computers get a big speedup on a quantum computer. The problems a quantum computer can solve efficiently form the class BQP. Factorization of numbers (which is a very important problem for computer security and encryption) is one of these: Shor’s quantum algorithm factors in polynomial time, while the best known classical algorithms take far longer than polynomial time. For plain brute-force search the speedup is smaller but still huge: Grover’s quantum algorithm searches 2[sup]n[/sup] possibilities in roughly sqrt(2[sup]n[/sup]) = 2[sup]n/2[/sup] steps instead of 2[sup]n[/sup]. To see why people are so excited about quantum computers, consider a brute-force problem with 50 bits of input, where testing an answer takes 1 second. On the traditional computer, that problem will take up to 2[sup]50[/sup] seconds, or about 35 million years, to solve. On a quantum computer running Grover’s algorithm, it will take about 2[sup]25[/sup] seconds, or roughly a year. Sure, modern computers perform millions of calculations per second, but 50 is a relatively small number of bits. When the input grows to hundreds of bits, it’s just impossible to ever finish the brute-force search with non-quantum computers.
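
For the curious, the back-of-the-envelope arithmetic behind that example (one test per second) looks like this:

[code]
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

classical_steps = 2 ** 50   # brute force: try every 50-bit input
grover_steps = 2 ** 25      # Grover: roughly sqrt(2**50) steps

print(classical_steps / SECONDS_PER_YEAR)  # ~35.7 million years
print(grover_steps / SECONDS_PER_YEAR)     # ~1.06 years
[/code]
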
Now, for someone who knows more about this than I do: Why can’t quantum computers solve NP problems?

Quantum computers can solve NP problems, but so can regular computers; it just takes exponential time in the worst case. We simply don’t know any algorithm, classical or quantum, that solves NP-complete problems in better than exponential time. It’s an open question whether such an algorithm exists.

I’ve been patiently waiting for years for QC to become a reality. Just think of the possibilities – unparalleled weather mapping, true virtual brains, and a halfway serviceable SDMB server.

Right. I suppose I phrased my question incorrectly. How about this:

Since a quantum computer can operate on data in a superposition of states, why isn’t a quantum computer a non-deterministic Turing machine (with a finite tape, of course)?

Whoa, there. It’s a technological breakthrough, not a miracle.

It is, with the caveat that a Turing machine with a finite tape is a finite state automaton.

The two models look somewhat similar, in that in both cases you can create a sort of superposition of states
f(0)+f(1)+…+f(N-1)
(with N potentially very large). Here’s the difference. In the nondeterministic Turing model, you imagine that the machine which has found the correct answer can always shout it out loudly enough for you to hear it (even if there are 2[sup]100[/sup] machines). In the quantum computer, each state in the superposition has some associated probability (really a complex probability amplitude), and each state has to shout something, with a volume proportional to its probability amplitude. If you have 2[sup]100[/sup] states and only one has the correct answer, you’re not going to hear it.
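
If it helps, here’s a toy simulation of that last point, assuming a uniform superposition over 2[sup]n[/sup] states with exactly one correct answer. Measurement picks a state with probability given by the squared amplitude, so with equal amplitudes the right answer shows up only about once in 2[sup]n[/sup] tries:

[code]
import random


def measure_uniform_superposition(n, marked_state, trials=100_000):
    """Simulate measuring a uniform superposition over 2**n states and
    count how often the single marked (correct) state comes out."""
    hits = 0
    for _ in range(trials):
        outcome = random.randrange(2 ** n)   # every state equally likely
        if outcome == marked_state:
            hits += 1
    return hits / trials


print(measure_uniform_superposition(10, marked_state=0))  # ~1/1024, about 0.001
[/code]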

What makes quantum computers work any better (apparently) than classical computers is more subtle. The most celebrated quantum algorithm, Shor’s factoring algorithm, does some clever number theory to make a large fraction of the states in the superposition shout something helpful. (More formally, Shor probabilistically reduces factoring to finding the period of a function; because the quantum Fourier transform is very fast, this can be done efficiently. The end result is that a large fraction of the probability amplitude gives information about this period.)
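
To make that reduction a bit more concrete, here’s a purely classical (and therefore slow) toy version of it in Python. The quantum part of Shor’s algorithm is finding the period r quickly; here it’s found by brute force, which only works for tiny numbers:

[code]
from math import gcd
from random import randrange


def find_period(a, N):
    """Smallest r > 0 with a**r == 1 (mod N); assumes gcd(a, N) == 1.
    This is the step a quantum computer does exponentially faster."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r


def factor_via_period(N, attempts=20):
    """Shor-style reduction: an even period r of a**x mod N usually
    gives a nontrivial factor via gcd(a**(r//2) - 1, N)."""
    for _ in range(attempts):
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d                     # lucky guess already shares a factor with N
        r = find_period(a, N)
        if r % 2 == 0:
            candidate = gcd(pow(a, r // 2) - 1, N)
            if 1 < candidate < N:
                return candidate
    return None


print(factor_via_period(15))   # prints 3 or 5
[/code]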

I work in testing, which is involved in demonstrating that a chip actually does what it is supposed to do. All I can say is that I’m glad I’ll be retired (and probably moldering in my grave) before I have to figure out how to test these things. Supposedly deterministic ICs are flakey enough today at 65 nm!