Why are quantum computers (going to be) important?

I watched a short documentary on QC, and the whole thing kind of made me say “why?”

  • Prototypes are the size of school buses
  • Components need to be supercooled to work
  • Scientists can’t get the bits to behave as intended
  • (According to the doc) they were never intended to be consumer friendly

Not a knock on the people researching this stuff, but it seems like we have awesome computers right now, why not just make those better/faster?

That list of issues is extremely reminiscent of the objections raised about early computers. Thomas Watson is reputed to have said “there is a total world market for maybe five computers,” and though the statement may be apocryphal, others are documented as having expressed similar sentiments. Much later, Ken Olsen, the president of minicomputer maker Digital Equipment, expressed skepticism that anyone would ever want or need a computer in the home.

That said, quantum computers will probably always be specialized devices, capable of extraordinary performance but only in narrow problem domains. In the distant future, when they’re small and cheap enough, they may be used to augment conventional processors, much as processors used to be augmented with specialized floating-point units or today are augmented with GPUs.

Quantum computers are (supposed to be) great for modeling systems governed by quantum mechanics. Computational chemistry on traditional computers must resort to approximations.

This C&EN article may be paywalled, so I’ll quote some parts:

Note that we don’t have anywhere close to 250 qubit computers available right now.
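To put the 250-qubit figure in perspective, here’s a back-of-the-envelope calculation (my own illustration, assuming 16 bytes per complex amplitude): a classical machine simulating n qubits has to store one amplitude per basis state, and the state space doubles with every qubit.

```python
# Memory needed to hold the full state vector of an n-qubit system,
# assuming one 16-byte complex number (two 64-bit floats) per amplitude.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

# 30 qubits already needs 16 GiB of RAM:
print(state_vector_bytes(30) / 2**30)  # → 16.0 (GiB)

# 250 qubits needs 2**250 amplitudes, astronomically beyond any
# classical machine -- which is exactly why real quantum hardware
# at that scale would matter for chemistry simulation.
print(state_vector_bytes(250) > 10**70)  # → True
```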

You can’t make conventional computers much faster. With the exception of tasks that can be made massively parallel, big single-core improvements are pretty much over. (By “big” improvements, I mean orders of magnitude; I don’t expect to ever see a single x86 core with 10x the performance of a modern 5ish GHz core, or even 5x.)

The issue with quantum computers is that the space of problems they are capable of solving is so incredibly different from that of electronic computers. I don’t really know much about the subject beyond what I remember from a recorded MIT class, but quantum computers seem very well suited to certain kinds of problems that scale much too fast to be feasible to solve on electronic computers. This includes pretty much everything in modern cryptography, which generally assumes that while the cryptography can be theoretically broken by brute force, it would take someone far too long to do so. If someone can use a quantum computer to solve the problem inherent in decrypting something without being given the key, that’s a big deal.

As I understand it, the discrete logarithm problem is a good example of something that quantum computers could solve quickly. This would have major impacts on encryption.
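For anyone unfamiliar with the problem, here’s a toy sketch (my own, with made-up tiny numbers): given a generator g, a target h, and a prime modulus p, find x such that gˣ ≡ h (mod p). The only generic classical attack shown here is brute force, which scales with the group size; Shor’s algorithm would find x in polynomial time on a quantum machine.

```python
# Toy discrete logarithm: find x with g**x % p == h.
# Brute force is fine for tiny p, hopeless for the ~2048-bit
# groups used in real Diffie-Hellman.
def discrete_log(g: int, h: int, p: int) -> int:
    value = 1
    for x in range(p):            # try every exponent in turn
        if value == h:
            return x
        value = (value * g) % p   # value == g**x % p at each step
    raise ValueError("no solution")

# Tiny example: 3**x ≡ 13 (mod 17)
print(discrete_log(3, 13, 17))  # → 4, since 3**4 = 81 = 4*17 + 13
```

Note that the forward direction (computing gˣ mod p) is cheap even for huge numbers; it’s only the inverse that’s hard classically, which is the asymmetry Diffie-Hellman relies on.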

What you said is accurate with respect to silicon, but using other materials and methods (graphene, spintronics, photonics, etc.) will most likely result in significant single-core improvements.

Right. Moore’s Law doesn’t apply only to silicon. Gordon Bell extended it backwards in time, and it still held.

“Most likely” is probably a bit optimistic at this point, as the band gap issue currently limits graphene to analog applications.

Just this year they got that up to 0.05 eV, when they need to get closer to 1 eV to be useful. Until someone can demonstrate turning off the transmission of electrons through graphene without losing the advantages it potentially has, this is a big unknown.

While there may be breakthroughs, right now the results suggest that a useful band gap will not be exponentially better than current options as far as speed goes.

The speed of technology hasn’t been doubling every year the way the original Moore’s law suggested for a while now anyway. GAA FETs offer less performance improvement, and after 5 nm in the next couple of years most of the options have a much larger drain-to-source pitch, which reduces them to card tricks as far as Moore’s law goes.

A lot of the early big improvements didn’t come only from smaller process nodes, but from things like predictive scheduling which could be fit into silicon. There aren’t a lot of improvements like that coming, and the increased complexity of trying to do more would stretch out design cycles. For the last five years or so most of the improvement has come from increasing the number of cores on a processor.
Only really big jobs need massive parallelism. For most people more cores will mean more speedup since all the processes get a core or a thread to themselves.
Quantum computing will be the kind of niche market supercomputing is today.

I worked in the area of verifying that processors and other ICs were manufactured correctly. Things like SerDes, where inputs aren’t synchronous, are already a big pain in the ass. Nondeterministic computers are going to get really interesting. Bespoke quantum computers I can see, but mass-produced ones? Not likely.

There is precedent for this. A while ago a lot of people were excited about asynchronous logic, where gates would trigger when data was available, without having to wait for a clock. HP had a big project in this. It didn’t make it because no one could figure out how to test this stuff.

The biggest one is that, if we could get them to work, quantum computers could quickly solve the sorts of problems which currently serve as the backbone for all of our cryptography. If you had a decent-sized quantum computer right now, and nobody was forewarned about it, you could rob banks. All of them, at once, completely. How much money is there in all of the banks in the world combined? That’s how much a quantum computer is worth.

This is a gross exaggeration. Robbing banks electronically requires a heck of a lot more than just breaking encryption (and in some cases can be done even without breaking encryption).

But even if encryption were the only barrier, simply having a quantum computer doesn’t necessarily mean that you could do it feasibly. Encryption methods are considered secure today because to break them, the number of operations a computer needs to perform is related to the size of the encryption key by a rapidly growing function (say, an exponential one). With a quantum computer, you might get that function down to something that grows less rapidly, such as a polynomial one. But if the polynomial exponent is large enough (say, 1000) then you’re still looking at eons of time to brute-force the encryption key.
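To make that concrete (the numbers below are illustrative, not tied to any real cipher): a polynomial speedup only helps if the degree is small.

```python
# Illustrative operation counts for attacking an n-bit key.
n = 256                   # key size in bits (illustrative)

exponential = 2 ** n      # classical brute force
cubic       = n ** 3      # a polynomial attack with small degree: trivial
huge_poly   = n ** 1000   # polynomial in n, but degree 1000

# A cubic attack finishes in microseconds on any modern machine:
print(cubic)  # → 16777216

# But "polynomial" with degree 1000 is even worse than 2**256 here,
# so the asymptotic class alone doesn't settle feasibility:
print(huge_poly > exponential)  # → True
```

(In practice Shor’s algorithm has a small polynomial degree, which is why it’s actually threatening; the point above is just that “polynomial” by itself isn’t automatically “fast.”)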

Not only that, but aside from the very valid points psychonaut just brought up, even if RSA / Diffie-Hellman is vulnerable to a quantum computer using Shor’s algorithm, there are whole classes of quantum-resistant algorithms that already exist. Supersingular elliptic curve isogenies, ring learning with errors, and lattice methods are a few.

The main reason we haven’t adopted any of these yet is NIST hasn’t formally blessed one of them as the best or most implementable (although they have deemed 69 of them complete and proper), so they haven’t made their way into mainstream crypto libraries yet.

There’s a startup company in the Bay Area, Rigetti, that offers a 19 qubit quantum computing developer environment free. It sounds snazzy on the website, but I don’t pretend to understand their product.

NIST, among others, endorsed a backdoored cryptographic algorithm, so I would not place too much weight on their formal blessing.

Oh, I totally agree from a personal trust perspective - I meant more from a widespread adoption perspective. The government needs to use NIST-approved cryptography, and is a big mover in terms of defining the dominant crypto libraries used.

I made a post in an old thread about how Quantum Computers differ. For context, the thread was asking whether Quantum Computers would aid in computing fractals more quickly:

The value in quantum computers is that it opens up an avenue to solve different sorts of problems we can currently only solve on a very small scale because our current computers need to simulate a quantum computer’s thought process to solve them*.

  • NB: we don’t usually solve these problems by literally writing a QC simulator, we usually use some parallel algorithm that just so happens to explore all the possibility branches a QC would naturally explore via quantum state.
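As a toy sketch of what “simulating a quantum computer” means (a minimal hand-rolled example of my own; real simulators are vastly more sophisticated): a classical program tracks a vector of complex amplitudes, one per basis state, and gates are just linear transforms on that vector.

```python
# Minimal classical simulation of a single qubit as a pair of complex
# amplitudes [a, b] for the basis states |0> and |1>. An n-qubit state
# needs 2**n amplitudes, which is why classical simulation of a QC
# blows up exponentially with qubit count.
from math import sqrt

def hadamard(state):
    """Apply a Hadamard gate to a 1-qubit state [a, b]."""
    a, b = state
    s = 1 / sqrt(2)
    return [s * (a + b), s * (a - b)]

# Start in |0> = [1, 0]; the Hadamard puts it in an equal superposition.
state = hadamard([1.0, 0.0])
probabilities = [abs(amp) ** 2 for amp in state]
print(probabilities)  # ~[0.5, 0.5]: measuring yields 0 or 1 with equal odds
```

The quantum hardware gets that superposition “for free” from physics; the classical simulation has to carry every amplitude explicitly, which is the cost the post above is describing.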

That’s why I included “and nobody was forewarned about it”. In the real world, progress is incremental, and before we get a quantum computer big enough to deal with real-world-sized encryption keys, we’ll get a whole succession of smaller ones working up to that size, and at some point the imminent threat is going to be obvious enough that everyone will switch over to one of those other techniques. But if some James Bond supervillain built a kiloqubit machine right now, he’d be able to wreak absolute havoc, because right now, those other methods mostly aren’t implemented.

Ummm, quantum computing is ALREADY here. IBM (and likely a few other companies) offers quantum computing as a cloud-based service. They’re currently rather massive (especially on a per-qubit basis), so this isn’t some super-tiny or super-powered processor.

source: https://arstechnica.com/science/2018/03/ars-visits-ibms-quantum-computing-lab-but-finds-no-cats-trapped-in-boxes/

Yeah, see the “prototypes are the size of school buses” part of the OP?