Let’s say they make a quantum computer that follows something akin to Moore’s law. It might have 100 qubits in 2019, then 200 the next year. Then 400, then 800, etc.

What would having affordable quantum computers actually change? I’ve heard some people say you need millions of qubits to do certain calculations. Is that true? I would’ve assumed 100 qubits were more than sufficient for most calculations, and supposedly once you get to a few hundred qubits, isn’t a quantum computer more powerful than a classical computer built from all the matter in the known universe?

Would trillions of qubits make an impact, or does the benefit level off after you get to a few hundred?

What fields would really be revolutionized? I assume cryptography and molecular modeling. Anything else? Would AI rapidly advance, or is that a pipe dream?

Keep in mind I don’t understand IT well, so a lot of the answers will go over my head.

Regarding AI: certain solvers are solving problems that resemble the traveling salesman problem. A robot arm planning a sequence of moves is doing this. So if quantum computing were readily available, some steps would be solved optimally, and faster, and robots would be even better at spanking our human butts. But only somewhat better; exact solutions are only a little better than good approximations.

This by itself would not make AI sentient.

As for cryptography: yeah, that’s the big worry. The only thing stopping someone from falsifying cryptocurrency transactions is a private key. An algorithm to find that key exists, and it’s straightforward; it would just take longer than the lifetime of the universe for a classical computer to run.
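A back-of-envelope sketch of that timescale (the 256-bit key size and the guess rate are assumptions for illustration):

```python
# Back-of-envelope: brute-forcing a 256-bit private key classically.
# The key size and a (very generous) 10**12 guesses per second are
# illustrative assumptions, not measurements.
keyspace = 2**256
guesses_per_second = 10**12
seconds_per_year = 60 * 60 * 24 * 365
years = keyspace / (guesses_per_second * seconds_per_year)
print(f"~{years:.1e} years")  # dwarfs the ~1.4e10-year age of the universe
```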

When the early computers first appeared they were thought of as solutions to a few known problems. As they grew more powerful, they were put to work on a million problems, many of which nobody had ever thought to work on before. The same would be true for quantum computers. If they are truly useful, they will be used for a million things nobody today ever imagined.

Fair enough. Do you think my statement about qubits not really improving things after the first few hundred was in line with the same thing (a lack of imagination)?

Similar to the quote attributed to Bill Gates in the early 80s when he supposedly said ‘640KB of memory should be enough for anyone’? Is assuming that a few hundred or few thousand qubits is all we need the same kind of thinking that’ll be absurd by 2070? Would trillions or octillions of qubits possibly open up new horizons we can’t really think of yet? Doesn’t every additional 10 qubits increase the power of a quantum computer by a factor of 1000?
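On the last point: the factor-of-roughly-1000 intuition comes from the state space doubling with each added qubit, since a register of n qubits holds 2^n basis-state amplitudes (with the big caveat that extracting useful answers from that space is the hard part). A quick sanity check:

```python
# Each added qubit doubles the number of basis-state amplitudes an
# n-qubit register can hold in superposition: n qubits -> 2**n amplitudes.
for n in (10, 20, 50, 100):
    print(f"{n:>3} qubits -> {2**n:.3e} amplitudes")

# Ten extra qubits multiply the state space by 2**10:
print(2**10)  # 1024, i.e. roughly a factor of 1000
```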

Here’s my fantasy use of quantum computers: predicting the weather accurately.

Using nanotech, you’d make billions and billions and billions of sensors the size of a mosquito, which float through the atmosphere all over Earth measuring temperature, barometric pressure, etc., and report the data to a super-quantum computer.

After a few decades, the computer has a massive database.
It then compares today’s data against that history, finds the best match, and tells you that in your location it will rain today between noon and 12:25.
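That lookup step is essentially nearest-neighbour matching against the historical archive. A toy sketch, with all sensor readings and outcomes invented:

```python
import math

# Toy "best historical match" forecast. Each archived day maps
# (temperature C, pressure hPa, humidity) -> the weather observed that day.
# All numbers are invented for illustration; a real system would also
# normalise the features so no single one dominates the distance.
history = {
    "2031-06-01": ((21.5, 1013.0, 0.62), "dry all day"),
    "2031-06-02": ((18.2, 1002.4, 0.88), "rain 12:00-12:25"),
    "2031-06-03": ((24.1, 1019.7, 0.40), "dry all day"),
}

today = (18.5, 1003.1, 0.85)

# Pick the archived day whose readings are closest to today's.
best_day, (_, forecast) = min(
    history.items(),
    key=lambda kv: math.dist(kv[1][0], today),
)
print(best_day, "->", forecast)
```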

I dunno if this is possible even in theory.
But it sure would be nice.

It’s possible only as an approximation, which is essentially what we do today, only on a very rough scale. No doubt with much better information and much more processing power, weather approximations can become much more accurate, with much better predictive power. The problem is that perfect information and perfect predictability are impossible even in theory, because as you drill down to finer and finer levels of granularity, you’re eventually faced with an initial-condition problem that is not deterministically resolvable, because it’s affected by quantum uncertainty. Your weather forecast will be wrong because you didn’t know some atom’s electron was going to change its energy state and an air molecule would move in a slightly different way.

It’s important to understand that a quantum computer is not just a faster type of classical computer. Quantum computers work in a fundamentally different way than classical computers. Algorithms that are practical on classical computers generally are not practical on quantum computers and vice versa. So we have to invent completely new techniques for programming quantum computers. After years of research, we still know of only a handful of algorithms that are useful for quantum computers, like Shor’s algorithm for factoring integers. It’s probably true that there are many problems where quantum computers are just not applicable and classical computers will always perform better.

The same sorts of folks who told us in the early days of the telephone that telephone systems couldn’t scale either? The “obvious” problem being the necessary switching capacity, so that soon everybody’s sister, mother, and daughter would have to become a switchboard operator and even that wouldn’t be nearly enough to route all those calls if everyone had a telephone!

And digital computers couldn’t scale, either. Vacuum tubes got hot, and they burned out and failed, so that a computer’s reliability fell off exponentially as the number of tubes in it increased. Plus at some point all that concentrated heat generation would be unmanageable. Fortunately, only about 4 or 5 digital computers would be needed in the entire world, per Thomas J. Watson Sr. And certainly, no one would ever need a computer in the home, per Kenneth Olsen, founder of Digital Equipment Corporation (DEC) – a statement made many years after Watson’s absurdity. Futurism and absolutist predictions are fraught with peril.

Quantum computers don’t scale in the same straightforward way as classical computers, although some similar technologies can be employed, like large-scale integrated circuits and error correction. Most engineers working on the problem don’t share your stated defeatism.

My understanding is that the need for millions of qubits comes from how difficult error correction is. An Australian firm is already investigating possible techniques, although they’re a long way off and nobody knows if they will become practical. New Technique Could Put Millions of Qubits on a Chip.

It’s a worry, but not a show-stopping one. Sure, Diffie–Hellman and RSA algorithms are vulnerable to Shor’s algorithm, but a number of replacement algorithms that are not vulnerable to quantum factoring are actively being explored, such as supersingular elliptic curves and lattice-based methods.

The big thing is that none has risen to the top in terms of implementability / robustness, so NIST isn’t yet recommending any quantum-resistant algorithms.

As for use cases for a quantum computer with arbitrarily many bits, my first thought is to grid search all the things!

A grid search is useful when you have a predictive model with an unknown number of relevant variables but many thousands or tens of thousands of candidates, plus a known set of hyper-parameters that all affect the model’s outputs and its accuracy / precision. A grid search literally partitions the solution space for each of these and just grinds through all the options to find the optimal set of variables and the optimal hyper-parameter values.
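A minimal classical version, just to make the brute-force nature concrete; the parameter grid and the stand-in scoring function here are invented:

```python
import itertools

# Minimal grid search: enumerate every combination of hyper-parameter
# values and keep the one with the lowest validation error.
grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "depth": [2, 4, 8],
    "regularisation": [0.0, 0.5, 1.0],
}

def validation_error(params):
    # Stand-in for "train the model with these settings, score it on
    # held-out data"; here the optimum is planted at known values.
    return ((params["learning_rate"] - 0.01) ** 2
            + (params["depth"] - 4) ** 2
            + (params["regularisation"] - 0.5) ** 2)

best = min(
    (dict(zip(grid, values)) for values in itertools.product(*grid.values())),
    key=validation_error,
)
print(best)
```

The number of combinations is the product of the list lengths (3 × 3 × 3 = 27 here), which is exactly why the approach blows up combinatorially as you add parameters.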

Right now, even with Dask on top of AWS and its seemingly infinite, parallelizable computational resources, you can still easily run into predictive-modeling problems that you can’t fully optimize just by throwing more CPU at them, because depending on the number of layers and hyper-parameters in your model, the calculations needed grow combinatorially.

But a quantum computer should be able to grid-search large parameter spaces easily, and you could just chain enough of them together to optimize every modeling layer and parameter in your modeling stack. Even if you could only do this rarely, you’d probably find counter-intuitive parameter and hyper-parameter combinations, optimal ones you would never have found without a grid search, which could then help inform future models of this type even without quantum grid searches.

Yes, point taken on the cryptography. Though I will point out that no matter the method, you know the answer is a binary string of known length, and you can check any candidate for validity. So some method may always exist to crack any form of encryption that keeps using the same key. I won’t pretend to understand the details of the math, other than to say that in the case of crypto, other agents use an algorithm to validate a transaction signed by an account, so you could build a computer that tests candidate keys until it finds one that validates. If the computer could somehow try many combinations of bits simultaneously, you could do this no matter the algorithm. (I understand that present theories of quantum computing are nowhere near this general-purpose.)

As for grid solvers: consider this. An AI serving as an engineer is essentially trying a grid of potential parts from a catalog and simulating each resulting design against a problem. Human engineers can’t consider all that many combinations of components (or ideas that essentially procedurally generate a custom part, like “use a planetary gearset sized for this load here”).

AI agents that grid search this way could find designs that are actually as optimal as the simulation used for validation will allow.

This is a very interesting idea, but I think it’s along the lines of “if you could model every atom in the room/state/universe, you could predict which roulette number the ball will land in every time.”

The numbers may theoretically be finite and iterable, but it’s probably one of those “every atom in the universe would need to be part of the computation, and even then it might take a few billion years” sort of things.

It’s my understanding that Shor’s algorithm works by defining a periodic function built from your number-to-be-factored (modular exponentiation), using quantum superposition to evaluate that function over the whole input range at once, extracting the period with a quantum Fourier transform, and then computing the factors classically from that period.
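The classical bookkeeping around that can be sketched in a few lines. Here the period-finding step, which is the quantum computer’s actual job, is simply brute-forced on a tiny example (N = 15, a = 7):

```python
from math import gcd

# Classical skeleton of Shor's algorithm for N = 15 with base a = 7.
# A quantum computer is only needed for the period-finding step, which
# we brute-force here to show how the period yields the factors.
N, a = 15, 7

# Find the period r of f(x) = a**x mod N: the smallest r > 0 with
# a**r == 1 (mod N).
r = 1
while pow(a, r, N) != 1:
    r += 1
print("period:", r)  # 4

# For even r (and a bit of luck), gcd(a**(r//2) +/- 1, N) gives
# nontrivial factors of N.
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print("factors:", p, q)  # 3 5
```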

I wish I understood the specific implementations of quantum algorithms better to know specifically how and why your “any string of bits is iterable” idea would not be implementable, but I think it can’t be, because otherwise there’d be literally no limit on what you could do. The collection of “important” information that’s really just a series of bits is literally endless - you could define yourself as a sufficiently long, well-defined series of bits if it comes down to it, not to mention bank logins, nuclear launch codes, and best-selling novels, among many other things, all of which I doubt quantum computers will be defining / hacking / churning out any time soon.

I would bet on something like this “any string of bits” approach not being amenable to superposition or periodicity, but I don’t know enough of the details to say why, or even whether the periodicity in Shor’s algorithm is core to its quantum implementability, or whether other whole-function-at-once properties would work just as well (like min/max or continuity or what have you).

Hey! I love this idea. I mean, there are two big “ifs”, but they may be solvable. As long as our simulations are sufficiently robust and predictive of real-world behavior, and as long as the iteration of the simulations is amenable to superposition with periodicity as a key measurement, it may be a great application. I’m not too versed in the field of mechanical engineering, but I’d imagine we have pretty good simulation capabilities there already.

A similar idea occurs to me in microbio engineering - it’s another of those combinatoric-explosion fields, where exploring the full protein-knockout space of even relatively simple constructs requires more CPU power and physical iterations than are currently possible, so the solution space has to be greatly constrained by expert guesses before trying anything. So just like above, first we would need robustly predictive simulation capability, and then to make the iteration of such simulations amenable to periodicity and superposition being key to the solutions.

If we had this ability, we would get MUCH better at biological editing and constructs, and it could be the factor that sets off an explosion of targeted and predictable gengineering.

Regarding general solvers : the difference between using a quantum computer to guess what nuclear launch code a human picked and guessing a private key is that in the nuclear launch case you don’t have the data without a perfect molecular scan of the human picking the key at the moment they did it. The information isn’t available. With a private key, you have all of the information you need. You have a way to exactly test and verify any key you want to try.

Regarding simulations: the simulation would probably have to be a machine combining a classical model with a neural network that reduces the errors between the classical model’s outputs and observed states, and it would need to output not just the predicted state but a measure of the simulation’s variance in that regime. So the metric used to evaluate designs would have to take into account both predicted performance and simulation variance. (You want high performance and low variance, in some weighted combination.)
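That weighted combination might look like this (the weight and the candidate numbers are invented for illustration):

```python
# Score a candidate design by predicted performance penalised by the
# simulation's variance in that regime; the weight lam is an invented
# tuning knob, not anything standard.
def design_score(predicted_performance, simulation_variance, lam=2.0):
    return predicted_performance - lam * simulation_variance

candidates = {
    "design_a": (0.90, 0.20),  # high performance, poorly characterised regime
    "design_b": (0.80, 0.02),  # slightly lower performance, low variance
}
best = max(candidates, key=lambda name: design_score(*candidates[name]))
print(best)  # design_b wins: 0.80 - 2.0*0.02 > 0.90 - 2.0*0.20
```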

Obviously you then have robots build the design, measure the actual performance, and run the simulation frame by frame, updating it with the errors between predicted and actual outcomes.

Then usually you have the robots recycle the prototype and you run your quantum solver again with the results from this attempt.

Eventually you will get to optimal designs that work in reality.

You don’t need to know every molecule in the casino to predict the roulette ball. Computers that do exactly that have existed for years, with inputs so simple the operator can enter them just by tapping a foot. Of course, you’ll also be jailed if you’re caught using one in Vegas, and the casinos have gotten very good at catching people.

For weather, that swarm of mosquito-drones would make a lot more difference than the quantum computer behind them. It’s not the calculations themselves that are hard (at least, not compared to the capabilities of modern computers); it’s getting enough data to feed into them.

Ah, got it. I wasn’t getting that you were making an asymmetric-key cryptography argument instead of a general-solution argument.

But on key exchange, I think there are already quantum-resistant methods with perfect forward secrecy, so that even if someone broke the private key, your session keys still protect your other communications / sessions. On looking it up, SIDH fits that bill. But maybe I’m misunderstanding your implementation.

Very interesting - so it sounds like you’re proposing a 5-layer implementation here:

1. Simulation-space modeling
2. Meta-model variance / accuracy tracking model
3. Physical implementation of the most promising output
4. Physical testing of what you just built for the desired properties
5. Outputs in step 4 feed back to step 2 to train the model for better scoring

The hard part there seems to be automating steps 3 and 4 except for very specific and constrained solution spaces, but I get it, and that sounds awesome. I’d like much the same thing for the microbio application.

And I’m assuming the quantum evaluation happens in step 2?

You know, now that you say that, I think I remember hearing about somebody using a shoe-computer to do exactly that. So that was a particularly bad example.

For some reason I was thinking it was like billiards, about which I have a dimly remembered factoid along the lines of “if you wanted to calculate the precise trajectory of a 20-bounce billiards shot, you’d need every atom in the universe working as a computer.”

I guess I had the notion that roulette should be similarly sensitive to initial conditions and perturbations, but it’s likely not as complex. Or I could be wrong about the billiards factoid too!

But weather. Weather was a good example of a non-tractable problem, even if you had nigh-infinite quantum computing power to throw at it, due to high sensitivity to initial conditions and successively increasing uncertainty / measurement gaps.