Number of qubits vs. size of integer that can be factorized using Shor's algorithm

I recently read in New Scientist an interesting purported fact: each time you add a qubit to your quantum computer, you double its computational capacity. This is somewhat analogous to Moore's law, which describes the doubling rate of transistor density, except that we don't know how frequently we can expect to add qubits to our quantum computer. Let's make the naive assumption that each qubit is exponentially harder to add than the last, and that at the same time our ability to add qubits improves exponentially, so the two effects cancel and we manage to add one qubit per year. Now we just need some characterization of the growth of our computing power in terms we are familiar with; I suggest the size of the integer that could be factorized that year using Shor's algorithm.

Unfortunately, I have no idea how to produce such a plot. I know I've seen some quantum gurus around here, so perhaps someone can shed some "light" on the issue :slight_smile:

If you only have n qubits, you can't factor an integer larger than 2[sup]n[/sup], since n qubits can't even represent a larger number. There may be other restrictions that reduce what you can actually do, but that's a hard upper bound.
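
If you take that 2[sup]n[/sup] ceiling at face value and combine it with the OP's one-qubit-per-year assumption, the requested plot is easy to sketch. Here's a rough Python/matplotlib version; the start year and qubit range are arbitrary choices of mine, and since a real implementation of Shor's algorithm needs several times n qubits (plus error correction) to factor an n-bit number, treat the curve as an optimistic ceiling rather than a prediction.

[code]
import matplotlib.pyplot as plt

start_year = 2025            # arbitrary starting point (my assumption)
qubits = list(range(1, 51))  # assume we go from 1 to 50 qubits, one added per year
years = [start_year + n - 1 for n in qubits]
ceiling = [2 ** n for n in qubits]  # hard upper bound: n qubits can't even hold a larger integer

plt.semilogy(years, ceiling)
plt.xlabel("Year (one qubit added per year)")
plt.ylabel("Largest integer that could possibly be factored")
plt.title("Naive ceiling: n qubits means at most 2^n")
plt.show()
[/code]

On a log scale it's just a straight line, which is really the whole story: one extra qubit per year means one more doubling per year.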

I see no reason why the difficulty of adding more qubits should increase faster than linearly, but even eight qubits is hard to achieve in practice. As far as I know, 15 is the largest number that has been factored using a quantum computer (it turned out to be 3*5).
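
For what it's worth, the only part of Shor's algorithm that actually needs the quantum computer is finding the order r of some base a modulo N; the rest is classical. Here's a tiny sketch for N = 15 that brute-forces the order classically instead of using quantum hardware (a = 7 is just my choice of base), then does the standard classical post-processing to pull out the factors:

[code]
from math import gcd

N, a = 15, 7  # a = 7 is an arbitrary base coprime to 15 (my choice)

# Quantum step in real Shor: find the order r of a mod N,
# i.e. the smallest r > 0 with a^r = 1 (mod N). Brute-forced here.
r = 1
while pow(a, r, N) != 1:
    r += 1

# Classical post-processing: if r is even and a^(r/2) != -1 (mod N),
# then gcd(a^(r/2) +/- 1, N) gives nontrivial factors of N.
x = pow(a, r // 2, N)
print(r, gcd(x - 1, N), gcd(x + 1, N))  # prints: 4 3 5
[/code]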