How long for the fastest computer to multiply two numbers whose product is a 30-digit number?

COMPUTER QUESTION: How long does it take the fastest current computer to multiply two numbers whose product is a thirty-digit number?

I realize that it would take even the fastest current computer, say, millions of years to find two numbers that multiply to a specific thirty-digit number (i.e., to factor the number). That is the basis of modern cryptography.

But, given the two known numbers that multiply to that thirty-digit number, how long would it take the fastest current computer to do the multiplication?
A second? A minute? An hour? How long?

I would guess on the order of microseconds.

I’m pretty sure it wouldn’t take me an hour to multiply two numbers that resulted in a 30-digit number. I can’t believe it would take a computer any kind of time you could measure with a stopwatch.

More like nanoseconds. Have you tried it in your calculator app? (Or factorizing a 30-digit number, which is not so bad as you seem to think.)

Multiplication of really big integers has a time complexity of roughly n log(n), so hundreds of thousands of digits are not a problem.
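As a quick sanity check of that claim, here is a hypothetical Python benchmark (exact timings will vary by machine): multiplying two random 100,000-digit integers with the built-in arbitrary-precision arithmetic finishes in a small fraction of a second.

```python
# Hypothetical benchmark: multiply two random 100,000-digit integers using
# Python's built-in arbitrary-precision integers. Timings vary by machine.
import random
import time

a = random.randrange(10**99999, 10**100000)   # a random 100,000-digit number
b = random.randrange(10**99999, 10**100000)

start = time.perf_counter()
c = a * b
elapsed = time.perf_counter() - start

# Report the size in bits (avoids converting the huge result to a string).
print(f"product has {c.bit_length()} bits, computed in {elapsed:.4f} s")
```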

To add, that microsecond guess is for a typical modern PC. I’d think a specialized ASIC could do it much much faster.

This code multiplies two 128-bit numbers for a 256-bit result (sketched below).

Estimating the cycles in my head, it looks like a CPU that can do 64 bit multiplies in a few cycles (maybe 5 or so, from here) could do a 128 bit multiply in a few hundred cycles. On a 4GHz machine, that’s a few million multiplies/second.

2^128 is a 39-digit number, which means that optimized 30-digit multiplies would be even faster.
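The code referred to above isn’t reproduced here, but a rough Python sketch of the same idea (hypothetical, not the poster’s code) shows how a 128×128-bit multiply breaks into four 64×64→128-bit partial products, which is essentially what the CPU-level routine has to do:

```python
# Hypothetical sketch: multiply two 128-bit numbers into a 256-bit result
# by splitting each input into two 64-bit "limbs" and combining the four
# 64x64 -> 128-bit partial products.
import random

MASK64 = (1 << 64) - 1

def mul128(a, b):
    """Multiply two 128-bit integers into a 256-bit result using 64-bit limbs."""
    a_lo, a_hi = a & MASK64, a >> 64
    b_lo, b_hi = b & MASK64, b >> 64
    lo_lo = a_lo * b_lo          # contributes at bit 0
    lo_hi = a_lo * b_hi          # contributes at bit 64
    hi_lo = a_hi * b_lo          # contributes at bit 64
    hi_hi = a_hi * b_hi          # contributes at bit 128
    return lo_lo + ((lo_hi + hi_lo) << 64) + (hi_hi << 128)

# Sanity check against Python's built-in bignum multiply.
x, y = random.getrandbits(128), random.getrandbits(128)
assert mul128(x, y) == x * y
```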

A 30 decimal digit number is about 100 bits. Modern 64 bit processors can multiply two 64 bit numbers to get a 128 bit number in a few clock cycles (pipelining makes an exact answer tricky). So a standard 3 GHz desktop computer can do the multiply in something like one nanosecond.

Also, 30 digits (100 bits) is far too small to be a challenge to factor. My old Mac laptop can factor a random 30 digit number in a few milliseconds.
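For anyone who wants to try it, here is one way to reproduce that kind of measurement in Python (a hypothetical example assuming the sympy package is installed; timings and the occasional slow case will vary):

```python
# Hypothetical timing of factoring a random 30-digit number with sympy.
# A typical random number has small factors and finishes in milliseconds;
# the rare product of two large primes will take noticeably longer.
import random
import time
from sympy import factorint

n = random.randrange(10**29, 10**30)
start = time.perf_counter()
factors = factorint(n)
elapsed = time.perf_counter() - start
print(f"{n} = {factors}  ({elapsed * 1000:.1f} ms)")
```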

A 30-digit decimal integer needs about 100 bits to represent it. There are plenty of processors that can perform 128-bit integer arithmetic, and if you want the absolute fastest possible speed, programming an FPGA to perform the multiplication is straightforward.

If we look at almost any Intel x86 processor produced now, they support 64-bit operands. The imulq instruction multiplies two 64-bit integers to form a 128-bit integer. If your original integers both fit inside 64 bits (roughly 19 digits each) you can do what you want with one instruction. Beware, this is not an estimate of the time it would take. Any real code is limited by the time it takes to get the operands to where they can be used (i.e., into registers), and the time to a single answer (latency) is not the same as the throughput (if you have lots of pairs of numbers and can make use of the pipeline). But something on the order of, say, 10 cycles might be a reasonable starting estimate of the time taken. On a top-end processor that is on the order of a few nanoseconds.
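It’s worth spelling out that the OP’s case fits this single-instruction path: two factors whose product has 30 digits are each only around 15 digits, i.e. roughly 50 bits. A quick check in Python, with arbitrary example values (hypothetical, just to show the sizes):

```python
# Two hypothetical 15-digit factors; their 30-digit product is the OP's case.
p = 123456789012345
q = 987654321098765
n = p * q
print(len(str(n)), "digit product")                    # 30 digits
print(p.bit_length(), "and", q.bit_length(), "bits")   # each well under 64 bits
```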

Supercomputers these days aren’t supercomputers because they can do one thing fast. They are supercomputers because they can do a lot of things fast. They are massively parallel.

So a supercomputer can probably multiply a couple of 30-ish digit numbers in about a nanosecond, not much faster than your standard desktop computer. But, a supercomputer can also probably multiply a few million pairs of 30-ish digit numbers in about a nanosecond (total), where a desktop computer is going to need a few million nanoseconds to accomplish the same total work.

To use actual numbers, the Chinese Sunway TaihuLight supercomputer runs at a measly 1.45 GHz. But, where a typical desktop computer these days might have 4 or 8 cores, the Sunway TaihuLight has over 10 million cores. Each individual core won’t be much faster than a typical desktop core, but each core can be working on a different multiplication problem, so it could solve over 10 million multiplication problems, all in about a nanosecond or so, total.

The top 10 supercomputers in the world all have over half a million cores each. That is what makes them fast and powerful, massive parallelism. It’s not that they do each individual multiply, or any other mathematical operation, fast. It’s that they can do so many mathematical operations at once that makes them fast.
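A desktop obviously isn’t a supercomputer, but the same “split the batch across cores” idea can be sketched in Python with the multiprocessing module (a toy illustration, not a benchmark of any particular machine):

```python
# Toy illustration of parallelism: farm a batch of multiplication problems
# out to all available cores. Process start-up and pickling overhead means
# this is nowhere near a nanosecond, but the structure is the same idea.
import random
from multiprocessing import Pool

def multiply_pair(pair):
    a, b = pair
    return a * b

if __name__ == "__main__":
    pairs = [(random.getrandbits(50), random.getrandbits(50))
             for _ in range(1_000_000)]
    with Pool() as pool:
        products = pool.map(multiply_pair, pairs, chunksize=10_000)
    print(len(products), "products computed across all cores")
```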

Ignoring the OP’s ill-chosen example of a 30 decimal digit number, it’s generally true that for large numbers, multiplication is very much faster than factoring. (This is the basis of several cryptographic protocols, including the widely used RSA public key system.) One of the fastest factoring algorithms, the quadratic sieve, runs in time roughly exp(sqrt(n ln n)), where n is the number of digits in the number to be factored. On the other hand, a naive multiplication algorithm is O(n[sup]2[/sup]), and a more sophisticated algorithm, the Karatsuba algorithm, is O(n[sup]1.58[/sup]). In other words, factoring time grows faster than any polynomial in the size of the input (it is sub-exponential, but still super-polynomial), while multiplication is polynomial, so for a sufficiently large input, factoring will be vastly slower than multiplication. Currently, factoring a 600 decimal digit (2000 bit) number is far beyond the capabilities of any existing computer or network of computers, but my old Mac laptop can multiply two 300 digit numbers to get a 600 digit result in less than 15 milliseconds.
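That last measurement is easy to reproduce; here is a hypothetical rerun in Python rather than the poster’s setup (per-multiply time will vary by machine):

```python
# Hypothetical benchmark: average time to multiply two 300-digit integers.
import random
import time

a = random.randrange(10**299, 10**300)
b = random.randrange(10**299, 10**300)

reps = 100_000
start = time.perf_counter()
for _ in range(reps):
    c = a * b
per_multiply = (time.perf_counter() - start) / reps
print(f"{len(str(c))}-digit product, about {per_multiply * 1e6:.2f} microseconds each")
```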

Not only is multiplication polynomial; as already mentioned, good old Schönhage–Strassen multiplication (which is used in practice for reasonably large numbers, say around 10[sup]10[sup]5[/sup][/sup] or more) is O(n log(n) log(log(n))), and one can (in theory) do even better. Karatsuba is good up to around 400-500 digits. It all somewhat depends on the processor architecture.
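For the curious, the Karatsuba trick (replace four half-size multiplies with three) is short enough to sketch directly. This is an illustrative Python version, not what any particular bignum library actually ships:

```python
def karatsuba(x, y):
    """Illustrative Karatsuba multiplication of non-negative integers.

    Three recursive half-size multiplies instead of four, giving the
    O(n^1.58) behaviour mentioned above. Python's built-in * is faster.
    """
    if x < (1 << 64) or y < (1 << 64):
        return x * y                      # small enough to multiply directly
    half = max(x.bit_length(), y.bit_length()) // 2
    x_hi, x_lo = x >> half, x & ((1 << half) - 1)
    y_hi, y_lo = y >> half, y & ((1 << half) - 1)
    hi = karatsuba(x_hi, y_hi)
    lo = karatsuba(x_lo, y_lo)
    mid = karatsuba(x_hi + x_lo, y_hi + y_lo) - hi - lo
    return (hi << (2 * half)) + (mid << half) + lo

# Sanity check against the built-in multiply.
import random
a, b = random.getrandbits(2000), random.getrandbits(2000)
assert karatsuba(a, b) == a * b
```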

ETA: I don’t know the state of the art, but I don’t believe it has been proven that factorizing cannot be done in polynomial time.

No, it has not been proven that there is no polynomial-time factorization algorithm, but none is currently known, despite a great deal of research on the topic.

In fact, aside from trivial things like “print a list of all n-digit numbers”, there isn’t any practical problem at all that’s absolutely known to require greater than polynomial time. But there are a bunch of problems where everyone would be really, really surprised if a polynomial solution were possible. And the prize, if one were to find such an algorithm, is literally all the money in the world.

I suppose one way of looking at it would be that if a problem absolutely requires more than polynomial time (e.g., deciding whether two regular expressions with a squaring operator are equivalent), then by definition it is not a practical problem, at least not for reasonably large inputs.

You must live in a different world of “practical” from me.

I know a lot of provably exponential time problems that would have been really nice if they were polynomial instead. A lot.

For example, Second Order Logic with a Least Fixed Point Operator is “merely” ExpTime.

And that’s a simple one. Presburger Arithmetic is somewhere between double and triple ExpTime.

If you mess around, for example, with any logic powerful enough to be useful for serious AI work, you’re going to be in double-ExpTime territory at least.

It’s so easy to express things in many logics that represent solutions to problems. But then actually solving those formulas is a whole 'nother thing.

A lot of everyday games, when extended to n×n boards, are ExpTime-complete: e.g., Go, Chess, and even Checkers. These just illustrate how quickly you can get into ExpTime territory with alternating quantifiers.

Fun thing to try: come up with a logic for finite models (i.e. databases) that allows all sorts of reasonable queries but can only express queries that are PolyTime. I.e., make it sort of idiot proof in terms of complexity.