How would software for super-fast computers work?

Just wanted to add that I actually wrote such a program, or very nearly, in GW-BASIC on my old PC-XT maybe fourteen years ago. I think mine differed only in that it could factor an entire series of numbers at once, and in that it reported much more factoring information than just the smallest factor.

I wasn’t using it for factoring particularly large numbers, actually, so much as for finding small numbers that had large numbers of factors. I was on something of an odd kick for numbers that could be split as many ways as possible.

Yes, I’m a math geek, we should all know this by now. :smiley:
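
For the curious, here’s roughly what that divisor-counting idea looks like, sketched in Python rather than the original GW-BASIC. The function names and the 10,000 cutoff are mine, not the original program’s:

```python
def divisor_count(n):
    """Count the divisors of n by trial division up to sqrt(n)."""
    count = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            count += 2          # d and n // d are both divisors
            if d * d == n:
                count -= 1      # perfect square: don't count sqrt(n) twice
        d += 1
    return count

# Scan a whole series of numbers and report the "record holders":
# numbers with more divisors than anything smaller.
best = 0
for n in range(1, 10001):
    c = divisor_count(n)
    if c > best:
        best = c
        print(n, "has", c, "divisors")
```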

There is certainly no problem writing software to use the available power. As a number of people have pointed out, a large number of simple but very important problems are “NP-complete”, as opposed to being in “P”. If a problem is in P, the length of time (and/or computing power) required to solve it grows only polynomially with the size of the input. For example, the Traveling Salesman problem:

If you don’t need the perfect solution (so your answer is not guaranteed to be the smallest possible distance, but will be close), the best approximation algorithm solves the problem in n^3 time (n × n × n). This is still non-trivial (ideally you want an algorithm that runs in n or n·log(n) time): it means that if your computer can solve for 100 cities in 0.0001 seconds, it would take a couple of minutes to solve for 10,000 cities and over 3 years for a million. But if you DO need the absolute correct solution, then the problem is NP-complete and the known algorithms all take exponential time, roughly c^n (the best exact algorithm takes on the order of n^2 · 2^n time). So solving it exactly for large numbers of cities is effectively impossible: the numbers rapidly get so large that all the computers ever made, working for the lifetime of the universe, would not solve the problem.
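
To make the contrast concrete, here’s a minimal Python sketch of both approaches. The heuristic below is plain nearest-neighbour (simpler and cruder than the n^3 method described above), and the exact solver is naive brute force rather than the n^2 · 2^n algorithm; all names are illustrative:

```python
import itertools, math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(cities, order):
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def exact_tsp(cities):
    """Brute force: tries all (n-1)! tours. Hopeless beyond ~12 cities."""
    n = len(cities)
    best = min(itertools.permutations(range(1, n)),
               key=lambda rest: tour_length(cities, (0,) + rest))
    return tour_length(cities, (0,) + best)

def greedy_tsp(cities):
    """Nearest-neighbour heuristic: O(n^2), close but not optimal."""
    unvisited = set(range(1, len(cities)))
    order = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[order[-1]], cities[c]))
        order.append(nxt)
        unvisited.remove(nxt)
    return tour_length(cities, order)

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(9)]
print("exact:", exact_tsp(cities), " greedy:", greedy_tsp(cities))
```

Even at nine cities the brute-force search is already checking 40,320 tours; add a tenth city and it checks 362,880.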

An additional complication is parallelism. Supercomputers (and increasingly normal PCs) achieve high performance by having lots of chips doing stuff at once, and this makes it harder to write software for them. For example, a lot of algorithms rely on having the result of one step before doing the next one. If you had 32 processors working on an algorithm like this, then unless you could find some way of dividing the problem up amongst them (which for a lot of problems is very hard), 31 of them would stand idle, so you would only get 1/32nd of your theoretical computational throughput.
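
Here’s a toy Python sketch of the difference, assuming a pool of worker processes; the workload is a made-up stand-in for an expensive step:

```python
from concurrent.futures import ProcessPoolExecutor

def slow_square(x):
    return x * x            # stand-in for an expensive computation

def independent_work(data):
    # Each element is independent, so 32 workers really can share the load.
    with ProcessPoolExecutor(max_workers=32) as pool:
        return list(pool.map(slow_square, data))

def dependent_work(data):
    # Each step needs the previous result, so extra processors sit idle:
    # no amount of hardware parallelism helps without restructuring.
    acc = 0
    for x in data:
        acc = slow_square(acc + x)
    return acc

if __name__ == "__main__":
    data = list(range(100))
    print(independent_work(data)[:5])
    print(dependent_work(data))
```

The first version scales with the number of workers; the second is stuck at single-processor speed no matter how many chips you throw at it.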

As for quantum computers, all bets are off: whatever we’ve learnt about programming conventional computers will likely be useless. Strictly speaking, nobody has shown that they can crack NP-complete problems, but for certain problems (factoring large numbers, via Shor’s algorithm) they could theoretically solve in moments something that mathematically should take a conventional machine the lifetime of the universe.

Sorry, fooled by your use of instructions. Application programs are simple too, by that metric. I do believe we’re getting to the nematode level, though.

I was delighted to discover that 11,111,111,111 has just two prime factors: 21,649 and 513,239, thanks to a simple REXX exec. I doubt I’d have stumbled across it otherwise. :slight_smile:
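
For anyone who wants to reproduce it, here’s a trial-division sketch in Python, standing in for the original REXX exec:

```python
def prime_factors(n):
    """Return the prime factorisation of n as a list of primes."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:       # divide out each prime as often as it fits
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                   # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(11_111_111_111))   # [21649, 513239]
```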