Which computer has made the most calculations?

I guess I mean individual supercomputers, but I don’t really know that much about the topic. Basically, computers are getting faster all the time, but some slower computers have been running continuously for decades.

So has the latest supercomputer made more calculations than any other because it’s so much more advanced, or has an older but advanced-for-its-time supercomputer made more calculations because it’s been operating longer?

If that makes any sense, I’ve just finished a night-shift.

I am not sure this can be answered but I am wondering if there is a difference in what a “calculation” is to a computer.

By that I mean is…

1 + 1 + 1 + 1 + 1 + 1 + 1 + …

…the same calculation difficulty for a computer as:

1 * 2 * 3 * 4 * 5 * 6 * 7 * 8 * …

Which computer has done more calculations?

The computing power of the cutting-edge supercomputers, in terms of operations per unit of time, grows exponentially (see, for instance, one chart here), but the advantage of older computers of having been running for a longer period grows only linearly, of course. So I would guess that the answer must be one of the most recent leading supercomputers, not an older one.

Supercomputers have tended to go in big jumps at the top end as the big-ticket funding comes in.

The answer to the OP’s question is probably a tie between Fugaku, which has been running for about two years, and Summit, which is four years old and about half the speed.

Ages ago there was an amusing paper from the computational physics community that considered the following: suppose you get a multi-year grant (5 years, I think) to buy a supercomputer to calculate some important result. What is the best tactic to get the result earliest? The answer was to sit on the money for three years, then buy the computer, and it would finish about four months before a computer bought on day one and running for five years.

Sadly I don’t have a reference to it anymore. Hopefully my recollection of the details is not too badly wrong.
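If I reconstruct the argument, it’s just exponential speed growth versus lost running time. Here’s a rough sketch in Python, with my own made-up parameters (a job worth five day-one-machine-years of work, and hardware that doubles in speed every 18 months), which may well not match whatever the paper actually assumed:

```python
def finish_time(delay_years, job_size=5.0, doubling_years=1.5):
    """Years (from the start of the grant) until the job finishes if you
    wait `delay_years`, then buy a machine that is
    2 ** (delay_years / doubling_years) times faster than a day-one machine.
    `job_size` is the work needed, measured in day-one-machine-years."""
    speedup = 2 ** (delay_years / doubling_years)
    return delay_years + job_size / speedup

# Buying on day one: the job takes the full five years.
print(round(finish_time(0.0), 2))                # 5.0

# Waiting a while before buying finishes earlier, even though
# the machine gets less running time overall.
for delay in (1.0, 2.0, 3.0):
    print(delay, round(finish_time(delay), 2))   # 4.15, 3.98, 4.25
```

With those invented parameters the sweet spot is nearer two years of waiting than three, and the saving is bigger than four months, so either my numbers or my memory of the paper is off; the shape of the trade-off is the same, though.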

That’s an interesting one.

It’s probably one built pretty recently. Based on this graph:

The currently fastest computer in the world can do 1.1 exaFLOPS; 10 years ago the record was 17.59 petaFLOPS. So even if the 2012 machine had been running for all of those ten years, the 2022 one would still have caught it up in a month or so (if my maths is right).
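Rough check of that, assuming both machines run flat out at their headline Linpack rates the whole time (which of course they don’t):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

old_rate = 17.59e15   # 2012: 17.59 petaFLOPS
new_rate = 1.1e18     # 2022: 1.1 exaFLOPS

# Total operations the 2012 machine racks up in ten years of non-stop running.
head_start = old_rate * 10 * SECONDS_PER_YEAR

# Time the 2022 machine needs to do the same amount of work.
catch_up_seconds = head_start / new_rate
print(catch_up_seconds / 86400, "days")   # roughly 58 days
```

So closer to two months than one, but the point stands.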

One complication is that a lot of the current “biggest computers” aren’t precisely a single computer, but a whole bunch of computers networked together. And in some cases, it’s not always the same set of computers networked together: Some of those computers might sometimes be networked with other computers, or used individually.

Sounds like this:

…they built a “stupendous super computer which was so amazingly intelligent that even before the data banks had been connected up it had started from ‘I think therefore I am’ and got as far as the existence of rice pudding and income tax before anyone managed to turn it off…”

Not a hardware guy but my understanding is that a CPU has to run a computation every cycle, even if it’s just a ‘no-op’.

Some operations take multiple cycles. For example, an add might take one cycle while a multiply might take five. In that case, you can perform more calculations in the same number of cycles doing additions than multiplications. But, likewise, you get the same effect doing nothing, since the computer still needs to do something every cycle and a no-op is guaranteed to be a single-cycle operation.

How many cycles there are per second is just the clock speed (the megahertz or gigahertz figure) they advertise for CPUs. If it’s a multi-core processor, then it’s that rate times however many cores there are.

If you add on a graphics card, you will have more hardware performing calculations; however, I’m not sure whether they’re actively processing no-ops when not being used (?).

But, anyway, the answer would be that it’s the computer with the most and fastest processors that sits unused the most often, because no-ops are guaranteed to be a 1-cycle calculation.

Wikipedia has a list of FLOPs/cycle for various processors, and there are benchmarks you can run to estimate actual performance.
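If you want a rough theoretical peak, you just multiply the pieces together. A quick sketch (the chip here is invented, not any particular processor):

```python
def peak_flops(cores, clock_hz, flops_per_cycle):
    """Theoretical peak: every core retiring its maximum number of
    floating-point operations on every clock cycle."""
    return cores * clock_hz * flops_per_cycle

# e.g. a hypothetical 16-core CPU at 3 GHz doing 16 FLOPs/cycle per core
print(peak_flops(16, 3.0e9, 16) / 1e9, "GFLOPS")   # 768 GFLOPS
```

Real benchmark runs come in well under that, which is why the measured Linpack number is always lower than the theoretical peak.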

Not sure it’s worth the money to let the computer sit unused :slight_smile:

Hitchhiker’s Guide?

Yep! (Extra characters for nanny bot)

Defining calculations is tough. Is a matrix multiply one or many if there is hardware support for it?

One good way would be to look at the electric bill. A suggestion only slightly in jest.
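To put a number on the matrix-multiply question: the usual bookkeeping charges a dense n×n multiply about 2n³ floating-point operations, however the hardware actually carries them out. A rough sketch of that counting:

```python
def matmul_flops(n):
    """Nominal FLOP count for a dense n x n matrix multiply:
    n*n result entries, each needing n multiplies and n-1 adds."""
    return n * n * (2 * n - 1)

# One "matrix multiply" of modest size is already billions of operations.
print(matmul_flops(1000))   # 1,999,000,000
```

So whether you call that one calculation or two billion changes the tally by nine orders of magnitude, which is why the question is hard to pin down.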

For very old (or very simple) computers, that was true. Modern processors have “clock gating”, which means that idle parts are not doing anything, even a no-op; and in some cases “power gating”, where the power is completely turned off for that part of the die. Power gating is more efficient than clock gating, but takes longer for that part to get going again, so in practice a hybrid is used. Both are much better than nothing, though.

Supercomputers in the Top 500 are rated by their performance on a standard benchmark, one that is at least vaguely relevant to supercomputers: solving a very large system of linear equations using Linpack (a standard linear algebra package). Despite rumblings some time ago about an updated metric, this remains the Top 500 score.

In reality the big supercomputers are built to address the problem set they are expected to be used on. You will see quite a variety of different hardware: some add GPUs, some FPGAs, different communication hardware, and some custom hardware. For many systems, none of this helps with getting a better Linpack score. In fact, since you are allowed to tune the problem size to match the system, you can pick a problem size where interconnect speed doesn’t matter. You can create a custom BLAS library, so you might get to make use of acceleration hardware.
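For what it’s worth, the Linpack score itself is computed from a nominal operation count for solving the system, roughly 2/3·n³ + 2·n², divided by the wall-clock time, rather than from anything the hardware reports. A sketch of that accounting (the problem size and runtime here are invented for illustration):

```python
def hpl_score(n, runtime_seconds):
    """Reported Linpack performance: nominal FLOPs for solving an n x n
    dense system (LU factorisation ~ 2/3 n^3, triangular solves ~ 2 n^2),
    divided by wall-clock time."""
    nominal_flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return nominal_flops / runtime_seconds

# Hypothetical run: n = 10 million unknowns solved in one hour.
print(hpl_score(10_000_000, 3600) / 1e15, "PFLOPS")   # roughly 185 PFLOPS
```

Tuning n is exactly the knob mentioned above: a bigger problem keeps the floating-point units busy relative to the interconnect.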

But how the system is used in practice varies a great deal. Some supercomputers are funded on the back of a bid by a consortium of different researchers, simply because a bid for a big prestigious system is more likely to succeed than a bunch of bids for individual small systems. Then the system spends its life running as a set of partitions, only ever running as a single unit to run the test code to get a Top 500 ranking. Others may be built for a specific purpose and spend their lives running mostly as a single big system, or at least a few big partitions.

Does anyone know how distributed computing figures in this? Things like SETI@home or Folding@home, where they get millions of people to donate some computing power?

Folding@home is one of the world’s fastest computing systems. With heightened interest in the project as a result of the COVID-19 pandemic,[8] the system achieved a speed of approximately 1.22 exaflops by late March 2020 and reached 2.43 exaflops by April 12, 2020,[9] making it the world’s first exaflop computing system. This level of performance from its large-scale computing network has allowed researchers to run computationally costly atomic-level simulations of protein folding thousands of times longer than formerly achieved. Since its launch on October 1, 2000, Folding@home was involved in the production of 226 scientific research papers.[10] Results from the project’s simulations agree well with experiments.[11][12][13] SOURCE

I vaguely recall some lighthearted discussion about this years ago. If you want a place in the Top 500, you need to run the anointed benchmark. That is a trifle hard for Folding@home et al.
These activities are more a computational resource than any sort of supercomputer. The worldwide network of smartphones is probably even more capable.

Mind you, there are more than a few machines in the top end of the list that only ever functioned as a true supercomputer when they ran the Linpack benchmark.

Is the distinction important if we want to know what has done the most calculations?

It would depend on whether you count them all as one computer or several. If one computer, then maybe it’s in the running. But I think most people would count them as separate computers that are working on the same task in parallel.

But it’s up to the OP to clarify whether multi-device distributed computing counts as a single computer.

I don’t really know enough about the subject to comment either way; it’s a really interesting discussion, though! I guess I was considering a single computer, but that’s apparently somewhat nebulous anyway.