Which computer has made the most calculations?

Personally (and the OP may have different notions), I would count it as whichever single system has done the most calculations.

If we nitpick distributed systems, then why not nitpick single-processor output versus multiprocessor output? Every supercomputer is many processors. Why does it matter whether they are scattered around the world or in one room?

The usual logic would be that a processor is not by itself a computer, but merely a component of one. The processors in a supercomputer still have shared storage, shared memory, shared components, etc.

At least, that is how the term “computer” tends to be used in common language. People don’t tend to refer to multi-computer networks as a single computer. Hence the term “distributed computing” to describe them.

That said, I admit the line is fuzzy, which is why I said it is up to the OP to clarify. Though, honestly, I’d be surprised if any widely distributed network was really in the running. They use only a fraction of each physical machine, and they have huge latency issues.

The distinction is important when I (e.g.) want to calculate the eigenvalues of a huge matrix. If it cannot do that, it is not much of a supercomputer. OTOH, there are many “embarrassingly parallel” problems that require a huge number of calculations in total, even if they are not run on a supercomputer.
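To make that concrete, here’s a quick Python sketch (the names and numbers are mine, purely illustrative): the eigenvalue solve is tightly coupled, while counting primes over disjoint ranges splits into chunks that never need to talk to each other.

```python
# Illustrative contrast between a tightly coupled job and an
# embarrassingly parallel one; names and sizes are made up.
import numpy as np
from multiprocessing import Pool

def tightly_coupled(n=500):
    # Eigenvalue solve: every matrix element influences every result,
    # so this wants one fast machine, not a high-latency network.
    a = np.random.rand(n, n)
    return np.linalg.eigvals(a)

def count_primes(bounds):
    # One independent chunk; it never communicates with other chunks.
    lo, hi = bounds
    return sum(all(n % d for d in range(2, int(n**0.5) + 1))
               for n in range(max(lo, 2), hi))

def embarrassingly_parallel():
    # Workers share nothing; results are combined only at the end.
    chunks = [(i * 10_000, (i + 1) * 10_000) for i in range(8)]
    with Pool() as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(len(tightly_coupled()), "eigenvalues")
    print(embarrassingly_parallel(), "primes below 80,000")
```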

My dad bought a 286 computer back when that was state of the art. He had a FORTRAN program that he used to linearize 50x50 matrices. He said a typical run took 48 hours.

A few years later he got a 486DX. The first time, he set the program running, went to make a cup of coffee, and when he got back the program had stopped and the screen was sitting at the prompt. He says he spent an hour trying to diagnose the problem before he looked at the answer file and realized it had completed - in less than 5 minutes. That’s the power of the math coprocessor on the chip.

Computers have only gotten better since then.

Sounds akin to the Wait Calculation for interstellar travel. Not shocking at all that the concept applies across a number of fields.

Then…ignore my post, I suppose. :slight_smile:

But, at least I learned something new.

We had a server ranch (bigger than a farm) to do simulations on, with a few thousand processor boards. (It helps when you make them.) Each processor had its own memory, but not its own disk - disk storage was shared with everyone else. One computer or several?
In grad school, I was in a seminar about multiprocessor systems. One big issue was the interconnect - actual wires back then. Now the same system is on a chip, or on chiplets connected on a substrate, and in a box. Does being in a box make it a computer, while the older systems in big rooms didn’t qualify?
I don’t have much of an answer, but the question may not even make sense. The fastest-computer question is easier, since the entire computer - whatever you call it - is working on one calculation. This one is tougher.

I know it well. If you’re testing, you have to turn off this feature so that chunks of the processor you’re not testing at that second don’t turn off on you. And when a section powers back up, you have to worry about power droop caused by the chip suddenly sucking more power.

I’m going to bet that a networked bunch of processors functioning as “a computer” and mining cryptocurrency is going to be the winner.

Voyager suggested looking at electric bills, which reminded me:

I read recently that cryptocurrency mining is so compute-intensive that the electrical power requirement is a significant burden on the power grid, and that miners are setting up shop in locations where electricity is relatively cheap.

How about a floppy drive in each machine and an intern to swap the disks around? Sneakernets are a thing, after all!

Yeah, power gating was a much more difficult problem to crack than clock gating. Lots of “analog” problems to solve, like the power droop you mentioned (in my experience, usually called “dee I dee T”, that is, dI/dt, or the change in current over time). Still, the rewards were greater, and it’s all about power efficiency these days.

dI/dt is what is happening, but droop is what you see when you look at the power line - which is supposed to be constant, or at least that’s what I learned in logic design 50 years ago. I once proposed in my column that we stop teaching new logic design students about idealized gates with clean voltages coming in, and only show them real waveforms, so they never get the initial idea that there are such things as square waves in the real world.
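Back-of-envelope, with made-up numbers: the droop is mostly the parasitic inductance of the power delivery network resisting the current step, V = L * dI/dt, stacked on top of the ordinary IR drop.

```python
# Back-of-envelope droop estimate; every component value here is made
# up for illustration, not taken from any real power delivery network.
L_pdn = 10e-12   # 10 pH of parasitic inductance in the supply path
R_pdn = 0.5e-3   # 0.5 milliohm of supply resistance
dI = 20.0        # a gated block wakes up: current steps by 20 A
dt = 1e-9        # ...over roughly a nanosecond

inductive_droop = L_pdn * dI / dt   # V = L * dI/dt
resistive_drop = R_pdn * dI         # plain IR drop

print(f"inductive droop: {inductive_droop * 1e3:.0f} mV")  # 200 mV
print(f"IR drop: {resistive_drop * 1e3:.0f} mV")           # 10 mV
```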
Droop is also an issue when you test whether the chip will meet its rated speed. If the test does not create lots of switching around the site of the path you are testing, you can miss some speed paths.
Also around power: part of the test of a processor is to run a speed test and keep dropping the voltage until it fails. You pick the lowest voltage that passes and put it into a ROM - or you blow fuses. The power supply reads this and delivers the lowest voltage it can to your part, which saves power. We also put this information in the 2D barcode on top of the chip, but you can’t see that in the field since it is covered by the heatsink.
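Schematically, the search looks something like this (passes_speed_test is a stand-in hook; a real tester program is far more involved):

```python
# Schematic Vmin search: step the supply voltage down until the speed
# test fails, then keep the lowest passing voltage plus a guardband.
def passes_speed_test(mv: int) -> bool:
    # Stand-in for the real at-speed functional test on the tester.
    return mv >= 830  # pretend this particular part's true Vmin is 0.83 V

def find_vmin(nominal_mv=1000, floor_mv=700, step_mv=10, guardband_mv=20):
    lowest_pass = None
    for mv in range(nominal_mv, floor_mv - 1, -step_mv):
        if not passes_speed_test(mv):
            break               # first failure: stop stepping down
        lowest_pass = mv        # still passing: remember this voltage
    if lowest_pass is None:
        raise RuntimeError("part fails even at nominal voltage")
    return lowest_pass + guardband_mv  # the value that gets fused or ROM'd

print(f"fused Vmin: {find_vmin()} mV")  # 850 mV for this pretend part
```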