My dad told me about a computer he worked on in the late 50’s. In lieu of RAM, it had a magnetic drum with a row of heads, and the drum was broken into cells. Programming consisted of a giant paper spreadsheet where you put in operations. Each operation included the cells of the two operands, where the result was to be written, and the cell address of the next instruction. Part of the fun of programming was weighing the rotational delay of the drum against how long the operation took, so as to position the next instruction just far enough along the drum that it did not have to wait too long for that cell to roll around to one of the heads.
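That timing trick (drum machines like the IBM 650 called it “optimum programming”) can be sketched in a few lines. Everything here is assumed for illustration - the drum speed, cell count, and function names are hypothetical, and on the real machines this arithmetic was done by the programmer on paper:

```python
import math

# Hypothetical drum parameters - not any specific model.
DRUM_RPM = 12500          # assumed rotational speed
CELLS_PER_TRACK = 50      # assumed number of cells per track

MS_PER_REV = 60_000 / DRUM_RPM            # one revolution, in milliseconds
MS_PER_CELL = MS_PER_REV / CELLS_PER_TRACK

def best_next_cell(current_cell, op_time_ms):
    """Cell address to place the next instruction so it arrives under a
    head just as the current operation finishes, instead of waiting for
    a near-full revolution."""
    cells_elapsed = op_time_ms / MS_PER_CELL
    # Round up: the drum must have rotated at least this far.
    return (current_cell + math.ceil(cells_elapsed)) % CELLS_PER_TRACK
```

With these made-up numbers, an operation taking 0.5 ms while the instruction sits in cell 0 means the next instruction should go in cell 6; place it in cell 5 and the program stalls almost a full revolution.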
I don’t recall the model number, but the rotational speed was obviously a big deal. Still, it was faster than tape. RAM was virtually non-existent and expensive; on bigger computers, core memory was wired by hand, three wires for each bit.
Somewhere in my basement I have a copy of the first microcomputer plans published by Radio Electronics, using an 8008 processor. It included an add-on board for 256 bytes of RAM built from flip-flops. By 1973 you could buy, IIRC, an 8-flip-flop IC.
I recall reading about one of the first vacuum tube computers in the late 1940’s: grad students kept a shopping cart full of vacuum tubes and would go back and forth replacing any tubes they found burned out, and the calculations were run three times (taking the average) because of the risk of dropped bits.
Since most floating point operations before math co-processor chips involved doing arithmetic on segments of the floating point number, even a simple floating point operation could take a while.
(I.e., a float number might be laid out as first byte-second byte-third byte-fourth byte, with the first byte being the exponent and the rest the mantissa - though in practice it’s more complex than that.) Most computers had an arithmetic module that could receive two numbers (bytes) and do math on them, add or multiply. Also note that “byte” meaning 8 bits is a fairly recent standard.
So if we represent a floating point mantissa as, say, 3 digits, multiply would be 123 x 456: we multiply 1x4, 1x5, 1x6, then 2x4, 2x5, 2x6, etc., allowing for carries, and then add the partial products with offsets, much like grade 4 multiplication. Then, for floating point, reconcile the exponents. For addition you need to offset one operand with respect to the other based on the exponents. After each full float calculation, the number had to be normalized - scientific notation requires the decimal point to sit after the first significant digit, so shift to get rid of leading zeroes and adjust the exponent. Math modules that could expedite floating point operations, to 16 or more bits at a time, were valuable but complex logic.
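The grade-school multiply and the normalization step above can be sketched in Python. The digit-list representation and the function names are mine, purely for illustration - real hardware did this a word or nibble at a time in logic, not in software:

```python
# Toy decimal float: value = digits * 10**exp, with digits stored
# most-significant first, e.g. 123 -> [1, 2, 3].

def mul_digits(a, b):
    """Multiply two digit lists the grade-4 way: every digit of a times
    every digit of b, summed at the right offset, then carries propagated."""
    result = [0] * (len(a) + len(b))
    for i in range(len(a) - 1, -1, -1):
        for j in range(len(b) - 1, -1, -1):
            result[i + j + 1] += a[i] * b[j]
    # Propagate carries right to left.
    for k in range(len(result) - 1, 0, -1):
        result[k - 1] += result[k] // 10
        result[k] %= 10
    return result

def normalize(digits, exp):
    """Shift off leading zeroes and adjust the exponent, so the most
    significant digit is non-zero (the 'normalize' step after each op)."""
    while len(digits) > 1 and digits[0] == 0:
        digits = digits[1:]
        exp -= 1
    return digits, exp
```

For 123 x 456, `mul_digits([1, 2, 3], [4, 5, 6])` produces `[0, 5, 6, 0, 8, 8]` (i.e. 056088), and `normalize` then drops the leading zero and bumps the exponent down by one - nine single-digit multiplies plus carry handling for one three-digit product, which is why this was slow.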
There’s a whole field of computer math devoted to designing algorithms that keep rounding errors from creeping in and becoming significant. But you can see that one simple floating point operation (science’s favourite) involves a large number of byte-by-byte and logic operations. Plus, with the variety of technology in the 50’s, there’s no simple answer as to speed.
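Those rounding errors are easy to demonstrate even today, since binary floats still can’t represent most decimal fractions exactly:

```python
# 0.1 has no exact binary representation, so repeated addition drifts.
total = sum([0.1] * 3)
print(total)          # 0.30000000000000004
print(total == 0.3)   # False
```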
(IBM with the 360 series, for example, had decimal instructions where each byte held 2 decimal digits, and an arbitrary-length string of bytes could be used to do decimal math - ideal for business calculations, avoiding those pesky decimal-to-binary rounding errors that could lose a few cents in translation.)
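The two-digits-per-byte idea can be sketched in Python. This is just an illustration of packed decimal, not the 360’s actual format (which, among other things, packs a sign nibble into the low-order byte), and all the function names are mine:

```python
def to_packed(n, nbytes):
    """Encode a non-negative integer as nbytes of packed decimal:
    two decimal digits per byte, one per nibble."""
    out = bytearray(nbytes)
    for i in range(nbytes - 1, -1, -1):
        out[i] = (n % 10) | ((n // 10 % 10) << 4)
        n //= 100
    return bytes(out)

def from_packed(b):
    """Decode packed decimal back to an integer."""
    n = 0
    for byte in b:
        n = n * 100 + (byte >> 4) * 10 + (byte & 0x0F)
    return n

def add_packed(a, b):
    """Add two equal-length packed operands nibble by nibble,
    carrying in decimal - no binary conversion, so no binary rounding."""
    out = bytearray(len(a))
    carry = 0
    for i in range(len(a) - 1, -1, -1):
        lo = (a[i] & 0x0F) + (b[i] & 0x0F) + carry
        carry, lo = lo // 10, lo % 10
        hi = (a[i] >> 4) + (b[i] >> 4) + carry
        carry, hi = hi // 10, hi % 10
        out[i] = (hi << 4) | lo
    return bytes(out)
```

So $12.34 + $9.99, held as the digit strings 1234 and 0999 in cents, comes out as exactly 2233 - no binary conversion ever happens, which is the whole appeal for money.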