I agree that “floating point” is a bit of a misnomer.
Fixed point is a bit of a misnomer too. It’s appropriate for any variable that always has the same precision (such as the inputs and outputs of a calculation), but intermediate variables and registers change scale frequently in the course of a calculation. (Of course, to a mathematician, that register or variable is a conceptually different thing at each stage of the calculation. As programmers, we get a bit obsessed with the cups, whereas the mathematician appropriately thinks only of the contents, as they change from hot water to tea to sweetened tea. Now where’s the cream?)
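To make that concrete, here’s a small C sketch using a made-up Q8.8 format (8 integer bits, 8 fractional bits; the type and macro names are my invention). The point “moves” the instant you multiply, and the programmer has to put it back:

```c
#include <stdint.h>

/* Hypothetical Q8.8 format: 8 integer bits, 8 fractional bits,
   so 1.0 is stored as the raw integer 256. */
typedef int16_t q8_8;
#define Q8_8_ONE 256

/* Multiply two Q8.8 numbers. The raw product of two Q8.8 values is
   a Q16.16 value (the "fixed" point has moved), so it must be
   shifted back down to Q8.8 before it can be stored. */
q8_8 q8_8_mul(q8_8 a, q8_8 b) {
    int32_t wide = (int32_t)a * b;  /* Q16.16 intermediate */
    return (q8_8)(wide >> 8);       /* rescale to Q8.8 */
}
```

So q8_8_mul(640, 384) (that is, 2.5 times 1.5) returns 960, the Q8.8 encoding of 3.75. None of that rescaling shows up in the variable types; it’s all bookkeeping in the programmer’s head.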
I did a bit of fixed-point coding in assembly, for auto engine control, on 8- and 16-bit computers. A serious PITA! But mostly a bookkeeping nuisance, with few conceptual surprises. The conceptual surprises come when you expect a compiler to do it for you. I was involved in a project where a new compiler handled fixed point for you, something vaguely like COBOL or PL/I: you define the variables, and the compiler chooses the precision and range for intermediate results.
Well, it turns out you get to choose between nice tight semantics that produce truly bizarre-looking results from seemingly ordinary math, and looser semantics that usually produce “sensible” results but have bizarre corner cases. What we often don’t realize when we write a math expression is how much we’re demanding of intermediate storage for temporary results.
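A toy C example of what I mean (the values and function names are invented): the final result of a * b / c fits comfortably in 16 bits, but only if the compiler, or the programmer, gives the temporary product 32 bits to live in:

```c
#include <stdint.h>

/* Invented example: result = a * b / c on scaled 16-bit values.
   The final quotient fits in an int16_t; the intermediate product
   does not. */

/* Naive version: the product is squeezed back into 16 bits before
   the divide, silently discarding the high bits, so the quotient
   comes out as garbage. */
int16_t scale_naive(int16_t a, int16_t b, int16_t c) {
    int16_t product = (int16_t)(a * b);  /* truncated to 16 bits */
    return (int16_t)(product / c);
}

/* Careful version: the temporary result gets a wider register. */
int16_t scale_wide(int16_t a, int16_t b, int16_t c) {
    int32_t product = (int32_t)a * b;    /* full 32-bit intermediate */
    return (int16_t)(product / c);
}
```

With a = 30000, b = 3, c = 5, the true answer 18000 fits in an int16_t just fine, but scale_naive gets it wrong because the temporary value 90000 never fit anywhere. That’s the kind of demand an innocent-looking expression quietly makes.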
A similar kind of problem happens with floating point, too. For example, in some digital audio applications, the mathematician can write out matrix equations that solve a problem (implement a filter, say). The math can be relatively elegant and, in theory, produce ideal results, but when implemented in a straightforward way, the results don’t match. The reason is that the analytic equations might rely on an intermediate term in which two near-infinites, or two near-infinitesimals, divide to produce a number in the normal range. If you don’t have enough bits, it goes south very quickly.
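A stripped-down C illustration of that failure shape (not an actual audio filter, just the skeleton of the problem): the quotient is a perfectly ordinary number, but the intermediate product blows past single precision’s range, and no amount of dividing brings infinity back:

```c
#include <math.h>

/* Not a real filter, just the failure shape: a huge intermediate
   product whose quotient is an ordinary, representable number. */

/* Single precision: a * b overflows to infinity before the divide. */
float ratio_float(float a, float b, float c) {
    return (a * b) / c;
}

/* Double precision: the wider exponent range absorbs the
   intermediate, and the ordinary answer survives. */
double ratio_double(double a, double b, double c) {
    return (a * b) / c;
}
```

With a = b = c = 2^100 (written 0x1p100 in C99 hex-float notation), the true quotient is simply 2^100 again, which single precision can represent, but ratio_float returns infinity because the intermediate 2^200 cannot be represented. ratio_double returns 2^100 exactly.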
The good news is that floating point (especially double precision, which is the default for things like C function parameters) pushes most of those cases into applications where fairly serious math is involved, so they don’t bite us math-challenged folks in the ass too often. Scientists may not be so lucky.