Traditional rounding 'bias'. I don't get it.

Manually tracking the position of the decimal point is equivalent to manually tracking what units you’re using. Which is absolutely essential, no matter what numerical representation system you’re using.

I don’t think you’re understanding that additional calculations are required to adjust the integer operands so that the conceptual decimal points are in the correct position for the operation.

A very simple example:
Value #1=1.23
Value #2=4.56789

To add these together using integer data types, you would need to adjust value #1 by multiplying by 1,000 so that both values are expressed in hundred-thousandths.

With decimal/numeric data types the software/hardware handles that for you.
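A quick sketch of that in Python (chosen just for concreteness), with the `decimal` module standing in for a DECIMAL/NUMERIC column type:

```python
from decimal import Decimal

# Manual integer approach: both operands must share a scale before adding.
# Value #1 = 1.23 is held in hundredths (123), value #2 = 4.56789 in
# hundred-thousandths (456789).
a_int, a_scale = 123, 2
b_int, b_scale = 456789, 5

# Adjust value #1 by multiplying by 1,000 so both are hundred-thousandths.
a_adj = a_int * 10 ** (b_scale - a_scale)          # 123000
total_int = a_adj + b_int                           # 579789, i.e. 5.79789

# With a decimal type, the scale alignment is done for you.
total_dec = Decimal("1.23") + Decimal("4.56789")    # Decimal('5.79789')
```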

If you’re tracking the decimal position, you’re using floating point (that’s what “floating” refers to). Fixed point arithmetic has its decimal position hard-coded (that is, “fixed”). To continue an earlier example, 1.2 and 3.4 are stored in tenths-fixed as 12 and 34. Multiplying them goes as x = 12, y = 34, z = (x * y + 5) / 10 = 41. The 5 and 10 are hard-coded for the tenths fixed-point.

Or, for another example, in 100-thousandths, x = 123000, y = 456789, z = (x * y + 50000) / 100000 = 561850, with 50000 and 100000 hard-coded. If you’re storing the decimal place of each variable separately, you’d be doing floating point.
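Here are both worked examples as runnable Python; `//` is integer division, and adding half the scale before dividing is what does the rounding:

```python
# Tenths fixed-point: 1.2 -> 12, 3.4 -> 34; the 5 and 10 are hard-coded.
x, y = 12, 34
z = (x * y + 5) // 10            # 41, i.e. 4.1

# Hundred-thousandths fixed-point: 1.23 -> 123000, 4.56789 -> 456789.
x, y = 123000, 456789
z = (x * y + 50000) // 100000    # 561850, i.e. 5.61850
```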

Now, back to the rounding issue, you’ll notice the pseudo-code I just wrote is implicitly rounding 0.5 up. That may or may not be correct for a particular application. We’d need something more sophisticated to handle rounding halves-to-even.
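One way to do halves-to-even in integer fixed point (a sketch in Python, assuming non-negative operands) is to look at the remainder instead of adding half the divisor up front:

```python
def fixed_mul_half_even(x: int, y: int, scale: int = 10) -> int:
    """Multiply two fixed-point integers sharing `scale`, rounding halves to even."""
    q, r = divmod(x * y, scale)
    # Round up if the remainder is over half, or exactly half and q is odd.
    if 2 * r > scale or (2 * r == scale and q % 2 == 1):
        q += 1
    return q

fixed_mul_half_even(12, 34)    # 41  (4.08 -> 4.1)
fixed_mul_half_even(15, 27)    # 40  (4.05 -> 4.0, the half goes to even)
fixed_mul_half_even(15, 29)    # 44  (4.35 -> 4.4)
```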

We’re talking data types in programming languages and db’s.

INT, LONG
vs
FLOAT, DOUBLE
vs
DECIMAL, NUMERIC, NUMBER

If you are using INT or LONG and manually adjusting before/after every calc, the data type is still INT or LONG; it’s not considered a float just because you are manually adjusting.

The point was that business/accounting software is typically not written using INT, LONG, FLOAT, or DOUBLE for currency and other values that typically have decimal positions, because the DECIMAL/NUMERIC/NUMBER data types exist for that exact purpose and eliminate a significant amount of effort and errors when performing calculations.
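As a rough illustration of the difference in effort, here’s what a typical currency calculation looks like with a decimal type (Python’s `decimal` module used as a stand-in; the values and tax rate are made up):

```python
from decimal import Decimal, ROUND_HALF_EVEN

price = Decimal("19.99")
qty = 3
tax_rate = Decimal("0.0825")

subtotal = price * qty                           # Decimal('59.97'), scale tracked for you
tax = (subtotal * tax_rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
total = subtotal + tax                           # Decimal('64.92')
```

With INT/LONG you’d be doing every one of those scale adjustments and roundings by hand.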

Apparently we’re talking at different levels. I’m referring to fixed point vs floating point. That is, the basic kind of arithmetic being used, irrespective of whether it’s implemented in hardware or software.

Or, perhaps you’re conflating arbitrary-precision (for example, SQL decimal type) with fixed-point.

I have no disagreement with

Absolutely correct that most coders should not be brewing their own implementations. But someone has to decide the scale and precision to support, whether to have fixed and/or float, and then implement a working library on whatever platform is being used. We have to deal with overflow, underflow, and rounding, and the solution varies between applications.
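For instance, a hand-rolled fixed-point multiply has to settle all of those at once; a sketch in Python (the 64-bit bound, scale, and names are purely illustrative, not any particular library’s API):

```python
INT64_MAX = 2**63 - 1
INT64_MIN = -(2**63)
SCALE = 100   # amounts stored as signed 64-bit integers in hundredths

def checked_mul(a: int, b: int) -> int:
    """Multiply two hundredths-fixed amounts, rounding halves to even,
    and raise if the result won't fit in a signed 64-bit integer."""
    p = a * b
    q, r = divmod(abs(p), SCALE)
    if 2 * r > SCALE or (2 * r == SCALE and q % 2 == 1):
        q += 1
    result = q if p >= 0 else -q
    if not INT64_MIN <= result <= INT64_MAX:
        raise OverflowError("fixed-point multiply overflowed 64 bits")
    return result
```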

I think we only disagree on the labeling of what is called “fixed point” or “floating point”. It’s not fixed-point if the scale varies; it’s floating if it does.

Chronos and I were talking about the data types used in software and databases (INT/LONG, FLOAT/DOUBLE, DECIMAL/NUMERIC/NUMBER) and the relative amount of coding effort required to perform currency/accounting-style computations.

Typically, DECIMAL/NUMERIC/NUMBER data types are considered “fixed” in that they have a “fixed” precision and “fixed” scale.

There are also floating point DECIMAL types, as well as the typical FLOAT/DOUBLE (base 2) types.
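To make the fixed-vs-floating decimal distinction concrete (again in Python, whose `decimal` module is itself a floating decimal; quantizing pins it to a fixed scale, much like a DECIMAL(p, s) column):

```python
from decimal import Decimal

# Floating decimal: the scale follows the data.
x = Decimal("1.23") * Decimal("4.56789")   # Decimal('5.6185047')

# Fixed scale: pin the result to two decimal places, like DECIMAL(10, 2).
y = x.quantize(Decimal("0.01"))            # Decimal('5.62'), default half-even rounding
```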

I’m not sure what you think I’m conflating, but I’ll reiterate: the point was that business/accounting software isn’t written using the INT/LONG data types for currency and similar things, because we have DECIMAL/NUMERIC/NUMBER at our disposal and it would require more effort to use INT/LONG.

I didn’t really read your post all the way through until after I posted.

Agreed that there are floating decimal types as well as fixed.