If you’re familiar with scientific notation, it’s like putting every number in that format:
1729.03 = 1.72903 x 10^3
-1899012530000000000000000 = -1.89901253 x 10^24
0.000045 = 4.5 x 10^-5
Suppose I only have 20 digits in which to write my numbers. I can get rid of the parts that don’t change, and write them in this way:
(s) (dddddddddd) (s) (eeeeeeee)
The (d) part is assumed to be of the form d.ddddddddd. Each (s) is a sign digit: the first is the sign of the number, the second is the sign of the exponent (I’ll use 0 to mean positive, and 9 for negative). My numbers above can then be written as
0 1729030000 0 00000003
9 1899012530 0 00000024
0 4500000000 9 00000005
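This encoding is simple enough to sketch in a few lines of Python. The `encode` function below is my own illustration, not anything from a real floating point library; it leans on the standard `decimal` module to pull apart the digits, and it truncates extra digits rather than rounding them, which is a simplification:

```python
from decimal import Decimal

def encode(num: str) -> str:
    """Encode a decimal number in the toy 20-digit format:
    (s)(dddddddddd)(s)(eeeeeeee), where 0 = positive and 9 = negative.
    Digits beyond the tenth are truncated, not rounded (a simplification)."""
    sign, digits, exp = Decimal(num).as_tuple()
    # Normalize to d.ddddddddd x 10^e: the exponent of the leading digit.
    e = len(digits) - 1 + exp
    mantissa = "".join(map(str, digits))[:10].ljust(10, "0")
    s_num = "9" if sign else "0"
    s_exp = "9" if e < 0 else "0"
    return f"{s_num} {mantissa} {s_exp} {abs(e):08d}"

print(encode("1729.03"))                    # 0 1729030000 0 00000003
print(encode("-1899012530000000000000000")) # 9 1899012530 0 00000024
print(encode("0.000045"))                   # 0 4500000000 9 00000005
```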
What if I want to write this number?
30.0000000008 = 3.00000000008 x 10^1
I have a problem. I can’t write it this way, because there aren’t enough (d) digits to fit it. That’s the problem you run into with loss of precision. Now, the upside of this is that I can write numbers that have more than 20 digits in them. This is useful if I need to work with numbers that might vary over a huge range (roughly +/- 10^99999999 here, thanks to the eight exponent digits). Really, though, I only have the same total count of numbers to work with (10^20 combinations for 20 digits); they’re just spread out more.
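Real floating point makes exactly the same trade. A Python float is an IEEE-754 double: its exponent range is enormous, but only about 15–17 significant decimal digits fit, so any digits beyond that are silently rounded away. A quick sketch:

```python
# A Python float (IEEE-754 double) spans roughly 10**-308 to 10**308,
# but carries only ~15-17 significant decimal digits.
assert 10.0 ** 300 != float("inf")            # a huge exponent is fine
assert float("1.0000000000000000001") == 1.0  # but the 20th digit is lost
```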
When I do math, I can add tiny quantities to large ones, and if they’re tiny enough, it doesn’t affect my large number. That’s useful in many engineering applications, but not when those tiny numbers are meaningful (as in the second-counting example). The moral of the story is to never use floating point values unless you have to and know what you’re doing.
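The absorption effect is easy to see with Python’s doubles (a minimal sketch, not tied to any particular application):

```python
big = 1e16                # uses up all ~16 significant digits on its own
assert big + 1.0 == big   # the 1.0 is too tiny to register at all

# Counting time by accumulating small float steps drifts the same way:
t = 0.0
for _ in range(10):
    t += 0.1              # 0.1 has no exact binary representation
assert t != 1.0           # ten additions leave us slightly off from 1.0
```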
(Side note: Computers store numbers in binary, and so get an extra bit of precision for free. Since a normalized binary number in scientific notation can’t start with a 0, it must start with a 1, and that leading 1 doesn’t need to be stored).
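You can peek at the stored bits from Python to confirm the implicit leading 1 (a sketch using the standard struct module to reinterpret a double’s bytes):

```python
import struct

# 1.5 in binary is 1.1 x 2^0; only the fraction ".1" is actually stored.
bits = struct.unpack(">Q", struct.pack(">d", 1.5))[0]
fraction = bits & ((1 << 52) - 1)   # low 52 bits: the stored fraction
exponent = (bits >> 52) & 0x7FF     # 11 exponent bits, biased by 1023
assert fraction == 1 << 51          # just the ".1"; no leading 1 in sight
assert exponent == 1023             # bias 1023 encodes an exponent of 0
```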