Somewhat archaic question. I’m teaching high school students exponential equations and logarithms, and started thinking about how, only a few decades ago, they would have learned to do division by using logarithm tables.
Now, when doing long division you always know the precision of your result, but how precise is the result if you use a log table?
A quick test in Excel seems to indicate that if you use 4 significant digits when converting to log and back, you get at least 3 digits of precision in the result, but is that universal?
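In script form, the test looks something like this (a sketch only; the round_sig helper and the test range are my own choices, not from the post):

```python
import math

# A sketch of the Excel test: round log10(x) to 4 significant digits,
# convert back with 10^x, and see how many digits of x survive.

def round_sig(x, digits):
    """Round x to the given number of significant digits."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, digits - 1 - exponent)

worst = 0.0
for i in range(1000, 10000):          # test values 1.000 .. 9.999
    x = i / 1000.0
    log_rounded = round_sig(math.log10(x), 4)
    worst = max(worst, abs(10 ** log_rounded - x) / x)

print(f"worst relative error: {worst:.1e}")   # ~1e-4, i.e. 3-4 digits
```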
[Ah, I gave a long rant on why I feel “significant digits” are generally a terrible hack for talking about precision and uncertainty, but on second thought, forget it… A bit too tangential from engaging with your actual question.]
Well, the perfect treatment of error would be to have an error distribution for each of your inputs, and to know how all of your operations transform input error distributions into output error distributions. But that would take a really long time, and it wouldn’t always be possible.
Often, it’s good enough to use some summary of your error distribution, such as a mean and standard deviation, or minimum and maximum possible values. But while that works perfectly for addition and subtraction (if you’re assuming the correct shape of the distribution), and decently for multiplication and division when all of your numbers are sufficiently far from zero, it can cause problems with other operations (especially when you’re close to zero, or to a point where your operation has a discontinuity).
But in any event, those same problems will also arise when you’re using significant digits, and then some (for instance, the number 1.01 has three significant digits, while 0.99 has only two, despite both having almost exactly the same relative precision).
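To make that first point concrete, here’s a toy Monte Carlo version of “propagate the whole error distribution” (the inputs, error sizes, and sample count are all made up for illustration):

```python
import random
import statistics

# Toy Monte Carlo error propagation: sample each input from its error
# distribution, push the samples through the operation, and summarize
# the output distribution directly.

random.seed(0)
N = 100_000

a = [random.gauss(1.01, 0.005) for _ in range(N)]   # a = 1.01 +/- 0.005
b = [random.gauss(0.99, 0.005) for _ in range(N)]   # b = 0.99 +/- 0.005

q = [x / y for x, y in zip(a, b)]

print(f"a/b = {statistics.fmean(q):.4f} +/- {statistics.stdev(q):.4f}")
# Both inputs carry the same ~0.5% relative error, even though
# significant-digit counting calls 1.01 three digits and 0.99 two.
```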
Go to a museum that will let you play with their 1960s slide rule. You’ll see that the accuracy diminishes toward the right end of the rule. In other words, the higher the log value, the lower the level of accuracy.
That’s one of the beautiful things about slide rules: they make it obvious that you still have error after your calculations, and approximately how much. With a slide rule, you’ll never get a student writing down an answer with nine digits after the decimal point.
Uh oh … I’m holding my slide rule right now … you’re saying it belongs in a museum … do they pay much for them? It’s a Pickett model #500 Ortho-phase Log Log …
But that’s the result of creating a physical representation of a log scale, no? A log table lists as many log values for numbers between 1 and 2 as between 9 and 10; they’re just squeezed together.
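Just how squeezed is easy to see numerically (the snippet is only my illustration; the values come from math.log10):

```python
import math

# Step in log10 per 0.01 step in the argument, at both ends of a table:
print(math.log10(1.01) - math.log10(1.00))   # ~0.00432
print(math.log10(9.01) - math.log10(9.00))   # ~0.00048, ~9x smaller
```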
I realised, by the way, that my OP asks a nonsensical question. I was thinking in a hybrid part-log-table/part-log-function way and used the 10^x function in Excel to convert back from log values rather than a log table. Stupid.
I guess there are other interesting questions to be asked about doing math by logarithm table, but I won’t be doing enough of it to really discover them.
In fact, the precision remains a constant percentage of the value represented. In some cases this is a very useful trait…essentially it avoids dragging around precision that doesn’t matter.
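A sketch of that constant-percentage behavior, assuming a fixed reading error on the log scale (the 0.2 mm on a 250 mm rule figures are made up for illustration):

```python
import math

# A fixed reading error on the log scale (say 0.2 mm on a 250 mm rule,
# i.e. 0.0008 in log10 units) gives the same relative error everywhere.
delta = 0.2 / 250
for x in (1.5, 3.0, 9.5):
    err = 10 ** (math.log10(x) + delta) - x
    print(f"x = {x}: absolute error {err:.4f}, relative {err / x:.5f}")
# The absolute error grows with x, but the relative error is ~0.00184
# for every x: a constant percentage of the value represented.
```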
Short answer to OP: No; for large values, an accuracy of 4 significant digits on the log can actually result in zero correct significant digits in the final answer. However, knowing the log to 3 decimal places will give you a result with around 3-4 significant digits.
Long answer:
This might be one of the very few instances where I would actually like to use the coefficient of variation, which I usually hate with a passion.
A good way of approximately formalizing X significant digits is to say that the error due to rounding is in the ballpark of 10^-(X-1) times the value itself. Or, in other words, given a rounding error of e and a true value Y, my number of significant digits will be around 1 - log10(e/Y).
So let’s start with a true log value of Y and a rounding error e in the log.
My estimated value is 10^(Y+e) = (10^Y)*(10^e),
so the error between the estimated and true values will be about (10^Y)*(10^e - 1), i.e. a relative error of 10^e - 1.
So my number of significant digits will be approximately 1 - log10(10^e - 1). So if e is around 10^-3, the number of correct significant digits is around 1 - log10(10^0.001 - 1) ≈ 3.64, so you are going to get 3 significant digits and be close on the 4th.
But if the error is larger, say 0.5, then you find that you have less than 1 significant digit. Comparing 10^1234.5 to 10^1234, you’re off by a factor of 10^0.5 ≈ 3.16, so you won’t even get the first digit right.
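A quick numerical check of the formula above (the specific error values are arbitrary picks on my part):

```python
import math

# Check of the rule: significant digits ~ 1 - log10(10^e - 1),
# where e is the rounding error in the base-10 log.
for e in (1e-4, 1e-3, 1e-2, 0.5):
    rel_err = 10 ** e - 1              # relative error in the answer
    sig = 1 - math.log10(rel_err)
    print(f"e = {e}: relative error {rel_err:.3e}, ~{sig:.2f} sig digits")
# e = 0.001 -> ~3.64 sig digits; e = 0.5 -> ~0.66 (10^0.5 is a factor
# of ~3.16, so not even the first digit is right).
```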
If you have A ± X % / B ± Y %, what’s the new error, and is it different if you do the division by logs instead of by direct division?
Well, the errors add, but if the / step has a really tiny error compared to X and Y, it’s often ignored. But if / gives Z% error, and you don’t simply ignore it, then the error is X + Y + Z %.
So the difference between direct / and division by log subtraction is undefined; it just depends on how precise your division by logs is…
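For what it’s worth, a toy worked example of the “relative errors add” rule under division (the numbers are made up):

```python
# Toy check that relative errors (roughly) add under division,
# with made-up numbers: A = 100 +/- 1%, B = 50 +/- 2%.
A, X = 100.0, 0.01
B, Y = 50.0, 0.02

nominal = A / B
worst = (A * (1 + X)) / (B * (1 - Y))   # push both errors the same way

print(nominal)                          # 2.0
print((worst - nominal) / nominal)      # ~0.0306, close to X + Y = 0.03
# An extra Z% from the division step itself (e.g. a log-table lookup)
# stacks on top the same way, giving roughly X + Y + Z % in total.
```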