I know of one country where charges were going to be pressed if US measurements were added alongside the existing metric ones; it seems the Canadian government did not like that idea very much.
For a down-and-dirty conversion from C to F: double C, subtract 10% of that result, then add 32. So 25 C becomes 50, minus 5 is 45, plus 32 is 77.
Is that supposed to be easier than “((C/5)*9)+32 gives you Fahrenheit” or “(((F-32)/9)*5) gives you Celsius”? Why bring percentages into it when all you need is basic arithmetic?
It is basic arithmetic, but re-arranged into a form that’s easy to do mentally.
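And it really is the same arithmetic: doubling and then knocking off 10% of the result gives 2C − 0.2C = 1.8C = (9/5)C. A quick Python sketch (function names are mine) to check:

```python
def f_exact(c):
    # Standard conversion: F = C * 9/5 + 32
    return c * 9 / 5 + 32

def f_mental(c):
    # The shortcut: double it, subtract 10% of that, add 32
    doubled = 2 * c
    return doubled - doubled / 10 + 32

print(f_exact(25), f_mental(25))    # 77.0 77.0
print(f_exact(100), f_mental(100))  # 212.0 212.0
```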
Really? Most of the people I know find percentages a lot harder than division. Maybe it’s got to do with the way they’re taught in different places.
Arbitrary percentages, sure. But 10% is easy.
Again, IYE; I know a lot of people who freeze when faced with a % sign (usually the same ones who freeze when faced with fractions). I spent a whole century in one of the circles of Hell trying to prepare a class of them for the exam to enter the local Fire Department; my maternal grandmother was another, as is my mother’s best friend. But like I said it may have to do with when and how they are taught.
The problem here is that the single most important measure throughout the entire world, over all of recorded history*, is value … there are well over a hundred different units of value in use today, where the conversion rates change daily if not hourly … profoundly incoherent … laying a coherent system over the top of this doesn’t make anything better …
A kilogram is a kilogram is a kilogram everywhere in the world … but a kilogram worth $6[sup]95[/sup] in the USA is worth 100,000 rupiah in Indonesia … right now … by the time you read this that may well have changed … (and don’t forget the 5% commission to the money-changers) …
* = The very first written documents known are in fact accounting records … recording value exchanged … money is the only thing worth counting …
For a quick and dirty (and much more accurate) conversion, try “Hey Google/Siri/Alexa, what’s xC in Fahrenheit?”
Quicker and dirtier, maybe, but not more accurate.
I think I’m getting Rat Avatar’s point, he’s just not explaining it well.
Imagine two shops. One shop uses inches as a unit, and uses tools calibrated to the nearest 1/16th of an inch, and enters values into their computer system down to the nearest 1/16th of an inch.
The second shop uses centimeters as a unit, and uses tools calibrated to the nearest 1/10th of a centimeter, and enters values into their computer system down to the nearest 1/10th of a centimeter.
If your computer system adds 7/16ths to 9/16ths, it will get exactly 1. If your computer system adds 7/10ths to 3/10ths, it will not get exactly 1. That is because the binary representations of halves, fourths, eighths, sixteenths, and so on terminate, while the binary representation of any fraction whose denominator is not a power of two does not. So twelfths have the same problem: 5/12ths and 7/12ths cannot be stored exactly, and their sum is not guaranteed to come out to exactly 1.
However, this problem is easily remedied for the second shop if they switch from measuring things to the nearest 1/10th of a centimeter to measuring to the nearest millimeter, and entering the values as millimeters rather than tenths of a centimeter. Then they’ll find that 7mm plus 3mm equals exactly 10mm, even though 0.7cm plus 0.3cm does not exactly equal 1.0 cm.
And note that if the first shop measures things in feet and uses tools calibrated to the nearest inch, they’ll have the exact same problem because binary cannot exactly represent twelfths. So you could exactly represent 1/16ths of a foot, but not 1/12ths of a foot. So much for the computational advantages of non-metric systems.
It certainly is true that decimal numbers have to be carefully handled in computer systems if you have to make comparisons between floating point numbers. If you’re not careful you’ll find that a floating point variable divided by some number and multiplied by the same number doesn’t have the same value it used to. This is not a problem inherent in the metric system, it is a problem inherent in using decimal numbers converted to binary numbers.
You could only avoid the problem using traditional measuring scales if you constrain your system to only accept inputs that can be exactly represented in binary. So when you try to enter 3.13 inches that gets rejected, and only gets accepted if instead you enter 3.125 because that’s exactly 3 and 1/8th. However, you could do the same thing with your metric measures. If exact representation of inputs is important in your system, then you can reject inputs that can’t be exactly represented.
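If anyone actually wanted that input filter, it’s only a few lines. This sketch (the function name is mine) treats a decimal string as exactly representable when it reduces to a denominator that is a power of two, ignoring the 52-bit mantissa limit for simplicity:

```python
from fractions import Fraction

def exactly_representable(text: str) -> bool:
    # A decimal string denotes a dyadic rational iff its reduced denominator
    # is a power of two (mantissa-width limits ignored for clarity)
    d = Fraction(text).denominator
    return d & (d - 1) == 0

print(exactly_representable("3.125"))  # True: 3 + 1/8
print(exactly_representable("3.13"))   # False: 313/100
```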
But nobody does that. I’ve never seen a computer system that accepts values in increments of 1/16".
And even if there was, it would be less accurate than your hypothetical metric shop if you ever had to deal with lengths that aren’t exact multiples of 1/16".
If you are working with very specialized hardware that always comes in discrete sizes, it would still be equally easy in metric vs. English. You just program the software to use integers, and represent lengths as integer multiples of the unit length. For example, if you are creating a CAD program for LEGO, you’d use 0.04 cm as a base unit, and store lengths as integer numbers of base units.
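A toy sketch of that idea (the 0.04 cm module and all names are hypothetical): every length is an integer count of base units, so the arithmetic is exact, and decimal notation only appears at display time, built from integers:

```python
# 1 base unit = 0.04 cm (hypothetical LEGO-like module); lengths are ints
def add_lengths(a_units: int, b_units: int) -> int:
    return a_units + b_units  # exact: plain integer arithmetic

def units_to_cm(units: int) -> str:
    # Display-only conversion, done in integers to stay exact
    hundredths = units * 4    # 1 unit = 4 hundredths of a cm
    return f"{hundredths // 100}.{hundredths % 100:02d} cm"

total = add_lengths(25, 10)       # 25 units (1.00 cm) + 10 units (0.40 cm)
print(total, units_to_cm(total))  # 35 1.40 cm
```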
1/16 is 0.0625, a finite dyadic value, though 1/12 is the main divider in inch-foot measure and is not dyadic. But trade always happens at larger values, tonnes, kilograms, etc., and the goods are subdivided later. 0.0625 + 0.0625 is also finite and non-rounded, and loses no precision, because it is dyadic.
Dyadic vs. decimal in this case is really about the least common denominator. I need to mention again that the SI units are the most practical choice for a world standard, which IMHO is the most important thing, but base ten does cause errors, especially since most people use calculators and don’t realize that divisions by 10 are rounding operations in binary floating point.
Dyadic rational numbers, when added, multiplied or subtracted, always yield another dyadic rational, so for those 3 out of 4 basic operations there is no loss of precision. Most users don’t require that precision, and rounding at the point of display doesn’t affect them in normal use cases, but for some applications it is a problem.
Binary-coded decimal can represent any finite decimal number exactly, including 0.1 (but not 1/3), and software implementations of it are used in most modern-day critical applications like financial software. But it is thousands of times slower, and Moore’s Law is dead.
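Python’s `decimal` module is one of those software implementations, and it shows the trade-off in two lines: exact tenths, at a heavy speed cost relative to hardware binary floats.

```python
from decimal import Decimal

# Software decimal arithmetic: tenths are exact
print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3"))  # True

# Hardware binary floats: they are not
print(0.1 + 0.1 + 0.1 == 0.3)  # False
```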
I very much doubt that we will get people to change their number base, or to adopt a new non-decimal worldwide unit of measure, so my hope is for hardware support for BCD. The performance impact of the software implementations will make some problems I work on (and thus my selfish needs) practically unsolvable, or lead to serious errors for normal users.
Every system has limitations and flaws and you have to work around them; that is just reality. But representation and rounding errors, which can accumulate, due to both decimal monetary systems and decimal measurement systems, are a huge PITA for the applications limited by them and for those who have to deal with them.
Because people are emotional, I am not even suggesting people should abandon SI, just that had we counted by 8/12/16/30 these problems would be easier, and that for some needs it makes sense to use the customary units to prevent compounding errors, even if they are rounded and converted before being displayed to the end user (not an option with money, of course).
But let’s be clear: modern CPU instructions like the SIMD operations have less precision than an IBM XT with an 8087 provided, and there seems to be little interest on the part of ARM or Intel in fixing this. We will see if they at least add hardware support, since they can no longer deliver dramatic increases in serial performance over the next few years, having hit the limits of physics on CPUs. Even if they do add hardware decimal FP units, finitely representable numbers will still hold precision best, but it would reduce the number of common values (like 0.1) that cannot be represented exactly.
No, it is never a problem, because you can always use more significant digits. You always can (and should) use enough significant digits to meet the required precision. If you are working with measurements or machining equipment that has 0.01mm accuracy, it doesn’t matter if 0.333 mm isn’t exactly 1/3 mm.
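In practice that means comparing within the tolerance your application cares about, not bit-for-bit; a minimal illustration:

```python
import math

a = 0.1 + 0.2
print(a == 0.3)                  # False: bit-exact comparison fails
print(math.isclose(a, 0.3))      # True: equal within the default relative tolerance
print(abs(a - 0.3) < 0.005)      # True: far inside a 0.01 mm spec
```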
They’re the only shop in the world to not work in mm, then.
To be clear here: inch systems subdivide by halves all the time (1/2, 1/4, 1/8, 1/16), which are dyadic and finite; metric systems divide by 10 all the time, which has no finite representation in binary, so it always results in rounding and a loss of precision.
The limitation I was pointing out is directly related to the system’s BASE units, but as my previous post mentions, 1/16 is still a finite number and will not suffer from rounding.
Converting from kg to mg, or km to meters or cm or mm, will always involve a rounding event and a loss of precision with the default math models and hardware support in almost all CPUs. This is just a limitation of current computers, and one has to resort to software solutions, which are slower, to deal with this intrinsic representation error.
What are you talking about? These conversion factors are defined as exact values. There is no loss of precision in unit conversions.
You cannot just use more significant digits; there is no way to exactly divide by 10 in binary floating point. 0.1 is simply a number that cannot be represented in binary floating point.
Set aside the 1/3 issue, which is still an issue even if you use decimal libraries.
0.1 + 0.1 + 0.1 != 0.3 without rounding.
This is a well known limitation of binary floating point due to representation errors.
Once again, this is a problem with computer floating point. 0.1 DOES NOT EXIST on the binary floating point number line, just as pi or e have no finite representation in our decimal one.
Please go re-read my previous cites; this is well known and not even close to being controversial. If you do not agree, simply provide a cite showing that 0.1 has a finite representation in binary floating point.
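This is easy to demonstrate: Python can print the exact value a double stores for the literal 0.1, and it is a nearby dyadic rational, not 1/10.

```python
from decimal import Decimal
from fractions import Fraction

# The double nearest to 0.1, printed with every digit it actually stores
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# ...which is this dyadic rational, not 1/10
print(Fraction(0.1))  # 3602879701896397/36028797018963968 (denominator is 2**55)
```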
We are not talking about abstract math. We are talking about physical units; those are only used for real-world measurements and real-world machining. With the correct choice of variable size, you can represent 1/10 with as much precision as your application needs. If your application calls for 10^-70 accuracy (far finer than measuring the size of the observable universe to the accuracy of the size of an atom), you could do that with 256-bit floating point.
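Stock Python doesn’t ship 256-bit binary floats, but the same point can be sketched with the stdlib `decimal` module, whose working precision is adjustable:

```python
from decimal import Decimal, getcontext

getcontext().prec = 80  # ~80 significant digits, comfortably past 1e-70
tenth = Decimal(1) / Decimal(10)
print(tenth)             # 0.1, exact at this precision
print(tenth * 10 == 1)   # True
```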