If you really care about error, you do error propagation, not sig figs. In your example, 37 ± .5 degrees Celsius converts to 9/5*(37 ± .5) + 32 degrees Fahrenheit (which works out to a range of 97.7 to 99.5; check your arithmetic).
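For anyone who wants to see that propagation concretely, here's a tiny C sketch (the c_to_f helper is just my own name for the conversion): convert both endpoints of the interval and you get the range quoted above.

```c
/* A minimal sketch of propagating a +/-0.5 degree C interval through the
   Celsius-to-Fahrenheit conversion by converting both endpoints.
   Prints 97.7 and 99.5, matching the range above. */
#include <stdio.h>

static double c_to_f(double c) { return 9.0 / 5.0 * c + 32.0; }

int main(void) {
    double lo = 37.0 - 0.5, hi = 37.0 + 0.5;
    printf("%.1f F to %.1f F\n", c_to_f(lo), c_to_f(hi));
    return 0;
}
```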
Roughly, your final answer can never be more precise than the data or intermediate calculations that you used to get that answer.
I teach math. Occasionally I’ll see a student solve a multi-step problem. They do step one, and get a number like 5.3289500132 on their calculator. Rather than write this entire thing down and punch it into their calculator again, they’ll “abbreviate” it to 5.3 or 5.33. So in a later step that uses the result of that earlier calculation, they’ll punch 5.3 into whatever formula they’re using, and give the result (say, 12.9372) as their final answer to the problem. Except that not all those digits are significant. Some of them are meaningless. You can’t expect an answer that’s accurate to the nearest ten-thousandth if you got it using numbers that have been rounded to the nearest tenth or hundredth.
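A made-up illustration in C of how that goes wrong (the second step and its 2.43 multiplier are hypothetical, not from any actual student problem):

```c
/* A hypothetical two-step calculation showing how rounding an intermediate
   result contaminates the final digits. */
#include <stdio.h>

int main(void) {
    double step_one = 5.3289500132;   /* full calculator readout */
    double factor   = 2.43;           /* hypothetical step-two multiplier */

    printf("full value:       %.4f\n", step_one * factor);
    printf("rounded to 5.33:  %.4f\n", 5.33 * factor);
    printf("rounded to 5.3:   %.4f\n", 5.3 * factor);
    /* The three answers already disagree in the first or second decimal
       place, so reporting four decimals from rounded inputs is meaningless. */
    return 0;
}
```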
That said, I’ll admit that I never really got the “rules” for the number of significant figures you get as the result of doing various sorts of calculations. I prefer to think in terms of “this number’s between 12.45 and 12.55” or some such, and avoid rounding intermediate calculations whenever possible.
I never really got the whole “2 * 12 = 20” thing, which is how I was taught it (meaning your output can only carry as many sig figs as the least precise number you started with). Wouldn’t you calibrate the uncertainty by the increments of your device rather than by an arbitrary count of significant figures? I mean, if you had a theoretical device that measures in increments of 1, wouldn’t 2.<unmeasurable> * 12.<unmeasurable> be of the same precision, since both factors were measured in increments of one? I can see how sig figs are useful for sifting out “fake” precision begotten by imprecise measurements (like getting 2.007896574 from some math starting with comparatively imprecise figures like 2.08 and 120), but some of the outputs seem like garbage begotten simply by adhering to rules rather than reason.
Or was I taught wrong?
Well, suppose your original measurements are “between 2 and 3” and “between 12 and 13”. Then your output will be “between 24 and 39”. The imprecision in your original figures was 1, yeah, but when you multiply, it can go way up, proportionally to the quantities you were multiplying.
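A tiny C sketch of that interval multiplication (for positive inputs, the extreme products come straight from the extreme inputs):

```c
/* Multiplying "between 2 and 3" by "between 12 and 13": for positive
   inputs the product's bounds are just the products of the bounds. */
#include <stdio.h>

int main(void) {
    double a_lo = 2.0, a_hi = 3.0;
    double b_lo = 12.0, b_hi = 13.0;
    printf("product is between %g and %g\n", a_lo * b_lo, a_hi * b_hi);
    /* prints: product is between 24 and 39 -- the width grew from 1 to 15 */
    return 0;
}
```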
But there’s merit to your idea of trying to calibrate descriptions of uncertainty to the situation, rather than always hitching it onto just how closely numerals happen to fall to nice multiples of powers of 10. So some of what you’re driving at still holds. Sig figs are a crude, one might even say broken, technique, in certain ways. Which is why it’s disappointing that anyone wastes too much time insisting on strict adherence to the rules devised for them. But, life…
Well, that’s why I was trying to give an example that shows that all the numbers don’t really matter. If I use a scale that can give four decimal places to a max of 160.0000 g, I can both accurately and precisely report all seven digits. If I report weighing out 150.0000 g, that means that I weighed out somewhere between 149.9999 g and 150.0001 g.

In practice, getting exactly 150.0000 g of material weighed out could easily take me all day, and even then external factors (people walking by, for instance, or air movement in the lab) might throw the numbers off a little bit. But I’m a busy guy and have work to do, and they don’t pay me to spend hours weighing out one starting material. Especially if I am going to measure out something else using a method that gets me maybe two decimal places (which, once again, I may not use). So instead, I weigh out 150.0x g of material (reporting 150 g means that I could have had 149 to 151 g, but I still want to be reasonably close to 150 g), write down that I put 150 g into my reaction, and get on with my day.

Maybe I then need to add 150 mL of a liquid. Well, I don’t have access to a 150 mL syringe, and it’d probably be graduated in 10 mL or 5 mL or at best 1 mL increments anyway. Plus, big syringes are difficult to use. I could get a 1 mL syringe that would allow me to measure out 1.00 mL every time, but then I’d have to do that 150 times, and the error is likely to be higher once I’ve actually done 150 additions that way. So my best choice is to find a graduated cylinder that can hold all 150 mL while being as close as possible to that number (in practice, it’d be a 250 mL cylinder), carefully measure out the amount of liquid required, write down that I added 150 mL to the reaction, and get on with it.
Bolding mine; that’s all I needed. I don’t get the rules, but if I trust the bold part to be correct, I understand.
You said roughly, so I won’t say this is wrong. But it is not uncommon to average a number of low precision measurements in order to obtain a valid* higher precision result. This is done when the best available instrument can’t supply the needed resolution with a single measurement, or to exploit the economy of crude instruments.
This is taken to extremes in the electronic device called a delta-sigma analog-to-digital converter. Thousands of one-bit resolution (~1/3 of a decimal digit) measurements are “averaged” (it is actually a digital filtering process that is a bit more complex than a simple average) to get (for example) 20-bit resolution (~6 decimal digits) at the output.
*How valid depends on having enough random noise to create the needed dither in the measurements, and avoiding systematic errors…like the person taking the readings knowing what the result “should” be.
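For the curious, here is a toy C simulation of the idea (my own sketch, not an actual delta-sigma filter, and the “true” value of 2.37 is made up): a meter that can only report whole units, plus random dither, averages out to something much finer than one unit.

```c
/* Averaging many coarse readings: quantize (true value + uniform dither)
   to whole units, then average a large number of such readings. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
    const double true_value = 2.37;   /* hypothetical quantity being measured */
    const int n = 100000;
    double sum = 0.0;

    srand(12345);                     /* fixed seed so the run is repeatable */
    for (int i = 0; i < n; i++) {
        /* uniform dither in [-0.5, +0.5], then round to the nearest unit */
        double dither = (double)rand() / RAND_MAX - 0.5;
        sum += round(true_value + dither);
    }
    printf("average of %d whole-unit readings: %.3f\n", n, sum / n);
    return 0;
}
```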
Since this thread is very mathy…
Throughout high school and through my Physics and EE studies, imaginary numbers have never made the slightest lick of sense to me. Square root of negative what? i? j? They’re useful why again?
It finally took this crazy old coot of a professor, an ex-MIT guy who worked in early high-energy radar development and told many “amusing” stories of hideous deaths by accidents and cancers among his colleagues, to explain it right.
He spent a 3-hour lecture developing from raw philosophical and logical principles how “i” should really be treated as a 2x2 rotation matrix. Somewhere in that lecture, the clouds parted, choruses sang, children rolled with puppies in meadows of flowers, and it all “clicked”.
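For anyone who wants to poke at that idea, here's a minimal C sketch (my own, not the professor's lecture notes) using the usual representation J = [[0, -1], [1, 0]], i.e. rotation by 90 degrees: squaring it gives -I, which is the matrix version of i*i = -1.

```c
/* "i as a 2x2 rotation matrix": J*J works out to -I. */
#include <stdio.h>

static void matmul(const double a[2][2], const double b[2][2], double out[2][2]) {
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            out[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j];
}

int main(void) {
    const double J[2][2] = { { 0.0, -1.0 },
                             { 1.0,  0.0 } };
    double JJ[2][2];
    matmul(J, J, JJ);
    printf("J*J = [[%g, %g], [%g, %g]]   (i.e. -I)\n",
           JJ[0][0], JJ[0][1], JJ[1][0], JJ[1][1]);
    /* a + bi then corresponds to a*I + b*J, and matrix multiplication
       reproduces complex multiplication. */
    return 0;
}
```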
Yeah, I’ve been trying to give a similar explanation here (in which thread I’m also discussing e^(ix)). For everyone who said they never really grasped imaginary numbers, perhaps see if that helps shed some light on them.
I have an Aunt who believes to her soul that the definition of a conversation is two people repeating the same contrasting ideas in different combinations of words until one tires and stops talking.
She does not build on an idea, provide any back-up for why she believes it, or even explain why she disbelieves the contrasting idea. She simply repeats in different words exactly what she said before, as if the problem were one of vocabulary.
I think of her as “Tauta-Logic Aunt”.
Pointers in programming. I think I understand the “theory”, but I don’t understand the… point? WHY do I want to bother with referencing and dereferencing and addresses?.. :mad:
Because, in a low-level language like C, how else are you going to be able to manage potentially growing amounts of data? Your program only actually declares some fixed number of variables, so it only directly refers to some fixed quantity of data. If you want to refer to anything else, you have to refer to it indirectly.
That’s a rather abstract description, so let me illustrate: Consider a simple program whose task is the following: the user keeps inputting characters, which your program receives one at a time by calling getchar(). The program should keep track of all the characters input until one of them is a question mark; at that point, the program should print out all the characters ever input.
How do you do this? Well, you’ll need some space to stick your record of all the input in. But you can’t just declare ahead of time an array of 500 characters, because what if the user enters 600 characters? Instead, what you’ll have to do is, on the fly, keep allocating new space to store further data, as the user inputs more and more data. But how do you actually access this new space? It’s new, so it doesn’t correspond to any variable that you declared up-front in your program. So what is it? That’s where pointers come in; when you ask for more space to use to store data in, you’ll do it by calling a system function which returns to you a pointer to the beginning of the new space. To read or write from this new space, you’ll do it by dereferencing this pointer.
[And if you want to read or write from somewhere other than the very beginning of the new space, you’ll do it by first adding the appropriate offset to the pointer, and then dereferencing it]
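Here's a rough C sketch of the program described above (error handling kept to a minimum, and doubling the capacity is just one reasonable growth strategy):

```c
/* Read characters until a '?', growing the buffer with realloc and
   accessing it only through the returned pointer. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t capacity = 16, length = 0;
    char *buffer = malloc(capacity);           /* pointer to the new space */
    if (buffer == NULL) return 1;

    int c;
    while ((c = getchar()) != EOF && c != '?') {
        if (length == capacity) {              /* out of room: ask for more */
            capacity *= 2;
            char *bigger = realloc(buffer, capacity);
            if (bigger == NULL) { free(buffer); return 1; }
            buffer = bigger;
        }
        *(buffer + length) = (char)c;          /* dereference with an offset */
        length++;
    }

    fwrite(buffer, 1, length, stdout);         /* print everything stored */
    putchar('\n');
    free(buffer);
    return 0;
}
```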
Of course, there’s another way to solve the problem I mentioned: just store all the data the user types in a file on the hard drive, to be recovered later as needed. But how do you actually access the data in that file later on? Well, you remember the filename and ask the system for its contents. Which is to say, you use the filename as a kind of pointer and dereference it. In fact, it’s exactly the same idea: pointers are essentially abstract filenames; they just happen, as an architectural detail, to refer to what’s stored in RAM rather than what’s stored on the hard drive.
Suppose you have an array with one billion elements–that’s large, but not unheard of. How many copies of that can you make before you run out of memory? That’s why we have pointers, to deal with things that are too large to be copied every time you want to pass them as a parameter.
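A minimal C illustration of that point (the array here is only 1000 elements so it actually runs, but sum() would look the same for a billion): the function receives a single pointer, never a copy of the data.

```c
#include <stdio.h>
#include <stdlib.h>

/* Receives only a pointer to the data, never a copy of the array itself. */
static double sum(const double *data, size_t n) {
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        total += data[i];
    return total;
}

int main(void) {
    size_t n = 1000;                    /* stand-in for the billion elements */
    double *data = malloc(n * sizeof *data);
    if (data == NULL) return 1;
    for (size_t i = 0; i < n; i++) data[i] = 1.0;
    printf("%g\n", sum(data, n));       /* only the pointer is passed */
    free(data);
    return 0;
}
```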
I should probably have phrased this as “you can’t just declare ahead of time 500 character variables, because what if…”. It’s worth noting that arrays in C are actually just disguised pointers; when you write someArray[someIndex], this translates into *(someArray + someIndex) [i.e., into dereferencing a pointer with an offset].
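A tiny check of that equivalence:

```c
/* someArray[someIndex] and *(someArray + someIndex) read the same element. */
#include <stdio.h>

int main(void) {
    int someArray[] = { 10, 20, 30, 40 };
    int someIndex = 2;
    printf("%d %d\n", someArray[someIndex], *(someArray + someIndex));
    /* prints: 30 30 */
    return 0;
}
```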
Anyway, like I said, think of pointers as just the filenames for what’s stored in RAM.
[I don’t know if I would necessarily motivate pointers as just a way to avoid having to copy large amounts of data. True, that is one thing they do, but that makes it sound like they’re only there for optimization purposes. But they actually open up possibilities in radically larger ways: C without pointers isn’t even Turing-complete; it can only describe programs using bounded amounts of memory (i.e., finite state machines)].
The significant figures discussion reminds me of a funny joke someone posted once. An engineer is asked “how much is two times two?”. He pulls out a slide rule, manipulates it and then says “about four”.