Let’s say a measurement instrument has a temperature coefficient of +2% / °C, with 25 °C being the reference temperature. This can be anything: a gage block, a voltmeter, a strain gage, etc. (I understand +2% / °C is unrealistically high for most things, but this is just an example. Besides, I’m actually calibrating something that has such a value…)
So here’s my question: what does +2% / °C mean?? I can think of a few different ways of using it.
Here’s an example: Let’s say I have an instrument that reads “123” at 25 °C, with a temperature coefficient of +2% / °C. The temperature then shifts to 30 °C. What’s the new reading?
Here’s one way to calculate it:
The temperature has changed by +5 °C. This means the reading should change by 5 × 2% = 10%. The new reading is thus 123 + 0.1 × 123 = 135.3.
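If I were to code up that interpretation, it would look something like this (just my own sketch of the arithmetic; the variable names are mine):

```python
reading_ref = 123.0   # reading at the 25 °C reference temperature
tempco = 0.02         # temperature coefficient: +2% per °C
delta_t = 30 - 25     # °C above the reference temperature

# Linear interpretation: the correction is tempco * delta_t, applied once to the reference reading
new_reading = reading_ref * (1 + tempco * delta_t)
print(round(new_reading, 2))  # 135.3
```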
Here’s another way to do it:
When the temperature is 25 °C, the value is 123.
When the temperature is 26 °C, the value is 123 + 123 × 0.02 = 125.46.
When the temperature is 27 °C, the value is 125.46 + 125.46 × 0.02 = 127.97.
When the temperature is 28 °C, the value is 127.97 + 127.97 × 0.02 = 130.53.
When the temperature is 29 °C, the value is 130.53 + 130.53 × 0.02 = 133.14.
When the temperature is 30 °C, the value is 133.14 + 133.14 × 0.02 = 135.80.
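The same per-degree compounding as a quick loop (again just my sketch, names are mine):

```python
reading = 123.0   # reading at the 25 °C reference temperature
tempco = 0.02     # +2% per °C

# Compound interpretation: apply +2% once for each 1 °C step from 25 °C up to 30 °C
for temp_c in range(26, 31):
    reading *= 1 + tempco
    print(temp_c, round(reading, 2))
# 26 125.46
# 27 127.97
# 28 130.53
# 29 133.14
# 30 135.8
```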
Or what if I use a continuous compounding formula, like the one used to compute interest on money, i.e.
New value = 123 × e^(0.02 × (30 − 25)) = 135.94.
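Putting all three side by side (assuming in each case that the coefficient is applied relative to the 25 °C reading):

```python
import math

reading_ref = 123.0   # reading at the 25 °C reference temperature
tempco = 0.02         # +2% per °C
delta_t = 30 - 25     # °C above the reference temperature

linear     = reading_ref * (1 + tempco * delta_t)       # simple/linear:       135.30
compounded = reading_ref * (1 + tempco) ** delta_t      # per-degree compound: 135.80
continuous = reading_ref * math.exp(tempco * delta_t)   # continuous compound: 135.94

print(round(linear, 2), round(compounded, 2), round(continuous, 2))
```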
Now I understand there’s not a whole lot of difference between the three answers. But dammit, I wanna know the right way to do it! The other engineers around my workplace just shrug their shoulders.
So again, how is someone supposed to use a temperature coefficient spec such as +2% / °C?