There is one place in everyday life where we use kelvins, and there it’s 4-digit numbers, not just 3. Well, it’s not really “everyday” in the sense that you’ll encounter them all that often, but it’s something you can see in the grocery store.
It’s in fluorescent lights and LEDs. When these were introduced, they were marked as producing the same amount of light as some X-watt incandescent bulb. But that wasn’t actually good enough. They may have produced the same amount of light, but their spectra were very different. They were very blue and harsh. So manufacturers produced different ones that had a more yellowish and less harsh spectrum. They needed to somehow mark these. What they came up with is a 4-digit kelvin number, anywhere from 2500 to 5600 K. It’s not that the LED/CFL will be at that temperature; it’s that its spectrum will be roughly the same as that of an incandescent bulb whose filament is at that temperature.
Well, it’s really 2 significant digits with 2 zeros on the end. Which is very different from using a 3-digit number where the last digit actually matters (i.e. you can feel the difference between 300 and 301).
YES there’s something special about it. It’s zero. Multiplying zero by anything still gives you zero. Zero kelvins is zero feet is zero pounds is zero dollars. Now, don’t you all go just rolling your eyes. It’s true.
The point is, temperature is a thing that can be measured in units, like mass or time. You don’t need a scale. Fahrenheit and Celsius are both scales, like a foot ruler held up against a range of locations. Kelvins are actual things you can add and multiply and divide and so forth. It would never make sense to multiply a Celsius scale reading by something, but it does sometimes make sense to multiply kelvins by things. There’s no kelvin scale; kelvins are things.
Is 75 degrees F plus 80 degrees F equal to 155 degrees F? No.
Let’s say I live in the Arctic, and the average temperature yesterday was 1 °F. Today the average temperature is 2 °F. Does that mean it’s twice as warm today vs. yesterday? No.
On the other hand, you can compare temperature differences just fine, without having to worry about where the zero point is. Keeping your house at 90 F when it’s 60 F outside takes just as much energy as keeping your house at 70 F when it’s 40 F outside, for instance, because in both cases the temperature difference is 30 F. And either of those will take twice as much energy as keeping a house at 75 when it’s 60 outside, because 30 is twice 15: I just multiplied something measured in Fahrenheit by something and got a meaningful answer.
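A minimal sketch of that arithmetic (the heat-loss coefficient here is a made-up stand-in, not a real house’s value):

```python
# Heat loss, and hence heating energy, is (roughly) proportional to
# the indoor/outdoor temperature DIFFERENCE, so Fahrenheit
# differences can be compared and scaled meaningfully even though
# the scale's zero point is arbitrary.
K_LOSS = 1.0  # hypothetical heat-loss coefficient, arbitrary units per degree F

def heating_power(indoor_f, outdoor_f):
    """Relative power needed to hold indoor_f against outdoor_f."""
    return K_LOSS * (indoor_f - outdoor_f)

print(heating_power(90, 60))  # 30.0
print(heating_power(70, 40))  # 30.0 -- same as above
print(heating_power(75, 60))  # 15.0 -- half as much
```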
Conversely, though, “twice as warm” as 1 °F would equate to a temperature of about 462 °F (doubling the absolute temperature of about 461 °R), which may work on a scientific scale, but with humans using ordinary language to describe everyday experiences, 462 °F does not seem “twice as warm” as 1 °F. Thermodynamically, perhaps, but not in any useful way on a human scale.
There are various systems of natural units where various physical constants are equal to 1; for example, a common system, Planck units, sets c, ħ, G, k_B, and k_e all equal to 1, such that lengths in every dimension have the same units, regardless of whether you’re measuring lengths in three-dimensional space, purely time, or some mix of time and space, as when you’re measuring intervals related to something moving relative to you. It’s no more “improper” than measuring vertical and horizontal distances using the same units, but folding time into space like that is… confusing for people accustomed to “distances” and “lengths of time” as opposed to intervals.
Planck units aren’t actually all that common, since they’re only used by people working in quantum-gravity-related fields. But it’s common for physicists working in various fields to set some subset of those constants to 1.
For instance, someone doing particle physics is likely to set c and ħ to 1, since those both come up often in particle physics, but not to set G to 1, since G never comes up there. And going all the way to Planck units removes dimensional analysis as a useful tool for discovering mistakes, or for making instructive guesses as to functional forms.
And this emphasizes the point others have been making: units of measure are made for the convenience of people, and, while they’re central to doing science, there’s no single system of units which is “scientific” or is the best in a general sense. When people in the Middle Ages used the barleycorn as a unit of length, they weren’t being perverse; they were just using what they had around them, which was, by and large, the same length. Standardize it fully, as it is now at a third of an inch, and it’s just as “scientific” as the meter or the light-second.
Being able to render measurements in terms of numbers is vital. The specific numbers don’t especially matter, except to the extent they’re easy to work with or come out of equations which are easy to work with.
Part of the reason we don’t think in terms of temperature without scales is just that we’re not used to it. If you care what the resistance of a wire is, it’ll be twice as high at 460 F as at 1 F, approximately. That’s because wire resistance, at least for copper wires and other wires made of a single metallic element, is proportional to temperature, just as it is proportional to wire length. A pretty human application of temperature.
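That factor of two can be checked directly, using the Rankine scale (Fahrenheit-sized degrees with the zero at absolute zero; this is a sketch, since resistance proportional to absolute temperature is itself only an approximation for copper):

```python
# To compare 1 F and 460 F as absolute temperatures, convert to
# Rankine: R = F + 459.67. If resistance is proportional to
# absolute temperature, the ratio of resistances equals the
# ratio of Rankine temperatures.

def fahrenheit_to_rankine(f):
    return f + 459.67

resistance_ratio = fahrenheit_to_rankine(460) / fahrenheit_to_rankine(1)
print(round(resistance_ratio, 3))  # 1.996 -- roughly twice
```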
Imagine if you had a wire length scale that had its zero at some nonzero amount of wire, and cutting the wire shorter than that would give you a negative length. Scales are just plain weird, when you compare using them to using some simple unit. It’s an unfortunate feature of the history of temperature that we didn’t get a clear conception of its zero for quite some time, and developed systems whose zeros have a different meaning instead.
I think that “twice as hot” and “twice as cold” with reference to weather or the like can actually be interpreted meaningfully, but you have to start by realizing what “hot” and “cold” actually mean in the first place. When we say it’s “hot out”, what we mean is that it’s hotter than the temperature we consider optimum, and likewise, “it’s cold out” means that it’s colder than we consider optimum. So “it’s twice as cold today as it was yesterday” means “today and yesterday were both below optimum temperature, but today is twice as far from optimum as yesterday was”.
In addition, there is how hot it would feel, which is more than subjective: blackbody radiation goes as the fourth power of absolute temperature, so something twice as hot radiates 16 times as much. If we stood next to a large object at 127 °C versus one at 427 °C (400 K versus 700 K), we would rightly feel that the latter was heating us far more than twice as much, and not merely because we were close in temperature to the former, or because the latter is more than twice the former’s temperature on our chosen scale.
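The Stefan–Boltzmann scaling above can be worked out in a couple of lines (using round kelvin values, 273 as the offset):

```python
# Radiated power scales as the fourth power of absolute temperature
# (Stefan-Boltzmann law). Doubling the kelvin temperature gives
# 2**4 = 16x the radiation; the 127 C vs 427 C example is
# 400 K vs 700 K, about 9.4x.

def power_ratio(t1_kelvin, t2_kelvin):
    """Blackbody power ratio between two absolute temperatures."""
    return (t2_kelvin / t1_kelvin) ** 4

print(power_ratio(400, 800))            # 16.0 -- twice as hot
print(round(power_ratio(400, 700), 1))  # 9.4  -- 127 C vs 427 C
```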
A related question: do we know if degrees (F or C) are equivalent in some way; i.e., is the difference between 2C and 3C the same as the difference between 97C and 98C?
If so, was this done deliberately somehow? Or did Mr Celsius just divide the distance between a column of mercury at 0C and 100C into 100 equal parts and luck out.
Not sure how to word this to make it a coherent question.
It is pure chance, and both are effectively arbitrary, although their inventors didn’t know that at the time.
Really, Fahrenheit is an improved version of the Rømer scale, on which a brine solution was set to 0 °Rø and the boiling point of water to 60 °Rø. The 60 was based on the Sumerian base-60 system that also gave us our angle degrees, minutes, and seconds. Unfortunately, it also meant that water froze at 7.5 °Rø, along with other inconvenient non-integer numbers.
Fahrenheit basically took the Rømer scale, multiplied it by 4, and adjusted the values to make it easy to make thermometers (or other reasons lost to time).
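The relationship between the two scales can be sketched with the modern conversion, anchored at water’s freezing point (7.5 °Rø = 32 °F) and boiling point (60 °Rø = 212 °F); the slope works out to 24/7 ≈ 3.43, close to, though not exactly, a factor of 4. This is an illustration, not Fahrenheit’s actual historical procedure:

```python
# Modern Romer -> Fahrenheit conversion, fixed by water freezing
# (7.5 Ro = 32 F) and boiling (60 Ro = 212 F).

def romer_to_fahrenheit(ro):
    return (ro - 7.5) * 24 / 7 + 32

print(romer_to_fahrenheit(7.5))          # 32.0  -- water freezes
print(romer_to_fahrenheit(60))           # 212.0 -- water boils
print(round(romer_to_fahrenheit(0), 1))  # 6.3   -- Romer's brine zero
```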
Anders Celsius came a bit later and had the advantage of understanding how air pressure affected the boiling point of water; he also came after findings that latitude didn’t impact the freezing point. The brine solution that sits at zero degrees under Rømer and Fahrenheit was one of the few fairly reliable, testable points at the time those scales were developed.
Note that while Celsius is named after the above man, his scale actually had freezing at 100 degrees and boiling at 0; *Carl Linnaeus* flipped that around in time for it to gain some acceptance before the French Revolution happened, the metric system started to be developed, and the idea of decimal units became trendy.
One more note to show just how confusing this is: even above, people have stated that 0 °C is where water freezes, which is incorrect; it is the temperature at which ice melts.
Also note that while it is “acceptable for use with the SI system,” Celsius is not an SI scale.
It’s not clear why it is “pure chance”. To translate Batano’s question: it is asking why the alcohol mixtures and mercury used in pioneering early thermometers expand approximately linearly with temperature, so that the degrees come out uniform. Perhaps they “lucked out” over the relatively limited range of their thermometers?
Charles’s Law and similar arose in the 18th century, but people like Fahrenheit did not have a comprehensive understanding of thermodynamic temperature as we know it AFAIK.
A lot of things are linear with temperature, or linear with temperature differences. Thermal expansion of most materials over most temperature ranges, the volume of a gas at fixed pressure or pressure at a fixed volume, heat flow through a material, etc. Celsius and Fahrenheit based their scales on the volume of a cylinder of mercury or alcohol, so it was inevitable that the volume of mercury or alcohol would be linear on such a scale, but most of the other phenomena for which measurement of temperature is relevant are also linear.
In other words, yes, in very real and meaningful ways, the degree from 99 to 100 is the same as the degree from 9 to 10.
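As a concrete instance of that linearity, here is Charles’s law for an ideal gas (a sketch, not anything either inventor actually measured):

```python
# Charles's law: at fixed pressure, gas volume is proportional to
# absolute temperature, so every 1-degree Celsius step changes the
# volume by the same amount -- the degree from 9 C to 10 C really
# is the same size as the degree from 99 C to 100 C.

def gas_volume(t_celsius, v_at_0c=1.0):
    """Relative ideal-gas volume at fixed pressure, vs. volume at 0 C."""
    return v_at_0c * (t_celsius + 273.15) / 273.15

step_low = gas_volume(10) - gas_volume(9)
step_high = gas_volume(100) - gas_volume(99)
print(abs(step_low - step_high) < 1e-9)  # True -- equal steps
```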
Unless I overlooked it, no mention has been made of Bose-Einstein Condensate.
At a billionth of a degree above absolute zero, matter (more or less) stops vibrating… position and momentum both go to zero… the Uncertainty Principle becomes certain… thus matter stops behaving like a particle and starts behaving more like a wave.
To clarify, temperature relates to an exclusively macroscopic property: the equipartition of kinetic energy across all degrees of freedom.
The first problem is that you cannot interact with any component of a BEC without removing the “particles” from the condensate, changing the condensate itself. Uncertainty is still there; you just have to know which way to look at it to be certain about that uncertainty. In fact, because positional uncertainty goes up as momentum uncertainty is constrained, the waves of the bosons overlap, and it is that uncertainty in their position which allows for the single wave function.
At the quantum level the Uncertainty Principle still holds; if we ever find a case where it doesn’t, without relativistic effects in play, QM as a theory will be proven untrue.
Note that some types of BEC can exist even at room temperature like the surface plasmon polaritons produced here.
It is important to not confuse macroscopic properties of an ensemble like temperature with coherent matter waves. Making BECs out of most atomic gases may require temperatures a fraction of a degree above 0 K but other materials can form BECs at much warmer temperatures.