Wikipedia lists the physical constants and their uncertainties here:
My question is: which of these physical constants (excluding the ones that have a defined value) have the least and the most uncertainty in their values, on a percentage basis?
Isn’t the speed of light known to be exactly 299,792,458 metres/second, since the metre itself is defined as the distance light travels in 1/299,792,458 seconds?
Yes, but that just shifts the burden onto uncertainty in the value of the meter. The experiments are still the same, and still have error in them.
Newton’s constant G, meanwhile, also has a great deal of uncertainty, at about one part in 10^4, and is perhaps more relevant to us than the weak mixing angle. Or perhaps not: G by itself is very seldom put to practical use; it’s almost always seen multiplied by the mass of some astronomical body such as the Earth or Sun. But that product is generally quite well known (implying a corresponding uncertainty in the mass by itself).
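To make that concrete, here’s a rough back-of-the-envelope sketch in Python. The numbers are approximate values plugged in from memory (treat them as illustrative only); the point is that the uncertainty in the Earth’s mass, inferred from the well-measured product GM, is essentially just the uncertainty in G.

```python
# Rough illustration: the product GM for the Earth is known far better than G itself,
# so the uncertainty in the Earth's mass M = GM / G is dominated by the uncertainty in G.
# (All figures below are approximate, from memory -- illustrative only.)

GM_earth   = 3.986004e14   # m^3/s^2, geocentric gravitational constant
rel_unc_GM = 2e-9          # roughly a few parts in 10^9
G          = 6.674e-11     # m^3 kg^-1 s^-2
rel_unc_G  = 2e-5          # a few parts in 10^5 (same ballpark as the 10^-4 quoted above)

M_earth = GM_earth / G

# For a quotient of independent quantities, relative uncertainties add in quadrature:
rel_unc_M = (rel_unc_GM**2 + rel_unc_G**2) ** 0.5

print(f"M_earth ~ {M_earth:.3e} kg")
print(f"relative uncertainty in M ~ {rel_unc_M:.1e} (essentially just that of G)")
```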
Yes, exactly my point. There are experiments we used to perform to measure the speed of light, and those experiments inherently have some error in them. Nowadays we’ve defined the value of the speed of light, but we still perform those old experiments, or ones like them; only now they’re used to determine the meter. And they still have error in them.
I know the Hubble Constant isn’t a “physical constant” in a strict sense, but when I was in college, I was taught it was between 50 and 100 (km/s/Mpc). Now it’s more like “about 70”.
As a child I had a 1970s Guinness Book of Records, and I’m sure I remember reading in the section about “biggest numbers” that there was some insanely huge number which was the upper bound for some constant or other, with a note that the actual value of said constant had been found to be 3, or 12, or something extremely modest.
The lower bound on the actual value has now been sharpened to 13. (The upper bound has also been significantly sharpened, but never mind that.)
Anyway, it should be noted that both Graham’s number and the value it bounds are mathematical constants related to a mathematical problem, not physical ones.
(It’s also worth noting that Graham’s number was never the tightest upper bound known (for the solution to the problem which originally motivated its consideration); it was just something Ronald Graham found easier to explain to Martin Gardner than the tighter upper bound he knew. [That having been said, the tighter upper bound is still quite large. But, then, it’s not hard to come up with mathematical problems with a large range of uncertainty in their answer. For example: How many digits of sqrt(2) precede any occurrence of the string <your favorite digit-string not discovered yet in the decimal expansion of sqrt(2)>? Infinite uncertainty!])
Also, I don’t think anyone ever really had any good reason to suspect the value was 6 (or, now, 13); I think it was just the lowest value that hadn’t yet been ruled out.
That’s a little different, in that it’s not actually known that sqrt(2) is normal, and thus that the question has any (finite) answer at all. And if you really want to get down to it, there are plenty of problems in physics for which that’s true, too. I mean, I could argue that the inverse of the photon mass has infinite uncertainty.
Although, come to think of it… It’s definitely known that the lightest neutrino has a nonzero mass, but there is no known lower bound for it. So I suppose that the inverse of the lightest neutrino mass is known to be a number, but also has infinite uncertainty.
A nitpick: the lightest neutrino can have zero mass. The second lightest can’t. (Of course, your broader point is unaffected by this nit.)
For those exact constants, eltro102 is correct that the best answer to that question comes from the “relative standard uncertainty” column.
More generally, though, it’s not answerable, since you have freedom in what you call your constants. As an example perhaps less extreme than Chronos’s, consider some physical constant that is an angle t required to lie between 0 and 180 degrees. Let’s then say that we measure t very precisely to be 90.00 +/- 0.01 degrees. That means we know the value of t to about one part in 9,000.
But what if the angle t shows up in our equations inside of a sin() function? We could just as well have defined a quantity x equal to sin(t), and called x our fundamental constant. The range of x that our same measurement allows is x = 1 +/- 0.000000015. (An easy, if only statistically approximate, way to get this is just to calculate sin(90.01) and sin(89.99).) That’s about one part in 70 million.
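If anyone wants to check the arithmetic, here’s a quick illustrative sketch of that calculation:

```python
import math

t, dt = 90.00, 0.01                           # measured angle and its uncertainty, in degrees

# Relative precision of the angle itself:
print(f"precision of t:       {dt / t:.1e}")  # ~1.1e-4, about one part in 9,000

# Now reparametrize: x = sin(t). Scan the allowed interval of t and see how much x moves.
x_values = [math.sin(math.radians(t + s * dt)) for s in (-1.0, 0.0, 1.0)]
x_spread = max(x_values) - min(x_values)      # ~1.5e-8

print(f"spread of x = sin(t): {x_spread:.1e}")  # about one part in 70 million of x ~ 1
```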
So, do we know this physical constant to 0.01% precision or 0.0000015% precision? A harmless transformation of the variable makes all the difference.
For the reasons above, there’s no universal answer to this (or, rather, no universal way to decide which parameters you’re asking about). However, for any particular measurement or constant you might choose, there will definitely be something interesting to say about why the uncertainty is what it is. The story will likely involve the details of the experiment’s design and its challenges.
Yes, that’s fair. A better example is the question of what the third digit of sqrt(2) is in base [your favorite number, chosen both so that the range of uncertainty is impressively large and so that no one has yet calculated anything about this third digit].
Mind you, if we were proper Bayesians, we would model all our ignorance as probability distributions, and would likely want our measure of uncertainty to take these relative probabilities into account. Accordingly, instead of simply looking at the span of possible values, we might consider the standard deviation or some such thing. So, depending on our probabilistic model of the inverse neutrino mass, we might still claim the uncertainty in it to be finite (but we could always still reparametrize it some other way into infinity).
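As a toy illustration of that (with made-up bounds and made-up priors, purely to show how the choice of prior drives whether a standard-deviation-style uncertainty comes out finite):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
m_max = 0.1    # hypothetical upper bound on the lightest neutrino mass (arbitrary units)

# Prior 1: mass uniform on (0, m_max]. The span of 1/m is infinite, and in fact the
# mean of 1/m diverges, so the sample standard deviation never settles down.
m_uniform = rng.uniform(0.0, m_max, n)
print("uniform prior:     std(1/m) ~", np.std(1.0 / m_uniform))

# Prior 2: mass log-uniform between m_min and m_max, with m_min > 0. Now 1/m is
# bounded above by 1/m_min, and its standard deviation is perfectly finite.
m_min = 1e-6
m_loguniform = np.exp(rng.uniform(np.log(m_min), np.log(m_max), n))
print("log-uniform prior: std(1/m) ~", np.std(1.0 / m_loguniform))
```

With the uniform prior, the printed number jumps around wildly from seed to seed (it’s dominated by the smallest mass you happened to draw); with the log-uniform prior it’s stable.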
If it were massless, how could it take part in oscillations? Neutrino oscillation has a characteristic frequency to it, but massless objects are timeless. But assuming there’s an answer to that, this isn’t a nit, and it does affect my point: The second-lightest neutrino does have a lower bound to its mass, and so its inverse would not have infinite uncertainty.
Alternately, suppose that the quantity of interest is neither t nor sin(t), but tan(t)? Though one then gets back to the issue that the quantity of interest might not itself be finite.