Is the gravitational constant G constant?

A few comments on context, interpretation, and more…

First, on the journalism aspect of the discussion…

The measurement is, in itself, very impressive, as have been all the modern precision laboratory measurements of G. Because it is so impressive, it deserves an article in New Scientist or anywhere else. It’s a solid result.

As always, one must tread carefully when extracting context from popular science articles since the articles will uniformly overemphasize exciting points even if speculative and unsupported, as science is a bit dull most of the time and doesn’t make for a good read. New Scientist is above average about choosing what science to cover (contrast: Discovery Channel, say), but when it covers that solid science, it tends to overemphasize speculation as much as the next source. I’m not knocking it as a source because of this. I’m just saying that one should remember that the exciting points and contexts laid out are there only to provide a narrative foundation on which to share the hard science. The exciting points and contexts may not themselves be hard science.

As for Fermilab Today covering it: Fermilab has a public relations office that includes a couple of science writers, but they can’t fill every day’s newsletter with home-brew stuff. Often they directly invite articles from scientists about new results (I’ve written or edited a few of these myself), but even still they must sometimes link to external sources to fill column-inches. In this case, the New Scientist article is, as mentioned above, covering a solid piece of physics and it is a very readable article to boot, so it is an appropriate thing for the lab to have included for its daily readers.

On the measurement itself…

These measurements of G are… hard. I’ve had the good fortune of being shown around a couple of precision G set-ups, and the thing you take away is how astonishingly difficult to control they can be. You have to worry about things like how long it’s been since the last rainfall, as the ground outside the building will have varying amounts of water in it. (This doesn’t happen to be an issue for the BIPM measurement.)

The New Scientist article implies (via its narrative flow) that there is a healthy and consistent accepted average value for G and this new BIPM measurement disagrees with it and that that’s unexpected. To be sure, the article points out that these measurements are “notoriously unreliable”, but it segues away from that quickly, burying the lead. A more appropriate spin is: “There have been lots of measurements over the years from different groups with different approaches. These have all had tiny error bars, but they’ve been all over the map with respect to each other, disagreeing with one another by much more than the error bars explain. Historically, in experimental physics, this has uniformly meant there is a hidden difficulty in performing the measurements that the community has yet to understand. This new measurement from BIPM maintains the situation, but of course that’s all it could do, given the disagreement already present. Hopefully things will be better understood as the experimental endeavors continue.”
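To put a number on “more than the error bars explain”: the usual quick check is the gap between two results in units of their combined uncertainty. A minimal sketch, with placeholder numbers rather than the actual published values:

[code]
# How far apart are two measurements, in units of their combined error bar?
# The values below are illustrative placeholders, not real published numbers.
g1, sigma1 = 6.6730e-11, 3.0e-15   # hypothetical measurement 1 (m^3 kg^-1 s^-2)
g2, sigma2 = 6.6755e-11, 3.0e-15   # hypothetical measurement 2

combined_sigma = (sigma1**2 + sigma2**2) ** 0.5
tension = abs(g1 - g2) / combined_sigma
print(f"discrepancy = {tension:.1f} sigma")  # several sigma or more means trouble
[/code]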

Note that I did not say this measurement is incorrect. This new measurement uses a freshly constructed apparatus (though with the same methods as their earlier result) and has two ways of extracting G information, which eliminates some (but not all) potential sources of “unknown unknowns”. It also has the benefit of all the experiments before it, in terms of what has worked and what hasn’t. But, for better or for worse, there are discrepancies among all the measurements, so no single new measurement can close the book, even if they’ve finally “got it right”, because there is no way to tell right now who got it right. Only time will tell.

Everything I said above is a bit dull to most lay readers. The publication in Physical Review Letters talks only about this stuff, naturally. New Scientist wants to give its readers a little more fun (and who can blame them?), so the NS article spends some time on rather speculative and, let’s say, “underbaked” ideas about G changing. This ties in well with the narrative the article establishes early on, but this narrative must be understood as an artistic backdrop (with some shades of science) on which the writers are sharing the underlying, impressive experimental result.

An independent implementation of the BIPM methods would be very enlightening, as would the continued cross-checking of the other equally valid but equally disparate measurements from recent years.

Thanks. That was interesting but I’m not really getting a feel for the extent to which the noose is being tightened around G or the extent to which the idea of it being variable rather than constant is complete speculation. Can you help with that?

Figure 3 from the BIPM paper[sup]1[/sup] gives a good summary of the modern situation. It shows the results of twelve different measurements since 1982. Many of these are purported to be as precise or more precise than the latest BIPM measurement, but you can see how all over the map they are. CODATA, bless its heart, had to report something even though the historical results are in violent disagreement. That white-dotted result is CODATA’s recommended average. They just ignored the blatant disagreements entirely and took the spread in the measured values as an estimate of the error on the average. This is statistically horrid, but, again, you have to have some value to work with elsewhere.
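For what it’s worth, here is a minimal sketch of that averaging dilemma, using invented stand-in numbers (not the real twelve results): a weighted mean whose “internal” error bar looks tiny, a reduced chi-square that screams inconsistency, and a spread-based expansion of the uncertainty, which is similar in spirit to what CODATA ends up doing.

[code]
import numpy as np

# Hypothetical stand-ins for a handful of published G values, in units of
# 1e-11 m^3 kg^-1 s^-2.  These are NOT the real results -- just numbers with
# realistically tiny error bars that disagree with one another.
values = np.array([6.6740, 6.6749, 6.6734, 6.6755, 6.6742, 6.6730])
errors = np.array([0.0007, 0.0003, 0.0010, 0.0002, 0.0005, 0.0003])

weights = 1.0 / errors**2
mean = np.sum(weights * values) / np.sum(weights)
internal_error = np.sqrt(1.0 / np.sum(weights))   # what the error bars alone imply

# Reduced chi-square: much greater than 1 means the scatter exceeds the error bars.
chi2_per_dof = np.sum(weights * (values - mean)**2) / (len(values) - 1)

# The "take the spread as the error" fix: inflate the quoted uncertainty.
expanded_error = internal_error * np.sqrt(chi2_per_dof)

print(f"weighted mean = {mean:.4f}, internal error = {internal_error:.4f}")
print(f"chi^2/dof = {chi2_per_dof:.1f}, expanded error = {expanded_error:.4f}")
[/code]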

In decades past, people looked for temporal patterns in the results – daily, monthly, yearly, semi-yearly variations. With the full suite of measurements, nothing like this works.
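For concreteness, that kind of search looks roughly like the sketch below: scan trial periods and fit a sinusoid to the (epoch, value, error) list at each one. All numbers here are invented placeholders, not the actual published values or dates.

[code]
import numpy as np

# Toy periodic-variation search: for each trial period P, do a weighted linear
# fit of G(t) = G0 + a*sin(2*pi*t/P) + b*cos(2*pi*t/P) and record the chi^2.
# Epochs, values, and error bars below are made up for illustration only.
t = np.array([1982.0, 1996.5, 2000.1, 2006.3, 2010.7, 2013.7])   # years
g = np.array([6.6726, 6.6740, 6.6742, 6.6749, 6.6734, 6.6755])   # 1e-11 SI units
w = 1.0 / np.array([0.0006, 0.0007, 0.0007, 0.0003, 0.0002, 0.0003])

best = None
for period in np.linspace(0.5, 20.0, 400):          # trial periods in years
    phase = 2 * np.pi * t / period
    A = np.column_stack([np.ones_like(t), np.sin(phase), np.cos(phase)])
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], g * w, rcond=None)
    chi2 = np.sum((w * (g - A @ coeffs)) ** 2)
    if best is None or chi2 < best[1]:
        best = (period, chi2)

print(f"best trial period: {best[0]:.2f} yr, chi^2 = {best[1]:.1f}")
# The real question is whether that chi^2 beats a constant-G fit by more than
# you'd expect from noise; for the actual data, per the above, it doesn't.
[/code]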

As for less “data-driven” ideas on variations in G: I haven’t seen any substantive work on this, and in a reasonable (though not exhaustive) search just now, I didn’t find anything either. This is for variations on human timescales. There is lots of theoretical work being done exploring possible long timescale variations, usually toward explaining away other aspects of the cosmological picture (e.g., dark matter). These ideas haven’t really come into their own yet, lacking real experimental need or consistency with the models. For the possible short-term variations, though, statements like “Either the experiments aren’t quite there yet, or G is changing” are certainly logically sound – those are the only two possibilities – but they aren’t too scientifically resonant.

[sup]1[/sup] Quinn, Parks, Speake, and Davis, Physical Review Letters 111, 101102 (2013)

On a tangent, is there any review literature on the historical measurements of fundamental quantities? It would be interesting to compare the graph for G with that of, say, the speed of light, the fine structure constant, etc., from the times where the measurements of those quantities could not be accomplished with present day accuracy. I remember seeing a graph of the value of some quantity over the years—I think it was perhaps e/m—which started out much smaller than today’s accepted value, and only gradually crept upwards, but that could be a garbled recollection.

Another point: G by itself is a very difficult quantity to measure, but the product of G and the mass of any given celestial body (most often the Sun or the Earth) is much easier to measure, and you don’t see this sort of disagreement in measurements of those quantities.
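A quick numerical illustration of that point (the numbers are rough, from-memory magnitudes, not authoritative values): GM for the Earth is pinned down by satellite tracking at roughly the part-per-billion level, while G is only good to tens of parts per million at best, so the uncertainty in the Earth’s mass M = GM/G is essentially all inherited from G.

[code]
# Rough, from-memory magnitudes -- for illustration of the GM-vs-G point only.
GM_earth   = 3.986e14    # m^3/s^2, known very precisely from satellite orbits
rel_err_GM = 2e-9        # roughly part-per-billion level
G          = 6.674e-11   # m^3 kg^-1 s^-2
rel_err_G  = 5e-5        # tens of parts per million, at best

M_earth = GM_earth / G
# Fractional errors of a quotient add in quadrature; G's term utterly dominates.
rel_err_M = (rel_err_GM**2 + rel_err_G**2) ** 0.5
print(f"M_earth ~ {M_earth:.3e} kg, fractional uncertainty ~ {rel_err_M:.1e}")
[/code]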

I’ve never come across a comprehensive review article, but I bet one’s out there. If you look for any particular constant, you can usually find something. Here’s one for Planck’s constant, for instance. The PDG, for curiosity’s sake, includes a page along these lines in the introduction to the Particle Data Book. The quantities there aren’t of general interest, but some important features can be seen (for instance, error bars in complex measurements are not always what they seem).

Apparently G is notoriously hard to measure on Earth. Is it a good candidate for a space-based microgravity environment? That would at once eliminate rainfall, differential solar heating, wind, elevators, and other Earth-based stuff that can cause tiny errors in the measurement.

True – but I wonder if the space environment won’t introduce confounding factors of its own (e.g. solar wind, cosmic rays, and perhaps even micrometeors).

There are ways to deal with those. Basically, you build a shell around your experiment, and put stationkeeping rockets on your shell, not on your experiment itself. If micrometeors or the solar wind or whatever knock your shell off-center, you use the rockets to correct for it before it hits the experiment inside.
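That shell-and-thrusters scheme is essentially what “drag-free” satellites do. A toy sketch of the idea, with invented parameters and gains, just to show the control logic:

[code]
# Toy sketch of the shell-around-the-experiment idea: the experiment floats
# freely; the shell measures its own offset relative to the experiment and
# fires thrusters to re-centre itself before the gap closes.  All numbers
# (gap, gains, kick size) are invented for illustration.
dt, gap = 0.1, 0.05          # control time step (s), shell-experiment gap (m)
x, v = 0.0, 0.0              # shell position/velocity relative to experiment

for step in range(600):
    if step == 50:
        v += 1e-4            # e.g. a micrometeoroid or solar-wind kick (m/s)
    thrust = -(2.0 * x + 3.0 * v)   # simple proportional-derivative command
    v += thrust * dt
    x += v * dt
    assert abs(x) < gap, "shell touched the experiment"

print(f"final shell offset: {x:.2e} m -- disturbance absorbed by the shell")
[/code]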

The tradeoff, as with everything done in space, is money. Putting anything in space is expensive, and maybe you can better use that money by building fifty copies of your experiment on Earth, or something of that sort. If you can even get that much money at all, which you probably can’t.