Gravity at large scales - measurement?

The question is inspired by Quantum mechanics, where the laws we find intuitive at our scale of perception are not applicable at the quantum level.

  1. From Newton's gravitational law, F = GM1M2/R^2. I remember reading in a book that the uncertainty in the exponent of R (i.e. 2) had been measured from observational data, and it was very, very close to 2. My question is: what is the largest separation R over which we have performed measurements to validate this exponent?

  2. Parallel to QM, whose laws are very different at the very small scale, is there any reason to believe that the laws of gravity are very different at the very large scale? (Very large as in the spacing between galaxies.)

I'm not a physicist, but I am a metrologist, so I'd say that the uncertainty in the gravitational equation comes from 'G' rather than from 'r'; any uncertainty in 'r' would be down to the measurement of 'r' itself. But maybe a physicist will have a different perspective.

At any rate, with the SI being redefined and becoming official later this year, the constants used to define the units will have zero uncertainty: the units will be defined by fixing the values of those constants, and the uncertainties get shifted onto things like measuring 'r' in your equation.
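To put a rough number on how those relative uncertainties combine for F = GM1M2/r^2, here is a minimal Python sketch (the uncertainty values for the masses and for r are my own illustrative assumptions; only the order of magnitude for G is roughly right):

  # For a product/quotient of independent quantities, relative variances add:
  #   (dF/F)^2 = (dG/G)^2 + (dM1/M1)^2 + (dM2/M2)^2 + (2*dr/r)^2
  # Note how the exponent 2 on r doubles r's relative contribution.
  import math

  rel_G  = 2.2e-5   # relative standard uncertainty of G (roughly the right order)
  rel_M1 = 1.0e-4   # assumed relative uncertainty of mass 1 (illustrative)
  rel_M2 = 1.0e-4   # assumed relative uncertainty of mass 2 (illustrative)
  rel_r  = 1.0e-6   # assumed relative uncertainty of the separation r (illustrative)

  rel_F = math.sqrt(rel_G**2 + rel_M1**2 + rel_M2**2 + (2 * rel_r)**2)
  print(f"relative uncertainty in F ~ {rel_F:.1e}")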

It’s well known that the visible matter in galaxies (if that’s all there is) does not conform to Newtonian dynamics.
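For a rough numerical illustration of that mismatch, here is a Python sketch (the 'galaxy' is a crude point-mass stand-in with a made-up visible mass, just to show the expected Keplerian falloff, not a fit to real data):

  # Keplerian expectation: well outside most of the visible mass, orbital
  # speed should fall off as v ~ sqrt(G*M/r), i.e. roughly as r**-0.5.
  import math

  G = 6.674e-11        # m^3 kg^-1 s^-2
  M_visible = 1.0e41   # kg, crude stand-in for a galaxy's visible mass (assumed)

  for r_kpc in (5, 10, 20, 40):
      r = r_kpc * 3.086e19                 # kiloparsecs to metres
      v = math.sqrt(G * M_visible / r)
      print(f"r = {r_kpc:3d} kpc -> Keplerian v ~ {v/1000:6.1f} km/s")
  # Measured rotation curves instead stay roughly flat out to large r;
  # that mismatch is what needs explaining.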

MOND was a proposal to try to explain this. But it’s not a good explanation, for reasons laid out here:

Dark matter is a far better explanation, since dark matter is required for other reasons, notably to explain CMB anisotropy. See here:

Large scales… currently we have measurements of lunar and planetary orbits; in the future we should be able to add improved gravitational-wave astronomy to that. One should not focus exclusively on long-range experiments, though, especially since quantum effects are profoundly interesting.

While gravity might deviate from our models over very large ranges, nobody expects that that variation would take the form of a different exponent. The simplest sort of long-range variation would be due to a nonzero mass of the graviton, which would mean that there should be an exponential falloff as well as the inverse-square one. And the best current bounds on the mass of the graviton are actually based on gravitational waves, not on orbits. Deviations in the value of the exponent, if they exist, are mostly expected to show up on extremely short length scales (fractions of a millimeter or less), where it’s very difficult to measure gravity (but some intrepid experimentalists do it anyway).
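A hedged sketch of what that massive-graviton (Yukawa-type) falloff would look like, in Python; the cutoff scale lam here is an arbitrary illustrative number, not a measured bound:

  # Massive-graviton-style modification: the Newtonian potential picks up a
  # Yukawa factor, V(r) = -(G*M*m/r) * exp(-r/lam), where lam is the graviton's
  # Compton wavelength.  For r << lam it looks exactly like 1/r; for r >> lam
  # it dies off exponentially rather than changing the exponent.
  import math

  def newton_potential(r, GMm=1.0):
      return -GMm / r

  def yukawa_potential(r, lam, GMm=1.0):
      return -GMm * math.exp(-r / lam) / r

  lam = 100.0   # illustrative cutoff scale, arbitrary units (an assumption)
  for r in (1.0, 10.0, 100.0, 1000.0):
      ratio = yukawa_potential(r, lam) / newton_potential(r)
      print(f"r = {r:7.1f}: Yukawa/Newton = {ratio:.4f}")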

The inverse-square law makes sense given that the surface area of a sphere is proportional to the square of its radius in flat Euclidean space.
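For reference, the usual quick version of that argument: take a point mass M and a sphere of radius r centred on it. The total gravitational flux through the sphere is fixed (Gauss's law for gravity), so

  g(r) * 4*pi*r^2 = 4*pi*G*M,  which gives  g(r) = G*M/r^2.

The exponent 2 in the force law is exactly the exponent 2 in the area of the sphere.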

But what if space is curved? Would the r^-2 relationship become more complicated? Does Einstein's General Theory already encompass this?

Yes, the general two-body problem is not so simple… I have seen some animations of merging black holes, though, thanks to numerical simulation.

Asaph Hall, a prominent astronomer in the 19th century, proposed altering the exponent to explain various observed deviations from Newton's laws, but even before Einstein came along the idea had been discredited by observations that wouldn't fit with the altered exponent. See The Cambridge Companion to Newton (Google Books).

While Newton's law F = GM1M2/R^2 is used in relativistic calculations when it is accurate enough, perturbation theory brings limits and exceptions: when you need more accuracy than the classical model provides, essentially only G carries over.

Remember that, while a useful concept, gravity is a fictitious (apparent) force that arises from being in an accelerated reference frame. There is also a limit you hit because gravity under Newtonian physics acts instantaneously across the entire universe, while we now know its influence is limited by the speed of light, i.e. the speed of causality. When the Newtonian force is not being used to simplify the math, the curvature of spacetime is the best accepted current model.

The general relativity model works all the way up to black holes and neutron stars, and to an accuracy such that, if someone asked the distance from you to the Moon, the error would be less than the distance from the top of your head to your eyebrows. Where it does fall down is when the numbers get very, very small, like trying to map the effects at scales beyond our Local Group. But to be honest, that probably has more to do with computers' inaccuracies with really small numbers than with anything fundamental.
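As a rough back-of-the-envelope check on that analogy, in Python (the numbers are approximate):

  # The head-to-eyebrows analogy corresponds to a fractional error of roughly
  # a few centimetres out of the Earth-Moon distance.
  moon_distance_m = 3.844e8   # mean Earth-Moon distance, metres
  error_m = 0.07              # ~ top of head to eyebrows, approximate

  print(f"fractional error ~ {error_m / moon_distance_m:.1e}")   # ~2e-10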

As others have mentioned, the most accurate tests will not be using the Newtonian model, but the largest-scale test of the equivalence principle is probably this one from July 4th in Nature.

While QM and GR are in conflict over the quantization of gravity, and neither theory is *complete* because of this disagreement, the two disagreeing theories are still sufficient to explain why gold is the color it is, and why your wedding ring doesn't explode when you wash your hands. The color comes from relativistic contraction of gold's s-orbitals, which shifts its absorption (for most metals it sits in the UV) down into our visible range (an effect that can be answered in SR with less math). The tools work in useful ways, and GR and QFT are the most accurate models we have ever developed; the link above pretty much destroyed the MOND theories by putting the experimental results into far too narrow a band for them to work (that had already been done, btw).

We do need a unifying theory, but the changes it will make to the answers of the math we already use will be very small.

A note on my link to that Nature article: neutron stars and white dwarfs are extreme examples, showing that the theory holds even at densities supported by neutron degeneracy pressure, which relates directly to the tightening constraints on Modified Newtonian Dynamics.

Progress on the n-body problem is primarily limited today by:

  1. A lack of modern visits to the outer planets like Neptune, so we lack accurate mass models, which limits our ability to track the solar-system barycenter to better than about a meter.
  2. The lack of an analytical solution, and the choice to use conics to reduce computational cost in the numerical versions (really the only practical option, given computational costs; see the sketch after this list).
  3. Limits on compute power, the accuracy of existing CPUs' floating point, and the computational cost of non-hardware-accelerated alternatives.
  4. An incomplete inventory of objects.
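A minimal sketch of what the purely Newtonian numerical side of that looks like, in Python (a toy leapfrog integrator with made-up two-body values, not a real ephemeris; real ephemeris work layers post-Newtonian corrections, asteroid masses and so on over something structurally like this):

  import math

  G = 6.674e-11  # m^3 kg^-1 s^-2

  def accelerations(positions, masses):
      # Pairwise Newtonian gravitational accelerations.
      n = len(masses)
      acc = [[0.0, 0.0, 0.0] for _ in range(n)]
      for i in range(n):
          for j in range(n):
              if i == j:
                  continue
              dx = [positions[j][k] - positions[i][k] for k in range(3)]
              r = math.sqrt(sum(d * d for d in dx))
              f = G * masses[j] / r**3
              for k in range(3):
                  acc[i][k] += f * dx[k]
      return acc

  def leapfrog_step(positions, velocities, masses, dt):
      # Kick-drift-kick leapfrog: symplectic, so orbits stay stable long-term.
      acc = accelerations(positions, masses)
      for i in range(len(masses)):
          for k in range(3):
              velocities[i][k] += 0.5 * dt * acc[i][k]
              positions[i][k] += dt * velocities[i][k]
      acc = accelerations(positions, masses)
      for i in range(len(masses)):
          for k in range(3):
              velocities[i][k] += 0.5 * dt * acc[i][k]

  # Toy Sun plus Earth-like planet (approximate values, illustrative only).
  masses = [1.989e30, 5.972e24]
  positions = [[0.0, 0.0, 0.0], [1.496e11, 0.0, 0.0]]
  velocities = [[0.0, 0.0, 0.0], [0.0, 2.978e4, 0.0]]

  for _ in range(365):
      leapfrog_step(positions, velocities, masses, dt=86400.0)
  print("planet position after ~1 year:", positions[1])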

While it is common for people to claim that GR has only a tiny impact, the first post-Newtonian correction term is actually the second-biggest factor, after tidal forces, in the solar-system n-body problem. These largest terms are hidden inside modifications of the classical forms, through tools like the post-Newtonian expansion, which maps directly onto perturbation theory and adds complexity only where it is required.
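To give a feel for the size of that first correction, in Python (the point is just the order of magnitude):

  # The leading post-Newtonian correction enters at order GM/(c^2 * r), which
  # is about (v/c)^2 for a bound orbit, relative to the Newtonian term.
  G = 6.674e-11      # m^3 kg^-1 s^-2
  c = 2.998e8        # m/s
  M_sun = 1.989e30   # kg

  for name, r in (("Mercury", 5.79e10), ("Earth", 1.496e11), ("Neptune", 4.495e12)):
      eps = G * M_sun / (c**2 * r)
      print(f"{name:8s}: GM/(c^2 r) ~ {eps:.1e}")
  # Tiny fractions, but they accumulate over many orbits into measurable
  # effects (Mercury's extra perihelion precession being the classic example).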

As Moore's law has ended, and we are no longer increasing individual CPU core performance at an exponential rate, this may limit our ability to model the entire solar system to centimeter accuracy; but that should still be reachable with a few more probes to the outer solar system. So hope for some breakthroughs.

Moore's Law hasn't ended; it's just shifted more toward parallel computation rather than serial. Some problems are tough to solve in parallel, but most real-world big computational problems are easy to parallelize.

We are close to the end, or past it, depending on how willing you are to change the terms it uses:

https://www.nature.com/articles/s41928-017-0005-9

And there are limits to the speedup from parallel operations, which will break it even with more cores. And without overall feature size continuing to shrink (the definition of which was changed to keep the press positive), costs will not go down at the same rate.

Even if we can get down to single-electron devices, the rule will only hold for a decade or so more. If we do hit 5 nm around 2022, which most calculations suggest won't provide a significant advantage over 7 nm, it will end even under the modern version that ignores performance per cost.

As the workloads I mentioned above require shared data, and thus locks to avoid race conditions, latency will be a huge limiter. While methods like HBM2 placed physically close to the die will help a bit, they will not deliver exponential changes in performance, although the application I was talking about would benefit from a larger floating-point mantissa, or hardware decimal support like POWER9 provides.

But per-core performance improvements have been pretty minimal for a while now, and because orbital mechanics needs complex numbers and quaternions, some new features like FMA are not a straightforward option. Floating-point arithmetic is non-associative and quaternion multiplication is non-commutative, which breaks both the parallel options and a large number of the SIMD operations.

Note that even the accelerometer in your cell phone uses quaternions, as do orbital and celestial mechanics.
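A quick Python illustration of that non-commutativity (a hand-rolled Hamilton product, no external library assumed):

  # Quaternion multiplication is associative but NOT commutative, so in
  # general q1*q2 != q2*q1: the order of rotations matters, and you can't
  # freely reorder the operations the way vectorised code would like.
  def qmul(a, b):
      w1, x1, y1, z1 = a
      w2, x2, y2, z2 = b
      return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
              w1*x2 + x1*w2 + y1*z2 - z1*y2,
              w1*y2 - x1*z2 + y1*w2 + z1*x2,
              w1*z2 + x1*y2 - y1*x2 + z1*w2)

  q1 = (0.7071, 0.7071, 0.0, 0.0)   # ~90 degree rotation about x
  q2 = (0.7071, 0.0, 0.7071, 0.0)   # ~90 degree rotation about y

  print(qmul(q1, q2))
  print(qmul(q2, q1))   # different result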

While this is an example of bad assumptions by the team that makes dotnet, Skylake-X has a non-exclusive cache in part due to the scaling issues related to Amdahl's law that I mentioned above. While Amdahl's law is pessimistic, it is very problematic with datasets that need to reference the output of other calculations. Workloads like the n-body problem tend to hit scalability limits at 4-8 threads just due to lock contention and latency.

But parallelization has a fairly hard limit if you can't break the work into independent chunks.
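That limit is Amdahl's law; a minimal Python sketch (the 90% parallel fraction is just an assumed figure for illustration):

  # Amdahl's law: if a fraction p of the work can run in parallel,
  # speedup(n) = 1 / ((1 - p) + p / n), which is capped at 1 / (1 - p)
  # no matter how many cores you add.
  def amdahl_speedup(p, n):
      return 1.0 / ((1.0 - p) + p / n)

  p = 0.90   # assumed parallel fraction for a lock-heavy workload
  for n in (1, 2, 4, 8, 16, 64, 1024):
      print(f"{n:5d} cores -> speedup {amdahl_speedup(p, n):5.2f}x")
  print(f"hard ceiling: {1.0 / (1.0 - p):.1f}x")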

If the exponent in the Newtonian gravity equation were other than 2, the orbit in a 2-body problem would not be an ellipse. I think this was known even before Newton (Robert Hooke, IIRC).

Unfortunately, in the quantum realm, and in solar systems slightly more complex than ours, orbits are not ellipses. But it is very handy when the problem reduces to that assumption.

Exponents very close to 2 would result in orbits that are very close to ellipses. Get close enough, and you might not be able to tell the difference, at least until you observed for long enough, or got better measurements.
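A rough numerical illustration of that, in Python (the exponent 2.01 and the toy units are arbitrary; the point is only the qualitative difference):

  # Integrate one test body under F ~ 1/r**p for p = 2 and p = 2.01.
  # With p = 2 the orbit closes (an ellipse); with p slightly off 2 you get
  # a near-ellipse whose orientation slowly precesses.
  import math

  def apoapsis_angles(p, steps=40000, dt=5e-4):
      # Toy units: GM = 1; start at r = 1 with a bit less than circular speed,
      # so the starting point is the farthest point of the orbit (angle 0).
      x, y = 1.0, 0.0
      vx, vy = 0.0, 0.9
      angles = []
      prev_r = prev_prev_r = None
      for _ in range(steps):
          r = math.hypot(x, y)
          ax, ay = -x / r**(p + 1), -y / r**(p + 1)
          vx += ax * dt
          vy += ay * dt
          x += vx * dt
          y += vy * dt
          # record the direction of each apoapsis (local maximum of r)
          if prev_prev_r is not None and prev_prev_r < prev_r > r:
              angles.append(math.degrees(math.atan2(y, x)))
          prev_prev_r, prev_r = prev_r, r
      return angles

  for p in (2.0, 2.01):
      angles = apoapsis_angles(p)
      print(f"p = {p}: apoapsis directions (deg) ~ {[round(a, 1) for a in angles[:4]]}")
  # For p = 2 the direction stays essentially fixed; for p = 2.01 it drifts
  # a little more each orbit.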

There was the Pioneer anomaly. The two Pioneer spacecraft were (are) moving a bit off compared to Newtonian calculations. While the issue was still up in the air, several explanations were proposed, including variations of Newtonian gravity.

Note that the effect became noticeable once they were about 20 AU out: far enough out that we didn't have a lot of very precise data (including masses) on many objects there, but close enough that we could still measure speed and distance quite accurately.
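For a sense of scale, a quick Python comparison (the anomaly figure is the commonly quoted value, of order 1e-9 m/s^2; treat it as approximate):

  # Compare the Sun's Newtonian pull at ~20 AU with the reported anomalous
  # sunward acceleration of the Pioneer probes.
  G = 6.674e-11       # m^3 kg^-1 s^-2
  M_sun = 1.989e30    # kg
  AU = 1.496e11       # m

  r = 20 * AU
  a_newton = G * M_sun / r**2
  a_anomaly = 8.7e-10          # m/s^2, approximate reported value

  print(f"solar gravity at 20 AU : {a_newton:.2e} m/s^2")
  print(f"Pioneer anomaly        : {a_anomaly:.2e} m/s^2")
  print(f"ratio                  : {a_anomaly / a_newton:.1e}")
  # A few parts in 1e5 of the solar pull: small, but within Doppler-tracking
  # precision.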

OTOH, most such proposals were shot down thanks to our knowledge of things like Neptune's moons, where the data, while not as accurate, covered a long enough period to rule out a bunch of possibilities.

(Spoiler: it was thermal effects.)

There is sort of a trade-off in where we might expect such anomalies. 20 AU, with quite precise knowledge of all the parameters, is apparently in the "we're fine here" zone. But going out to interstellar or intergalactic distances, the precision of the data gets progressively worse.

That’s why exponents other than 2 were proposed to explain the orbit of Mercury, which isn’t quite an ellipse.

While not at the galactic scale, though more relevant than is easy to convey, this paper was just published yesterday.

Detection of the gravitational redshift in the orbit of the star S2 near the Galactic centre massive black hole

They claim that they expect to have a 5 sigma result in the next few years.

While it is a bummer that ESA’s Euclid launch was delayed to 2021, it will hopefully answer more questions at the galactic scale.

But note that the results from the above paper are inconsistent with pure Newtonian models, though that was the expected result.

A new study out:

Here is the article from the Astrophysical Journal:

https://iopscience.iop.org/article/10.3847/1538-4357/ace101/pdf