Yesterday’s Fermilab newsletter referenced this article in New Scientist.
More at link
From the New Scientist article
So the same setup produces similar results. I’m calling experimental error until 2 or 3 other teams reproduce the same thing.
Yes. I did read the whole article too. Can’t say I really followed what they were getting at, but then that’s why this is in GQ.
Also, I would point out that the Fermilab newsletter is written by the people who run one of the world’s premier facilities for particle physics research. Even with the Tevatron gone, they still have one of the most powerful accelerators, I believe, although I don’t actually know the rating of the Main Injector. I do know that it’s more than half a mile in diameter, if that gives you some idea.
So for them to reference an article, you should probably take it seriously.
Who said I didn’t? It’s interesting, but then again, so were superluminal neutrinos, and I was equally skeptical about that.
Besides, any article coming from the BIPM dealing with G is going to get mentioned in an institutional newsletter. That says nothing about the validity of the claim. Take the Fermilab newsletter article about faster-than-light neutrinos from October 2011. You’ll note that neutrinos are still not faster than light.
Your statement was: “I’m calling experimental error until 2 or 3 other teams reproduce the same thing.” That sounds a lot like prejudging things to me.
And comparing this to the neutrino situation is completely bogus. That was ONE experiment. Here we have hundreds using a variety of methods. So the experiments have already been done. Maybe not precisely the same experiments that are the subject of the article, but G has been measured countless times, so the analogy is completely inapposite.
As for Fermilab referencing the neutrino fiasco, that’s all they did: reference it. And that was completely appropriate, as is this. Neither reference was an endorsement, only an indication that these are things of scientific significance, which they are. Or, in the neutrino case, would certainly have been, were it not merely an issue of bad measurement, which seems very unlikely here given the vast number of experiments already done.
So variables won’t, and constants aren’t. Gotcha.
You continue to misunderstand me.
First off, the neutrino experiment was not a fiasco at all. It was an experiment repeated over and over again by a single team that consistently produced faster-than-light velocities. They worked to eliminate every possible variable and then released their findings to the wider community to investigate. It turns out others noticed a systematic issue. I’d call that a shining example of personal integrity and a validation of the scientific method.
In this instance we have a fundamental constant and a team that has 2 values, both on the high end of that constant. So I’m intrigued, but my default assumption is experimental error.
Maybe I am misunderstanding you but I’m quite sure that you’re one of the few people who doesn’t see that as a fiasco.
As for the values, the one reported by the BIPM was higher but another was lower, so I don’t know which 2 you’re talking about.
And the article itself concedes that it may be experimental error. As was noted in an earlier quote, the official value of G is an estimate, so that’s not very remarkable. This is why I’m posing the question here. I don’t need someone to tell me what’s already in the article; I’m looking for additional background.
So your question was what, exactly? Is G constant? The answer is yes, as far as we know, though measuring its value is difficult, and recently a team has obtained a larger value that needs to be corroborated by other teams.
Or was your question “Is there evidence of changing G when we look back in time?”, “Is there evidence of changing G in the orbital decay rate of neutron star pairs?”, or maybe “Is there a consistent growth in the experimental value of G over time?”
The short answer is that it is always, always, always, always experimental error until a dozen separate teams in disparate places using different methods provide a new answer.
That has not happened.
It’s good science, and it’s worth keeping an eye on, but it’s almost certainly experimental error of some sort or another. Not only do the vast majority of experiments all (roughly, to within known random error) agree with each other (and disagree with this result), but there are also other lines of research that show that, whatever the value of G is, it’s constant or very close to it.
If G varies with time, then the orbits of planets would vary with time, too, as would things like the luminosity of stars. Worse, the effects are in the same direction, so you’d get both the Earth being closer to the Sun and the Sun being hotter, at the same time (or vice-versa, of course, if the change is the other way). There are a variety of ways to tell what the temperature of the Earth was in the past, and they’ve never been even close to consistent with what varying-G models would predict (you tend to get things like the Earth being above the boiling point of water a billion years in the past, which is very blatantly not true).
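To put rough numbers on that, here’s a quick Python back-of-the-envelope. It leans on two scalings that are common in varying-G discussions but are assumptions on my part here: with orbital angular momentum conserved, a Kepler orbit’s radius goes as a ~ 1/G, and a Sun-like star’s luminosity goes roughly as L ~ G^4 (steeper under other opacity assumptions). Earth’s equilibrium temperature then scales as T ~ (L/a^2)^(1/4) ~ G^(3/2):

```python
# Back-of-the-envelope: how Earth's equilibrium temperature would scale if
# G had been larger in the past. Assumptions (mine, for illustration):
#   - orbital angular momentum conserved, so orbital radius a ~ 1/G
#   - Sun-like luminosity L ~ G**4 (rough; steeper for other opacities)
# Equilibrium temperature then scales as T ~ (L / a**2)**0.25 ~ G**1.5.

T_NOW = 288.0  # present mean surface temperature of Earth, kelvin (approx.)

def temp_if_g_were(g_ratio, lum_exponent=4.0):
    """Equilibrium temperature if G were g_ratio times its present value."""
    a_ratio = 1.0 / g_ratio              # orbit shrinks as G grows
    lum_ratio = g_ratio ** lum_exponent  # star brightens as G grows
    return T_NOW * (lum_ratio / a_ratio**2) ** 0.25

for g in (1.00, 1.05, 1.10, 1.20):
    print(f"G x {g:.2f}  ->  ~{temp_if_g_were(g):.0f} K")
```

Under those assumptions, a 10% larger G pushes the mean temperature up by about 15%, to roughly 330 K, and a 20% larger G puts it past boiling. And both knobs turn the same way, which is the point above.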
My question is the question in the article, and I’m sorry neither of you is able to understand it.
1. Is it possible that G is not really constant?
2. Why has it been so difficult to measure more precisely?
3. What are the implications of either
3a. a force that independently oscillates in intensity, or
3b. a force whose intensity changes due to a fifth force or gauge field?
If it oscillates, but only by parts per million, how would that matter? Oscillations, by their very nature, eventually cancel out, don’t they? So if they’re very small and average to zero, I’m having a hard time understanding how they would affect things on a cosmological scale.
If it oscillates, then you wouldn’t get secular effects like the Earth boiling, but you’d expect to see some sort of resonance with systems with the same period. Just talking solar system tests, I’d expect to see a band of distances from the Sun with a suspiciously low object density, since objects at that distance would be knocked out by the resonance. Or maybe it’d have to be a galactic test instead of a solar system one, depending on the period. Granted, it’s possible that there is such a band and we just haven’t noticed it yet.
Alternately, it might vary around a central value, but in a non-periodic way. But then it’s a lot harder to explain what underlying mechanism would cause that to happen.
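For what it’s worth, the resonance intuition is easy to see in a toy model. Here’s a minimal sketch in plain Python; the standard parametric-resonance equation stands in for an orbit feeling a periodically wiggling G, which is my simplification, not anything from the article:

```python
import math

# Toy model: x'' = -w0^2 * (1 + eps*cos(w_drive*t)) * x. A tiny periodic
# modulation of the restoring force (standing in for a tiny oscillation
# in G) averages to zero, yet still pumps energy into the oscillator when
# the drive frequency hits the parametric resonance at w_drive = 2*w0.

def peak_amplitude(w_drive, eps=0.01, w0=1.0, steps=200_000, dt=0.01):
    """Integrate the oscillator and return the largest |x| seen."""
    x, v, peak = 1.0, 0.0, 1.0
    for i in range(steps):
        t = i * dt
        a = -w0**2 * (1.0 + eps * math.cos(w_drive * t)) * x
        v += a * dt   # semi-implicit Euler: stable for oscillators
        x += v * dt
        peak = max(peak, abs(x))
    return peak

for w in (0.7, 1.0, 2.0, 3.0):
    print(f"drive frequency {w:.1f}: peak amplitude {peak_amplitude(w):10.2f}")
```

Off resonance, the 1% wiggle does essentially nothing; at twice the natural frequency, the amplitude grows by a couple of orders of magnitude over the same span. That’s the mechanism behind the “band of suspiciously low object density” idea: orbits whose frequencies hit the resonance get pumped, and everything else barely notices.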
Oh, that’s an easy question to answer. G is small enough that the forces between any objects you can fit in an Earthbound laboratory are extremely weak, so you have to very precisely account for all of the other extremely weak forces in your experiment. A little excess static charge, or a little air resistance from a vacuum chamber that’s not quite good enough, can ruin the whole measurement, as can any number of other possible error sources.
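To hang numbers on “extremely weak,” here’s a quick comparison; the masses, separation, and stray charge are illustrative picks of mine, not the parameters of any actual experiment:

```python
# Gravitational pull between two 1 kg spheres 10 cm apart, versus the
# Coulomb force from a single stray nanocoulomb of static charge on each.
# All numbers below are illustrative choices, not a real experiment.

G = 6.674e-11  # gravitational constant, N m^2 / kg^2
K = 8.988e9    # Coulomb constant,       N m^2 / C^2

m, r, q = 1.0, 0.10, 1e-9  # kg, m, C

f_grav = G * m * m / r**2  # Newton's law of gravitation
f_stat = K * q * q / r**2  # Coulomb's law for the stray charge

print(f"gravity:       {f_grav:.2e} N")  # ~6.7e-09 N
print(f"static charge: {f_stat:.2e} N")  # ~9.0e-07 N, over 100x gravity
```

A single nanocoulomb is a trivially small amount of static charge, and it already swamps the gravitational signal by two orders of magnitude. That’s the precision game every G measurement is playing.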
Thanks anyway.
Yes, but you’re assuming a universal oscillation, and I don’t think that’s the assumption in the article. I got the impression that the variations they’re looking for are local. If the oscillations are local, those considerations wouldn’t apply.
Yeah, that’s what I figured. But even so, with Josephson junction devices, interferometers, etc., it seems to me that we ought to be capable of sufficient precision that this shouldn’t be such an issue. But I’ll take your word for it. Thanks.
Never mind.
No problem. Still, betting on hypothetical new forces over experimental error seems a poor choice. We’ll see once a few more independent teams run the experiment.
I’m not “betting” on anything. I respect the people at Fermilab and pay attention when they point me to something. I find this possibility intriguing. If you don’t, that’s fine.