Can a theory be entirely wrong and still make correct predictions?

The biggest difficulty is that mass isn’t necessarily additive.

Consider, for example, a system which consists of an electron and a positron, sitting right next to each other, at rest in some reference frame. An electron has a mass of 511 keV/c[sup]2[/sup], and a positron has the same. The entire system of electron + positron has a mass of 1022 keV/c[sup]2[/sup]. Now, after a short time, this system will undergo an interaction, and turn from an electron and a positron into a pair of photons moving in opposite directions. If you look at a system consisting of a single photon, you’ll find that it has zero mass. The temptation, then, is to say that since each individual photon has no mass, the total system consisting of both photons therefore also has no mass. But you can’t get the mass of the total system just by adding together the masses of the subsystems: A pair of photons moving in opposite directions actually does have a mass. In this case, the mass of the pair of photons is 1022 keV/c[sup]2[/sup], just like the initial system.
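To make that concrete, here's a quick back-of-the-envelope sketch (my own illustration, not part of the post above): sum the two photons' four-momenta and take the invariant mass of the total, with energies in keV and c set to 1.

```python
import math

def invariant_mass(particles):
    """Invariant mass of a system: m = sqrt((sum E)^2 - |sum p|^2), with c = 1."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - (px**2 + py**2 + pz**2))

# Two 511 keV photons heading in opposite directions along x; for a photon, E = |p|.
photon_a = (511.0,  511.0, 0.0, 0.0)   # (E, px, py, pz) in keV
photon_b = (511.0, -511.0, 0.0, 0.0)

print(invariant_mass([photon_a]))            # 0.0    -> a single photon is massless
print(invariant_mass([photon_a, photon_b]))  # 1022.0 -> the pair has mass 1022 keV/c^2
```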

I remember from my philosophy classes there was a great one (Socrates? Aristotle?) that postulated that matter was made up of very small particles shaped like a Z, and because of the shape they all hooked together to form what appears to us to be solid matter. Hence, the forerunner of atoms, etc.

The motion of the planets can only be modeled using a geocentric reference frame by using a lot of patched-together and ad hoc approximations. Using a heliocentric reference frame, however, allows you to use Kepler’s Laws of Planetary Motion, which are so simple that there are only three of them, each one sentence long and requiring only a rudimentary knowledge of trigonometry, or more generally (using calculus) Newton’s Laws of Motion and Gravitation.

These are themselves, of course, approximations to a more general theory (General Relativity) which accounts for the way space is distorted in the presence of large masses, but they are sufficient to account for observations and trajectories out to about twelve decimal places.
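Just to show how simple the heliocentric description is, here's a rough sketch (mine, with approximate textbook values) of Kepler's third law, T[sup]2[/sup] = a[sup]3[/sup], with T in years and a in astronomical units:

```python
# Kepler's third law: T^2 = a^3 (T in years, a in AU), for bodies orbiting the Sun.
planets = {          # semi-major axes in AU, approximate
    "Mercury": 0.387,
    "Venus":   0.723,
    "Earth":   1.000,
    "Mars":    1.524,
    "Jupiter": 5.203,
}

for name, a in planets.items():
    print(f"{name}: {a ** 1.5:.2f} yr")
# Mercury ~0.24, Venus ~0.61, Earth 1.00, Mars ~1.88, Jupiter ~11.87,
# all within about a percent of the observed orbital periods.
```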

One or more tenets of a theory can be fundamentally wrong and yet give approximately right answers; better yet, a theory can be based on mathematics that works out, or on a model tuned to provide correct results within a certain regime, and make accurate predictions within that regime, and still be wrong. Others, like (say) the central dogma of molecular biology (“Genetic information is transferred only from nucleic acid to protein, not protein to nucleic acid or protein to protein”), might be generally accurate but frequently violated by exceptions to the rule.

This is why no theory in science is ever complete; all theories are subject to falsification. It’s just that some have been tested to an extensive degree without any sign of violation.

Stranger

It may be easier to say that energy and momentum are always conserved, and back out the total mass of the system, or of any component within it, from those conserved quantities.
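For instance (a toy illustration of my own, not from the quoted post): take the same back-to-back photon pair, but viewed from a frame moving at 0.6c along their axis. The photon energies Doppler-shift to 255.5 and 1022 keV, the system now has net momentum, and the mass still falls out of the conserved totals:

```python
import math

# The same photon pair, seen from a frame moving at 0.6c along x (energies and
# momenta in keV, c = 1; the two 511 keV photons Doppler-shift to 255.5 and 1022 keV).
photons = [(255.5, 255.5, 0.0, 0.0), (1022.0, -1022.0, 0.0, 0.0)]

E_tot  = sum(E  for E, px, py, pz in photons)
px_tot = sum(px for E, px, py, pz in photons)   # py and pz are zero here

mass = math.sqrt(E_tot**2 - px_tot**2)
print(mass)   # 1022.0 keV/c^2 again: the invariant mass doesn't care about the frame
```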

Stranger

The OP isn’t well-worded. What is meant by “entirely wrong”? Almost any theory will be correct some of the time, or it wouldn’t even be considered as a possible answer to the hypothesis. Do you mean that the theory provides a complete explanation of the problem? I would interpret that to mean that the theory is correct. Any theory that explains the problem is tentatively correct until it can’t explain one observation. Then it isn’t. So in this sense, almost all theories that turn out to be wrong satisfy what I think is the OP’s question.

I wrote an article about this a while back. In the early years of spectroscopy in the 19th century, researchers discovered the discrete spectral lines associated with each element, and saw how they could be used to identify the elements (many of the names of the elements so discovered were based upon the observed spectra, in fact, like “Rubidium”).

The problem was that there was no theory explaining why the lines were where they were. In many cases the spacing seemed completely random. In others, like Hydrogen, they seemed to suggest some sort of order, but nobody could figure out what it was. In the 1860s, various people discovered apparent regularities: Alexander Mitscherlich of the University of Berlin noted a regular progression in the series barium chloride, barium iodide, and barium bromide. Francis Lecoq de Boisbaudran (an unaffiliated scientist) noted a geometrical progression in the lines of potassium. The alkali metals, in fact, were tantalizingly similar to hydrogen in some ways, but unlike it in others.

But it was George Johnstone Stoney, professor of Natural Philosophy at Queen’s University in Dublin, who claimed to have finally found the sought-after explanation. Or at least part of it*. Like many scientists, he expected the spectral lines to resemble some sort of vibrating system, with a fundamental frequency and a series of overtones. Once you found your fundamental, it was just a matter of doubling it or tripling it, or finding some other multiple, to find the overtones, and these ought to correspond to the observed lines. The hitch was that no one had succeeded in finding such a fundamental and overtones for any of the spectra. Not even hydrogen, the simplest and most regular. So how did Stoney succeed?

He observed that the four visible hydrogen lines were at (in round numbers; please note that Stoney was more careful than this, and corrected for the wavelengths in vacuum) 4102 Angstroms, 4342 A, 4862 A, and 6564 A. 4102 isn’t a good fundamental, but Stoney somehow discovered that the frequencies of the first, third, and fourth of these are in a ratio of almost precisely 32:27:20; in other words, those three lines sit at the 32nd, 27th, and 20th harmonics of a single fundamental. Maddeningly, the second one doesn’t fit the pattern. But the others do.
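You can check the arithmetic yourself; a quick sketch (mine, using the rounded wavelengths above, in Angstroms):

```python
# Multiplying each line by its supposed harmonic number should give the same
# fundamental wavelength for all three.
lines     = [4102.0, 4862.0, 6564.0]   # the first, third, and fourth lines
harmonics = [32, 27, 20]

for wavelength, n in zip(lines, harmonics):
    print(f"{wavelength:.0f} A x {n} = {wavelength * n:.0f} A")
# 4102 x 32 = 131264
# 4862 x 27 = 131274
# 6564 x 20 = 131280  -> the same "fundamental" to within a few parts in ten thousand

print(131280 / 4342)   # ~30.2: the stubborn second line lands nowhere near an integer
```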

What about the “missing” orders? Why aren’t there lines at 21 and 22 and 23 and so forth? Stoney decided that some sort of “interference” – like the interference observed by Thomas Young in his famous experiment – was responsible for making those other orders invisible. For some reason, only selected lines showed strongly.

Seeking to bolster his case, Stoney collaborated with J. Emerson Reynolds to study another system – chromochloric anhydride** – to see if it could be fit. They were able to find a fundamental frequency that, with proper overtones, allowed them to fit an astonishing 31 lines of the spectrum.

The problem was that they could only do this by using absurdly high overtones (one was the 733rd) and ignoring vast numbers of others that didn’t fit. The “interference” theory didn’t seem convincing. Some were convinced, but others, like Franz A.F. Schuster of the University of Manchester, were not. He calculated the probability that a fundamental and overtones chosen at random could fit observed spectra, and found that it generally yielded a pretty good fit – certainly as good as the ones being made with Stoney’s theory. Schuster’s paper, “On Harmonic Ratios in the Spectra of Gases”, pretty much killed any further efforts to fit spectral lines in this way.

But along the way, Stoney’s work, and that of other researchers, had fit the spectral lines for at least four materials and therefore predicted other spectral lines. Of course, the vast bulk of these didn’t fit anything (even by the theory, which didn’t expect them all to fit). But if you used Stoney’s theory to fit, say, the first and third lines of hydrogen, you could get a correct prediction for the fourth. That seems to pretty much fit the OP’s requirements.
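To put numbers on that last bit (a toy reconstruction of my own, not Stoney's actual calculation): fit the fundamental to the first and third lines alone, then ask where the 20th harmonic ought to fall.

```python
first, third = 4102.0, 4862.0                 # Angstroms; taken as harmonics 32 and 27
fundamental = (first * 32 + third * 27) / 2   # average the two estimates of the fundamental

print(fundamental / 20)   # ~6563 A, a hair from the observed 6564 A line
```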

The real solution came from the work of Johann Jacob Balmer of Basel a few years later. As the story is usually told, this was a humble teacher of mathematics at a girls’ school who went into the work without any preconceptions about the nature of the relationship. Thus unhampered by ideas of overtones, he empirically discovered the now-famous Balmer Formula that was later explained by Rydberg and by Bohr theory.

As is usually the case with “the usual story”, this one isn’t really correct. Balmer was, in addition to teaching at a girls’ school, also a lecturer at the University of Basel. Far from being unaware of and uninfluenced by previous work, he explicitly cites Stoney’s work as his inspiration. What Balmer did observe, however, was that those hydrogen line ratios were too damned close to be purely coincidental. He observed that you could express the ratios better in fractional form as 9/5, 4/3, and 9/8. These are pretty small numbers, even smaller than Stoney had used. Moreover, if you expressed them as fractions, and let the numbers get just a shade bigger, you could fit that stubborn second line with 25/21 and get a fit as good as the others. Then he noticed that you could re-express these ratios as:
9/5, 16/12, 25/21, 36/32
Now the numerators were all perfect squares: the squares of 3, 4, 5, and 6. And that looked like a proper series of harmonics. Moreover, the denominators were all exactly 4 less than their numerators. And that showed a pattern. He extended the pattern to different values, and found that he could fit newly-discovered lines in the ultraviolet as well.
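That pattern is the now-familiar Balmer formula, wavelength = B x n[sup]2[/sup]/(n[sup]2[/sup] - 4). A short sketch (mine, using Balmer's constant of roughly 3645.6 Angstroms) reproduces the four visible lines and keeps going:

```python
B = 3645.6   # Balmer's constant, in Angstroms

for n in range(3, 8):
    wavelength = B * n**2 / (n**2 - 4)
    print(f"n = {n}: {wavelength:.1f} A")
# n = 3: 6562.1, n = 4: 4860.8, n = 5: 4340.0, n = 6: 4101.3 (the four visible lines),
# n = 7: 3969.7 and onward, landing on the ultraviolet lines mentioned above.
```

(The Angstrom-or-two offset from the vacuum wavelengths quoted earlier is just the air-versus-vacuum correction.)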

*Stoney’s real claim to fame is his work on the electron, which almost no one seems to remember these days. But he’s the one who named the particle.

**I haven’t been able to learn why they used such an odd material. Why not one of the alkali metal spectra, which were much simpler than most other spectra? Why not Sodium, already in use as a standard?

Immanuel Velikovsky predicted radio waves from Jupiter and high temperatures on Venus. His theories involved Venus emerging from Jupiter, suffering a near collision with Mars, and then settling into its present-day orbit. To this day, his 1950 classic, Worlds in Collision, is hotly debated by thousands of nutters.

Atomic theory was developed by Democritus, who came before Socrates. It was essentially only a philosophy for about 2000 years, until evidence showed it to be (more or less by luck) correct in some details. The philosophical question remained, though, since it concerns itself with whether the universe has fundamentally indivisible particles.

Thanks, marvelous story.

Defining a center is completely arbitrary. There’s no reason to suggest that heliocentrism is any more correct than geocentrism.
Ptolemaic models of the motions of the stars remain the most accurate, simple method of predicting what the heavens will look like.

Half right. There’s nothing to say that Tycho’s model, for instance, where the Sun and Moon go around the Earth but everything else goes around the Sun, is wrong, so in that sense, geocentrism is still viable.

However, if we’re looking at simple methods, the simplest version of the Ptolemaic model doesn’t match the heavens even remotely closely, while the simplest version of the Copernican model can at least qualitatively explain all of the observations (why retrograde motion occurs, why a planet is always brightest when it’s retrograde, why it’s always opposite the Sun when it’s retrograde, why some planets never go retrograde, and why the planets that never go retrograde always appear relatively close to the Sun in the sky). The Ptolemaic model never had any explanation for why a retrograde planet should be opposite the Sun, and was only able to explain the other observations using epicycles that the simplest version of the Copernican model lacked.

I think in scientist-speak, that would be a hypothesis, not a theory. (And if people didn’t wait at least 20 minutes after eating ice cream before they went swimming…)

No explanation, perhaps, but it’s simple to figure it out. I’ll take my copy of The Almagest, you take a copy of De Revolutionibus, and we’ll figure out where Venus is going to rise six weeks hence. I bet I get my answer first.

(well, given that I still remember how to do base-60 arithmetic as quickly as I could ten years ago)

It may not be “wrong”; some still think it makes an OK “rule”. We’ll see.

There’s *“luminiferous aether”*, which did explain a lot. Not enough, however.

There’s the “Presidential death in office if elected in a year divisible by 20” rule, which worked fine about twice. The odd coincidence was first noted after Harding but before FDR, thus it “predicted” FDR and JFK. But it failed after that. It *back-predicted* 5 deaths, but that’s easy to do. (This is why Nostradamus has never predicted anything at all, but has a fantastic “back-predicting” record. :dubious: Of course, many Nostradamus quatrains have the benefit of being written AFTER the event, by modern writers.)

The “luminiferous aether” didn’t really explain anything; it was an ad hoc fix to accommodate the need for a medium for the apparent wave nature of light. It failed to make any predictions, and in fact, the Michelson-Morley experiments were originally performed not to debunk it but to find the velocity of the Earth relative to the presumably invariant medium.

Isaac Newton created a corpuscular theory of light (based upon the writings of natural philosopher Pierre Gassendi) which preemptively addressed what would become many of the problems with classical wave theory; meanwhile, René Descartes proposed a “plenum” theory that light was a disturbance in the underlying medium of space which may be, in broad strokes, ultimately correct.

What about laws that were originally thought to be wrong or ad hoc but ended up being correct? Max Planck originally thought that the quantization of electromagnetic radiation in developing his eponymous law was a mathematical formalism, and Einstein’s Nobel Prize-winning explanation of the photoelectric effect failed to convince him that light was actually quantized. And yet, he is legitimately the grandfather of quantum theory, a concept he never fully accepted.

Stranger

The ancient Chinese made all kinds of mistakes, but wound up with some pretty good stuff. Take their medicine: they stumbled upon some effective pain-reduction methods (acupuncture), with a totally wrong theory of the human body. Or ceramics: the Chinese perfected ceramics technology in ancient times (and made some of the world’s best porcelain) without knowing a thing about modern chemistry.