I remember once reading that scientists had long calculated that the sun (and presumably the other stars as well) could not simply be a giant ball of conventionally burning hydrogen. So, what did they think it was?
I remember reading about this… not sure how historically accurate it is, but what I remember was…
- The model of the sun as a ‘giant ball of burning coal or the equivalent.’ Before it was universally accepted that the earth and the solar system were more than ten thousand years old at most (as a literal interpretation of the Bible would have it), this was considered quite feasible.
- The model of the sun generating energy through gravitational contraction. This actually allows for considerably longer lifetimes of the sun and earth… on the order of tens of millions of years, maybe. But even that wasn’t enough once geologists started to come up with definite signs that the earth had been around for BILLIONS of years.
- For a time, I think, there was simply no known energy source that could account for it… after the time requirement was clear, and before the discovery of the first nuclear reactions.
There was pretty much just a century - from the mid-1800s to the mid-1900s at the latest - where the question could be seriously addressed. Before then, people obviously did wonder about the issue, while realising that all they had was speculation. Nor was it even obvious exactly what had to be explained, since notions about neither the Sun nor the physics of heat were particularly firm. Thus, even in the late 1700s, Herschel didn’t believe that the Sun was hot: he thought it was a cold, possibly inhabited, solid globe surrounded by a luminous atmosphere.
In stages, various necessary pieces of the puzzle became available. By 1800, there were reasonable determinations of how heavy the Sun had to be. Then thermodynamics, and the conservation of energy in particular, came onto the scene, and that’s really when physicists were able to start thinking about the question. The first proposals involved lots of people assuming that it was indeed just something like a giant pile of coal burning, though this was actually never sufficient to explain even Biblical chronologies. The first person to actually calculate how long the Sun could shine if it were a chemical reaction was William Thomson (henceforth Kelvin, though he wasn’t ennobled until later) in 1862, and he came up with a maximum age that way of 3000 years - shorter than anybody required at the time. Those suggesting that chemical means sufficed had always either been unable to quantify the idea or had not bothered to do so.
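The chemical-burning shortfall is easy to check with a back-of-envelope calculation. A quick sketch, using modern round-number values for the Sun’s mass and luminosity (far better figures than Kelvin had, but the conclusion is the same):

```python
# Rough lifetime of the Sun if it were a giant pile of burning coal.
# All values are modern approximations, not Kelvin's own figures.
M_SUN = 2.0e30        # solar mass, kg
L_SUN = 3.8e26        # solar luminosity, W
E_COAL = 3.0e7        # energy released by burning coal, J/kg (approx.)
SECONDS_PER_YEAR = 3.15e7

total_energy = M_SUN * E_COAL                        # J, if the whole Sun burned
lifetime_years = total_energy / L_SUN / SECONDS_PER_YEAR
print(f"Chemical-burning lifetime: ~{lifetime_years:.0f} years")
```

This comes out to a few thousand years - the same order of magnitude as Kelvin’s 3000, and too short for everybody.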
As is often the way, Kelvin was actually doing the calculation because he didn’t believe in chemistry as an explanation: there was already another proposal on the table. This had been tentatively suggested by various people, with Helmholtz having pushed it most strongly. This was the idea that the Sun’s energy derives from gravitational contraction. Over the next few decades there were to be various versions of this idea. Was the Sun just contracting, or was there still stuff falling onto it from the rest of the solar system? Kelvin’s assumptions could also be challenged on entirely reasonable grounds (for instance, how rigid was the Sun?) and so the debate, even amongst physicists, never reached consensus. Though it is true to say that most writers on the subject in the late 19th century believed that some version of the Helmholtz-Kelvin idea was likely to pan out.
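The contraction idea can be checked the same way: the gravitational energy released in assembling the Sun at its present radius is of order GM²/R, and dividing by the luminosity gives what is now called the Kelvin-Helmholtz timescale. Again a sketch with modern values, not the figures the Victorians themselves used:

```python
# Kelvin-Helmholtz timescale: gravitational energy released by
# contraction, divided by the rate the Sun radiates it away.
# Modern round-number values throughout.
G = 6.67e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 2.0e30        # solar mass, kg
R_SUN = 7.0e8         # solar radius, m
L_SUN = 3.8e26        # solar luminosity, W
SECONDS_PER_YEAR = 3.15e7

grav_energy = G * M_SUN**2 / R_SUN                   # J, order-of-magnitude
t_kh_years = grav_energy / L_SUN / SECONDS_PER_YEAR
print(f"Kelvin-Helmholtz timescale: ~{t_kh_years:.1e} years")
```

This gives a few tens of millions of years, which is why the contraction models put the debate in the 10-500 million year territory rather than the thousands of years that chemistry allowed.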
The wider implications of this part of the debate were, however, less important than usually portrayed. Kelvin’s major estimates of the maximum age of the Sun based on contraction were not ungenerous by the standards of the day. In that same 1862 paper, he concluded that it was “probable” that the Sun had shone for less than 100 million years and “almost certain” that it had shone for less than 500 million years. In hindsight, he was wrong by about two orders of magnitude, but a look at the numerous late 19th century geological estimates of the age of the Earth handily compiled in Table 2.1 of G. Brent Dalrymple’s excellent The Age of the Earth (Stanford, 1991) shows that most of these were in the 10-500 million year range anyway. Now it is true that the likes of Tait and Newcomb claimed to screw Kelvin’s argument down to about 20 million years, and that Kelvin was sympathetic to their efforts, but these contributions carried less weight. Kelvin was also able to come up with significantly tighter numbers for the age of the Earth. What’s more usually picked up on is Kelvin’s conflict over these latter limits with Darwin’s numbers in The Origin of Species, though I argued in an old post that Darwin’s original point was somewhat more tentative than it’s normally presented as.
Although dominant, the Helmholtz-Kelvin idea wasn’t quite the only one on the table in the late 19th century. One adaptation of it was by James Croll. He wanted the age of the Earth to be about 100 million years and so was uncomfortable with the Tait-Newcomb tightening. His observation was that their models all assumed that the material forming the Sun was cold. If it were hot, then the timescales could be longer. Thus he had the Sun form from the collision of two stars. These were obviously incandescent to begin with and hence had extra energy to bring to the party. Meanwhile, the Plumian Professor at Cambridge, James Challis, suggested that the Sun’s energy derived from some vague interaction of its mass with the aether. A more marginal figure, the chemist William Mattieu Williams, came up with a scheme involving a cyclical interaction between the Sun and the surrounding stars. This was particularly weak by modern (or even contemporary) thermodynamic standards, but both Alfred Wallace and Charles Lyell seem to have been taken in by it for a while.
I’m not sure there was ever a period before Rutherford where many people were absolutely sure that the Sun was older than could be explained in Helmholtz-Kelvin fashion. The debate was never that clear cut. There was always wiggle room. Though some did suggest that some unknown energy source was going to have to be the explanation. As it turned out, the discovery of radioactivity both sort of explained the Sun’s energy source and relatively quickly forced geologists and astronomers to accept longer timescales.
Though radioactivity was not quite the end of the story, even allowing for the details taking decades to work out. During the 1920s, Niels Bohr had quite a persistent obsession with the notion that energy conservation might break down on atomic or nuclear scales. That this might explain stellar energies was an attractive idea, at least once you accepted the premise. Hence, even in 1929, one finds him dragging in the question of how the Sun shines as evidence that the conservation law might break down in nuclei. In the event, the actual problem he was worrying about (the beta decay spectrum) was solved by Pauli’s neutrino, and the solar question was then cleared up in the 1930s.
Thanks all, with an extra special thank-you to bonzer. I appreciate the trouble you took to spell out such an articulate and thorough reply.
OK, I have to ask. How the hell did you know all this? I’m assuming you knew some of this beforehand – it does not read like the summation of your readings since seeing the question.
By having read lots of stuff on the history of physics and astronomy over the years. Though Joe Burchfield’s Lord Kelvin and the Age of the Earth (Chicago, 1975; 1990) specifically has a fair amount about the arguments around the Helmholtz-Kelvin-type models of the Sun and that’s where I’d come across the details about the Croll, Challis and Williams alternatives.