How do you calculate the energy release of an A-bomb from first principles (e.g., Trinity)?

I was thinking about the infinitesimal, but real, energy created/released (I don’t think I really understand the difference) when, on the rare occasion, a neutrino zaps into my retina. (Neutrinos just came up in a thread; neutrinos zapping retinas is discussed at length in an old thread.)

So then: you can add up those events, and after a while they add up to real money, as they say. Is that how the physicists at Los Alamos did it, but with the what-all atomic events going on in that reaction?

At Trinity, they notoriously weren’t sure - there was a betting pool. At one point, they considered the possibility that it might ignite the atmosphere, although I think they were at least fairly sure this couldn’t happen. Fermi had side bets going on the destruction of the entire state.

I think after a few basic empirical factors were sorted out, simple A-bombs could then be simulated accurately.

But they then messed up on the predicted yield of one of the early H-bomb tests, Castle Bravo, missing one reaction path completely.

How did he plan to collect his winnings?

I seem to recall that one of them scattered some papers about, so that he could measure how far they blew and calculate the energy release from that.

Also Fermi. Hence “Fermi estimation”.

A bit of a joker and a smart fellow, Enrico.

The amount of energy created by fission can be both calculated and measured. Calculated by the change in mass of the products vs the original atom’s mass, and measured by boiling water in a reactor. The problem during Trinity was the unknown efficiency of the bomb - how much of the mass of the bomb actually underwent fission. So during the Trinity test, they set up many different instruments to measure the force of the blast. During later tests (“Mike,” for example), a rough estimate of the magnitude of the explosion could be determined by the size of the fireball, which one scientist did by using a quarter held at arm’s length.
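To put a number on that “change in mass” method: here’s a minimal sketch for one common fission channel, using nuclide masses taken from standard mass tables (the particular channel and mass values are my illustrative choices, not anything specific to the bomb designs):

```python
# Energy release of one fission event, from the mass defect (E = dm * c^2).
# Example channel: n + U-235 -> Ba-141 + Kr-92 + 3 n
# Atomic masses in unified atomic mass units (standard table values):
m_U235 = 235.043930
m_n = 1.008665
m_Ba141 = 140.914411
m_Kr92 = 91.926156

dm = (m_U235 + m_n) - (m_Ba141 + m_Kr92 + 3 * m_n)  # mass defect, in u
E_MeV = dm * 931.494  # 1 u = 931.494 MeV/c^2
print(f"mass defect = {dm:.4f} u -> about {E_MeV:.0f} MeV per fission")
```

That comes out to roughly 170–200 MeV per fission depending on the channel, which is the textbook figure. Multiply by the number of nuclei that actually fission and you have the yield; the hard, unknown part at Trinity was that multiplier.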

Before anyone had actually made an atomic explosion, they had only a very rough idea of how long criticality would be maintained such that the chain reaction would continue, and how much material would be consumed in that time. They knew the energy of the explosion would eventually blow the device apart, so it was extremely unlikely that even most of the fissile material would actually undergo fission (counting by atom; most likely fissions occurred in every region of the critical mass, just not in every nucleus). But it’s telling that the yield was “predicted” best by the last person to enter the betting pool, who had to take the last remaining choice - so really he wasn’t right, everyone else was wrong.

Thermonuclear weapons do not produce their additional energy through fusion so much as through additional fissioning of U-238, which requires the fast neutrons from the fusion reactions in order to fission. Still, in these designs, the device will blow itself apart before consuming a majority of the fuel. Ensuring that almost all the fuel fissions would require building the entire device atom by atom, and the work and expense of doing that are prohibitive compared to just mashing it all together with conventional explosives and letting most of it go to waste. Maybe someday we’ll have nanotech advanced enough to make a significant explosion that uses all of its fuel, but it won’t be any time soon. (Even then it probably wouldn’t be good enough: in room-temperature, stable conditions you’d need those pesky electrons to keep everything organized, and their spatial probability distribution suggests they might mess things up beyond predictability thresholds - but I don’t know enough about the process to make solid claims about this sort of thing.)

G. I. Taylor famously published rather accurate estimates of the Trinity test’s yield using only the series of time-stamped still images published by Life magazine (example image at that link). The yield was still classified at the time.

Taylor’s calculation was rather involved, but you can get a pretty good answer using the single image in the Life magazine link above and some dimensional analysis. To wit…

Things we know: At time t=0.025 seconds after detonation the shockwave reaches a radius r=130 meters, give or take. En route it was pushing air out of the way, and air has a density d=1.2 kg/m[sup]3[/sup]. (Instead of the density of air you might find it more natural to think in terms of the total mass of the air displaced, but this mass is simply the air’s density times the volume under that hemispherical shockwave, both of which we already know. So, we can work with density or total mass displaced.)

We want to calculate the energy released by the bomb. Energy has SI units of kg·m[sup]2[/sup]/s[sup]2[/sup], so we need an expression involving t, r, and d that gives us the right units(*). There is only one way to do so:

E = d r[sup]5[/sup] / t[sup]2[/sup] .

Assuming any missing numerical (unitless) factors are of order unity (i.e., not too much bigger or smaller than 1, a common – and commonly good – assumption), we can just run the numbers. Google can do the unit conversions for you to get to “kilotons TNT”:

Google: (1.2 kg/m^3)*(130 m)^5/(0.025 s)^2 in kilotons TNT = 17 ktons TNT.

The actual yield is estimated at 21 kton TNT.
(*) self-nitpick: we need an expression that has the correct “dimensions”. Units are a matter of convention.
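If you’d rather script the arithmetic than feed it to Google, here’s a minimal sketch of the same estimate (same inputs as above; the 4.184e12 J-per-kiloton conversion is the standard TNT-equivalence convention):

```python
# Taylor-style dimensional-analysis estimate of the Trinity yield.
d = 1.2     # density of air, kg/m^3
r = 130.0   # shock front radius in meters at...
t = 0.025   # ...this many seconds after detonation

E_joules = d * r**5 / t**2     # the only combination with dimensions of energy
E_kt = E_joules / 4.184e12     # 1 kiloton TNT = 4.184e12 J by convention
print(f"estimated yield: {E_kt:.0f} kt TNT")
```

This prints an estimate of about 17 kt, matching the Google one-liner above, with the unknown order-unity prefactor set to 1.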

Neat. (To say the least.)

But back to the “first principles” type of answer: is that (crudely spelled out in OP) remotely how the thing was estimated beforehand?

I haven’t seen my copy of Rhodes in a dog’s age.

I wasn’t sure what you meant in the OP. Estimating the yield before actually detonating the bomb is primarily an engineering question. The physics of fission reactions, neutron moderation, etc., were known to a decent enough level by the end of the Manhattan project. The driving question was how much of the reactants would actually react before the whole thing was blown to smithereens. That’s a tough engineering question, but it’s philosophically no different from other engineering questions.

Or are you after how the energy release of a single particular nuclear fission reaction is known (never minding how many of those you get when the bomb goes off)? Some of this has been said upthread, but more can be said. I’m just not sure which question you’re asking.

Of course, it’s always ambiguous just what “first principles” actually means, since there are always other principles before the ones you consider “first”. As Sagan once said, to bake an apple pie from scratch, you must first create the Universe.

They almost certainly knew what the absolute upper bound was: with a known quantity of fissile material, if you assume every atom fissions, you can calculate the absolute maximum amount of energy released.
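That upper bound is a short calculation. Here’s a sketch using the standard ~200 MeV-per-fission figure (note that roughly 5-10% of that is carried off by antineutrinos and delayed decays, which is why you’ll often see ~17-18 kt per kilogram quoted instead):

```python
# Upper bound: assume every U-235 nucleus in 1 kg fissions.
N_A = 6.022e23                    # Avogadro's number
E_per_fission = 200e6 * 1.602e-19 # ~200 MeV per fission, in joules

atoms = (1000.0 / 235.0) * N_A    # nuclei in 1 kg of U-235
E_joules = atoms * E_per_fission
E_kt = E_joules / 4.184e12        # 1 kt TNT = 4.184e12 J
print(f"complete fission of 1 kg U-235: about {E_kt:.0f} kt TNT")
```

So complete fission of 1 kg of U-235 is on the order of 20 kt; multiply by the kilograms in the pit for the absolute ceiling on yield.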

But as others have said, just how much of the pit would actually fission was something they couldn’t predict. They could (and probably did) do calculations like “If 1% of the U-235 in the bomb fissions, yield will be X kilotons; if 2%, then Y kilotons,” and so forth. That part was relatively easy - they knew how fission worked, and could already see it in action in various experiments. The only trick was getting it to actually go prompt critical, and there were a whole lot of variables and uncertainties surrounding the bomb itself. For example, if the pit wasn’t perfectly spherical or wasn’t uniformly dense, that would have an effect on the yield. So would imperfect explosive lenses or detonation timing.
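A sketch of what such an efficiency table might look like. The 64 kg figure is the commonly cited uranium load for Little Boy and the 17.6 kt/kg is the often-quoted recoverable energy for complete fission - both are my assumed inputs here, purely for illustration:

```python
# Hypothetical "if N% fissions" yield table, assuming ~64 kg of U-235
# and ~17.6 kt TNT of recoverable energy per kg fully fissioned.
kt_per_kg = 17.6
mass_kg = 64.0

for efficiency in (0.01, 0.02, 0.05, 0.10):
    yield_kt = mass_kg * kt_per_kg * efficiency
    print(f"{efficiency:.0%} fissioned -> about {yield_kt:.0f} kt")
```

With those assumed inputs, 1-2% efficiency already lands in the 10-20 kt range - consistent with the Little Boy figures mentioned below.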

For example, Little Boy was only about 2% efficient - 2% of the uranium actually fissioned (an amount the size of a peppercorn) - while Trinity and Fat Man were more efficient.

After a while, they got a fairly good handle on just how much might actually fission, but at first, they had no idea.

As an example, the “2nd generation” Fat Man-type bombs used a “levitated” core, where there was a gap between the core and the “pusher” (tamper). This resulted in significantly greater compression of the core, and more than doubled the efficiency. (The analogy is pushing a nail into a board with a hammer, versus striking the nail with the hammer.)

Right, and they couldn’t really know ahead of time how well that might work, other than knowing that it would be more efficient.