Please explain the benign criticality of the Chicago Pile-1

I was reading up on the Chicago Pile-1. The pile achieved critical mass, but according to this site it generated so little power that it couldn’t have lit even a single light bulb.

It’s not clear to me:
a) how they knew the pile achieved critical mass
and b) how the pile could go critical, but generate so little power. (I thought that critical mass = blowing up real good.)

Kindly overcome my ignorance on this matter.

When you’re sub-critical, you occasionally have a spontaneous fission, but it usually doesn’t do anything (though it might occasionally trigger another fission event, depending on how close to criticality you are). When you’re exactly critical, each fission event will, on average, trigger one more, so you don’t have to wait around for it to happen spontaneously… but it’s still just one at a time. When you’re supercritical, each fission event triggers, on average, more than one other fission event, and so the amount of fission grows exponentially with time. But if you’re only slightly supercritical, the rate of growth will be very slow, and you might still have a chance to reverse the process.

When you’re far enough supercritical, for a long enough time, enough heat is released to produce structural damage. Do this by accident, and the structural damage from the heat probably happens slowly enough that you wreck the conditions causing the criticality, and you get a fizzle: not something you want to be right next to, but hardly an Earth-shattering kaboom.

If you want to blow up real good, then you need to arrange to go very supercritical, very quickly, so that the reaction rate can grow extremely large before the heat has a chance to blow the assembly apart and leave you with a mere small explosion.
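To put rough numbers on that picture, here’s a toy sketch (my own illustration, with made-up figures): k is the average number of follow-on fissions each fission triggers.

```python
# Toy chain-reaction arithmetic: k is the average number of follow-on
# fissions each fission triggers. Numbers are invented for illustration.

def fissions_per_generation(k, generations, start=1000.0):
    """Fission rate after the given number of generations."""
    rate = start
    for _ in range(generations):
        rate *= k
    return rate

for k in (0.99, 1.00, 1.01):
    final = fissions_per_generation(k, generations=500)
    print(f"k = {k:.2f}: {final:,.0f} fissions/generation after 500 generations")

# k = 0.99 -> ~7: the reaction dies away unless something restarts it
# k = 1.00 -> 1,000: exactly where it started, one-for-one, forever
# k = 1.01 -> ~145,000: exponential growth, and it keeps on growing
```

Barely over 1, the growth per generation is tiny and you have time to react; a bomb needs k pushed well past 1, and fast, which is the point above.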

Thank you! So when they say the material in a nuclear bomb reaches critical mass, they really mean supercritical mass?

Now, given the very small amount of energy produced by the Chicago Pile-1 relative to the huge size of the pile, how did they know it was critical?

They were measuring the radiation coming off of it. If you turn the dial and the radiation increases while you’re turning the dial but then stays constant when you stop turning, you’re subcritical. If you turn the dial and the radiation continues increasing even after you let go of the dial, then you’re supercritical.
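A minimal sketch of why that works (my own toy model, all numbers made up): there is always some neutron source present, and each neutron already in the pile produces k more in the next generation. Below critical the detector reading climbs to a plateau and stops; at or above critical it never stops climbing.

```python
# Toy model of the detector reading: a steady source injects 100 neutrons
# per generation, and each neutron present yields k neutrons in the next
# generation. All numbers are invented for illustration.

def readings(k, source=100.0, generations=200):
    n = 0.0
    history = []
    for _ in range(generations):
        n = n * k + source
        history.append(n)
    return history

sub = readings(k=0.95)    # stopped turning the dial while still subcritical
sup = readings(k=1.002)   # dial nudged just past critical

print(f"subcritical:   gen 100 = {sub[99]:,.0f}, gen 200 = {sub[199]:,.0f}  (plateaus near {100/(1-0.95):,.0f})")
print(f"supercritical: gen 100 = {sup[99]:,.0f}, gen 200 = {sup[199]:,.0f}  (still rising, no plateau)")
```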

The classic examples.

In these cases, the unfortunate operators separated the critical mass manually, but even had that not happened there would have been some self-limiting physical effect, possibly up to the hemispheres heating up enough to just melt.

To sum up what Chronos says above, achieving ‘criticality’ doesn’t mean blowing up; it means the reaction has become self-sustaining. There are plenty of things that can go wrong with a nuclear reactor, but exploding like an A-bomb is absolutely not one of them. In terms of accidents, even though the reactor at Chernobyl did ‘explode’, knocking the massive multi-ton lid off the pile, it could in no way be called a ‘nuclear’ explosion. It was, in fact, just a steam explosion (albeit an extremely radioactively dirty one!)

By design, the pile had a time constant measured in minutes, so there was ample time to observe the build-up of neutron flux and shut it down before it got dangerously high. Fermi and his team kept pulling out the control rods and monitoring the neutron flux on a chart recorder to see whether the curve was bending over (exponentially approaching an asymptote) or curving upward (growing exponentially without limit). Ultimately, they reached criticality and watched the curve rise at an ever-increasing rate. Fermi did calculations to verify that the curve was exponential before lowering the control rods after 28 minutes of supercritical operation.

The First Pile
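For the curious, the “is it really exponential?” check amounts to seeing whether the logarithm of the intensity rises along a straight line, i.e. whether the doubling time holds constant. A minimal sketch with invented readings (nothing from the actual 1942 chart):

```python
import math

# Invented intensity readings taken one minute apart, roughly exponential
# with a little noise thrown in.
readings = [1.00, 1.62, 2.75, 4.40, 7.50, 12.1, 20.3, 33.0]

# Exponential growth means ln(intensity) vs. time is a straight line, so the
# minute-to-minute differences of the logs should be roughly constant.
slopes = [math.log(b / a) for a, b in zip(readings, readings[1:])]
print("ln-slope per minute:", [round(s, 2) for s in slopes])

mean_slope = sum(slopes) / len(slopes)
print(f"doubling time ~ {math.log(2) / mean_slope:.1f} minutes")
```

On a sub-critical run, those slopes would shrink toward zero as the trace flattened out toward its asymptote.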

This is key to remember. It’s extremely difficult to set off a nuclear bomb by accident, although this one in N. Carolina actually came scarily close to exploding. :eek:

To add to what others have already said, the fact that a nuclear reactor is critical has nothing to do with the amount of power it is generating. It simply means that exactly as many neutrons are being produced by fission events as are needed to keep the reaction self-sustaining. However, the number of fission events actually occurring in any given period of time could be a lot, or just a few. The former generates more power and heat than the latter.

To increase the power of a reactor that is critical but at a very low power level, it is necessary to make the reactor slightly supercritical. This is done during reactor startups.
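A toy numerical sketch of those two points (my own made-up numbers): exact criticality holds the power wherever it happens to be, and a brief, slightly supercritical interval is what moves it to a higher level.

```python
# Toy power history: k = 1 holds the power level; k slightly above 1 raises it.
# Numbers chosen purely for illustration.

def run(phases, power=1.0):
    """phases: list of (k, generations); returns the power after each phase."""
    levels = []
    for k, generations in phases:
        power *= k ** generations
        levels.append(round(power, 2))
    return levels

print(run([(1.000, 1000),   # critical: power sits at 1.0
           (1.001, 2304),   # slightly supercritical: power climbs ~10x
           (1.000, 1000)])) # critical again: power holds at the new level
# -> [1.0, 10.0, 10.0]
```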

Interestingly, the power level of a reactor varies by many, many orders of magnitude. When the reactor first goes critical during a startup, it is at a power level that is something like 12-14 orders of magnitude smaller than the power level that actually generates noticeable heat.
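To make the scale concrete, a back-of-the-envelope check with round numbers of my own choosing (not figures from any particular plant):

```python
# Rough scale check with invented round numbers: a large power reactor runs
# at a few gigawatts thermal; 13 orders of magnitude below that is a
# fraction of a milliwatt.
full_power_watts = 3e9                            # ~3 GW thermal, a big power reactor
startup_power_watts = full_power_watts / 10**13   # middle of the 12-14 range
print(f"{startup_power_watts * 1000:.1f} mW")     # ~0.3 mW: no detectable heat at all
```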

The Chicago Pile-1 stayed at this very low power level. They did not allow it to increase its power level to the point that any significant heat would be generated.

There’s slightly more to it than that. A key part of the story is delayed neutrons. Without them, a reactor that was even slightly supercritical would ramp up to meltdown power much too quickly to allow control by conventional electromechanical systems.
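A rough one-group estimate of what delayed neutrons buy you (my own back-of-the-envelope with textbook-ish round numbers for uranium fuel, nothing specific to CP-1): about 0.65% of fission neutrons arrive, on average, several seconds late, which stretches the effective generation time by roughly a factor of a hundred for small reactivity insertions.

```python
# Back-of-the-envelope e-folding times for a small reactivity insertion,
# with and without delayed neutrons. Round, textbook-ish numbers; not CP-1
# specifics.

prompt_lifetime = 1e-3     # s, prompt-neutron generation time (graphite-pile scale)
beta = 0.0065              # delayed-neutron fraction for U-235
mean_delay = 13.0          # s, average lateness of the delayed neutrons
rho = 0.001                # small positive reactivity, well below beta

# Prompt neutrons only: the population gains a factor (1 + rho) every
# prompt_lifetime seconds.
prompt_only_period = prompt_lifetime / rho

# With delayed neutrons: the effective generation time is dominated by the
# small delayed fraction times its long delay.
effective_lifetime = (1 - beta) * prompt_lifetime + beta * mean_delay
delayed_period = effective_lifetime / rho

print(f"e-folding time, prompt only:   {prompt_only_period:.0f} s")
print(f"e-folding time, with delayed:  {delayed_period:.0f} s")
# Roughly 1 s versus 85 s: seconds-to-minutes timescales that rods and
# operators can actually keep up with.
```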

Well, yes, of course there’s more to it than that. I don’t see that anything I said there is wrong, though.

I was a physics major at the University of Chicago, and there was a time when they had a display in the physics building of the charts of the pile’s output (energy, I think) as they removed the rods. At some point, it very clearly went exponential, then was shut down.

It was a pretty cool chart, as there were marks on it for when they changed scale and (I think) when rods were removed. It was very clear when the exponential rise started, indicating the self-sustaining reaction.

That chart is shown in the link I provided above: The First Pile