Why So Many Nuclear Tests?

I’ve just been watching this video by Japanese artist Isao Hashimoto. The video is a time-lapse representation of every nuclear explosion on Earth between 1945 and 1998.

It’s 16 minutes long, but according to this the United States exploded 1,032 nuclear devices and the Soviets 715. Why was it necessary to conduct so many tests? Was this just Cold War posturing or was there a scientific reason behind each one? Surely if the UK and China could attain nuclear capability from just 45 each (albeit, no doubt, with some help from their more nuclear capable friends) why couldn’t the US and Russia?

Extra question: is the 45 tests each figure for China and the UK merely a coincidence?
Thanks!

It really only takes one successful test to acquire a nuclear capability: just enough to see that your design works. Most of the tests were to study weapon effects on various things, and to help refine designs to be more efficient.

Only one test was necessary to “attain nuclear capability”, for the US at least - the Trinity test prior to the bombings of Hiroshima and Nagasaki.

Subsequent tests were used to test new weapon designs. Literally hundreds of designs were developed, to achieve various goals. Some weapons were designed to have a configurable yield, meaning the operator could configure them to produce a certain amount of explosive power. Some were designed to be fired out of artillery cannons, or submarines. Some were designed to be launched into space and then survive re-entry. Some were designed to be especially compact or lightweight, others designed to have a certain shape so that they could fit in a re-entry vehicle with the proper weight distribution. And so forth.

All of these designs required testing in order to achieve confidence that they would work properly. Remember that essentially all weapons were designed before the computer age, when predictive capabilities were fairly low compared to what we have now. And regardless of how much computer power you have, there’s no substitute for experiment - dragging a bomb out into the Nevada desert, pointing a bunch of diagnostic equipment at it, and then watching what happens when it goes off.

As computers became more and more widely used, data from nuclear tests was used to calibrate the computer models used to design new weapons, and model the performance of existing ones.

None of this is unusual for the development of any class of device, weapon or otherwise. You wouldn’t be surprised if you heard that, in total, the US had conducted 1032 tests of air-to-air missiles from 1945 to 1990, or that the Soviets had conducted 715 tests of various kinds of parachute. Nuclear weapons are not any different.

Interesting video.

I had no idea the UK tested their nukes in Australia and also Canada (?), or that France performed tests on the African continent. Nor did I know that the US performed so many tests near Japan (which seems quite insensitive).

Isn’t subsequent testing also supposed to be necessary, to check if older weapons are still good? Plutonium decays with time, and there is some question (isn’t there?) about how long a weapon can go without replacing the Pu and still remain viable.

(In fact, one of the “strategies” of people who oppose nuclear weapons is to oppose testing, in hopes of increasing the uncertainty of the weapons’ viability, in order to reduce the likelihood of anyone actually using one! The game-theoretic response, alas, might simply be for a war-maker to launch twice as many, figuring that if a few are duds, he’s compensated for that…)
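The dud-compensation idea can actually be quantified. Here is a minimal sketch (the reliability figures are made up for illustration, not real estimates): given a perceived per-weapon reliability, how many weapons must an attacker assign to one target to be, say, 99% sure at least one works?

```python
import math

def shots_needed(reliability: float, confidence: float = 0.99) -> int:
    """Number of independent weapons needed so that the probability of
    at least one working meets the given confidence level.
    P(all fail) = (1 - reliability)^n, so solve for the smallest n with
    (1 - reliability)^n <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - reliability))

# Illustrative only: if a testing ban drops the perceived reliability
# from 0.95 to 0.60, the attacker just assigns more warheads per target.
print(shots_needed(0.95))  # 2
print(shots_needed(0.60))  # 6
```

So degrading confidence from 95% to 60% only triples the required salvo, which is the game-theoretic worry described above.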

Trinopus

The weapons labs refer to this as “stockpile stewardship”. They burn through billions of dollars each year developing computer simulations of how warheads age, and performing sub-critical nuclear tests (tests in which no nuclear explosion occurs) to calibrate those simulations.

See the following:
IBM Roadrunner supercomputer
LANL’s DARHT facility
LLNL’s National Ignition Facility

Why so many nuke tests? Because it’s the only way to be sure.

Interesting video, and interesting that none are in South America.

Since nobody else has posted it, Cecil Speaks regarding nuclear testing.

The US would not share its atomic data with Britain, thanks to the perceived leakiness of British security, so we had to discover the stuff ourselves with our own tests.

Agreed, interesting stuff. For those who are interested, I dug up some further Wikipedia links about the tests in more “unusual” locations:

- UK tests in Australia: see the article about Maralinga and the links therein.
- US test in 1955 off the San Diego coast: Operation Wigwam
- US nuclear tests in the South Atlantic in 1958: Operation Argus
- French tests in Algeria: Saharan experiments centers
- UK tests in the US: 1958 US-UK Mutual Defence Agreement
- US tests in the Aleutian Islands: Amchitka

Nothing in Canada, though the USSR did once accidentally crash a nuclear-powered satellite there.

Ahh. I was just surprised to see the pink blip on the North American continent and made a guess.

Weapons-grade plutonium (mostly Pu-239) has a half-life of about 24,100 years. In 200 years, only about six-tenths of a percent of it will have decayed. That will make zero difference in the bomb working. The age factor is because of all the other materials in the bomb. They have to have precise physical properties, and stability in long-term storage is far down on the list. Testing old ones is more about “does this metal piece corroding a bit cause a problem?” or “does this plastic bit oxidizing change its required properties?”
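The decay arithmetic is easy to check in a couple of lines, using the ~24,100-year half-life of Pu-239:

```python
def fraction_decayed(half_life_years: float, elapsed_years: float) -> float:
    """Fraction of a radioactive sample that has decayed after elapsed time,
    from the standard exponential-decay law N(t) = N0 * 2^(-t / half_life)."""
    return 1.0 - 2.0 ** (-elapsed_years / half_life_years)

# Pu-239 half-life is about 24,100 years:
print(f"{fraction_decayed(24_100, 200):.2%}")  # about 0.57% after 200 years
```

Negligible, as noted: the non-nuclear components degrade far faster than the pit itself.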

What I find amazing is the Soviet Union did over 700 tests, yet antinuclear activists blamed so many problems on Chernobyl. For that matter, the western United States should have been depopulated from our tests.

Anything you see from the US, UK, or Soviet Union after 1963 (though not from France or China) is an underground test. Underground tests, of course, are much less likely—though not guaranteed—to have an effect on people on the surface.

Although the composition of plutonium may remain essentially the same, even subtle changes in the crystalline structure can render nuclear-grade material more sensitive to decay or create localized pockets of greater neutron capture, which may result in a partial or complete “fizzle” (incomplete detonation). Highly enriched uranium is especially sensitive to this due to the slow rate of the supercritical reaction, but plutonium, which always contains a small proportion of [sup]240[/sup]Pu (with a similar thermal neutron capture cross section but a much higher rate of spontaneous fission), can suffer both unintentional predetonation and “poisoning” of the pit by breeding [sup]241[/sup]Pu and other radioactive isotopes with lower yield. The modulated neutron initiator is also a component that undergoes aging degradation. It is true that the non-nuclear materials are also subject to age-related degradation, especially the chemical explosive lenses, the exploding bridgewire or slapper detonators, and non-reactive structural polymers. Mechanical corrosion is not typically a concern, as the weapons are stored and processed in a controlled environment and materials are selected for low sensitivity to electrovalent interaction.

However, aging surveillance is mostly performed on subscale articles and, when Arrhenius relationships can be established, via temperature-accelerated aging. Establishing aging trends for nuclear weapons requires many more test articles and much longer time frames than are typically available, especially for finding the threshold after which weapons may “fall off a cliff” in terms of stability or reliability; that threshold generally can’t be predicted analytically with any precision and has to be found by letting the weapons actually reach that age, which is obviously problematic. Testing of weapons is usually done on new or modified weapon designs to ground analytical simulations and provide refinement for estimates of yield. This was especially critical with the first boosted fission and thermonuclear fusion devices, as the rate of reaction was far in excess of the ability to simulate reactions computationally in the 'Fifties and early 'Sixties. (The reasons for this are complex, but basically fission chains can be calculated on a purely stochastic basis with “random walk” Monte Carlo methods, while fusion requires more direct simulation of the hydrodynamics and radiation transport, which is much more computationally intensive.) By the 'Seventies and the advent of scalar supercomputers it was much easier to simulate fusion reactions, and today you can run sophisticated nuclear “hydrocodes” capable of modeling a fusion device on a desktop computer in a few hours.
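Purely as a toy illustration of the stochastic approach mentioned above (nothing remotely like a real weapons code, and all the numbers are invented for illustration): a fission chain can be modeled as each neutron either escaping or causing a fission that releases a couple of new neutrons, and you just simulate generation by generation.

```python
import random

def simulate_generation(neutrons: int, p_fission: float, nu: float,
                        rng: random.Random) -> int:
    """One generation of a toy fission chain: each neutron either escapes
    or causes a fission releasing 2 or 3 new neutrons (average nu)."""
    next_gen = 0
    for _ in range(neutrons):
        if rng.random() < p_fission:
            # emit 2 or 3 neutrons so the average works out to nu
            next_gen += 3 if rng.random() < (nu - 2) else 2
    return next_gen

# Multiplication factor k = p_fission * nu; 0.45 * 2.5 = 1.125 > 1,
# so on average each generation is larger than the last (supercritical).
rng = random.Random(42)
n = 100
for gen in range(5):
    n = simulate_generation(n, p_fission=0.45, nu=2.5, rng=rng)
    print(f"generation {gen + 1}: {n} neutrons")
```

The point is that this kind of per-particle bookkeeping is cheap and parallelizes trivially, which is why fission chains were tractable on 1950s computers while coupled hydrodynamics-and-radiation fusion simulations were not.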

Prior to modern computing capability, the only way to obtain any realistic estimate of yield and reliability was to perform testing. Plus, aboveground testing was visually impressive and made for good propaganda in the early days of the Cold War.

Stranger

I know that, but if you look at the timeline, there were a couple of hundred tests done by the Soviet Union before the test ban treaty and over three hundred done by the United States.

The amount of radioactive material involved in a sustained reactor is actually quite large compared to a nuclear blast. Nuclear bombs work because they release all of their energy in an instant. A kiloton of TNT is 4.184 terajoules, and each reactor at Chernobyl was putting out 3.2 gigawatts of thermal power, so the energy equivalent of a (roughly 13-kiloton) Hiroshima bomb every 4 hours 45 minutes.
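The arithmetic checks out, assuming a Hiroshima yield of roughly 13 kilotons (estimates run about 13-16 kt):

```python
KILOTON_J = 4.184e12   # joules per kiloton of TNT equivalent
reactor_w = 3.2e9      # one Chernobyl reactor's thermal output, watts
hiroshima_kt = 13      # assumed yield; estimates range ~13-16 kt

seconds = hiroshima_kt * KILOTON_J / reactor_w
print(seconds / 3600)  # about 4.7 hours, i.e. roughly 4 h 45 min
```

With the higher 15-16 kt yield estimates it comes out closer to five and a half hours, but the order of magnitude is the same either way.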

(Not very) fun fact: Japanese were the first victims of the atomic bomb and the hydrogen bomb.

IIRC the US tests did kill quite a few sheep in the Nevada-Utah area, and the consensus is they also killed John Wayne. The filming of his Genghis Khan epic (The Conqueror) was going on downwind from one of the tests, and odds are it caused the cancer which did him in a few decades later.

The last few French tests in the South Pacific were also done underground.