Nitpick: Moderators and control rods are two very different things. The moderator (usually water, or graphite if you’re an idiot) slows down the neutrons, but perhaps counterintuitively, that speeds up the reaction, because uranium atoms have an easier time absorbing slow neutrons than fast neutrons.
The control rods (usually boron carbide or cadmium), meanwhile, don’t slow down the neutrons; they absorb them entirely, meaning that they’re not available to be absorbed by the uranium. So you can heat up a reactor either by withdrawing more control rods, or by adding more moderator.
One of the (many, many) things that went wrong at Chernobyl was that the control rods were (for some unfathomable reason) tipped with graphite. So when things started going pear-shaped and they wanted to shut down the reactor RIGHT NOW, they dropped in all the control rods… except before the rods slowed down the reaction, they sped it up even further. Not for very long (just long enough for them to finish falling in), but long enough for things to go from “this is bad” to “we’re all dead”.
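The moderator-vs-absorber distinction above can be sketched with a toy one-group neutron balance. Every number here is invented for illustration (real reactor physics uses measured cross-sections and multi-group transport), but the directions of the two effects are the real ones:

```python
# Toy one-group neutron balance (illustrative numbers only, not real
# reactor physics). k > 1 means the chain reaction grows; k < 1, it dies out.

def multiplication_factor(moderation, control_rod_fraction):
    """moderation in [0, 1]: fraction of neutrons slowed to thermal speeds.
    control_rod_fraction in [0, 1]: how far the absorbers are inserted."""
    # Slow (thermal) neutrons are far more likely to fission U-235 than
    # fast ones -- the cross-section ratio here is made up for illustration.
    fission_prob = 0.08 + 0.72 * moderation
    # Control rods soak up neutrons before the fuel can absorb them.
    absorbed_by_rods = 0.9 * control_rod_fraction
    neutrons_per_fission = 2.4
    return neutrons_per_fission * fission_prob * (1 - absorbed_by_rods)

# More moderator speeds the reaction up...
assert multiplication_factor(0.9, 0.3) > multiplication_factor(0.5, 0.3)
# ...while more control rod slows it down.
assert multiplication_factor(0.9, 0.5) < multiplication_factor(0.9, 0.3)
```

The Chernobyl failure mode falls out of the same picture: a graphite tip entering the core first bumps `moderation` up before the absorber section arrives to raise `control_rod_fraction`.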
Aside, I have to admire this construction. And it definitely applies to so many scenarios where something is fucked up, but due to a bad fix, it gets so much worse. The SNAFU to FUBAR continuum as it were.
Excellent post, and thank you for taking the time to produce it!
Yes, I get it now. With a star, its own gravity does the “squeezing” and produces the fusion, whereas in a fusion reactor we have to produce that compression artificially.
Tendentious. Not implausible on its face, but factually misleading. Sigh.
Let’s roll the tape.
Wiki: "The reprocessed uranium, also known as the spent fuel material, can in principle also be re-used as fuel, but that is only economical when uranium supply is low and prices are high."
Me: Familiar pattern of big guv policy being blamed for the underlying weak economics of nuclear power.
Wiki: "Reprocessing of civilian fuel has long been employed at the COGEMA La Hague site in France, the Sellafield site in the United Kingdom, the Mayak Chemical Combine in Russia, and at sites such as the Tokai plant in Japan, the Tarapur plant in India, and briefly at the West Valley Reprocessing Plant in the United States."
Me: Huh, that’s interesting. I wonder what happened at West Valley, the US’s only site ever used for nuclear reprocessing.
Wiki: “The plant reprocessed spent reactor fuel at the site from 1966 to 1972… Escalating regulation required plant modifications which were deemed uneconomic by Nuclear Fuel Services, who ceased all operations at the facility in 1976… the West Valley Demonstration Project property was described as “arguably Western New York’s most toxic location” in 2013.[3]”
Me: So the plant wasn’t economic and remains a toxic waste site today. Oh. What did the politicians do about this?
Wiki: In October 1976,[9] concern of nuclear weapons proliferation (especially after India demonstrated nuclear weapons capabilities using reprocessing technology) led President Gerald Ford to issue a Presidential directive to indefinitely suspend the commercial reprocessing and recycling of plutonium in the U.S.
Me: That’s right, President Gerald Ford (R). Then what?
Wiki: On 7 April 1977, President Jimmy Carter banned the reprocessing of commercial reactor spent nuclear fuel. The key issue driving this policy was the risk of nuclear weapons proliferation by diversion of plutonium from the civilian fuel cycle, and to encourage other nations to follow the US lead.[10][11][12] After that, only countries that already had large investments in reprocessing infrastructure continued to reprocess spent nuclear fuel.
Me: So the policy wasn’t wholly unsuccessful, to the extent that reprocessing facilities didn’t proliferate internationally. Ah, who am I kidding? This is nuclear power, notorious for lousy underlying economics.
Wiki: President Reagan lifted the ban in 1981, but did not provide the substantial subsidy that would have been necessary to start up commercial reprocessing.[13]
Me: So nothing happened after Reagan lifted the ban. Until:
Wiki: In March 1999, the U.S. Department of Energy (DOE) reversed its policy and signed a contract with a consortium of Duke Energy, COGEMA, and Stone & Webster (DCS) to design and operate a mixed oxide (MOX) fuel fabrication facility. Site preparation at the Savannah River Site (South Carolina) began in October 2005.[14] In 2011 the New York Times reported “…11 years after the government awarded a construction contract, the cost of the project has soared to nearly $5 billion. The vast concrete and steel structure is a half-finished hulk, and the government has yet to find a single customer, despite offers of lucrative subsidies.” TVA (currently the most likely customer) said in April 2011 that it would delay a decision until it could see how MOX fuel performed in the nuclear accident at Fukushima Daiichi.[15]
Me: So we can thank Bill Clinton (and later George Bush) for this massive nuclear reprocessing white elephant. That said,
Wiki: There were proposed private reprocessing facilities in Barnwell, SC (refused in 1977 by Carter - concurring GAO report here: https://www.gao.gov/assets/emd-78-97.pdf ) and Morris, IL (refused in 1975 under Ford, proposed again in 2007 and 2013) that were never permitted to operate. The US military hosted a reprocessing site at Savannah River, SC from 1952-2002 and Hanford from 1944-1988.
Me: So we have one private facility affected by Carter’s ban (as well as the government’s refusal to provide additional subsidies) and the GAO concluded that: “(1) Federal funding of short-term research activities at the Barnwell reprocessing plant should continue until the completion of a major international study of alternative fuel cycle technologies and (2) the Department of Energy should not build a Government financed spent fuel storage facility until other alternatives are fully explored and the work of an interagency task force on waste management is completed.” Emphasis added.
I hope not! (at least the Editorial by Luisa Chiesa) The technical papers themselves require at least a working knowledge of superconductors and their use in magnets.
TLDR in place of the papers (bearing in mind this is not my area):
One of the keys to practical fusion is magnetic field containment with a significant field using multiple superconducting magnets (nothing else is even remotely in the ballpark).
CFS (among others) took the risk of designing around HTS magnets (REBCO, with a Tc of ~90 K, where the RE stands for Rare Earth, like yttrium). In comparison, the magnets in the LHC are NbTi, and the magnets in ITER (the mega fusion project being built in France) are NbTi plus a riskier intermetallic, Nb3Sn. All of these are considered LTS (Tc < 25 K).
Because of the difference in superconducting properties, HTS magnets to achieve the same magnetic field as ITER are much smaller and require much less cooling power. But HTS isn’t metallic, which makes manufacturing and assembling magnets much more sporty. The MIT study confirms feasibility of HTS magnets for fusion. It remains to be seen if these lead to practical fusion.
A much better lay person’s explanation is at the MIT site
ETA: Oh, and for those of you frustrated by paywalls, all of these papers are open access.
Climate reporter David Roberts interviews Jigar Shah, head of the Department of Energy’s Loan Programs Office, on nuclear power.
(1) The loan office isn’t pushing nuclear or any other sort of energy, but they are responding to demand for it. Tech companies want to build out their server farms for AI and other reasons, and they are committed to being carbon neutral. So we’re going to be doing a lot of solar, a lot of wind, maybe some geothermal. But the modeling also says we’re going to need a few nukes.
(2) Shah groks learning curves, in a way that we discussed above. Emphasis added:
Let me take us back through time just slightly, because, you know, frankly, I’ve learned a lot in this job. When you think about what nuclear did wrong in the United States, and we can compare it to Korea and other places, is that we have 92 operating reactors today in this country and no four of them are the same. So we decided that every single nuclear plant should be a snowflake. It’s like the original snowflake. So we never got good at a design and made it stable. Compare that to Canada, which has the CANDU reactor, and every one of their reactors is a CANDU reactor and they’re all the same.
Me: Nuclear construction has only been economical under programs that picked a single or a couple of designs and ran with them. This was a mistake made back during the 1970s by the US nuclear power industry: “the fatal flaw was that all of the reactor makers created this sort of, like, menu of options. And a lot of utilities thought that that meant that they should actually select different features off a menu.”
The Canadians and the French grasped the importance of standardization. Maybe the US could have figured this out as well, but we stopped building new designs after the 1970s… until Vogtle.
So basically, the way nuclear works is you always want to build two or four, not one — for a variety of reasons. Some are resiliency based, right, in terms of if something has to go down or whatever. But the other reason is because at every single site, you have certain security costs and site costs and other things, and so you want to share that over other things. And then the last issue is that you always get a cheaper price between one and four.
So the first one is always more expensive. At every site, even when we’re building the 30th nuclear reactor, it’ll be the same: at the new site the first one will be more expensive, the second will be cheaper, the third one is cheaper, the fourth one’s cheaper. So it is objectively the case that if you went to a 1200 MW coal site, which by definition only has 1200 MW of infrastructure around it, and you built one AP1000, that would be more expensive than building four 300 MW reactors, because the second, third and fourth one will be cheaper than the first one.
And so it’s better to do it that way, and then you get the resiliency benefits, et cetera. So if you’re going to build the AP1000, you need a minimum of 2200 MW of infrastructure around that site.
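Shah’s “first one is always more expensive” point is just a learning curve. Here’s a rough sketch; the 85% learning rate, the made-up cost units, and the (generous) assumption that per-MW first-of-a-kind costs are equal for big and small units are my illustrative choices, not figures from the interview:

```python
import math

# Learning-curve sketch: each doubling of cumulative units built at a site
# cuts the unit cost by a fixed fraction. All numbers are illustrative.

def unit_cost(n, first_unit_cost=10.0, learning_rate=0.85):
    """Cost of the n-th unit built (n = 1, 2, 3, ...).
    learning_rate = 0.85 means each doubling costs 85% of the previous."""
    b = math.log(learning_rate, 2)  # negative exponent
    return first_unit_cost * n ** b

# One 1200 MW unit vs four 300 MW units of a repeated design, assuming
# (generously) equal per-MW first-of-a-kind cost for both sizes.
one_big = unit_cost(1)  # first and only unit: full price
four_small = sum(unit_cost(n, first_unit_cost=10.0 / 4) for n in range(1, 5))
print(f"one big: {one_big:.2f}  four small: {four_small:.2f}")
```

Under these toy assumptions the four small units come out roughly 16% cheaper in total, purely from units two through four riding down the curve.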
I’ll add that Noah Smith pointed out that after a few nuclear plants, the French encountered diseconomies of scale - the plants got more expensive. So while the first is always more expensive, we should hold off on comparisons between the fifth and the tenth.
(3) SMRs. They aren’t especially small in practice. Nor are they modular. But!..
(4) The Inflation Reduction Act is really pro-nuke:
The IRA says if you actually are building a nuclear reactor, you get a 30% tax credit. Then if it’s in an existing energy community, which all nuclear reactor sites are, you get a bonus 10%. And then if you have majority domestic content, you get another 10%. So, a lot of these reactors are going to get a 50% tax credit.
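Stacked, the credits in that quote come out like this (the project cost is a hypothetical round number, not from the interview):

```python
# IRA credit stacking as described above (fractions of qualifying cost).
base_itc = 0.30           # base investment tax credit for a new reactor
energy_community = 0.10   # bonus: sited in an existing energy community
domestic_content = 0.10   # bonus: majority domestic content

total_credit = base_itc + energy_community + domestic_content
project_cost = 8_000_000_000  # hypothetical $8B reactor project
print(f"credit: {total_credit:.0%} -> ${project_cost * total_credit:,.0f}")
```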
It’s sorta disappointing how long it’s taken for HTS superconductors to penetrate industry. I’m not really sure what the holdup has been. We can see now that they’re appropriate for fusion, which various startups have been claiming for a while (in fact, they’ve been claiming that HTS is probably the only technology that could lead to cost-effective fusion, due to the non-linear effect of field strength).
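That “non-linear effect of field strength” is usually quoted as fusion power density scaling roughly as the fourth power of the magnetic field, at fixed plasma pressure ratio (beta) and machine size. A back-of-envelope, with representative (not official) field values:

```python
# Rough scaling: at fixed plasma beta and machine size, fusion power
# density goes roughly as B^4. Field values below are representative
# ballpark numbers, not official machine specifications.
lts_field = 5.3    # tesla, roughly ITER's on-axis field (NbTi/Nb3Sn)
hts_field = 12.0   # tesla, roughly what REBCO HTS tokamak magnets target

gain = (hts_field / lts_field) ** 4
print(f"~{gain:.0f}x power density at the same machine size")  # ~26x
```

That factor is why a much smaller HTS machine can, in principle, match a much larger LTS one.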
But they have yet to significantly penetrate MRI machines, either, or particle colliders. I’m aware that they have different characteristics, necessitating some design changes–but it seems to me that finding workarounds would have an outsized impact. And then there are the applications that don’t currently use superconductors in any real capacity, like power distribution or generation.
I’m not sure why it’s taken so long for such a promising technology to make an impact. It would be one thing if it were tried and it turned out to be a dud–but that doesn’t seem to be the case. It’s just been difficult, I guess.
They’re ceramics. Imagine how you would wind (precisely) and contact something that is an insulating ceramic at room temperature without fracturing it or even introducing dislocations. Then figure out how to design a complex magnetic field using an anisotropic conductor (which HTS is) that needs to be epitaxial to access its full superconducting properties. Then figure out how to manufacture km of this as a “cable” at costs that make it less expensive than just drawing and winding NbTi.
It’s taken a long time and all the steps have been hard. And there are still obstacles to be overcome.
It’s still a little odd. One might think that, if Canada can support one reactor design, then the US (being much larger and richer) could support a dozen just as effectively.
Ok, 92 different designs is a bit much. But there aren’t that many manufacturers: there’s Westinghouse, GE, Babcock & Wilcox, and a few others. Surely they could have standardized internally, even if they weren’t willing to work together on a common design. Strange how that didn’t happen.
Well, this is a case where the obvious approach is also the one that works: manufacture it as a ribbon. That solves the flexing problem and allows manufacturing via epitaxial growth. Just a thin film on a flexible substrate.
Obviously this glosses over the difficulties in the actual process, but they’ve been making HTS ribbon for decades now. And it still hasn’t penetrated fairly obvious applications.
Not that strange. Most of the decisions were made during the 1960s and early 1970s, before things became obvious. Then interest rates shot up in the early 1980s, which killed the US nuclear program, along with massive cost overruns on existing construction projects.
The bizarre part is that the pro-nuke community and the US nuclear industry never grasped the importance of standardization, even by the 1990s, 2000s, 2010s, and I would argue today insofar as the Nuke Bros are concerned. But the US government appears to get it now, which makes me suspect the nuke industry does as well.
Poland gets it: they are building about 4 nukes IIRC. I’m not sure Britain does, judging from its struggles with Hinkley Point.
Did they really not grasp it, or were they unable to do anything about it?
It’s not bizarre that the US regulators might have been unable to enforce standardization, since that’s a political process, and political things don’t often follow rational analysis. It’s also not bizarre that the various industry members would find themselves unwilling to work together–they’re competitors, after all, and although industry members can sometimes form standards when they perceive it’s in their interests, it’s not a sure thing. It sometimes requires one large member to impose their will on the rest (how it often works in the computer industry).
It’s also not bizarre that the “pro-nuke community” or “nuke bros” (way to poison the well) would have been unable to do anything, since they have no political or economic power.
But what is bizarre is that none of those manufacturers figured out for themselves early on that standardization might give them a competitive advantage. Maybe they counted on too much continued growth, and felt that standardizing early on a suboptimal design might be problematic–they wanted to iterate first. But it seems they should still have learned the lesson fairly early on (like within the first couple of decades).
Utilities are natural monopolies that don’t compete with one another. (Or they were until deregulation, when power production was separated from the natural monopoly of power distribution.) Utilities could have stuck with building tried and true designs rather than the latest and the greatest. But frankly, I think that’s a lot to expect from a natural monopolist making decisions in 1960-1975.
Regulators had no remit to enforce standardization - that’s not a mistake. Regulators are in charge of safety, not economic returns.
It is bizarre that the pro-nuke community has never grasped the importance of standardization. They’ve even struggled with the poor economics of nuclear construction. It’s been hippie-punching all the way down since Three Mile Island.
I’m not sure about manufacturers such as GE or Westinghouse - maybe they left money on the table. That’s an interesting observation. Still, CANDU and EDF did grasp this, so it’s not like all of them had their heads in the sand.
It’s not inconceivable that they could have done otherwise. Why are there so few commercial aircraft base models? Partly because type certification is very expensive. There are zillions of 737 variants but they’re all 737s. And that’s in part due to how the FAA operates, even though they’re also in charge of safety rather than economics.
At any rate, that’s not what we got out of the NRC. But maybe we could have in an alternate universe.
My recollection is that it was the regulators that stood in the way of standardization by not allowing certification of a single design to be used to expedite the approval process. I remember proposals for standardized designs going back to the 90’s. The developers wanted to have a design certified, and then to allow a certified design to go through an expedited regulatory process that would not have to revisit the design but only site-specific matters.
But they were forced to recertify every aspect of construction with every new build, with changes to the design part of the regulatory mix if necessary, which blew apart the cost savings you could get from standardizing. In fact, you couldn’t really standardize much at all.
It is also my understanding (though it may be faulty) that DoD had a fairly heavy hand in guiding design development: they preferred plants that could be used to make Pu for their things that go flash-boom, which tended toward certain design patterns. INL had an MSBR that was inherently safe (power loss simply caused it to naturally shut down on its own), but no commercial venture ever used that design (it did use molten sodium, which really likes O2, so that part of a reactor would have to be built very robustly). A plant that is not at risk of a meltdown would have gained much more public acceptance and faced much less NIMBYism.
“Design certification” appears to be the moral equivalent of a type certificate. But you can see from the page that there are very few certified designs, and they’re relatively recent. The oldest ones are from the 90s. As best I can tell, most of the reactors in the US aren’t of these designs.
The design certificate also only lasts 15-30 years, which IMO seems on the low side. Why isn’t the certificate forever? Or at least a long time, like a century. I don’t think FAA type certificates expire. The 737 one was issued in 1967.
It looks like the NRC has streamlined things a tiny bit, but probably not enough. We do have standardized reactor designs like the AP1000 but the per-site costs are still very high.
Regarding standardization of nuclear power plants, note that the U.S. Navy nuclear power program pretty much figured this out. They built a handful of different shore-based prototype reactors and power plants (later used for training and testing purposes), followed by dozens of near-identical operational plants for ships and submarines in the fleet.
As a side note, the U.S. Navy nuclear power program has also been very safe, with no significant incidents or accidents to date. It consequently has a very good reputation for safety, when people think about it at all. I always remember my first day at Naval Nuclear Power School, when the head of the school welcomed us all and, to emphasize the importance of our upcoming training, recounted an anecdote in which protesters reportedly hiked out to a desert in California to protest the construction of a nuclear power plant some distance from population centers, then drove back to their homes in San Diego, completely ignoring the dozens of operational nuclear power plants floating in the harbor aboard various U.S. Navy ships and submarines.