How much specific power can a space-based nuclear reactor realistically produce?

By specific power, I mean kilowatts/kilogram.

In his excellent book “The Martian,” Andy Weir posits a nuclear-electric spacecraft used to cycle between Earth and Mars. This idea has been mentioned so often it is basically a cliché.

For this technology to be viable, an extremely high specific power is needed from the nuclear reactor: something on the order of a kilowatt per kilogram.

By “reactor” I mean the entire power plant, not just the core: the reactor core, the biological shielding, the control system, the working fluid, the electric generator, the radiators…

Basically, a big hunk of machinery that you attach to your ship and electric power comes out of it for a few years.

This is a very complex problem that would require a detailed design study. Has anyone done such a study for an extremely high-performance reactor design? Is it even possible to get such performance?

Intuitively, I know fission bombs work, so I know the core of the reactor itself could be relatively small and light. You can cast a shadow with the shielding and use the 1/r^2 law to minimize the mass of the biological shielding. You can use mercury or liquid sodium for the working fluid to minimize the amount needed. You could use superconducting wire to wind the generator, cutting the wire mass by a factor of roughly 200 (of course, the superconductor now needs a cooling system, but space is inherently cold, so not much of one). Radiator mass is a series of question marks: I don’t know what can be done to maximize radiated power for the least mass.

So it sounds doable, but is it? Or does materials science stop such a power generator from being built?

Stranger on a Train, you are being paged…

The answer to this question isn’t simple, or at least, the simple answer is at best incomplete. (I am assuming by “nuclear reactor” the OP means a nuclear fission reactor, not the more passive and lower-performing radioisotope thermoelectric generator or a hypothetical fusion reactor, for which we do not have any good models or empirical data to estimate scaling and power output.) The United States has deployed exactly one space-based nuclear fission power supply, as part of the Space Nuclear Auxiliary Power (SNAP) program. The SNAP-10A operated for about a month and a half, exceeding the design output of 500 W(e) by about 20% before a failure in the power regulating system caused it to shut down.

The SNAP-10A was a liquid metal (NaK) cooled reactor with a semi-passive failsafe design; loss of integrity or a deliberate command would cause the beryllium reflectors which maintained the criticality threshold to separate from the craft, and the neutron flux would drop sufficiently that the reactor would shut down. Since it used a liquid metal coolant loop, no pressurization or pressure-volume heat cycle engine was required, and the system generated electricity via thermoelectric conversion between the heated fluid and the space background. This isn’t terribly efficient, but it is simple, reliable, and easily made to be essentially failsafe. Most later concepts for space-based nuclear electric conversion assume using an outer loop with a high temperature gas (typically helium) and using a Stirling or Brayton cycle to attain high thermal conversion efficiencies.

However, despite some research programs, no other nuclear reactors have been flown by the US, and most of the research on the use of nuclear fission for space-based applications has focused on nuclear thermal or nuclear electric propulsion, which has a very different set of requirements (addressed below) than just energy production for instruments or habitats.

The former Soviet Union had more extensive experience with nuclear reactors in the Upravlyaemy Sputnik Aktivnyj (NATO designation: RORSAT) and some of the Kosmos-18XX series satellites. These were also liquid-metal-cooled thermoelectric units. The reliability of these units varied (likely because of materials quality control issues). Because of the problems experienced by the Soviets and economic troubles with the fall of the Soviet Union, the Russian Federation has largely abandoned these efforts, though it has made the research done on the TOPAZ-2 system available to the international technical community. Most of their technical expertise has since fled the country to other more lucrative opportunities.

Overall, the specific power output of existing space-based reactors is between 0.001 and 0.005 kW(e) per kg. Note that this disappointing figure is largely because of the very poor thermal efficiency of around 0.01-0.02. With more modern thermoelectric conversion and higher operating temperatures, efficiencies approaching 0.15 or better are possible. The US SAFE-400 design claims a thermal conversion efficiency of around 0.25 (25% efficiency). That sounds absurdly high to me, but if true it would put it in the ballpark of significantly more complex heat engines. Regardless, it is certainly safe to assert that the output achieved by existing systems is nowhere near the potential maximum, and with a not-inconsiderable effort to develop the engineering and materials expertise, much greater efficiencies could be achieved.
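To put the low end of that range in perspective, here is a minimal back-of-the-envelope sketch. The ~435 kg system mass for SNAP-10A is my assumption from published summaries, not a figure from this thread:

```python
# Rough check of the ~0.001 kW(e)/kg figure using SNAP-10A numbers.
# The ~435 kg flight system mass is an assumed figure from published
# summaries; treat this as illustrative only.
electrical_output_w = 500.0    # design electrical output, W(e)
system_mass_kg = 435.0         # approximate flight system mass (assumed)

specific_power = (electrical_output_w / 1000.0) / system_mass_kg
print(f"SNAP-10A specific power: ~{specific_power:.4f} kW(e)/kg")
# -> ~0.0011 kW(e)/kg, the low end of the range quoted above
```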

All of this refers to the production of electricity from a nuclear thermal source, and in this regard the power source and its control system can be considered essentially a black box, especially if the demand cycle is fairly constant, e.g. maintaining a steady power draw for instruments or to support a habitat. However, when it comes to propulsion systems, the hideous inefficiency of thermal-electric propulsion at reasonable thrust levels (e.g. those necessary to propel a crewed vehicle or large instrument platform versus a small interplanetary probe) almost certainly dictates the direct use of reactor thermal output to heat the propellant. This, in turn, requires that we are able to finely control the thermal output over a wide range, provide sufficient cooling to keep the system from eating itself, and build the reactor into the load-bearing thrust structure of the vessel. In this case, it is no longer a black box; it is an integral part of your propulsion system, and the power output for other functions is almost incidental. At the same time, it has to be highly reliable and essentially maintenance free, which dictates as much robustness and simplicity as possible, while still achieving sufficient mass efficiency and overall performance to be worth the trouble. This is a far more complex set of requirements, but the driving consideration is very likely the thermal management, i.e. getting rid of all of the waste heat generated by such a system.

How do you get rid of waste heat? Well, while space is “cold” in terms of the background temperature, the lack of a working fluid means that there is no way to reject waste heat via convection, as is done with terrestrial reactors, including those in mobile applications like nuclear submarines and naval surface vessels. This means that all of the experience and design work that we’ve put into such compact, high performance systems has little applicability in the space environment. (In addition, submarine applications of nuclear fission reactors are often built around using natural convection in order to minimize pump noise; in space, the lack of gravity means that natural convection systems are much more difficult–although not impossible–to implement, so you can’t just take an S9G and plunk it into your spacecraft.)

Radiation from your reactor is really a pretty easy problem to solve; water and waste products can be used to shield the view path between the habitat and the reactor, and as long as the reactor is designed for zero maintenance there is no reason to have any connection between the habitat section and the reactor core, so they can be separated by as much distance as possible. One possible solution is to have the habitat and associated functions on one end of a tether and the reactor/propulsion section on the other, and have them be structurally attached during impulsive phases but then widely separated, with the reactor used as a counterweight, during ballistic flight, thus managing the radiation exposure of the crew. (You still have to mitigate the impingement of high energy cosmic radiation, coronal fluxes, and in the case of a mission to Jupiter, the radiation belts, but that is a problem regardless of your power source.)
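To give a feel for how much the tether buys you, here is a minimal inverse-square sketch; both separation distances are invented for illustration:

```python
# Unshielded dose rate from a compact reactor falls off roughly as 1/r^2,
# so widening the separation during ballistic flight buys a lot.
# Both distances below are arbitrary illustrations.
attached_m = 20.0    # assumed separation under thrust
tethered_m = 500.0   # assumed separation during ballistic flight

reduction = (tethered_m / attached_m) ** 2
print(f"dose rate reduced by a factor of ~{reduction:.0f}")  # -> ~625
```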

One possible way to deal with thermal issues is to treat the reactor as an expendable component; design it as a module to survive through the impulsive maneuver, and then eject it where it can melt down or radiate to its core’s content. Of course, this doesn’t help you with a power supply, but it does address the need for high specific output over a short period versus a long continuous output through the mission duration. Another, possibly more efficient, approach is to have a distributed “core”, e.g. a fission fragment reactor, where the coolant is also a carrier for the fissile or fissionable materials and is used as part of the propellant, being ejected out into space. With the correct design, a high degree of scalability should be possible without having multiple systems or great complexity.

I think it is not really possible at this point to estimate the potential output of a space-based nuclear fission system based on the state of the art. This work was largely abandoned in the 'Seventies, and only tentatively supported in the 'Eighties and early 'Nineties by the Strategic Defense Initiative, only to again be reduced to toy studies and low priority research by LLNL and LANL since. A lot of the expertise and pretty much all of the practical test systems have been lost or mothballed, such that implementing such a system will have a steep learning curve, but if the US had consistently pursued nuclear propulsion and space-based nuclear power, such systems could have been viable by the late 'Eighties.

Certainly, space-based nuclear fission power generation is the sine qua non of any crewed missions beyond the orbit of Mars, and realistically it is probably necessary for a sustainable infrastructure to explore at Mars, given the low solar flux and issues with solar power generation on the Martian surface. Developing this capability–along with the ability to extract energy resources in situ–is one of the cornerstones of a future space-based infrastructure for exploitation of space resources and crewed habitation and exploration, and the existing state of the art–which demands complex systems, highly processed fuels, and very limited capabilities–does not represent the potential capability and necessary requirements for future use.

The International Atomic Energy Agency’s report on “The Role of Nuclear Power and Nuclear Propulsion in the Peaceful Exploration of Space” has an extensive discussion on the history, state of the art, and future potential for space-based fission power generation.

Stranger

Why does a nuclear-electric system for a spaceship need a high power-to-weight ratio? If it’s heavy for the amount of power it generates, you just have very low acceleration; for a journey that’s going to last weeks or months anyway, that’s not a big problem. I don’t know what thrust electric systems typically get per kilowatt, but an acceleration of only one-thousandth of a G would add up to about 850 m/s a day.
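A minimal sketch of that arithmetic, just to check the figure:

```python
# Delta-v accumulated per day at one-thousandth of a g of acceleration.
g0 = 9.81                # standard gravity, m/s^2
accel = 1e-3 * g0        # one-thousandth of a g
seconds_per_day = 86400.0

delta_v_per_day = accel * seconds_per_day
print(f"delta-v per day: {delta_v_per_day:.0f} m/s")  # -> ~848 m/s
```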

Ion engines have a very high specific impulse (that is, efficiency as measured by the amount of propellant expended) owing to the high effective exhaust velocity, but in order to achieve those efficiencies the engine has to operate at very high temperatures. This means applying some kind of impedance heating to the propellant medium (via direct resistance, microwave heating, inductance heating, et cetera), which tends to be really inefficient because a lot of the heat ends up going into the chamber materials or causing electrode breakdown, so you have to carry a larger power source. Because of the inherently very low thrust developed by such engines, you have to have a very large number of engines, which means that the inert (dead) mass of the vehicle–the mass you have to carry through operation and which therefore takes away directly from useable payload mass–is very high.
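To answer the thrust-per-kilowatt question above in idealized terms: for any electric thruster, jet power is half of thrust times exhaust velocity, so thrust per watt falls as exhaust velocity (and specific impulse) rises. A minimal sketch; the Isp and efficiency below are assumptions for illustration, not figures from this thread:

```python
# Idealized thrust per kilowatt for an electric thruster.
# Jet power P_jet = 0.5 * thrust * v_e, so thrust / P_in = 2 * eta / v_e.
# Both parameters below are assumptions for illustration.
g0 = 9.81
isp_s = 3000.0    # assumed specific impulse, seconds
eta = 0.6         # assumed electrical-to-jet efficiency

v_e = isp_s * g0                                # exhaust velocity, m/s
thrust_n_per_kw = 2.0 * eta / v_e * 1000.0      # newtons per kilowatt
print(f"thrust: ~{thrust_n_per_kw * 1000:.0f} mN per kW")  # -> ~41 mN/kW
```

The inverse scaling with exhaust velocity is why high-Isp systems need such large power plants for any useful thrust.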

And just because you have a good propellant specific impulse doesn’t mean the overall system efficiency is good; you may be able to get a lot more total impulse from a massive bank of ion engines than from a shorter-burn chemical or nuclear thermal rocket, but if the tradeoff is that your vehicle mass increases by a factor of 10 or 100, the result may be less total delta-V for the same propellant mass despite the inherent propellant efficiency of the propulsion system. The VASIMR concept is intended to bridge the gap by getting higher effective thrust, but the powerful magnetic fields required to achieve sufficient containment also demand a very large power plant and generate an even larger amount of waste heat than arcjets and electrodeless plasma thrusters, on the same order as magnetoplasmadynamic thrusters. Using these in practical large-scale vehicles would require multi-megawatt power sources, which are just too large to be practically carried to orbit by any existing or proposed orbital launch systems.

One way or another, you still have to be able to carry all the “juice” (propellant) you need to go from Orbit A to Orbit B. If you expend propellant continuously but more slowly, and it turns out that you can’t carry enough to achieve the necessary delta-V, you’ll miss your target and end up slowly orbiting between planets.

Stranger

So you’re saying that a spectacularly heavy and inefficient power plant might end up giving less thrust than the same tonnage of chemical propellant? Ok, that makes sense. In fact, I could see that the “investment” in a nuclear power source might not pay off unless the vehicle was intended to be refueled and reused for multiple trips.

No. I’m saying that even if the total impulse delivered by an ion system is greater than that of a chemical or nuclear thermal rocket (owing to the ability to operate for a longer duration at a lower thrust level), if the size of the propulsion system causes the spacecraft to mass significantly more than the less propulsively efficient system, the realized delta-V will be less for the same expenditure of energy.
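A minimal Tsiolkovsky-equation sketch of that point; every mass and Isp below is invented for illustration:

```python
from math import log

# Rocket equation: delta-v = Isp * g0 * ln(m0 / m_dry). A high-Isp system
# can still lose if its dry mass balloons. All figures are illustrative.
g0 = 9.81

def delta_v(isp_s, dry_t, prop_t):
    """Delta-v in m/s for a vehicle with the given dry/propellant mass (t)."""
    return isp_s * g0 * log((dry_t + prop_t) / dry_t)

# Light chemical-ish vehicle, 100 t of propellant.
print(f"chemical-ish: {delta_v(450, 15, 100):,.0f} m/s")    # -> ~9,000 m/s

# Ion vehicle, same propellant load, but a dry mass 100x heavier
# (powerplant, radiators, engine banks).
print(f"heavy ion:    {delta_v(3000, 1500, 100):,.0f} m/s") # -> ~1,900 m/s
```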

Stranger

Stranger: The root of my question is the following. I am aware that something like a nuclear salt-water rocket is by far the best method. Even if the proposed design specs for such an engine are hugely optimistic, and a usable design would be a much heavier engine with much lower thrust, it’s still the perfect combination of good thrust and good Isp.

But, if you wanted to do nuclear-electric anyway, using VASIMR (which I recall has been tested at around 60% efficiency) or large-scale ion or other methods, how much electric power generation per kilogram of reactor is actually feasible? Note that in this case, the design is not constrained by the need for reliability: presumably, you have developed telepresence robots that can go on EVAs to repair and maintain the electric pumps and valves and so on. Maybe you even save mass by not fully cladding the fuel and letting fission fragments get into the coolant, so the entire heat engine is highly radioactive.

This article states that a 1 kilowatt/kilogram level of performance is basically a fantasy: http://www.spacenews.com/article/vasimr-hoax

Having had a few nights to sleep on the problem, it seems to me that the driving factor is the mass per watt of heat dissipated by your thermal radiators. We could probably come up with an intelligent estimate by taking the radiator’s kilograms per watt of thermal dissipation and tripling that mass to account for the rest of the system.

So, if someone found a design somewhere for a liquid sodium radiator that dissipates 500 watts thermal per kilogram of radiator mass (including the mass of the coolant in the radiator channels), and we have a highly efficient heat engine that is 50% efficient (I’ve read about a form of “combined cycle” heat engine for a gas core nuclear reactor), then the full-up system might provide about 83 watts of electric power per kilogram: 250 W(e) per kilogram of radiator, divided by three for the rest of the system.
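A minimal sketch reproducing that estimate; note it sizes the radiator for the full thermal output rather than just the rejected half, which is the conservative reading of the numbers above:

```python
# Reproducing the estimate above: 500 W(th)/kg of radiator, a 50% efficient
# heat engine, and a 3x mass multiplier for everything that isn't radiator.
# Sizing the radiator for the full thermal output is a conservative choice.
radiator_w_th_per_kg = 500.0
efficiency = 0.5
system_mass_multiplier = 3.0

w_e_per_kg_radiator = radiator_w_th_per_kg * efficiency     # 250 W(e)/kg
w_e_per_kg_system = w_e_per_kg_radiator / system_mass_multiplier
print(f"~{w_e_per_kg_system:.0f} W(e) per kg of full system")  # -> ~83
```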

There are mission profiles where this kind of engine would be useful. The problem with using your reactor to directly heat the propellant is that you get specific impulses of about 1,000 seconds at best.

If articles on the subject are correct, dual-stage 4-grid thrusters are a design that could be scaled up to use megawatts of power, with a specific impulse of 19,300 seconds measured in the prototype thruster…

That’s enormously better. For long-duration missions to the outer planets, this would give you the total delta-V to do it more rapidly than a Hohmann transfer and without needing insane mass ratios.
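For a sense of scale, a minimal comparison at an arbitrary mass ratio of 3:

```python
from math import log

# Delta-v at a fixed, arbitrary mass ratio of 3 for the two specific
# impulses mentioned above (nuclear thermal vs. the 4-grid prototype).
g0 = 9.81
mass_ratio = 3.0

for isp_s in (1000, 19300):
    dv = isp_s * g0 * log(mass_ratio)
    print(f"Isp {isp_s:>6} s -> delta-v ~{dv / 1000:.0f} km/s")
# -> ~11 km/s vs. ~208 km/s for the same mass ratio
```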

Also, a fundamental problem with a fission-fragment-ejecting engine is that it would be very difficult to safely test on Earth. You could readily test all the components of a high-power nuclear reactor on Earth (using a non-nuclear heater), and you could readily test a big ion engine in a vacuum chamber.

All of these things are of course basically science fiction in that while they are technically possible using mostly existing techniques, the funding and interest isn’t there to develop them. The point of the question is to ask what could you do if the money was there.

Do you really need an Apollo-style “all up in one go” launch for such a system? I mean, Earth orbit rendezvous is fairly well practiced now. You send the components up in multiple launches, the crew module and the propulsion modules separately.

Given the much greater crew, provision, power, and radiation shielding requirements, any vehicle for crewed interplanetary exploration would almost certainly have to be assembled and fueled in orbit. Indeed, a genuinely robust exploration mission would send separate flights of modules for provisions, propellant, and habitats in order to ensure that the failure of any individual mission segment (excepting the actual crewed vehicle) would not doom the entire mission.

Stranger

How do you cool it off? I mean, the radiation of heat is too slow relative to the amount of heat generated.

Any internal highly energetic power source is going to require either massive radiators or some kind of active cooling system using a fluid to absorb and reject waste heat via evaporation (which will increase the amount of consumables carried). There is really no physical way around this problem; you can’t engineer your way out of the thermodynamic corner through cleverness, unless you can somehow locate the power source outside of your system (e.g. nuclear pulse propulsion), which limits the waste heat that is intrinsic to the system.

Stranger

Mir (though not the ISS) did recycle urine and air humidity. Maybe that can be used for cooling: the cooling system is also the recycling system, with the impurities being blown off?

Come on. A reactor wastes a lot of heat. The initial temperature of the urine is body temp, and you are going to recycle it as drinking water, right? The air temperature is comfortable, and you are only going to take a tiny amount of steam to replace condensed water. That’s a tiny amount of energy compared to the amount wasted by the reactor.

The only way for energy to leave the system in space is through radiation. It’s far too slow of a mechanism.

Well, to be specific, it isn’t rate per se that is the limiting factor. The power radiated by a surface scales with the difference of the fourth powers of the hot and cold temperatures (the Stefan–Boltzmann law: P = εσA(T_hot^4 − T_cold^4)). Assuming the cold reservoir to be the microwave background of interstellar space at ~2.7 kelvin, compared to waste heat in the thousands, if not tens of thousands, of kelvin, the specific rate (or power transfer) is quite high. However, the total throughput of waste heat is limited by the outward-facing aspect of the radiative surfaces (and of course the emissive properties of those surfaces, as well as the internal capacity to carry heat from the source to the radiators). So, unless your ship has giant fins or a huge spherical shell of radiators around it, you will be limited by the amount of heat that can be rejected.

Of course, these radiators and the systems to move heat to them all take up mass that detracts from payload and propellant, as well as requiring structural support under thrust and whatever maintenance is required. So while you could in principle scale up your radiative surface to any necessary area, the reality is that practical limitations on how much surface area and mass you can afford will probably dominate the amount of power you can generate, regardless of how compact or energetic you make the power source.

Stranger

As you note, though, there is no theoretical limit here: the Stefan–Boltzmann law says that black bodies radiate power proportional to the fourth power of temperature. Any area of radiator can reject any amount of power as long as you can make it hot enough. For example, a radiator a square meter in area can emit a megawatt of power at around 2000 K.
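A minimal check of that figure, assuming an ideal black body (emissivity 1) radiating to a negligible background temperature:

```python
# Stefan-Boltzmann check: power radiated by 1 m^2 of ideal black body
# at 2000 K, ignoring the ~2.7 K background term.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

area_m2 = 1.0
temp_k = 2000.0

power_w = SIGMA * area_m2 * temp_k**4
print(f"radiated power: {power_w / 1e6:.2f} MW")  # -> ~0.91 MW
```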

Of course, this reduces your efficiency, since any heat engine’s efficiency depends on the temperatures of the hot and “cold” sides (for an ideal Carnot engine, η = 1 − T_cold/T_hot). Finding a happy middle ground that works with available materials is, I suspect, an unsolved engineering problem.
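For the idealized version of that trade there is actually a textbook answer: the radiator area for a Carnot engine of fixed output is minimized when the cold side runs at three-quarters of the hot-side temperature. A minimal sketch, with an arbitrary hot-side temperature:

```python
# Trade-off between Carnot efficiency and radiator area. For a fixed
# electrical output, waste heat per watt is T_c / (T_h - T_c) (Carnot),
# while radiator area per watt of waste heat scales as 1 / T_c^4.
# The hot-side temperature is an arbitrary illustration.
t_hot = 2400.0   # assumed hot-side temperature, K

best = min(range(300, 2400, 10),
           key=lambda t_c: (t_c / (t_hot - t_c)) / t_c**4)
print(f"area-minimizing cold side: {best} K ({best / t_hot:.2f} * T_h)")
# -> 1800 K, i.e. T_c = 0.75 * T_h regardless of the hot-side value
```

Real systems land well off this ideal optimum, of course, since materials and cycle losses intrude.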

Although raising the radiator temperature (by means of a heat pump, essentially a refrigeration cycle run against its usual purpose) increases the radiative output per unit area, it also increases the total amount of energy to be rejected, by dint of the additional work being done. There may be some benefit in doing this to overcome some of the internal inefficiencies of the power-source-to-radiator loop(s), but it doesn’t ultimately improve the throughput of the radiator. So, if your internal loop is as efficient as it can practically be, there is probably negligible benefit in terms of waste power throughput from adding a heat pump cycle, and you are better off either reducing power consumption or increasing the size of your radiators rather than trying to game the laws of thermodynamics.

Stranger

An interesting point, though not really where I was going with that. My point was really just that the cold side will increase in temperature until it reaches equilibrium, and there is always some temperature that can handle the load no matter what the size of the radiator. If efficiency and materials science weren’t a problem, you could get away with a very small radiator indeed.

Of course, efficiency and materials are problematic, but it does seem like Stefan–Boltzmann works in our favor: we do at least have materials that can survive at thousands of Kelvin, and at that temperature the radiative transfer is very impressive; more than enough for a large reactor. Still, engineering such a system is a challenge.
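Inverting the equilibrium argument: pick a (deliberately extreme, invented) waste-heat loading and solve for the temperature the radiator would have to reach:

```python
# Solve sigma * T^4 = P / A for the equilibrium surface temperature.
# The 10 MW per square meter loading is an arbitrary illustration of how
# hot a "very small radiator" would have to run.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W / (m^2 K^4)

power_per_area = 10e6   # assumed waste-heat loading, W/m^2
temp_k = (power_per_area / SIGMA) ** 0.25
print(f"required surface temperature: ~{temp_k:.0f} K")  # -> ~3,644 K
```

That is uncomfortably close to the melting point of even tungsten, which is exactly the materials-science catch.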

How effective would something like Freeman Dyson’s proposal for a nuclear shock wave riding craft be?

Perhaps we’re talking past one another, but assuming that your power generation system operates in a certain temperature range and produces waste heat at a specific temperature, there is only a certain amount of heat energy you can transfer to the external environment (e.g. the interplanetary background) for a given outward-facing radiative surface area. If you elect to add an additional heat pump cycle to raise the temperature in order to make the radiator more radiatively efficient for a given area, you will also be adding additional energy into the system, which is fine when you have plenty of potential cooling capacity to make use of (e.g. forced convection in a power plant using a river for transferring waste heat) but is probably only minimally effective when radiating heat to the space background. The only reason you’d really want a heat pump cycle is to make the internal transfer from your power source through your cooling system more efficient, but in the end, you are still going to need a certain amount of radiative surface for a given amount of waste heat.

Since the heat generation is all external, the concerns about rejecting waste heat from the system are vastly reduced, and in fact, ideally you’d convert most of the incoming energy to hot exhaust, the expansion of which is what drives the vehicle by impingement on the pusher plate. Most designs assume some kind of consumable ablative layer (such as oil) which would carry away excess heat, and then allow enough duration between shock intervals for the system to radiate down to a temperature the pusher structure can withstand. Since the habitat and other sections are well forward of the pusher and are shadowed by the plate, the amount of heat transferred forward can be minimized. Of course, you still have to provide some means to generate the necessary power for the habitat and other critical systems, but siting the massive power generation required for propulsion outside your spacecraft alleviates the biggest part of the problem.

Stranger

Nuclear Pulse Propulsion? Daedalus/Longshot/Orion?

Most simulations show it works well, but I sense moderate handwaving around things like pusher plate erosion, heat management, and effective impulse (assuming greater directional efficiency in the explosives than can actually be engineered IRL)…

But it goes like a bat out of hell in Kerbal Space Program.

ETA: Or what Stranger said.