At first sight, the answer seems obvious, given that any amount of fuel used during descent is more than no fuel at all. On the other hand, with an ocean splashdown you have to send a ship out to recover the spacecraft. That also takes fuel, but does it take anywhere near the amount of fuel required to recover a booster that doesn’t reach orbit?
This is a somewhat more complicated question to answer than it may seem. The amount of propellant necessary to effect a return-to-launch-site (RTLS) maneuver for the first stage is surprisingly small; it depends upon launch azimuth and direction, but in running sensitivity studies I found that it is around 5% of the total propellant loadout. However, that is all propellant that has to be carried up to the point of first stage main engine cutoff (MECO), so it increases the total propellant loadout by whatever proportion is necessary depending upon the separation altitude, payload mass, vehicle inert mass, et cetera. It almost certainly consumes more fuel than a maritime recovery effort, but provided that it is successful it returns the vehicle without exposure to sea water, without the potential for damage during impact, and without the extra cost and complexity of a decelerator system. If the intent is rapid reuse of the stage, return to a land site (either the launch site or a downrange site) makes a lot of sense and has been intensively studied.
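A toy rocket-equation sketch illustrates why reserved landing propellant "costs more than it weighs": any propellant held back for the return burn is dead mass during ascent, so the ascent burn needs extra propellant just to lift it. Every number below (Isp, stage masses, delta-v split, reserve size) is an illustrative assumption, not an actual Falcon 9 figure.

```python
import math

g0 = 9.81            # m/s^2
isp = 300.0          # s, assumed Isp for a kerolox first stage
ve = g0 * isp        # effective exhaust velocity, ~2.9 km/s

dv_ascent = 3000.0   # m/s delivered by the first stage (assumed)
m_dry = 25.0         # t, stage inert mass (assumed)
m_upper = 120.0      # t, upper stage + payload carried to MECO (assumed)

def ascent_propellant(reserve_t):
    """Propellant burned on ascent if `reserve_t` tonnes are held back
    for return burns, via the Tsiolkovsky rocket equation."""
    m_at_meco = m_dry + m_upper + reserve_t
    mass_ratio = math.exp(dv_ascent / ve)
    return m_at_meco * (mass_ratio - 1.0)

expendable = ascent_propellant(0.0)
with_reserve = ascent_propellant(20.0)   # assume ~20 t held back for return
print(f"ascent propellant, expendable:   {expendable:.0f} t")
print(f"ascent propellant, with reserve: {with_reserve:.0f} t")
print(f"total extra loadout for reuse:   {with_reserve - expendable + 20.0:.0f} t")
```

With these assumed numbers, reserving 20 t for the return multiplies into roughly 55 t of additional total loadout, which is the compounding effect described above.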
However, for conventional launch applications the fiscal case for stage reuse has never been persuasive; aside from the costs of qualifying hardware for repeated launch, the actual savings are quite small because the hardware cost itself isn’t that large a fraction of the overall cost of a launch. It isn’t that reuse is technically infeasible; it just isn’t financially necessary unless you have a launch rate in the many dozens per year. For SpaceX, reusability is about being able to maintain a launch tempo without increasing the throughput of Falcon 9 first stage manufacture, rather than any particular saving in launch costs.
Wow, such a great detailed answer, and so quickly.
Now that you mention it, I do recall reading somewhere that the overall reusability of the Space Shuttle fell short of its promise fiscally, and perhaps also tragically given the loss of life. However, I’m not sure to what extent the Challenger and Columbia disasters can be attributed specifically to failures of reuse rather than some other root cause.
Challenger, no: the failure of the O-ring seals in the booster is not related to reuse. Columbia, probably yes. Obviously the vehicle needs a heat shield to reenter somehow, whether or not it is reused. But:
In contrast with previous US spacecraft, which had used ablative heat shields, the reusability of the orbiter required a multi-use heat shield.
Well, sort of. The Space Transportation System (STS, colloquially “Space Shuttle”) was better described as refurbishable rather than reusable, insofar as many systems, including the propulsion and thermal protection elements, required substantial work between flights. For the RS-25 Space Shuttle Main Engine this initially meant a full rebuild every other flight, though turnaround eventually improved. The thermal protection tiles on the bottom of the Orbiter Vehicle (OV) required servicing after every flight, with up to 10% of them replaced, and because each one was a unique configuration this was a laborious process.
Eventually NASA reduced the number of tiles by replacing some in lower heating areas with thermal blankets, but the tiles in high heating areas on the bottom of the OV suffered damage from falling debris, primarily ice and frozen insulation shed from the External Tank (ET) during launch. This damage condition had been observed since the earliest Shuttle flights but was eventually judged to be ‘normal’ (as in “normalization of deviance”), and, like the O-ring blowby observed in the Shuttle Solid Rocket Boosters, became an accepted condition even though it was outside the design intent. In fact on STS-26, which was the Return-To-Flight mission after the loss of Challenger on STS-51-L, the damage was so extensive that there was grave concern at NASA as to whether Discovery would survive reentry. It did, and while this kind of damage continued to be tracked, it was reduced to being an unbriefed standing risk on subsequent missions.
On STS-107, the extensive damage to the underside of Columbia was noted, and there was even discussion of a spacewalk for a repair, but there were neither procedures nor materials on board suitable to effect repairs. However, what was missed at the time is that there was damage to one of the reinforced carbon-carbon (RCC) caps on the leading edge of the wing. These are not refurbished parts, and in fact there were no replacements; while they were visually inspected, there was no aging surveillance program to assess their integrity under aging or repeated exposure to the thermal environment. Post-Columbia testing, using a spare from Enterprise, showed that a piece of ice-bearing foam from the ET forward ramp area could impact the RCC section and do enough damage to allow hot plasma to jet in and destabilize the wing structure during reentry, causing the OV to go into an uncontrolled spin or tumble and subsequent breakup.
With the aging of the RCC material on Columbia, the impact might have actually fractured the panel completely. However, this is really more a fundamental design issue than an aging issue, as these panels were never designed to withstand impact from debris. In general, STS was an exceptionally vulnerable system that proved costly to operate and never achieved the flight rate needed to break even against a single-use vehicle. Nor was it regularly used to retrieve satellites from space and return them to the ground for repair and refurbishment; that capability was used on only one mission, after which retrieval operations were deemed too risky, notwithstanding the relatively few satellites the Shuttle could even reach given its altitude and azimuth limitations.
I do recall reading that one thing slowing down the reuse of the shuttle fleet was that when an orbiter returned, technicians were reduced to scavenging parts out of it to get the next-in-line shuttle ready to launch. The idea of a land-and-relaunch fleet never actually materialized.
I suspect not; the cost of the rocket is significant. Allegedly a SpaceX reflight costs a bit more than half what a throw-away launch costs. Other costs - validating the hardware, the launch monitoring, etc. - have to be paid whether the vehicle is reusable or not. Countdown tests and standard preparations are the same. I would argue, too, that the nice thing about multi-use vehicles is that they can be tested. How often do you run that high-speed cryo-pump in a real-world test if the rocket only launches once? (Maybe once or twice on the test stand?)
Not to mention the benefits of doing a recovery and tear-down of a used part to see how it performed in a real-world environment - was it 5 minutes away from failure, or was the design good for multiple launches? Hard to do that analysis for something sitting 500 fathoms down.
The preparations and monitoring the flight possibly cost less if spread over multiple launches - you need to pay that mission control crew 12 months a year whether they monitor a launch a month or 3 a week.
Getting the booster to return to the Cape requires more fuel than landing downrange on the ship. Of course, if you are flying the two boosters strapped to the sides of the third, center core (heavy lift), then you can carry that extra fuel, since otherwise you would need three landing barges (IIRC they only have two). There have been situations where they “threw away” a booster to get the extra oomph needed for a heavier load.
But keep in mind, a booster after MECO is basically a giant empty can, with the high air resistance and light weight that implies. It fires a burst after separation to slow down so it doesn’t overheat and burn up when it gets back into the denser atmosphere, and then it’s a very light object falling at a few hundred mph.
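A back-of-envelope terminal velocity estimate supports the "very light object falling at a few hundred mph" picture. The mass, diameter, and drag coefficient below are rough assumptions, not published Falcon 9 data.

```python
import math

rho = 1.225          # kg/m^3, sea-level air density
g = 9.81             # m/s^2
m = 25_000.0         # kg, assumed near-empty stage mass
diameter = 3.7       # m, assumed stage diameter
area = math.pi * (diameter / 2) ** 2   # frontal area, falling engines-first
cd = 1.0             # assumed drag coefficient for a blunt cylinder

# Terminal velocity: drag equals weight -> v = sqrt(2 m g / (rho * Cd * A))
v_terminal = math.sqrt(2 * m * g / (rho * cd * area))
print(f"terminal velocity ~ {v_terminal:.0f} m/s ({v_terminal * 2.237:.0f} mph)")
```

With these assumptions the result lands around 200 m/s, i.e. roughly 400 mph at sea-level density - "a few hundred mph" as stated.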
The fabrication cost of a liquid propellant rocket is between 10% and 20% of total launch cost. By far the largest costs are all the labor that goes into integration, verification, acceptance testing, and processing. All systems other than consumable or single-use systems are acceptance tested; engines, for instance, are acceptance tested, and at least the first stage is commonly ‘hot fire’ tested as an integrated system. All of this is on top of development and qualification testing, an extensive series of tests often punctuated by design modifications. The common notion that SpaceX is the only company that tests its hardware before flight, or the first to consider reusability, is a complete fallacy perpetuated by ‘space enthusiasts’.
Propellant costs are such an insignificant portion of the cost of a flight that it isn’t even a consideration. Propellant costs for a typical LOX/RP-1 vehicle are less than 1% of total costs even assuming multiple loads and unloads with substantial boil-off of liquid oxygen.
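A quick sanity check on the "under 1%" claim. The prices, the oxidizer:fuel ratio, the boil-off factor, and the launch price below are all assumed ballpark figures, not quoted numbers; only the 150 t RP-1 load comes from this thread's rough estimate.

```python
rp1_tonnes = 150.0       # RP-1 load (thread's rough figure)
lox_tonnes = 350.0       # LOX load, assuming a ~2.3:1 oxidizer:fuel ratio
rp1_price = 2000.0       # $/tonne, assumed refined-kerosene price
lox_price = 200.0        # $/tonne, assumed bulk liquid oxygen price
boiloff_factor = 2.0     # assume multiple LOX loads with heavy boil-off

launch_price = 60e6      # $, assumed total launch price

propellant_cost = (rp1_tonnes * rp1_price
                   + lox_tonnes * lox_price * boiloff_factor)
fraction = propellant_cost / launch_price
print(f"propellant cost: ${propellant_cost:,.0f}")
print(f"fraction of launch price: {fraction:.2%}")
```

Even with deliberately pessimistic assumptions on boil-off, the fraction stays well under 1%.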
Yeah, I don’t think anyone believes that other rocket makers don’t check quality and performance during all stages of manufacture and assembly. With serious, in-depth checks. My point was that only SpaceX can do a post-launch teardown to verify the effect that a real-world launch has had. Plus, by reusing their equipment, they can skip a lot of the validation that goes into manufacture when they use the equipment for the second, third, … ninth time. All that “labor that goes into integration, verification and acceptance testing, processing” - some of that is redundant in reuse.
(I kind of wonder too - if a design is intended to serve a dozen times, is it likely more robust than a single-use component?)
I’m sure that other companies have considered re-use, but to quote Dogbert - “let me check my contract… nope, I get paid the same regardless.” There was no incentive.
Just to be thorough in your fuel comparison, you might wish to consider the different types of fuel used. The returning stage is not using the same sort of fuel as the ship.
Blue Origin is doing reuse. So far they’ve only launched suborbitally, but they keep using the same rocket. Well, they’re on their fourth rocket, but except for the first one (crashed on its initial flight), they’ve all been launched multiple times. I understand they’ll be reusing their orbital rocket once that gets going.
Blue Origin, like SpaceX, is building more on spec for future applications, rather than like Boeing - “Build us a launch system, send us the bill for whatever it takes.”
Very, very roughly speaking:
- A large ocean-going tugboat burns about 150 gal/hr while pulling a barge (i.e., the SpaceX drone landing ships)
- The ships go out about 350 nautical miles for a typical launch, or 700 nm total
- They cruise at about 5 knots, or 140 hrs of round-trip travel time
- So they burn around 21,000 gal of diesel
- Diesel has a density of about 310 gal/ton, so we have 68 tons total
- Reuse by droneship landing has roughly a 15% performance hit
- The Falcon 9 has a total RP-1 (kerosene) propellant load of about 150 tons
- Therefore we waste about 22 t for reuse
- Diesel and kerosene are pretty much the same
68 > 22, so the net fuel use is worse for the barge. However–the numbers here are so rough that I can’t say that with any confidence. It’s really just “a few tens of tons either way”.
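The rough comparison above can be written out as arithmetic. All inputs are the thread's own ballpark figures, so the outputs inherit their (large) uncertainty.

```python
# Barge side: round-trip tugboat fuel burn
burn_rate_gph = 150.0      # tugboat fuel burn, gal/hr
distance_nm = 350.0        # one-way distance to the landing zone
speed_kn = 5.0             # towing speed, knots
hours = 2 * distance_nm / speed_kn          # round-trip travel time
diesel_gal = burn_rate_gph * hours
diesel_tons = diesel_gal / 310.0            # ~310 gal of diesel per ton

# Rocket side: RP-1 set aside for the droneship landing
rp1_load_tons = 150.0      # total RP-1 load
performance_hit = 0.15     # droneship-landing performance penalty
rp1_tons = rp1_load_tons * performance_hit

print(f"barge diesel:  {diesel_tons:.0f} t")
print(f"landing RP-1:  {rp1_tons:.1f} t")
```

This reproduces the ~68 t vs. ~22 t figures, with the barge coming out worse on raw fuel mass.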
But as said above, fuel is so cheap that this isn’t really the defining factor.