Continuing discussion of SpaceX launches

Why is there so much rigidity in making design trades, then? Sure, I can see that some payloads would be extra delicate, but for run of the mill missions, why do most stick with the baseline?

1135 W/m^2 isn’t even very much. It’s less than solar irradiance. It’s obviously possible to build payloads that accept an order of magnitude higher heating, and there are clear launch mass benefits to doing so. But something is stopping that trade from happening in most cases.

150 kft does sound very early. That’s well under MECO altitude for a Falcon 9. As best I can tell, typical F9 missions jettison the fairings somewhat above 100 km, while for Starlink missions it’s a bit less. It’s a difference in mission time of about 12 seconds.

The solar irradiance at Earth orbit of 1361 W/m2 (mean value) only arrives from one direction, so most surfaces only see partial incidence, and a spacecraft on orbit can be oriented so as to protect delicate instruments or rotated in a ‘BBQ roll’ to allow heated surfaces to radiate away. A spacecraft being heated by ram pressure or radiation from shock fronts, however, is exposed in every outward-facing direction, so the heating on ascent can easily be many times that from solar incidence on orbit. In addition, because spacecraft and satellites are not neat aerodynamic shapes you can have multiple shock fronts interacting to create radiative amplification such that the payload experiences intense localized heating. A satellite on orbit will also have any cooling systems operating, whereas these are generally not operating during ascent. All of this, combined with the fact that atmospheric density is not a linear function but drops off exponentially, means that there are some pretty hard minimums on when a fairing can be deployed depending on the type and complexity of the payload.
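As a rough illustration of why that exponential falloff dominates the timing: free molecular heating is commonly approximated as q ≈ α·½ρv³. The sketch below is a toy model only; the single-scale-height atmosphere, the 3 km/s velocity, and full thermal accommodation (α = 1) are all assumed for illustration and are not taken from any real trajectory.

```python
import math

def air_density(h_m, rho0=1.225, scale_height=8500.0):
    """Crude single-scale-height exponential atmosphere (kg/m^3).
    Real density above ~100 km deviates substantially from this."""
    return rho0 * math.exp(-h_m / scale_height)

def free_molecular_flux(h_m, v_ms, alpha=1.0):
    """Free molecular heating rate q = alpha * (1/2) * rho * v^3,
    in W/m^2, assuming full thermal accommodation (alpha = 1)."""
    return alpha * 0.5 * air_density(h_m) * v_ms**3

# Illustrative numbers only: at a fixed 3 km/s, each additional 10 km
# of altitude cuts the flux by a factor of e^(10/8.5), roughly 3x.
for h_km in (100, 110, 120, 130):
    print(f"{h_km} km: {free_molecular_flux(h_km * 1000, 3000.0):10.1f} W/m^2")
```

The point of the sketch is just the ratio between rows: a few seconds of continued climb changes the flux by multiples, which is why the deployable window is so narrow.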

It is easy to say that spacecraft should just be designed and qualified to meet some arbitrarily robust thermal criteria but the reality is that in many cases this isn’t possible; particularly spacecraft that have delicate instruments, are carrying propellants or coolants that have to be maintained below a certain temperature, or have complex mechanisms that are sensitive to coefficient of thermal expansion (CTE) differences between exposed surfaces on the exterior and protected surfaces on the interior. And the more physically or functionally complex a spacecraft is, the more likely it is that it will have some very strict ascent heating limits. Thermal analysis and thermal control are major areas of spacecraft system design and analysis, often consuming many thousands of person-hours of analysis effort, and it often isn’t possible to completely replicate operating conditions in ground testing so additional uncertainty has to be added to analytical predictions.

This really isn’t the simplistic issue you seem to think it is, particularly when mitigations for thermal exceedances generally involve adding more mass, either passive thermal insulation or active cooling systems that also increase power requirements and design complexity.

Stranger

I appreciate the info, but I feel like you’re missing my fundamental question: why do we see so little variation in the fairing jettison threshold? Either it happens at the nominal time, or it may be delayed for sensitive payloads. It’s only Starlink where it happens early, where somehow they found that 10x the flux was compatible with their design, or required only small changes.

I looked at some other payload guides. The Atlas V:

Typically, the PLF is jettisoned when the 3-sigma free molecular heat flux falls below 1,135 W/m2
(360 Btu/ft2-hr). For sensitive SC, PLF jettison can be delayed to reduce the heat flux with minor performance loss.

Rocket Lab Electron:

A standard mission will experience free molecular heating around 1135 W/m^2 at fairing deployment.

Ariane 5:

Aerothermal flux at fairing jettison and second aerothermal flux less or equal to 1135 W/m2

Well, dang. Looks like 1,135 W/m^2 is an industry standard (where did that number come from, anyway?).

An answer of “it’s complicated” just makes the question more acute. The first four launch providers that came to mind all use the same 1135 W/m^2 number. It can’t be that with all these complicated factors, every payload just happens to be right at that value. It must be that for whatever reason, the parties that might be motivated to relax the requirement aren’t getting a benefit from it (cost or otherwise), and therefore the launch providers don’t provide the service. Or maybe everybody is too risk averse. Or something else; I dunno. But SpaceX decided to design their sats to handle 10+ kW/m^2 and use the extra performance to squeeze in a few more. They shouldn’t be the only ones.

At this point, I’m not sure I even believe the fairing is being jettisoned at these values. I believe it’s at a point where it’s no worse than these values (the Ariane guide implies that), but at 1135 W/m^2 on the dot, regardless of the rocket design? It’s strange.

That is correct; the 1,135 W/m2 is a threshold, not some kind of trigger point where fairing separation has to happen, and it isn’t as if it is a quantity that is measured during the flight; the ascent analysis is done to assure that the heating rate will be no more than this threshold value at the time of fairing deployment, which depends on the molecular density at altitude and the dynamic pressure. Because the rocket is still in powered flight it is accelerating and dynamic pressure is changing, and each vehicle will have a different ascent heating profile depending on the mission trajectory.

As to where the 1,135 W/m2 value comes from I do not know the history (I thought it might be in GEVS because it is used as a de facto standard throughout the industry but I don’t see it in there) but it is presumably a threshold such that the typical satellite and spacecraft does not require any special design features to cope with the ascent heating. Spacecraft already have to design for static acceleration and whatever acoustic, shock, and vibration dynamics that the payload system doesn’t isolate out even though these are environments it will see once in its life for a few minutes and never again. Designing a payload to also be capable of bearing much higher ascent heating loads that are well beyond what it will experience on-orbit isn’t generally sensible or necessary, especially if it compromises functionality or adds significantly to overall weight that, unlike the fairing, has to be carried all the way to orbit.

And no, most payload owners do not want to take on significant risk by accepting a higher heating rate, because not only do most satellites cost many tens to hundreds of millions of dollars, the time it takes to build new spacecraft and manifest them is a major schedule impact for most programs and commercial customers. The notion of the spacecraft “trading” the risk of failure to gain a few thousand lbf-sec of impulse is frankly kind of bizarre; unless the launch is running to some extreme limit of total impulse it makes no sense at all to worry about deploying the fairing a few seconds earlier, and if the mission trajectory is running that close to the vehicle capability it really means that someone needs to go back and sharpen their pencil on optimizing the trajectory to offer more margin or else put the payload on a different vehicle.

It isn’t as if trajectories aren’t designed with a healthy margin, and the space launch contractor will typically run a Monte Carlo simulation of tens of thousands of simulated flights to capture all possible variability and ensure that the vehicle will be stable and able to achieve the desired orbital insertion even if multiple parameters are at some extreme edge of their statistical variability. To have a vehicle which performed nominally fail to achieve the designated orbit is virtually unheard of, so trying to optimize fairing deployment for some minimum possible time is a pretty high-risk-for-low-payback proposition. On the other hand, there is a very long history from the early days of spaceflight (and even in more recent decades) of electronics, instrumentation, or other components failing because of excess temperature and thermal loads, so spacecraft designers tend to be as conservative as possible given the mass limitations because it isn’t as if you can go up to orbit and fix it later. On the few missions where I’ve seen really early fairing deployments it was because the payload was specifically designed for that condition. (I’ll leave it to your imagination to consider why that would be.)
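For the curious, here is a toy sketch of what such a dispersion analysis looks like in miniature: disperse a couple of parameters, run many samples, and check the statistical bound (here a mean + 3-sigma value) against the heating threshold. Every number below is invented for illustration; a real analysis disperses dozens of parameters through a full trajectory simulation, not a one-line flux model.

```python
import random
import statistics

random.seed(1)

def flux_at_jettison(rho_scale, v_err):
    """Toy model of heating at a fixed jettison point: q = (1/2) * rho * v^3,
    with dispersed atmospheric density and vehicle velocity. The nominal
    values are illustrative only, not taken from any real vehicle."""
    rho = 6.0e-8 * rho_scale        # kg/m^3, nominal free-stream density
    v = 3000.0 * (1.0 + v_err)      # m/s, nominal velocity
    return 0.5 * rho * v**3

# Disperse density (lognormal, ~10% spread) and velocity (1-sigma = 1%)
samples = [
    flux_at_jettison(random.lognormvariate(0.0, 0.10), random.gauss(0.0, 0.01))
    for _ in range(20_000)
]

mean = statistics.mean(samples)
sd = statistics.stdev(samples)
print(f"mean flux:      {mean:7.1f} W/m^2")
print(f"mean + 3 sigma: {mean + 3 * sd:7.1f} W/m^2")  # compare to the 1,135 W/m^2 threshold
```

If the 3-sigma value clears the threshold, the timeline stands; if not, the jettison event slips later, which is exactly the "3-sigma free molecular heat flux" language in the Atlas guide.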

I don’t follow Starlink or SpaceX any more than I have to so I cannot speak to the details of why they want an earlier deployment but I know that they are recovering fairings so I suspect this is an effort to make downrange recovery and refurbishment of fairings easier rather than any real performance advantage in flight.

Stranger

Upon review, 1,135 W/m2 is ~0.01 BTU/ft^2 (which is the traditional English unit of heat flux), so I assume that this was identified as a threshold value for ascent heating as a round number that would be within spacecraft thermal design limits such that there would be no special considerations to the spacecraft design for ascent. Can many spacecraft survive a higher heating rate? Perhaps, but it isn’t really worth the risk or the engineering effort to validate increasing the threshold to save a few thousand lbf-sec of impulse. Frankly, if you wanted to reduce conservatism you should push back on the 30% to 50% uncertainty that is typical on aerodynamic heating models, especially if you have multiple flights with skin thermocouple data on comparable trajectories.

Stranger

Thanks; that looks sensible to me (though it’s actually 0.1 BTU/ft^2-s). In fact you can see they mention in the Atlas guide a conversion to 360 Btu/ft^2-hr, which I should have recognized as 0.1 if you convert to seconds.
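For anyone who wants to verify the arithmetic, a quick sanity check of both conversions (using the International Table BTU of 1055.06 J):

```python
# Convert 0.1 BTU/ft^2-s into SI units and into the per-hour form
# quoted in the Atlas guide.
BTU_IN_JOULES = 1055.06   # International Table BTU
FT_IN_METERS = 0.3048

flux_si = 0.1 * BTU_IN_JOULES / FT_IN_METERS**2   # W/m^2
flux_per_hour = 0.1 * 3600                        # Btu/ft^2-hr

print(f"0.1 BTU/ft^2-s = {flux_si:.1f} W/m^2")    # the familiar ~1,135 figure
print(f"0.1 BTU/ft^2-s = {flux_per_hour:.0f} Btu/ft^2-hr")
```

The round 0.1 in English units lands on 1135.7 W/m², which rounds to the four-digit 1,135 that appears in the payload guides, and on the Atlas guide's 360 Btu/ft²-hr.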

Anyway, that makes it worse from my perspective. Someone back in the day picked a round number with a single significant digit–it’s just an order of magnitude in their preferred unit. And wonderfully, we now have a figure with four significant digits showing up in everybody’s payload guide. It’s like all those stupid articles that say “22.046226218 pounds” when you know that the original figure was an approximate 10 kg.

I don’t think there could be a better example of a requirement that gets handed down through the ages, without anyone really questioning if it still makes sense or what the complete purpose is. Note that I am not saying that the engineers aren’t doing great work in their thermal analyses. I’m just saying that they’re targeting a requirement that hasn’t been reevaluated in decades.

Maybe I’m reading too much into the language, but the Atlas doc is definitely phrased as if it were a trigger (worked out in the ascent analysis, as you mention, but still a trigger). The Electron doc is also phrased as though the expected value will be 1135 W/m^2. The Falcon 9 doc could be read either way, I suppose. The Ariane doc is the only one that’s explicit about it being an upper bound on heating.

If it’s only ever an upper bound… I’d say it’s bad phrasing.

That could well be, and might be a more important reason.

Still, recall that, launching so many sats at once, Starlink benefits from even small optimizations. Excluding test flights, they’ve launched batches of anywhere from 46 to 60 sats. 300 kg more payload means one more satellite. They’ve also done rideshares, kicking off some of their own satellites to fit a few external customers on. Their rideshares are typically smaller yet, under 100 kg.

Well, the Starlink launches are. But because of reusability, the risk isn’t about making orbit, but whether the booster lands successfully (we’ve already seen one case of this, though it was a mid-flight engine failure). While SpaceX certainly doesn’t want to destroy their fleet, they can certainly trade the benefits of squeezing another satellite or two on a given launch (whether their own or a rideshare) vs. the slightly increased probability of a landing failure.

As it happens, SpaceX is also using their life leaders on the launches, so to some extent they’re risking used-up boosters. All these things end up in the economic trade.

At any rate, I suppose I have my answer. It’s first of all only relevant for megaconstellations, since singleton launches don’t benefit from excess payload capacity if they are already under the limit. It’s only if there’s a possibility to squeeze more payloads (or should I say “more payload”, since it’s almost a mass noun in this case?) onto the launch that there’s a benefit, which would only be the case if they’re already right at the mass limit, which would only be the case if they’ve already transferred some risk from loss of payload to loss of booster.

It does mean that other megaconstellations will be at an even greater cost disadvantage compared to SpaceX unless they can also make these kinds of trades.

That’s one of the things I mentioned originally (though I had no idea of the actual gap). This sounds like just the sort of information that would be helpful to publish. SpaceX has their thermal models, but now has actual data for higher heating levels. I’m sure some parties would be interested. Maybe it’s a little too proprietary, but it seems like the kind of thing that “good citizen” companies do, if only to burnish their image.

One more, from the Soyuz payload guide (from Ariane):

This seems pretty explicit that they actually target a flux of 1135 W/m^2. They do also say that higher flux exposures can be accommodated.

According to their sample profile, it goes down to 200 W/m^2 within 20 seconds. That’s under 7 kJ heat soak for a 10 m^2 payload. It’s pretty much nothing. Less than the energy in a phone battery.

It isn’t that the requirement persists “without anyone really questioning if it still makes sense or what the complete purpose is”; everybody involved understands what the requirement represents. It is that spacecraft designers, who ideally want to design their vehicle to just the environments it will experience on orbit to minimize the amount of weight given to structure and thermal protection, already have to design for the unavoidable acceleration during thrust and whatever dynamics (shock, vibration, acoustics) they cannot protect the payload against upon ascent. Also designing for an aeroheating event it will see exactly once in its life, with enough margin to have confidence that nothing will fail, is an additional burden that is easily rectified by not deploying the fairing until the aeroheating from free-molecular interactions drops to that threshold.

I suppose you could sharpen your pencil and figure out some slightly more optimal threshold but even if you increased the allowable payload aeroheating by half an order of magnitude it would only mean a few seconds earlier in flight because that heating isn’t just determined by altitude but also dynamic pressure due to the momentum of the vehicle, and it drops precipitously because the vehicle is busy flying out of the atmosphere. Like many other margins, factors of safety, and environment limits, it comes from decades of experience in what works, usually by first experiencing repeated failures.

An example of that is smallsat launcher startups trying to reduce the qualification and acceptance (screening) margins on shock and vibration because they believe that +6 dB is “too conservative”, i.e. results in too many qualification test failures. It is, in fact, conservative, but that is because the robustness of components to survive those environments is highly variable and not easily sussed out in analysis. When contractors start running actual tests after having ‘qualified’ some component to only a +3 dB margin, they find that they are experiencing failures in acceptance because their actual components aren’t as robust as they thought they were. But the fact that there is a strong statistical basis for these margins (and a lot of theory to back it up if anyone goes back into the NASA and AIAA archives, or even just reads Appendix B of SMC-S-016) is lost on the startups who think they’ve found the special sauce in trimming out unnecessary fat only to find themselves treading down a well-worn path of past failures.

There is a lot of this in the self-proclaimed ‘NewSpace!’ arena, and about 80% of it is just semantic bullshit or contractors not understanding why running on the edge of a cliff is not a good plan. This isn’t to say that startups challenging the way things are done isn’t valuable; SpaceX has done a lot to develop horizontal integration (integrating the payload with the launch vehicle lying on its side and then using a strongback to raise it up before fueling) which definitely reduces labor hours and lifting operations, although to be clear it isn’t as if SpaceX came up with this idea from whole cloth; the Russians have been doing this since the infancy of their space launch programs, and in fact the only reason the US industry never adopted it was because space launch systems and procedures were based upon delicate liquid or heavy solid propellant ICBMs.

If SpaceX is willing to design Starlink satellites to withstand higher ascent heating rates (and bear the potential risk), then good for them. But there are very good reasons why spacecraft designers in general don’t want to do that because the tradeoff of gaining a tiny fraction of a percent of downrange impulse is not worth the additional cost, mass, or risk for their payloads. And frankly, I question that this is the reason that SpaceX is doing it either. Modern fairings are extremely optimized for low mass; again, I suspect this is to make fairing recovery easier for SpaceX than any real gains on payload capacity or capability to hit specific parameters to LEO.

Stranger

Could be a little of column A, a little of column B. The fairings have parachutes, cold gas thrusters, avionics, and a few other bits now. Probably not a huge mass increase, but a few hundred kilos wouldn’t surprise me. Either way, I agree that the reduced downrange distance is certainly a benefit.

The Rocket Lab Neutron has its fairing attached to the first stage. I wonder if they’ll continue supporting the 1135 W/m^2 figure. If so, they might have to fly a more lofted trajectory, or maybe have a relatively late staging. I think their second stage is relatively small, so that’s a definite possibility. On the other hand, they want to use return-to-landing-site, which might imply a lofted trajectory (reduced boostback cost). Still, could be that some customers will have to accept a higher thermal load since a late jettison isn’t an option.

Did some quick research on the NTRS. Found this:

DELTA-A SCIENTIFIC AND APPLICATION SATELLITE LAUNCH VEHICLE.
Publication date: September 1969

Fairing jettison time is dictated by the free molecular heating rate that can be tolerated by the spacecraft. Normally, the heating rate is held below 0.1 BTU/Ft^2-sec. or about equivalent to the solar heating rate to the spacecraft. Aerodynamic heating of the fairing is controlled by application of ablative materials to hold the fairing internal temperature to below 450°F.

First, that confirms the “0.1 BTU/ft^2-s” as the origin of the value. Second, the rate hasn’t been reevaluated for over 50 years. And finally, it’s not a coincidence that it’s roughly equal to solar irradiance–it was chosen specifically to match.

That’s probably not the origin, either; just the first I could find. They’re already stating it as if it were an informal standard.

Which origin makes sense of course. Every spacecraft has to withstand solar irradiance for its lifetime in orbit.

The whole point of setting the fairing jettison standard there was to ensure spacecraft didn’t need to be up-armored thermally for their one brief ride up to orbit.

Perhaps some detailed analysis of some modern or simple payloads would show they inherently have more thermal tolerance than that for unrelated design reasons, coincidence really. IOW, they have excess thermal tolerance “for free”. In that case the launcher could take advantage of that excess capacity by jettisoning the fairing early, with whatever performance gains that may deliver.

I noticed the solar coincidence in an earlier post but didn’t guess that it might have been the cause. Stranger correctly points out that heating calculations are a lot more complicated than just looking at the average rate–but on the other hand, in 1969 their modeling wasn’t exactly as advanced as it is today. It definitely looks like someone, somewhere thought that getting in the ballpark would be good enough, so they took the solar value and rounded down to a nice even number.

This kind of thing just amuses me to death, though. We have this super exact-looking value, 1135 W/m^2, that everyone in the industry uses. They’re all just copying the number from each other or from their previous rockets. And it came from some ancient decision–probably made in 10 seconds after a bit of mental math–and it just got copied around, eventually converted into sensible units by someone else who doesn’t know what significant digits are, and then copied around again, for several decades. Sure, there’s nothing wrong with it, per se, other than leaving some performance on the table, but still.

You might have missed the earlier article, but it talks about how SpaceX is getting some benefit from earlier jettison. Stranger thinks it’s for easier fairing recovery instead of performance, and he might be right about that. I did notice something I missed earlier (from the article):

“On Starlink missions, we incrementally started deploying the fairing earlier and allowing higher and higher heating,” Edwards says. “Now we’re at the point where it’s more than 10 times the heating that is typically allowed on an external customer mission.”

So they’ve been stepping it up slowly, which of course makes sense. If a payload can handle 1135 W/m^2 easily, then surely it can take 1500. And if it can take 1500 without issue, surely it can take 2000. Etc. And of course they can monitor the temperatures, check whether the solar arrays have been damaged, and so on. Unless there are some serious non-linear effects going on, they should be able to handle much higher heating rates, because the molecular heating just doesn’t go on for very long, and the satellites are largely big blocks of aluminum. Yet another advantage to a high flight rate.

Absolutely.

And to launching a bunch of smallsats at once. They might lose the top one on the outside, but the rest of the “corncob” is healthy.

I’m not sure it was even math, so much as “Hey, how much thermal flux does it get if we leave it outside in the sun? Fred, could you pop down the hall to the mission meteorologists and ask 'em that?”

I was giving the meteorologists the benefit of the doubt in not using such a ridiculous unit as BTU/ft^2-s, and that some conversion would have to be done.

I hope refrigeration engineers weren’t involved. If so, the answer would have been in “feet”. Feet what? Feet of ice melted through in a 24 hour period.

At least it’s not furlongs of ice melted per fortnight.

Today will be the first launch of a Falcon Heavy since 2019. (Assuming weather and technical factors cooperate).

Nice to see Falcon Heavy flying again. They actually have a bunch of flights lined up, but the payloads have been lagging.

NASA has released some more details about Starship progress. I had a laugh at this slide:

Gotta love aerospace euphemisms: “July 11 high-energy event”. That was when they detonated a large cloud of methane and oxygen.

Correction: Tomorrow, November 1st. Not today, October 31.

Got ahead of myself.

It’s interesting to see how well SpaceX is outperforming ULA’s Delta IV Heavy offering in price, while being comparable in capability. This launch is interesting because it’s a heavy-lift direct-to-geosynchronous launch.

I wasn’t up early enough to watch it live, but here’s the replay (starting at T-0:10):

Those twin booster landings never get old.

No second-stage coverage this time. Sooper-sekrit government stuff.