Quickest time a Single-Stage-To-Orbit futuristic fighter jet could get into orbit?

The argument, such as it is, is that SSTO will greatly reduce complexity. And hence improve reliability and cost.

As the esteemed Stranger has so often said, the major cost in current space ops is the army of people needed to not only do all the work, but triple check all the work. Every single component, connector, or gizmo that isn’t on the vehicle results in exponentially fewer man-hours needed to process it.

The economic leverage of orbital access at $1M per 50 kg launched, or $100K, or $10K per 50 kg, is huge. And it can’t happen until you wring out almost all the high-skilled labor. Which can’t come out until the component count approaches 1 as a limit.

Consider the labor man-hours needed to solder up the equivalent of a 1GB RAM chip made from discrete transistors. That scales even less well than Tsiolkovsky. Who’s justifiably famous for really crappy scaling.
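Tsiolkovsky’s “crappy scaling” is easy to make concrete. A minimal sketch of the rocket equation (the 350 s specific impulse is an assumed, roughly hydrolox-class number, not anything from the thread):

```python
import math

def mass_ratio(delta_v, isp=350.0, g0=9.80665):
    """Tsiolkovsky rocket equation: initial/final mass ratio for a given delta-v in m/s."""
    return math.exp(delta_v / (isp * g0))

for dv in (3000, 6000, 9500):  # suborbital hop, halfway, roughly LEO with losses
    print(f"{dv/1000:.1f} km/s -> mass ratio {mass_ratio(dv):.1f}")
```

Doubling the delta-v squares the mass ratio, which is why everything downstream of the rocket equation scales so badly.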

I can only assume from this statement and your prior posts that you have no background or experience in structural engineering whatsoever. Complaining that my objections to your ill-defined concept of a 30 km tall, 200 km long steel truss structure are “ridiculous” is like a smoker telling his cardiologist that the latter is uninformed about the impact of tobacco on heart disease. I do have an extensive background in structural mechanics, and while I don’t typically work on large buildings or civil structures I have worked extensively on the design and analysis of very large mobile systems and large flight structures where ground loading and modal dynamics are serious concerns and the structure is often desired to be as light as material strength and element connections will allow, so I have some modest experience to bring to the table on the topic of the limits of structural strength.

I’ll reiterate and expand upon the issues with this massive truss concept, which comprise the following:

[ul]
[li]Footings: Footings or foundations are what anchor and ultimately accept the bearing and lateral loads on a ground-based structure. For most large buildings and massive civil works like dams, the foundation is assumed to be essentially rigid in comparison to the building or civil structure, and is reinforced accordingly. With large span structures such as bridges, footings are separate but generally tied down to some kind of bedrock with the bearing loads being distributed into the surrounding soil that is compacted and reinforced by various means, and the structure is designed to accept the necessary compliance for ground movement and seismic loading. Ultimate bearing strengths can be found in the various AASHTO construction design standards, but the ultimate bearing strength for unconfined basalt is around 50 ksi (~350 MPa), with a tensile strength of about 2 ksi (~15 MPa) and effective shear load resistance of 10 ksi (~70 MPa); however, this assumes a homogeneous, unfractured mass of bedrock the length and width of this structure that doesn’t exist anywhere on dry land; realistically for a massive structure it would have to be considered a fraction of this even before applying safety factors and design margin. Steel masses about 0.29 lb[SUB]m[/SUB]/in[SUP]3[/SUP] (7850 kg/m[SUP]3[/SUP]). Assuming an effective uniform fill factor of 2.5%, the bearing stress at the base, assuming it is uniformly distributed, is about 29 MPa.[/li][li]Wind loads: the wind loading on a 30 km high wall would be enormous. Furthermore, it wouldn’t see a steady loading all the way up but would experience variation in load and direction at different altitudes (wind shear) that will change throughout the day and in different seasons resulting in highly cyclical loads. 
You might think an open truss structure would be less affected by wind which can pass through it, but in fact the mesh effect will actually create local turbulence which will apply highly variable load to individual sections, contributing to…[/li][li]Modal dynamics: Most smaller steel truss bridges and small buildings are assumed to be essentially rigid in all but large seismic events that produce low-frequency base vibration conditions. The wind and non-seismic dynamic loads are generally inconsequential to the building structure even when they tear off roof or side panels. For larger skyscrapers and suspension bridges, dynamic loads are of greater concern and the structure has to be extensively analyzed to assure that applied loads do not couple to fundamental structural frequencies resulting in destructive resonance, and that the structure has both sufficient strength and compliance to accept loads in a linear elastic region without being subject to excessive fatigue or overstressing individual connections. Very large structures or those optimized for lightest possible weight will have many more structural modes and will respond more readily to outside impulses, and combinations of adjacent resonance modes that may be individually survivable can result in dramatic overstress or unstable modal behavior, causing the structure to literally tear itself apart. A truss structure on the scale of kilometers in height, having tens or hundreds of millions of individual elements and connections, will have so many individual modes that it would literally be impossible to analyze it accurately using finite element analysis, and probably impossible to design to avoid overlapping resonant modes in adjacent areas. 
[/li][li]Static and dynamic stresses: Any structure under load is subject to stress, and has to be designed so that both the material and geometry are able to accept and distribute the stress in a predictable manner so as to assure that mechanical capabilities are not exceeded. That means that the structure has to both protect the weakest areas more prone to overstress (typically connections, fillets, and other singular geometric features) and provide sufficient compliance to allow the structure to distribute stresses away from high stress areas such as boundary and load application points. Large structures also have to be tolerant of the failure of individual elements (single failure point integrity) so that localized fatigue, overstress, or defects don’t create a cascading failure condition that causes the entire structure to fail. On an upright structure of the scale and complexity of this concept I can’t imagine how it would even be possible to assure single failure point integrity to any degree of confidence.[/li][/ul]
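For what it’s worth, the ~29 MPa bearing stress figure above can be reproduced with a quick back-of-envelope script, if one assumes the truss tapers so that the average column height is half the peak height (that taper is my assumption, not something stated in the post):

```python
RHO_STEEL = 7850.0   # kg/m^3, steel density from the post
G = 9.81             # m/s^2
HEIGHT = 30_000.0    # m, tower height
FILL = 0.025         # effective uniform fill factor from the post

# Average column height is HEIGHT/2 under the (assumed) triangular profile,
# with the resulting weight spread uniformly over the base.
bearing_stress = RHO_STEEL * G * (HEIGHT / 2) * FILL   # Pa
print(f"base bearing stress ~{bearing_stress / 1e6:.0f} MPa")  # ~29 MPa
```

A uniform (untapered) prism under the same assumptions would come out to roughly twice that.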

In any case, if one desired to build such a massive structure, a static steel truss structure is absolutely not the way to approach it. A catenary-supported and reinforced structure using stored energy elements to provide damping and mediate stress distribution through the structure would be the only way to make this work, and even then would require high strength carbon or synthetic fiber tensile elements to keep the weight to a practical minimum, and the complexity of actually building such a structure under tension is virtually unimaginable. This is not an issue of economics, or even the basic logistics of having enough material and labor to construct a structure of this kind; it is physically unrealizable even if no particular element of it violates basic laws of physics.

Yeah, that’s the theory according to some people. Unfortunately, there are some technology thresholds that have to be achieved in materials science, propulsion efficiency, robustness of hydraulic valves, et cetera before that notion can be credibly justified. While two and three stage vehicles are definitely more complicated, the benefit of having to carry only a fraction of the initial inert mass of propellant storage systems far outweighs the cost and complexity of multiple stages and the extra mass of interstage and separation systems. I honestly don’t expect that to change any time soon.

Reusable single stage to orbit (RSSTO) vehicles offer the potential for commercial airplane-like spaceflight, but we have yet to develop propulsion systems which can operate in repeated ascents without regular servicing and testing. (Yes, SpaceX has put individual engines through multiple static fire tests accumulating many thousands of seconds of runtime between teardowns; this is not the same thing as flying an integrated stage with nine engines and a complex propellant feed system.) Thus far, attempts at RSSTOs haven’t gotten further than conceptual studies and suborbital proof of concept demonstrations, but there is an incremental path toward RSSTO vehicles with modest improvements in materials and propulsion technology. However, a large cargo carrying RSSTO is probably not going to be operating in the next few decades, and I’m dubious about the economics of partially or fully reusable two stage to orbit (TSTO) vehicles such as the Falcon 9 reducing the costs of spaceflight by anything even approaching an order of magnitude. RSSTOs are plausible in a future where the improved performance of pulse or continuous wave detonation engines with altitude compensating nozzles are a mature technology, advanced thermal protection systems are robust enough to survive orbital reentry without repair or refurbishment, and delicate components such as valves and composite overwrapped pressure vessels are robust enough to survive hundreds of hours of flight time without servicing or inspection, but we aren’t anywhere close to there at the moment on any of these.

There are concepts for much simpler bulk cargo rockets with relaxed reliability requirements and “shipyard grade” geometric tolerances and construction details, such as Bob Truax’s Sea Dragon concept, which are potentially viable and worth consideration because they offer the potential for both reduced operating costs and much larger economies of scale in terms of cost per unit payload mass. But there is currently no one looking to buy multi-hundred ton launch vehicles, and no one interested in investing the few billion dollars it would require to develop such a system even if the operating costs are in the hundreds of dollars per kilogram of payload to orbit.

Stranger

Here’s a neat SciShow video about the construction of a 1 km tall building and some discussion on how high you can go.

Interesting, I didn’t know that, thank you.

Thanks for the rest of the details. As I said: “such as it is”.

IOW, it’s valid economic analysis, but that makes it only one of a dozenish necessary conditions leading to large scale economically viable spaceflight. In and of itself it’s very far from a sufficient condition.

The advantage (if you’re a space promoter or a space huckster) is that this particular condition is in the economic realm and hence understandable to non-engineers in general and to business / finance people in particular.

Arguing engineering only goes so far, even with other scientifically minded folks. Winning the argument on business grounds is where the real money is to be made. Whether you’re a promoter with a legit idea (e.g. Boeing. Musk?) or a pure huckster fleecing the rubes (Mars One).
Again I’m not advocating for SSTO=cheap being true. I’m just explaining to the crowd why it’s a perennial favorite that gets trotted out every time somebody with enough money gets a gleam in his eye.

So, you know, if you could trust robots to conduct the inspections and rocket assembly, you could also solve this whole launch problem without any other new techniques.

The Russians launch over land, and their first stages slam back into Kazakhstan. So in principle, you could just have a robotic vehicle go pick up the expended stages and truck them back to a factory. They get shredded, separated into constituent components, and eventually filtered to feedstock. Then, in principle, if the individual components are all made to tight enough tolerances, the machines just put it together without error and you launch a brand new rocket. Only cost is energy. *

If you can get the quality control tight enough - the ultimate would be atomically perfect - each new rocket is the same as every other rocket you launched, and there wouldn’t be these launch holds where telemetry shows a sticky valve. Only unusual environmental conditions could cause the rocket to fail.

I know we are nowhere close to this point, I’m simply saying that the space access problem is really a manufacturing problem. Solve that, and the methods we already know work are fine.

* In practice, you aren’t getting the second stage back intact, so this is a simplification.

Other problems with SSTO: (1) It drives the technical risk and development cost ever higher, which in turn typically increases complexity and adversely impacts serviceability; (2) production quantities are generally very low, which swamps cost savings from re-use; (3) costly manufacturing lines and R&D resources (all requiring lots of people) must be maintained to issue one-off fixes for a tiny operational fleet.

We see this in many areas such as automobiles. A 2017 Formula One engine is 1600 cc (97.6 cubic inches) yet produces up to 1,000 horsepower. On average it lasts about 7 operational hours. The vehicle weighs about 700 kg (1,543 lbs). The 2017 car pulls close to 8 g in corners. Unfortunately it costs about $10 million and requires a huge team to maintain it.

Re the supposed savings from airline-like operation, former Space Shuttle program manager John Shannon called reusability a myth. To paraphrase him: When a vehicle is made in very small numbers, you cannot just shut down all manufacturing capacity after it’s built. The vehicle and subsystems require ongoing R&D, fixes, testing, etc. Parts wear out, you have failures, design issues become apparent during use. You essentially have to keep the production line available, even if nothing is being manufactured, along with the entire associated industrial infrastructure. Then you must buy “one of” pieces to fix things which is expensive.

In hindsight this is common sense. It was commonly argued that if you built a 747, flew it one passenger-carrying flight, then threw it away it would be expensive. However if you only built four 747s, and had built no similar aircraft before them to spread the development cost and risk, despite reusing them those four airplanes would be incredibly expensive to operate.

That is the situation an SSTO faces. The rocket equation forces it up into exotic territory, which in turn drives up development cost and risk, then re-use keeps the fleet size low. It is like trying to make a cheap Formula One car which has a 100,000 mile warranty and requires little maintenance – virtually impossible.

From the point of view of NASA, though, the Space Shuttle project sounded really good. It was supposed to be reusable, and to Congress (which had canceled two already-paid-for Apollo missions and essentially defunded any real attempt to go to Mars) lower costs sounded good. The Air Force wanted certain capabilities, so they compromised the design even further to add them. The wings made it look cool.

But yes, it was a jet-fighter dump truck: several almost-impossible characteristics crammed into the same vehicle. In several key ways, the Space Shuttle was fighting the laws of nature instead of working with them. I mean, it flew, but at exorbitant cost. The most damning number, to me, is the ratio of orbiting mass (the part you spend rocket fuel and engines to put into orbit) to payload mass. Just awful. You could have made a heavy-lift rocket that puts a Space Shuttle’s worth of mass into orbit every launch, and it would have realistically cost about the same or less per kg, and then you just get the crew back in a tiny capsule with a simple shape and simple heat shield. Instead of useless wings and other dead mass, you’d have something in orbit you could keep using.

What I called ridiculous was not your overall objections, but your specific claim that the tower would have to be hundreds of kilometers wide at the base. I’ve no idea what you have in mind as a design, but whatever it is, you’ve done something wrong if it has to be that wide.

Regarding the footings, a 1-m slice of the tower would weigh in the ballpark of 10^6 kg, which means the average ground pressure is a measly 1 kPa. If we limit ourselves to ground loadings of 100 MPa, it means we’re only spending 0.001% of our ground area on footings. It’s so little that we can afford to spend a lot on seismically tolerant footings that don’t couple ground movement into the structure.
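The arithmetic here checks out under one stated assumption: a footprint on the order of 10 km wide (my assumption; it’s what makes a 10^6 kg slice work out to roughly 1 kPa):

```python
G = 9.81                      # m/s^2
slice_mass = 1.0e6            # kg per 1-m slice of tower (figure from the post)
base_width = 10_000.0         # m -- ASSUMED footprint width; ~10 km reproduces the ~1 kPa figure
footprint = base_width * 1.0  # m^2 of ground under a 1-m slice

avg_pressure = slice_mass * G / footprint        # Pa
footing_limit = 100.0e6                          # Pa, the 100 MPa ground-loading limit
footing_fraction = avg_pressure / footing_limit  # fraction of ground area needed as footings
print(f"avg pressure ~{avg_pressure:.0f} Pa, footing area fraction ~{footing_fraction:.3%}")
```

With a narrower footprint the average pressure and footing fraction scale up proportionally, but there’s a lot of margin against the 100 MPa limit.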

Wind is obviously a consideration, but less so at the higher altitudes where the air pressure is almost nothing. The upper parts of the structure can be relatively flimsy; at the top, the pressure is roughly that of Mars. Truss structures obviously do well overall when it comes to wind due to their low cross-sectional area (compared to solid structures of similar size), particularly if the components are reasonably aerodynamic.

The structure would obviously have many damping elements to remove internal energies, probably in the form of linear shock absorbers and damped masses. Allowing resonant modes to propagate unconstrained throughout the structure is a non-starter. Active elements are a possibility but probably overkill. Large damping elements are in widespread use in bridges and the like.

Redundancy is a potential problem with trusses as they are not always immediately amenable to simply adding more members, and balancing load between members in the normal case can be challenging–it can be easy for all of the load to be handled by just one supporting member. It helps if the design is not too rigid. This aspect probably requires more research but is hardly a project-killer.

While a 30-km version of the Gateway Arch would certainly look cool (I can only assume you had something like this in mind when you say “catenary supported”), it seems like wild overkill. If nothing else, you need a temporary structure to hold it up during construction. That temporary structure is… very likely to be some kind of truss. Since this isn’t supposed to be pretty, I see no reason not to just stick with the truss. A solid arch has ground pressure problems as well, unlike the truss where the load can be spread out over a large area.

Using tensile fiber elements (probably in the form of pressurized cylinders) would certainly make the structure much lighter, but is outside the range of widely-used and understood designs.

The “fighter” would require acceleration to near-orbital velocity and exit the tube at an altitude of 30 km (18.6 miles or 98,000 feet). By necessity of orbital mechanics, the exit angle would be mostly horizontal – what is called a depressed trajectory. It must be doing roughly 15,000 mph (22,000 ft/sec, 6.7 km/sec, Mach 20), or faster. Any slower and the vehicle size would be huge – see below.

If boosted to that speed and trajectory, an X-15 A2 could probably make it into orbit, and at 57,000 lbs it would qualify for “fighter size” – but it would not survive the thermal loads. Despite being constructed from a superalloy called Inconel-X, and having an ablative coating on top of that, and being limited to Mach 6.7 at 100,000 ft, the X-15 A2 nearly burned up:

https://www.dfrc.nasa.gov/Gallery/Photo/X-15/Large/EC68-1889.jpg


At that 30 km (about 100,000 ft) altitude, our vactube-launched “space fighter” would be going vastly faster than the space shuttle on reentry at that altitude, faster even than an Apollo capsule at that altitude when returning from the moon. At that altitude the space shuttle is normally doing about Mach 13. Heating rate increases as the cube of velocity. So the heating rate per unit area would be about 3.6x worse at Mach 20 than the shuttle’s at Mach 13.
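The cube-law comparison is a one-liner to verify:

```python
mach_fighter = 20.0   # vactube exit, as quoted above
mach_shuttle = 13.0   # shuttle at ~100,000 ft on reentry
ratio = (mach_fighter / mach_shuttle) ** 3   # heating rate scales as v^3
print(f"heating rate ratio ~{ratio:.1f}x")   # ~3.6x
```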

This would likely require the most exotic heat shielding imaginable – probably a mix of special materials combined with active cooling. IOW the thermal protection would require an underlying circulatory system of refrigerant tubes. It would be like the regenerative cooling in a rocket engine bell except covering the entire surface area of the fuselage and wings. Any plumbing leak or flow problem would result in a burn through – essentially like Space Shuttle Columbia STS-107, except worse.

The National Aero Space Plane (NASP) would have required similar thermal protection, which is one of the things that killed that program: Rockwell X-30 - Wikipedia

Two key items: (1) The exit trajectory must be nearly horizontal. This is counter-intuitive since we see rockets take off straight up. However they spend most of their time thrusting in a horizontal direction; we just don’t see that because they are out of sight by then and TV coverage typically cuts away. The vactube’s low launch angle vastly increases thermal loads. I haven’t calculated the atmospheric drag losses but a lot of the energy would be lost to this. So if we wanted the equivalent of 15,000 mph in a vacuum, the exit velocity must be much higher because drag at 100,000 ft would cost a lot, thus requiring an even higher exit velocity and even greater thermal protection (heating increases as the cube of velocity).

(2) Orbital velocity requires a certain kinetic energy. Unfortunately this increases as the square of velocity. This means boosting to (say) Mach 12 doesn’t do 1/2 the work, producing 1/2 the vehicle size to reach Mach 25 orbital velocity. Rather a Mach 12 boost does about 1/4 the work, meaning the orbiting vehicle will still be large. This is another thing that killed NASP. For scramjet (or in our case electromagnetic tube launch) to produce a modest-size vehicle, most of the orbital velocity must be achieved while in the atmosphere. With NASP it turned out that wasn’t possible, thus increasing the propellant load of the final rocket phase, making the entire vehicle much larger and more expensive than originally envisioned.
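The square-law point is easy to check numerically, using only the Mach numbers from the post:

```python
v_boost, v_orbit = 12.0, 25.0                # Mach numbers from the post
energy_fraction = (v_boost / v_orbit) ** 2   # kinetic energy goes as v^2
print(f"Mach 12 supplies only {energy_fraction:.0%} of orbital kinetic energy")  # ~23%
```

So a boost to “roughly half” of orbital velocity leaves the onboard rocket to supply over three quarters of the kinetic energy.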

Unlike the casino game of craps, you don’t get extra payoff for getting there “the hard way”. Getting to orbit via airbreathing propulsion (or in this case, vactube electromagnetic launch) is exciting to contemplate, but in reality those methods are getting to orbit the hard way.

Well, if you want to fudge the definitions of “single stage” and “fighter sized” (and there have been some big fighters), there was the “Black Horse” spaceplane concept/design study from a few years back.

In short, a spaceplane takes off as an air-breather from a runway, makes a mid-air refueling stop (possibly to transfer cryogenic fuel/oxidizer), and then flies to orbit from there to deliver a 990 lb payload into a 200 mile orbit.

That could, technically, be enough to create an ungainly, inefficient platform to deliver a space-to-space missile (or a gun), to intercept a target and/or execute what could technically be called a dogfight. But, to be honest…there has got to be an easier way to squeeze money out of Congress.

This is an unusual situation. The OP wanted to get to orbit as quickly as possible. That requires gaining altitude as well as horizontal velocity. 30x200 km seemed like a good compromise–the cosine losses from the angle are negligible (~1%), but there’s enough vertical velocity to get to altitude quickly, without being too much to cancel by a modest rocket. Too shallow and it takes too much time to reach orbit; too steep and you need a huge circularization burn.
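A quick sanity check of the 30x200 km geometry (nothing here beyond the numbers already in the post):

```python
import math

height_km, length_km = 30.0, 200.0
angle = math.atan2(height_km, length_km)   # launch angle above horizontal
cosine_loss = 1.0 - math.cos(angle)        # fraction of exit velocity not horizontal
print(f"angle ~{math.degrees(angle):.1f} deg, cosine loss ~{cosine_loss:.1%}")  # ~8.5 deg, ~1.1%
```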

An X-15 design isn’t really what I had in mind. Perhaps this is my error, but I’m not taking “fighter jet” all that literally–the boosted craft wouldn’t have wings or any external control surfaces. I don’t know the details of how the X-15 almost burned up, but it was a craft with sharp front surfaces, as you’d expect from it being a rocket plane. Unfortunately, this is non-optimal when it comes to hypersonic heat loading, where you want blunt surfaces. The craft shouldn’t have any leading edges at all.

This craft is more like an ICBM, except much, much easier–it starts at 0.01 atm, which quickly reduces to zero. The overall thermal loading is pretty small; it just doesn’t spend much time in the air. A discardable, ablative nose cone should be sufficient here.

FWIW, Inconel is not particularly exotic any more; SpaceX 3D prints engine parts from the material.

I think you’re referring to a needle fighter, but the only reference I can find to it right now is the novel Footfall. They were projected to be launched via modified ICBMs, and the pilot was immersed in water to be able to sustain higher g loads. I don’t think returning was doable.

I haven’t read Footfall, but it would be something like that. For launch, the pilot would be on something like a waterbed. The lungs would collapse for the launch, but it’s only 16 seconds of acceleration. Maybe have them breathe pure oxygen beforehand to saturate the tissues. Whether the craft could return would depend on if you spent some mass on reentry shielding; the shape isn’t an obstacle (it would work as a lifting body and could be controlled via internal ballast sled, though you’d probably want the final landing to be via parachute).
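The “16 seconds” figure implies an acceleration and accelerating length that can be back-calculated (the 6.7 km/s exit velocity is taken from earlier in the thread; treating the acceleration as constant is my simplification):

```python
v_exit = 6700.0   # m/s, exit velocity quoted earlier in the thread
t = 16.0          # s, acceleration time from the post

a = v_exit / t                    # constant-acceleration simplification (mine)
g_load = a / 9.81                 # sustained g-loading on the pilot
tube_len = 0.5 * a * t**2 / 1000  # km covered while accelerating
print(f"~{g_load:.0f} g sustained over ~{tube_len:.0f} km of tube")  # ~43 g, ~54 km
```

That is consistent with the ~50 gee figure discussed below, and it uses only a fraction of a 200 km tube.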

I also wondered this when watching the “Independence Day” movies.

Then again, you could have the pilot breathe non-compressible perfluorocarbon fluid.

Liquid breathing is fun to think about but not yet suitable for anything except certain medical interventions. Unfortunately, the density of perfluorocarbons is too high for it to be useful as g-force protection. You need some other fluid, but I don’t think any are known yet.

It might be possible to force-inflate the lungs to keep them from collapsing. Let’s say the distance from lung to chest surface is 5 cm. The human body is about the density of water at 1000 kg/m^3, so 5 cm is 50 kg/m^2, which at 50 gees is 25 kN/m^2. One atmosphere is 100 kN/m^2, so one just has to overpressure the lungs by about a quarter of an atm. Numbers are all ballpark, of course, but I doubt more than a factor of two off.

So maybe some kind of pressurization helmet that would force air into the lungs would do the trick, though it would have to be super-responsive to acceleration and tuned to the user. Don’t want to end the acceleration phase by popping the pilot like a ketchup-filled balloon…
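The lung overpressure estimate a paragraph up can be checked directly:

```python
depth = 0.05        # m, lung-to-chest-surface distance from the post
rho = 1000.0        # kg/m^3, tissue taken as water
accel = 50 * 9.81   # m/s^2, the 50-gee launch case

overpressure = rho * depth * accel            # Pa of tissue "head" to counter
print(f"required lung overpressure ~{overpressure / 1000:.0f} kPa "
      f"(~{overpressure / 101325:.2f} atm)")  # ~25 kPa, ~0.24 atm
```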

The issue of lung “collapse” seems minor. Serious free divers routinely go to depths where the pressure is equivalent to 10 atmospheres, and record depths are around 25.

Completely different problem.

Deep divers have 10 atm of water pressure outside on their skin and 10 atm of gas pressure on the inside. As you say it all balances out just fine.

Somebody laying on a really high G couch has a couple thousand pounds-force of meat pressing “down” (=from tits to spine) on their lungs. But still just one atm of gas pressure on the inside pressing “up”.

It’s not really a lung collapse any more than having an elephant step on your chest is a lung collapse. More of a chest / ribcage collapse.

Even if the forces aren’t enough to bust all the person’s ribs there’s still the issue that they can’t breathe unless their diaphragm muscle is strong enough to raise the ribcage against those forces. The effect under high G is about the same as somebody caught in a trench cave-in up to their neck. You can breathe out. Once. But you can’t breathe in against the resistance of high Gs or the dirt that’s collapsed around your chest.

That’s game over.

All true, but let’s not overstate the problem either–Colonel John Stapp withstood 46 gee decelerations, “eyeballs out” (the hard way, and restrained only by a harness) without traumatic injury. That was only for a fraction of a second, but that should have been enough to break anything that was going to break. He lived to a ripe old age of 89. Doing it the right way, “eyeballs in”, and with more compliant supports, should be a piece of cake in comparison.

Actually breathing is obviously a problem, but I think 16 seconds of that is survivable.