Virgin Galactic's space ship crashed

ISTM (with absolutely no relevant qualifications) that it would be simple to design the lever so that it can’t be engaged without the craft being at the requisite speed to make it safe to do so. Maybe it was pilot error in engaging it, but when it’s simple to design something to prevent such an error, that makes it more a failure of the design, in my eyes.

The more detailed article I read said he unlocked the tail early. BUT …

The tail was not supposed to deploy until a different lever was moved. Which would normally be done at the time they intended to deploy it. Namely near apogee. And the crew did NOT move the second lever.

So from the UI (i.e., the crew’s) point of view, the tail malfunctioned; it should not have deployed given the inputs from the crew.

Perhaps from the engineering POV it worked as they would have expected had they thought about the scenario; e.g. perhaps the engineers knew the deployment latches weren’t strong enough to hold the tail in place passing through the transonic regime and in fact that’s exactly what the locking feature and its secondary latches are meant to overcome.

For sure I don’t know and the answers aren’t forthcoming … yet.

BUT … This whole machine is not a certificated aircraft; it’s an experiment. Just like Olden Tymes Aeroplanes, there are designed-in features done for engineering & manufacturing reasons (some well-founded and others mere convenience) which are traps for the operators.

My bottom line is I *suspect* the crew fell into a trap set by engineering. A trap they *should* have known existed. But may not have. IOW they *should* have been told the “why” for the [unlock tail at/above Mach 1.4] procedure. If they weren’t told, them not knowing is not their fault. If they were told and still screwed it up, that’s much more on them.
As to why he did it early … Not knowing anything of the rest of the flight profile it’s hard to say.

There might be a bunch of task compression coming up and he decided to do the tail unlock a few seconds early to buy bandwidth to do the other tasks at the appropriate time. It’s not uncommon on test missions to have extra test-related tasks to perform. So even if the normal mission profile is designed to keep the crew workload steady & manageable, ad hoc test tasks often get added right at the most dynamic parts of the mission.

Or it might have been a simple brain fart.

Or they might have hit a bump and his arm was resting against the lever at that moment and knocked it out of the detent. That particular scenario happened to the flap lever of a DC-10 in cruise; it killed a couple of people and injured a couple dozen.
The good news, such as it is, is there seems to have been plenty of telemetry. So the experts ought to get to the bottom of it.

As I said in another related thread: Rutan himself says the vehicle is expected to be about as safe as a 1920s airliner. That means there are a lot of rough edges & weak spots & gotchas. The crew’s job is to counteract all those shortcomings using a heaping helping of Macho Pilot Cool™. Maybe they were a little low on MPC that day.

I would have said (perhaps naively) that test pilots should certainly undertake to know that sort of thing, and not wait to be told.

Their areas of responsibility and knowledge should have a lot of overlap with those of engineers - if this isn’t true the project is asking for all sorts of trouble.

True in general. The full title of those kinds of guys is “engineering test pilot.” Which I most certainly am **not** one of.

But if it takes 50, or 100, or in the case of something like the Shuttle an army, of engineers to design it, then there are some pretty serious limits on how much detail the pilots can go into in understanding the why of each control’s purpose and mechanism, as well as the motivation behind each procedure and their collective sequencing and timing.

This vehicle is about the simplest thing that can possibly do the mission. So much (most?) of the why knowledge *ought* to be both available and learnable.

Almost all mishaps can be laid at the feet of disconnects between “oughtuality” and “actuality.”

It’s my understanding that they unlock it early in the flight because they need the wing feather to slow sufficiently and controllably from the sub-orbital parabolic flight. If they get to 100 km up and then find that they can’t feather the wings, they’re screwed. So they unlock it early enough that, if it doesn’t work, they can shut down the engine and descend.

You guys pitting the engineers vs. the pilots in this accident forget that the pilots themselves are aerospace engineers. They know what’s up.

What should definitely be questioned is why there are people on board during these tests at all. Why does this craft need pilots at all? Guidance, navigation, and control systems and flight computers have been sophisticated enough for decades now that the entire flight could be controlled by computer.

Even if a person designated “pilot” and another person designated “engineer” have the exact same qualifications/skills/knowledge, if they’re working in different roles on a project they each may not be fully informed of what the other knows. In the case of a complex, experimental machine that can lead to problems, sometimes serious ones.

Gee, I dunno - why do we still have human pilots on airplanes?

Partly, I don’t think the public is ready to entrust their lives entirely to automated machinery. Another consideration is that humans tend to deal with the unanticipated better than computers do, and the unanticipated still occurs from time to time.

It’s not necessarily simple. You have to interface with speed sensors and have a lockout mechanism. If either of those fails, they might end up being unable to use the system when they actually need it. The risk of that happening may have been judged higher than the risk of the pilots operating the unlock at the wrong time.
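To make that trade-off concrete, here’s a minimal sketch of what such an interlock’s decision logic might look like, assuming a single air-data channel with a validity flag. Everything here (names, types, the fail-open choice) is invented for illustration, not taken from any actual SpaceShipTwo avionics:

```c
/* Minimal sketch of a hypothetical feather-unlock interlock, assuming
 * one air-data source that reports Mach number plus a validity flag.
 * All names and thresholds are illustrative only. */
#include <stdbool.h>

#define UNLOCK_MIN_MACH 1.4  /* the procedural threshold cited upthread */

typedef struct {
    double mach;   /* current Mach number from the air-data computer  */
    bool   valid;  /* false if the channel has failed its self-test   */
} AirData;

/* Returns true if moving the unlock lever should be honored. */
bool unlock_permitted(const AirData *ad)
{
    if (!ad->valid)
        return true;  /* sensor dead: fail open and defer to the crew */
    return ad->mach >= UNLOCK_MIN_MACH;
}
```

The interesting line is the `!ad->valid` branch: failing open means a dead sensor can’t trap the feather locked at apogee, but it also removes the protection exactly when the instrumentation is already misbehaving, which is precisely the risk balance described above.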

Respectfully, this is complete nonsense; there is zero reason for humans to be piloting the craft at all. Any and all actions that need to be taken by a pilot can be performed remotely or, more importantly, preprogrammed into the flight plan with all contingencies already worked out. Whichever executive had the bright idea that live humans needed to be inhabiting the craft during the testing and verification phases should be immediately dismissed.

Every time you make something idiot proof, a better idiot comes along.

Put in everything that could possibly go wrong. It is a 100% surety that the one not thought of will happen.

If you fly as a pilot long enough, you will have times when all the known correct moves do not fix it. And if you are lucky, doing the thing that is known to be wrong in general will fix your particular problem.

On a new & unknown flying machine, all the problems will not be known. How do you program for that?

More than one pilot in more than one kind of plane has had both the engines and the entire electrical system suddenly take the day off. External forces, say large birds, sometimes do this to the most modern of aircraft. Remember ‘Miracle on the Hudson’?

I have sat in the sudden quiet and started sweating because the fan quit and the electrical system said good night & it was indeed night.

Luckily I was close enough to a lighted grass strip and managed an arrival such that both I & the airplane were reusable afterwards. Not sure that could be done with a computer that had no electrons or had let all the smoke out of the wires.

Pilots like Bob Hoover, & probably most test pilots, were all the time doing things the engineers said the airplane would not do. And finding the airplane would not do what the engineers said it would, and having to come up with something or die. Some did die.

Drones are doing amazing things, but if they come unplugged from the controller, the on board computers do not always save them.

For every Flight 1549, there’s a Flight 447 or Flight 214. At some point, we should be willing to trade the possibility of pilot heroics for the elimination of pilot error. We aren’t there yet, but we will be eventually.

Also, there’s no reason an AI can’t ultimately do a better job at emergency landings than a human. The AI wouldn’t have forgotten to throw the “ditch switch,” at any rate.

Upon rereading, I think this last bit probably reads as way snarkier than I intended. Sullenberger was obviously an extraordinary pilot and not throwing the ditch switch was hardly a blemish on his feat. But under other circumstances it could have made a difference. In this case, Sullenberger correctly chose to focus on landing the plane, but a computer doesn’t have to prioritize in that way. It can do both, along with any other things that need doing. That kind of parallelism might open up possible landing scenarios that are simply impossible for any human to achieve.

I’m always amazed at the number of people who are arrogant enough to think we really have that much figured out. ALL contingencies? Even the unforeseen ones? I don’t think we’ve entirely run out of those.

Guns beat me to it, but “Until a duck flies through the airplane”.

I had a similar discussion in an engineering class about electronic instrumentation. The guy proudly showed me a picture of a digital cockpit display, with a bubble level built into the display frame. :)

I’m not arrogant at all; all I’m saying is that it is complete folly for this spacecraft to have a cockpit design based on the World War II model, where you have two or more people flipping switches and turning knobs. We have numerous drone models manufactured and deployed everywhere already. The science of unmanned aircraft is so advanced that universities already offer engineering degrees specifically tailored to those industries.

As far as planning for contingencies, I’m fairly confident that a computer can be programmed to deal with thousands of machine states that a contingent of hundreds of engineers can deliberately think about and analyze over a period of years. Exactly what value does a live human being bring to the table when piloting an aircraft compared to a computer? Precisely what solution could a human brain conceivably come up with in, let’s say, the seconds before the craft broke up? Nothing. Between the time the copilot presumably had a brainfart and threw the unlock switch at the incorrect time and the time the craft broke up was a matter of seconds; again, what could a human possibly do in that situation? Nothing, sadly.

The hidden truth in this sad incident is that Mr Branson hired a traditional old-school aerospace engineer to lead development of this program, who himself was educated a long time ago, before we had the dotcom boom and the related advances in computing. This person made all the wrong choices in the trade studies presented to him in managing this project. It remains to be seen whether Mr Branson will make some heads roll because of this incident. What is clear, though, is that this entire program needs to be fully debugged and flown with no further human loss, and the only way to do that is remotely or fully computerized.
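To make the “machine states” idea concrete, here is a toy sketch of a preprogrammed contingency table. All events, actions, and names are invented for illustration; real flight software would enumerate vastly more of these:

```c
/* Toy contingency table in the spirit of "preprogram all contingencies".
 * Every entry had to be anticipated by an engineer in advance; the open
 * question in this thread is what happens when reality produces a state
 * that is not in the table. All events and actions are hypothetical. */
typedef enum {
    EV_ENGINE_FLAME_OUT,
    EV_FEATHER_FAILS_TO_LOCK,
    EV_LOSS_OF_GROUND_LINK
} Event;

typedef enum {
    ACT_SHUTDOWN_AND_GLIDE,
    ACT_ABORT_BEFORE_BOOST,
    ACT_CONTINUE_AUTONOMOUS
} Action;

Action contingency_response(Event ev)
{
    switch (ev) {
    case EV_ENGINE_FLAME_OUT:      return ACT_SHUTDOWN_AND_GLIDE;
    case EV_FEATHER_FAILS_TO_LOCK: return ACT_ABORT_BEFORE_BOOST;
    case EV_LOSS_OF_GROUND_LINK:   return ACT_CONTINUE_AUTONOMOUS;
    }
    /* A condition nobody anticipated never even reaches this function,
     * because no sensor channel was defined to detect it. */
    return ACT_CONTINUE_AUTONOMOUS;
}
```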

Drones which are limited in their capacity, which either can’t change “mission” or can’t do so easily, and which have a failure rate higher than manned aircraft.

Unless you are referring to the sort of drones that actually DO have a pilot in the loop, even if not on board, which can not be described as fully automated.

I am not.

Humans are better than computers at dealing with novel or unforeseen circumstances - I already stated that.

The fact that humans are useful in some emergencies does not mean they will be useful in all emergencies, nor does being useless in some circumstances mean that a human will be useless in all circumstances.

We actually don’t know if the copilot had an actual “brainfart” or if he had some reason for doing that and thought the result would be OK rather than disaster. Completely analyzing all aspects of a disaster rather than jumping to conclusions is important. Remember when everyone assumed that the engines had blown up, end of story? Now there’s a rush to judgment on the copilot. None of us sitting here has access to the crash site, the wreck, or the surviving pilot; therefore, we certainly don’t have all the facts.

So… at what point do you start flying tourists? The very first flight where it’s certified? By your reasoning we should never put people into space at all.

I’m OK with people flying into space, so long as they are informed of the risks. That applies whether they’re pilots or passengers.

If we fly into space we ARE going to have “human loss”. It’s an inherent risk of doing that thing. Accidents are inevitable; you will NEVER eliminate them all. Just like with airplanes, if we’re going to fly, at some point there will be an accident, and some percentage of those accidents will be fatal. No form of transportation is 100% safe, and people throwing up their hands and going on about how tragic the loss of life is and how we must never never never never ever have any deaths are not being realistic. Sure, minimize them, make them as survivable as possible, but get a grip on the fact that Stuff Happens.

There are a number of significant misapprehensions here, and while I agree that space launch vehicles should be initially flight-tested in an uncrewed state (and should be capable of remote semi-autonomous flight), the reality is that there is a massive expense in doing so. It is true that “a computer can be programmed to deal with thousands of machine states that a contingent of hundreds of engineers can deliberately think about and analyze over a period of years”; however, this assumes that you have the funds to devote to this kind of simulation and analysis. The labor costs of this alone will run into the hundreds of millions of dollars, and result in primary flight code that is tens or even hundreds of millions of lines long, with all of the coordination, integrated-testing, latent-code, and configuration-management challenges that come along with that.

A software autopilot is like an idiot savant; it will tell the vehicle to do exactly what is in its code, but any novel condition or collection of inputs (which for a complex flight vehicle will easily run to tens of thousands of independent channels) it hasn’t been programmed for (or any hiccup in the firmware or hardware running the code which gives it a false reading) will cause it to fail. By comparison, a typical large website (built upon an existing framework such as Lift or Django) will have a few tens of thousands to hundreds of thousands of lines of code and a large but well-identified input set, error-trapping mechanisms which will identify failures, and the ability to debug and reload code while the website is still running. On a flight vehicle, a failure of the software design to identify a failure condition, or any previously unsuspected latency or anomalous behavior in the underlying hardware, will often result in a cascade failure in the autopilot that rapidly leads to catastrophic failure of the flight. Hence why programs that develop autonomous or “fly by wire” control systems devote such massive and increasingly large budgets to software, to the point of absurdity (see Augustine’s Law Number XVII: “Software is like entropy. It is difficult to grasp, weighs nothing, and obeys the Second Law of Thermodynamics; i.e., it always increases.”).
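To illustrate the “idiot savant” point, here is a hypothetical input-validation fragment; the sensor channel, limits, and modes are all invented. The enumerated bad readings get trapped cleanly, while a plausible-but-wrong value passes every check and gets acted on as truth:

```c
/* Sketch of why error trapping only catches what was anticipated:
 * read-side validation for a hypothetical dynamic-pressure channel.
 * All names and limits are illustrative only. */
#include <math.h>

typedef enum { MODE_NOMINAL, MODE_DEGRADED, MODE_ABORT } FlightMode;

FlightMode check_dynamic_pressure(double qbar_pa)
{
    /* Explicitly enumerated failure conditions are trapped... */
    if (isnan(qbar_pa) || qbar_pa < 0.0)
        return MODE_DEGRADED;   /* known-bad reading: use backup data  */
    if (qbar_pa > 60000.0)
        return MODE_ABORT;      /* anticipated structural-limit case   */

    /* ...but a plausible yet wrong value (iced probe, timing skew,
     * firmware hiccup) passes both checks and is commanded on as
     * truth, which is how an unsuspected anomaly cascades. */
    return MODE_NOMINAL;
}
```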

Comparing a suborbital vehicle like SpaceShipTwo to an autonomous subsonic drone aircraft (the latter a category into which billions of dollars have been poured to develop fairly robust autonomous software) is hardly apropos; SpaceShipTwo has to fly through numerous different regimes and conditions. And as difficult and costly as it is to train a live pilot, and despite the inherent latency of the human nervous system (and the delicacy of the organism), the human pilot is (currently) capable of greater nuance and of making decisions with limited data.

This notion that the problem is “a traditional old-school aerospace engineer to lead development of this program who himself was educated a long time ago before we had the dotcom boom and the related advances in computing” contains numerous misapprehensions. Despite what Barney says, newer is not always better, and while the innovation that younger engineers bring to the launch industry is a valuable injection, the reality is that success is largely predicated on having the experience to know what hasn’t worked in the past and avoiding repeating the same mistakes over and over again. The problems that I’ve perceived in The Spaceship Company (from talking with a few former employees and reading public statements by Branson and others) aren’t “old-school aerospace engineer[s]” but too many inexperienced people making basic errors through a lack of experience and mentoring.

The “dotcom boom” did not create fundamental “advances in computing”; it was leveraged upon prior advances in networking, distributed computing, and modular architecture. In fact, while we’ve seen a massive increase in the application of software in everyday life, from embedded computers in consumer appliances to mobile Internet access, the core underlying technologies in computing haven’t undergone revolutionary change since the early 'Nineties (the maturation of the RISC architectures still in use). Most embedded software/firmware still uses C and C++ code, just as it has for decades.

Test pilots are cognizant of the risks they are taking (flying vehicles that are in development and prone to failure) and take those risks knowingly. The same doesn’t apply to the general public paying to be “tourists” on what is a supposedly mature flight vehicle. Ultimately, the space tourism industry, if it can be fiscally sustained at all (I have severe doubts), is predicated on having vehicles of a reliability at least approaching that of a light aircraft. And that requires extensive flight testing with a flight crew long before a single paying passenger steps aboard.

Stranger

Well, I didn’t mean to say that there should be no more deaths in spaceflight. I mean that with this particular spacecraft, since they are relying on the general public for revenue in their future business model, if they have more high-profile accidents like this most of their business will evaporate. Now, after years of delays, they managed to build a single article and it had an accident on its fourth powered flight. I think this program has been mismanaged.

I wouldn’t disagree with that assessment, but a “properly managed” program to develop a new propulsion system and novel launch vehicle of this class (e.g. crew-rated) is a multibillion-dollar effort. TSC and other entrants are largely trying to do this on a comparative shoestring budget (albeit still on the order of hundreds of millions of dollars) and as quickly as possible to be first to market, heedless of risk; hence the need for an independent body representing the public interest which either develops and enforces the necessary minimum of regulation or advises an existing regulatory authority such as the FAA on the particular concerns of nascent spaceflight development efforts.

Stranger