Boeing software fix for 737 Max

Yeah. That’s my point really. Those pilots’ failings weren’t a result of having low hours. It’s training that counts, not hours. To be more specific, it’s the quality, not the quantity, of hours that matters.

Not positive, but the heavy hitters are US and the vast majority of orders are US.

It was cut out in a downward pitch. They peaked at about 7000 ft and the plane was pitched 40 deg down on impact. Time wasn’t gained, it was lost.

200 hr pilots have absolutely no experience. They’re dead weight training novices. That’s fine if it’s a nice day and nothing goes wrong. The purpose of a 2 person crew is not to train for rainy days. The same redundancy you expect of the computer is expected of the crew. That’s not just my opinion, I’m agreeing with Sully:

A cockpit crew must be a team of experts, not a captain and an apprentice. In extreme emergencies, when there is not time for discussion or for the captain to direct every action of the first officer, pilots must be able to intuitively know what to do to work together.

Someone with that low amount of time would have only flown in a closely supervised, sterile training environment, not the challenging and often ambiguous real world of operational flying, would likely never have experienced a serious aircraft malfunction, would have seen only one cycle of the seasons of the year as a pilot, one spring with gusty crosswinds, one summer of thunderstorms.

Yes the orders are US, but the vast majority of MAXs delivered and “flying” are foreign. 261 vs 115. I’m not sure what the relevance of your statement was anyway.

Learning how to work together in an aircraft is exactly what the low timer has been doing. What has the high timer been doing? Flying single pilot IFR, relying on himself and no one else. How does that help with teamwork? Maybe he’d be good at making a decision on his own, but that’s not his job; his job is to support the captain and help the captain make decisions.

You can’t be an expert pilot without first being an apprentice, and frankly you can’t learn airline flying by bashing around the place in a light twin. A 1500 hour pilot flying his first hour in a B737 having previously flown a light twin is just as much of a liability as a well trained 200 hour pilot doing the same. Probably worse in fact because the 200 hour guy has had all of his training focussed around the job of “B737 first officer”. Don’t forget that in addition to the 200 hours of air time he would have also had copious amounts of simulator time.

Hours are held up as some kind of infallible measure of ability, but they just aren’t. I wonder how many cadet pilots Sully has flown with professionally? I’m guessing none, because the US regulations prevent it. On what basis is his opinion valid then? Just because someone did a great piece of flying under stress doesn’t suddenly make them an expert on everything aviation related. I would value Sully’s opinion on many things; the suitability of cadet pilots is not one of them though.

If I was given the choice of FO, and all I knew about them was that one had 1500 hours of general aviation experience and one had just finished a 200 hour dedicated airline multi-crew course focussed on the jet we were about to fly, I would take the low hour guy in a heartbeat.

The above all assumes a good training system is in place. I have no idea whether Ethiopian’s is good or bad. I’m talking generalities.

Every time the captain trimmed up a little bit, the MCAS trimmed down more. If he had not trimmed at all, the MCAS would have run once and stopped and left the aircraft flyable. By trimming a little bit often the MCAS was reset and allowed to run again. By using the cutouts the FO stopped this feedback loop. If he’d done it even quicker they might’ve survived.
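A toy model of that feedback loop, with made-up numbers, and assuming (as this post does, though the point is disputed later in the thread) that MCAS stays quiet after one run unless a manual trim input resets it:

```python
# Toy model of the feedback loop described above. All numbers are
# illustrative, not real 737 trim units. Assumption (disputed in this
# thread): MCAS makes one nose-down run and stays quiet unless a manual
# trim input "resets" it, allowing another run.

def simulate(pilot_trim_inputs, mcas_increment=2.5, pilot_increment=0.5):
    """Return net stabilizer trim after alternating pilot/MCAS actions.

    pilot_trim_inputs: number of small nose-up blips the pilot makes.
    Each blip resets MCAS, letting it run one more nose-down cycle.
    """
    trim = 0.0
    trim -= mcas_increment            # first MCAS activation
    for _ in range(pilot_trim_inputs):
        trim += pilot_increment       # small nose-up blip by the captain
        trim -= mcas_increment        # reset MCAS runs again, trimming down more
    return trim

# No pilot input: one MCAS run, aircraft still flyable.
print(simulate(0))   # -2.5
# Five small blips: net trim far more nose-down than one run alone.
print(simulate(5))   # -12.5
```

Each small correction costs more than it gains, which is the runaway the cutout switches interrupt.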

Retrimming was a learning process. Shutting off the system interrupted that process. The last act was to turn the system back on and retrim it. It went in at 40 deg down, so it was a fatal delay.

This doesn’t seem to be correct.

According to the detailed description of how MCAS works:

In this case the AoA was always shown incorrectly. The MCAS might ‘reset’ when the pilots manually adjusted the stabilizer, but it wouldn’t stay off. It would keep turning on again as long as the AoA appeared to be unacceptable. Also, if they hadn’t manually trimmed, it would still have continued, again and again, to perform ‘nose down for 9 sec, wait 5 sec’, waiting for the AoA to become acceptable.

So as long as the MCAS hadn’t been manually cut out it would continue to function regardless of what the pilots did.
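The repeating cycle described above can be sketched like this; the roughly 9 s run / 5 s pause figures are from the thread, and everything else (the threshold, the trace, the function names) is purely illustrative:

```python
# Sketch of the repeating cycle described above: while the (faulty) AoA
# reading stays high and the cutout switches are untouched, MCAS keeps
# alternating "trim nose down" and "pause". The ~9 s run / 5 s pause come
# from the thread; the 15 deg threshold and everything else are made up.

def mcas_active_seconds(aoa_readings, threshold=15.0,
                        run_s=9, pause_s=5, cutout_at=None):
    """Count seconds of nose-down trim over a per-second AoA trace."""
    active = 0
    t = 0
    n = len(aoa_readings)
    while t < n:
        if cutout_at is not None and t >= cutout_at:
            break                         # crew flips STAB TRIM CUTOUT
        if aoa_readings[t] > threshold:
            active += min(run_s, n - t)   # one nose-down run
            t += run_s + pause_s          # then a fixed pause
        else:
            t += 1                        # AoA acceptable: just wait
    return active

faulty = [75.0] * 60   # a stuck vane reads 75 deg for a full minute
print(mcas_active_seconds(faulty))                # run, pause, repeat...
print(mcas_active_seconds(faulty, cutout_at=35))  # cutout caps the total
```

With a stuck vane the cycle never ends on its own; only the cutout switches break it.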

The problem was that they were nose down at low altitude, and at a high speed where the manual trim wouldn’t work. There was no option but to desperately try turning on the power to the stabilizers again - but in fact there was nothing they could have done that would have saved them at that point.

If they had realized earlier what the problem was, it might have helped, but there were probably many other things that could have gone wrong that had to be considered. It took 35 sec from the first activation of the MCAS for the FO (not the Captain) to correctly diagnose the problem and call “stab trim cut-out”.

Hmmm, that’s not my understanding. I was fairly certain from other sources that provided the pilot did not correct the MCAS with nose up trim, it would make one input then stop. It doesn’t make sense otherwise, if you’re in a steep turn and hold it there you only need the trim to move once. Moving more than once would not be in keeping with its purpose.

Something I read — that I may have misunderstood — made it sound like there are two separate systems (almost like the left and right hemispheres of a brain? :rolleyes: ); that the left-side sensors, left-side controls, and left-side data displays are managed by one computer, and the right-side equipment by a separate computer.

Is this at least partly correct? Is this part of the reason that each MCAS relied on a single sensor?

That information comes from a serious technical site for 737 pilots and engineers. It may not be logical, but that’s apparently the way it works.

If you look further down that page to the proposed fix, you will see:

  1. A modification to the activation and resynchronisation schedule. MCAS will be limited to operate only for one cycle per high AoA event, rather than multiple. At present it will operate for 10s, pause for 5s and repeat for as often as it senses the high AoA condition is present.

That confirms that it makes repeated corrections, for as long as it sees high AoA. It may be ‘reset’ by manual stabilizer commands, but that just means it starts again with its corrections from the beginning, not that it switches off.
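The difference the quoted fix makes can be shown with a tiny sketch; the latch logic and names here are hypothetical, not from any Boeing documentation:

```python
# Hypothetical sketch of the quoted change: instead of repeating its
# run/pause cycle for as long as high AoA is sensed, MCAS would fire at
# most once per high-AoA "event". Each tick below stands for one full
# cycle (~10 s run + 5 s pause). Logic and names are illustrative only.

def mcas_runs(aoa_high_trace, one_cycle_fix=False):
    """Count nose-down runs over a boolean per-cycle high-AoA trace."""
    runs = 0
    fired_this_event = False
    for high in aoa_high_trace:
        if high:
            if not fired_this_event:
                runs += 1
                fired_this_event = one_cycle_fix  # fix: latch until event ends
        else:
            fired_this_event = False              # AoA back to normal: re-arm
    return runs

stuck_high = [True] * 4 + [False] * 2   # AoA stuck high for 4 cycles
print(mcas_runs(stuck_high))                      # current behaviour: 4 runs
print(mcas_runs(stuck_high, one_cycle_fix=True))  # proposed fix: 1 run
```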

Is it a serious site? I thought it was basically a fan site, mainly good info but not to be confused for anything serious such as the flight crew operating manual.

My understanding was that MCAS would apply one set of nose down trim and if you did nothing, that would be it. You had some trim to pull against. Once you are away from the high AoA the MCAS then returns the trim to the original setting. I thought that if you trimmed against the MCAS it would only then be reset for 5 secs and trim again. The modification is removing the ability for it to reset and retrim after a manual trim input.

I will see if I can find something more official than that website.

It’s not an official site.

However other sources do agree with it. I remain unconvinced and suspect sloppy wording. If true then MCAS would have been dangerous even in its intended use.

Since MCAS is necessary only in a very limited part of the flight envelope, it should have been disabled given any reasonable doubt, certainly when there is a 75-degree (!) disagreement between the vanes. Hell, an AoA of 75 deg is unflyable, except perhaps in a MiG-29. Also, no other sanity check was done, such as speed, attitude, descent rate, or height. It is plainly crazy; I cannot even fathom how such a disaster went live.

Please look at the Ethiopian Airlines Preliminary Report and see for yourselves the data from the flight data recorder. It’s crystal clear. I store a copy on my Google Drive for convenience.

The MCAS was vulnerable to a single-point failure. Are AoA sensors known to be potentially unreliable? Shouldn’t attention or simulation have been devoted to the possible single-point error?

I was dismayed to learn that Boeing itself, and NOT the FAA, was responsible for certifying airworthiness. And even more dismayed to read that implementing a fix was delayed by the government shutdown! :eek: Is this latter claim a real fact? Or just one of America’s new-fangled True Facts™?

The question is, who programmed it? Was it outsourced to some cheap and useless company? Who at Boeing checked it and approved it? Some middle manager who knew nothing about flight engineering, and only wanted to keep costs down?

One thing we can be completely sure of is that the Boeing CEO’s $18 million paycheck is not going to be reduced.

I’m quite certain it is not a programming mistake but a system design fault. They probably analyzed it using an event tree with the branches [Airplane is in danger of stalling] / [Airplane is flying normally]. Since the first case is the more dangerous one, but a very low probability event, the chance that the AoA sensor simultaneously fails is infinitesimally small and need not be taken into consideration.

On the other hand, if [Airplane is flying normally], that is a good thing, and we don’t have to analyze all the things that can happen any more than in a normal 737. They forgot that, because of the [Airplane is in danger of stalling] branch, they had added the MCAS thingy, so this is no longer a regular 737. This is a relatively standard cognitive failure.

What they should have done is split the event tree on AoA events and judge the reality and the responses from there. Again, a control system design decision that, under time pressure, probably no one slept on for very long.

I’m not sure if I’m clear though.
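To make the event-tree point concrete: the flawed branch analysis multiplies two small probabilities, when MCAS had actually moved the failure into the normal-flight branch. All the numbers here are made up purely for illustration:

```python
# Illustrative numbers only (entirely made up, not from any safety case).
p_near_stall = 1e-5     # hypothetical: flight is in danger of stalling
p_aoa_failure = 1e-4    # hypothetical: an AoA vane fails on this flight

# Flawed branch analysis: "near-stall AND sensor failure" looks negligible,
# so the combination is dismissed.
joint = p_near_stall * p_aoa_failure
print(joint)    # roughly 1e-9

# But with MCAS, a sensor failure alone commands nose-down trim, so the
# relevant branch is simply "sensor failure on a normal flight":
relevant = p_aoa_failure * (1 - p_near_stall)
print(relevant)  # roughly 1e-4, four-plus orders of magnitude larger
```

The mistake is not the arithmetic but conditioning on the wrong branch of the tree.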

I mean, all sensors are known to be potentially unreliable. As sensors go, vane-type AOA sensors are not inherently less reliable than sensors in general. (I think they’re pretty simple and reliable, FWIW, and other AoA sensing options are not more reliable than vanes overall, AFAIK).

But sensor reliability is a bit of a red herring here, IMHO. It might be more productive to focus on system complexity and the behavior of that system when an AoA sensor does fail.

Prior to Boeing’s implementation of MCAS, a failed AoA sensor alone wouldn’t cause a 737 to crash. Furthermore, if Boeing had automatically disabled MCAS when the sensors disagreed, it’s unlikely that either plane would have gone down. The big problems (as far as I can tell) are that (a) the AoA vanes became critical sensors only after the implementation of MCAS, and (b) MCAS was implemented in a way that made those sensors critical. That is, they used to be fail-safe sensors, but the implementation of MCAS made them “fail-dangerous.”

In other words, a previously non-critical sensor became critical without many people noticing, and that was significantly compounded by Boeing’s decision not to disable MCAS if the two sensors disagreed. Boeing elected to make optional a light showing AoA sensor disagreement, and making a safety feature optional doesn’t look good at a time like this. But that wouldn’t be so bad if, whether the customer paid for that light or not, MCAS disabled itself when the sensors disagreed.
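The missing cross-check argued for above amounts to a few lines of logic. This is a minimal sketch, assuming a simple disagreement threshold; the names and the 5.5 deg figure are illustrative, not Boeing’s:

```python
# Minimal sketch of the cross-check the posts above argue was missing:
# compare both AoA vanes and inhibit MCAS when they disagree beyond some
# threshold. The 5.5 deg threshold and all names are illustrative only.

DISAGREE_THRESHOLD_DEG = 5.5

def mcas_permitted(aoa_left_deg, aoa_right_deg):
    """Allow MCAS only when both vanes roughly agree (fail-safe behaviour)."""
    return abs(aoa_left_deg - aoa_right_deg) <= DISAGREE_THRESHOLD_DEG

print(mcas_permitted(14.0, 13.2))   # sensors agree: MCAS may act
print(mcas_permitted(74.5, -0.5))   # 75 deg split: inhibit MCAS
```

The point is how cheap the fail-safe behaviour would have been relative to making a single vane safety-critical.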

But in that case, you’d likely have to inform the pilots that MCAS was disabled and that near-stall behavior is suddenly different, and that would require retraining so the pilots could learn to deal with the plane’s handling without MCAS.

It seems to me that the helpful-sensor-is-now-critical oversight is typical when designing large, complex systems with millions of variables. But the decision not to include the light as standard seems motivated by a desire to maximize revenue. The decision not to disable MCAS under sensor disagreement (and to gloss over it in general) seems to have been driven by an upper-management diktat to avoid retraining requirements for pilots at all costs.

If, in the final analysis, it turns out that the above description is basically accurate, the profit-related motives range from shady (for the optional light) to unconscionable (for the non-retraining part). We’ll see how things shake out. In the meantime, I’m really glad I don’t work for Boeing.

I can’t speak to the government-shutdown aspect of your question, but it’s true that Boeing employees probably played a direct role in certifying airworthiness for the FAA.

During the aircraft design process, the FAA appoints a set of representatives (Designated Engineering Representatives, or DERs) to help ensure that the proposed design meets airworthiness requirements. “Company DERs” are employed by the company designing the plane, while “consultant DERs” are independent and not employed by the aircraft company.

As a result of this whole debacle, I’d expect to see the FAA do away with company DERs entirely in favor of independent DERs. I would expect other changes, too, but this would address your specific concern.

DERs (independent or otherwise) exist because the FAA doesn’t have the resources to employ an army of engineers to perform the functions for which a DER is responsible. Personally, I’d like to see the FAA get more money so they can follow the design and certification processes more directly. It doesn’t make sense for all DERs to be employed directly by the FAA—demand for DERs varies a lot depending on the number of aircraft being designed at any given time—but the system could certainly be more robust and independent than it is now.

That is generally true for many systems on a passenger jet. There will typically be two or three independent systems consisting of sensors, computer, and output or display, often with the left sensors going to the left computer and being output to left subsystems and displays.

My understanding is that for the B737 there are two flight control computers (FCC) that control a large number of functions primarily to do with the autoflight system and the MCAS is a software function living in the FCCs. The FCC in charge of running MCAS alternates from flight to flight and the MCAS software is only given input from one AoA sensor.
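The flight-to-flight alternation described above can be sketched roughly like this; the function and names are my own illustration, not from any Boeing documentation:

```python
# Rough sketch of the arrangement described above: the two FCCs alternate
# as the active unit from flight to flight, and the MCAS function reads
# only the AoA vane on the active FCC's side. Purely illustrative.

SENSORS = {"left": "left AoA vane", "right": "right AoA vane"}

def active_aoa_source(flight_number):
    """Which side's single AoA vane feeds MCAS on a given flight."""
    return "left" if flight_number % 2 == 0 else "right"

# One stuck vane is therefore enough to mislead MCAS on alternating flights.
print(SENSORS[active_aoa_source(0)])   # left AoA vane
print(SENSORS[active_aoa_source(1)])   # right AoA vane
```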

What you were reading about is gauge/sensor redundancy. They tend to split up the sensors so that the left side sensors feed the left side gauges and the right side sensors feed the right side gauges. If a bird or flock of birds hit one side of the plane it would make sense if the gauges on that side of the plane go wonky.

As for controls that depends on the plane. Most if not all Boeing planes use a mechanical yoke system that is linked together. If you move one yoke you move the other. Fly by wire planes with joysticks are not mechanically connected and it’s up to the computer to choose which one has command.

Didn’t realize Richard Pearse responded to septimus’s post. Not sure how I missed his post.

No problem.