Exactly -- "The flying public loses one of its best protectors."
And the loss is the direct result of badly designed and badly implemented software automation.
Yet software continues to eat everything, and large swaths of the industry praise a "move fast and break things" attitude about it. Even life-critical industries consider it OK to make airframe changes that render a passenger aircraft unstable in parts of the performance envelope, and to patch it over with software supposedly compensating for those new flight characteristics -- software dependent on a single faulty sensor, with no input sanity checking. Only after 346 people die in two crashes do they reconsider, and only when forced by regulators.
This "it'll be OK, just patch it" attitude needs to be eradicated.
The thing is, the software automation wasn't badly designed and badly implemented. This isn't like Boeing's non-redundant MCAS system - a huge amount of engineering work was put into making every part of the system redundant and robust against erroneous data from a failing component and verifying that it worked as intended. It's just that some really weird, rare failure mode that no-one anticipated or ever found the cause of somehow created a data pattern which foiled all those checks.
Really? From every account I've read, it depended entirely on a single angle-of-attack (AOA) sensor and did no sanity checking whatsoever against other data, sensors, or inputs. There was an extra-cost option for a second AOA sensor and an obscure cockpit light that would indicate when the two AOA sensors disagreed, but those were not installed in either of the crashed airplanes.
It seems that, at the very least, such a critical system should have three primary sensors, with voting logic to check for disagreement, plus cross-checks against the artificial horizon inputs, airspeed, throttle, etc. to determine whether the aircraft was actually in the part of the envelope where MCAS would be useful.
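To make the idea concrete, here is a toy sketch of what triple-redundant voting with a disagreement check might look like. This is purely illustrative -- the function name, the 5-degree threshold, and the median-voting scheme are my own assumptions, not anything from an actual avionics spec:

```python
def vote_aoa(readings, max_disagreement=5.0):
    """Vote among three AOA readings (degrees).

    Returns the median reading, which masks a single mildly-off sensor,
    or None if any pair disagrees by more than max_disagreement degrees,
    in which case the consumer should flag a fault and disable the
    automation that depends on this input.
    """
    low, mid, high = sorted(readings)
    if high - low > max_disagreement:
        return None  # sensors disagree -> fault, don't trust any of them
    return mid  # median of three outvotes one slightly-off sensor

# One sensor stuck at an extreme value is caught instead of trusted:
print(vote_aoa([2.1, 2.3, 2.2]))   # plausible spread -> 2.2
print(vote_aoa([2.1, 2.3, 74.5]))  # one failed sensor -> None
```

With a single sensor, as in MCAS, there is nothing to vote against: a stuck vane simply becomes the truth.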
So, unless there are a number of checks and features that no account I've read has mentioned, it's really bad design -- software.
Moreover, they could have decided to NOT design the airframe so that it would have deadly characteristics requiring a software patch to hide.
Again, really bad design - airframe.
Or, they could have decided that this was a potentially critical and deadly failure mode which properly required extra training and a new Type Rating for pilots to fly the new aircraft. But instead they decided to bury it in the same type and an hour of iPad training, so that their airline customers would see a lower overall cost for the new model.
Again, really bad design -- process.
Perhaps you can point me to some documentation I'm missing here, but this is what I've consistently gathered from the substantial number of articles I've read.
Boeing's disastrous MCAS system relied entirely on a single AOA sensor and did no real sanity checking, yes. The Airbus plane this article is about had three AOA sensors fed through three redundant ADIRU modules, with the data checked and cross-checked by each of the three primary and two secondary flight control computers, each of which also had an internal monitoring channel running independently written software on separate hardware, checking its internal calculations.
The reason Qantas Flight 72 only nose-dived twice, if I'm understanding the incident report correctly, is that each nose-dive caused the internal monitoring to detect the discrepancy and take the responsible part of that flight control computer offline for the rest of the flight. After the second nose-dive, all three of the primary flight computers had faulted, disabling the affected flight control features.
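The fault-out behavior described above can be sketched as a command/monitor pair with a latched fault. This is a minimal toy model, not the actual Airbus logic: the class name, the threshold, and the latch-forever behavior are my assumptions for illustration only:

```python
class FlightControlComputer:
    """Toy command/monitor pair: the monitor channel independently
    recomputes the command, and any disagreement beyond a threshold
    latches a permanent fault, taking this computer offline for the
    remainder of the flight."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.faulted = False

    def step(self, command_output, monitor_output):
        if self.faulted:
            return None  # a faulted computer stays offline
        if abs(command_output - monitor_output) > self.threshold:
            self.faulted = True  # latch the fault
            return None
        return command_output

fcc = FlightControlComputer()
print(fcc.step(1.0, 1.1))  # channels agree -> command passes (1.0)
print(fcc.step(1.0, 3.0))  # disagreement -> fault latched (None)
print(fcc.step(1.0, 1.0))  # still offline, even when channels now agree
```

The latch is the key: once each of the three primaries tripped its monitor, the feature driving the nose-dives had no computer left to run on.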