Hacker News | new | past | comments | ask | show | jobs | submit | login
'I’ve become isolated': the aftermath of near-doomed QF72 (smh.com.au)
165 points by rosege on May 18, 2019 | 84 comments


A similar thing happened with the captain of the SAS MD-81 in 1991[1]. Clear ice had formed on the upper side of the wings overnight, but it was not detected during de-icing. Shortly after takeoff, pieces of ice broke off and hit the engines, deforming fan blades, which disturbed airflow and caused the right engine to surge 25 seconds into the flight.

The pilots responded properly with thrust reduction to decrease engine stress, but unknown to them, the MD-81 had been fitted with Automatic Thrust Restoration[2], which kept increasing thrust to maintain the normal climb setting. The increased thrust led to surging of the left engine too, until both engines failed at 78 seconds.

The aircraft crashed into a field and broke into three parts (everyone survived).[3]

Post-crash analysis determined that reduced thrust would have kept both engines operational, and that maintained thrust would've led to failure of only the right engine, which would have allowed the crew to keep flying and perform a safe landing. ATR increased thrust and caused both engines to fail.

The captain of the flight retired from flying, saying that he had lost trust in airplanes.

[1] https://en.wikipedia.org/wiki/Scandinavian_Airlines_Flight_7...

[2] https://patents.justia.com/patent/4662171

[3] https://i.imgur.com/k4q0WMm.png and https://i.imgur.com/KzPNhsU.jpg


For every vindicated hero, there are probably dozens of pilots who would no longer be alive without smarter systems.

Airline fatalities decreased 10-fold in the last 30 years, while total miles travelled increased by the same factor. Flying is 100x safer than it was, and I don't quite understand people's fetish with human control.


That's exactly the opposite of the reaction I had which was "ugh, another example of software the human operators aren't told about doing something stupid because the author thought they understood every situation better than the people actually in the situation."

Furthermore, you don't know that it's software that's improving the safety of flights, it could be changes in procedures and training.


Where do those figures come from? They appear to be completely inaccurate. Looking here [1], while there has certainly been a reduction in fatalities over the past 30 years, it's not even close to 10x. Maybe 2x or 4x. As far as miles travelled, there has been at best a 2x increase over the past 30 years, at least for the US[2].

Airline travel certainly has become safer, at least on a per-mile basis, but 100x safer? That's fantasy.

1. https://en.wikipedia.org/wiki/Aviation_accidents_and_inciden... 2. https://www.bts.gov/content/us-passenger-miles


Sorry, I keep calculating from 2000 because it's easier, but the error bars there are starting to grow. Make it 50 years:

https://en.wikipedia.org/wiki/Aviation_safety#/media/File:19...

In any case, the larger point should be undeniably clear. Air travel is getting safer and safer.


You've only established that flying is getting safer. None of this proves or even implies that most of that improvement in safety is because of added features like ATR, especially ones that subtly subvert expectations and remove agency/control from pilots.


We need standards for ways to recognize when the computer controls have chosen to override the pilot, why they are doing it, and a way for the human to override. This is hard, not easy, because there are so many things going on. One place to look and report back.

Tesla has some of these things. It beeps at you when it detects you are about to hit the car in front of you. If it doesn't see that your lane will be going around, say, a stopped car in a different lane in traffic, you can easily override it. Similarly, the driver can override the autopilot, because there's a kind of tactile feedback when you use the steering wheel to override the self-driving. Still, it's far, far from perfect; current evidence being the front-end crashes into immobile barriers. But they have these notions of feedback, control, and override built in. I expect every self-driving vehicle handles these things differently. Eventually we'll have standards across car lines. I bet planes are all different too, with that amazingly huge number of controls the pilot has to understand.


> since 1970 [to 2018] fatalities per trillion revenue passenger kilometre (RPK) decreased 54 fold from 3,218 to 59.


The humans are required to be there for a reason, and that reason is so important that TWO are required to be inside the control cabin at ALL TIMES. If they aren't given the final say in what course of action should happen at a given moment, it's been designed wrong.

"For a pilot, loss of control is the ultimate threat. It's our job to control the aircraft, and if computers and their software, by design, can remove that functionality from the pilot, then nothing good is going to come out of that."


Pilot here. 100% agree. An autopilot isn't autonomous (as most of the public thinks). It's more like flying the plane by pushing buttons vs. manipulating the flight controls. You dial up a heading or altitude and press execute. You still have to watch it closely, and it does mess up quite often, e.g. won't capture a localizer, altitude wasn't armed, etc. Procedure in the cockpit is to call out altitudes like "1000 above" when leveling off.

I've flown an F-16, where the fly-by-wire system won't let you over-G the jet. So I think some level of automation is good in that regard. But it's very subtle and you feel in control at all times. I would never fly an aircraft that would actively take over control of the aircraft... because the automation messes up all the time.
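Roughly what a g-limiter does, as a toy sketch with made-up numbers (the real FLCS shapes commands far more subtly than a simple clamp, and these limits are illustrative, not actual F-16 figures):

```python
def commanded_g(stick_fraction, max_g=9.0, min_g=-3.0):
    """Toy fly-by-wire g-limiter sketch (hypothetical numbers).

    Maps stick deflection (-1.0 full forward .. 1.0 full aft) to a
    commanded load factor, then clamps it, so the pilot can never
    command a g outside the envelope no matter how hard they pull.
    """
    if stick_fraction >= 0:
        g = 1.0 + stick_fraction * (max_g - 1.0)   # 1g level flight at neutral
    else:
        g = 1.0 + stick_fraction * (1.0 - min_g)
    return max(min_g, min(max_g, g))

print(commanded_g(1.0))   # full aft stick -> limit, 9.0
print(commanded_g(2.0))   # even an over-travel command stays clamped at 9.0
```

The point is the subtlety the comment describes: the limiter reshapes the command rather than wresting control away, so the pilot always stays in the loop.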


HN has had two classes of aviation technology articles recently:

- Horrible examples of automation going wrong, hard to design and failing easily.

- Autonomous app-based air taxis are coming together with automated urban air traffic control (Uber and others). Market estimated to be worth €1.5 trillion by Morgan Stanley analysis.

What's your take?


Air taxis will become possible only when a quiet method of vertical take off and landing is developed, if ever. No current VTOL lightweight aircraft would be able to gain even experimental status in an urban environment, let alone thousands of them running at all hours of the day. The noise levels propellers make are absolutely horrendous.


Perhaps an opportunity for an airship service, hovering indefinitely over suburbia charging $5 to silently hoist your air taxi to altitude...


Yeah, living next to a busy street is bad enough that I can't actually use any of the outdoor space in my apartment. VTOL aircraft would make it completely unbearable indoors too.


I've always wondered if blasting inverted phase sounds recorded from the propellers (like noise cancelling headphones) would help a bit with the noise problem.


You can't cancel noise in general, only at specific chosen points. Noise-cancelling headphones can work because your ears are in a consistent place relative to the headphones.
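A toy numerical sketch of why: an anti-noise source tuned to cancel a tone perfectly at one point leaves a clear residual at another, because the path-length difference (and hence relative phase) changes with position. All numbers here are made up for illustration:

```python
import math

C = 343.0     # speed of sound, m/s
FREQ = 200.0  # tone frequency, Hz

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def residual_rms(listener, src, anti, cancel_at, n=2000):
    """RMS pressure at `listener` from a tone source plus an inverted
    anti-noise source whose delay is tuned to cancel exactly at `cancel_at`."""
    # Path-length offset that phase-aligns the two waves at the chosen point.
    delta = dist(cancel_at, anti) - dist(cancel_at, src)
    total = 0.0
    for i in range(n):
        t = i / (n * FREQ)  # sample one full period of the tone
        p_src = math.sin(2 * math.pi * FREQ * (t - dist(listener, src) / C))
        p_anti = -math.sin(2 * math.pi * FREQ * (t - (dist(listener, anti) - delta) / C))
        total += (p_src + p_anti) ** 2
    return math.sqrt(total / n)

src, anti = (0.0, 0.0), (1.0, 0.0)
print(residual_rms((3.0, 0.0), src, anti, (3.0, 0.0)))  # ~0 at the tuned point
print(residual_rms((2.0, 1.0), src, anti, (3.0, 0.0)))  # clearly nonzero elsewhere
```

So cancellation works where the geometry is fixed (headphones on your ears); around a moving propeller in open air, every listener position would need its own anti-noise field.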


I came across a recent paper that discusses the cancellation of noise in a continuous 3D space. It seems complex, but not in the realm of impossibility.

https://ieeexplore.ieee.org/document/6736072


Thanks for all the replies, I did not know it worked that way.


You would still cancel normal conversation with that approach.


My eardrums would still blow up!


Have you seen Lilium? Was recently in HN. They propose to do this with many small electric motors supplying ducted fans.


I don't think anyone would describe their prototype as quiet. Have you seen their test flight video? It sounds like a jet.


It was hard for me to get a sense of how loud it might be from a video.


Probably somewhere in between. Automation will keep getting better. And, with the recent pilot shortage, probably make it safer for lower time pilots to operate. For example, some aircraft are now self-diagnosing during abnormal procedures. When something breaks, it'll bring up the appropriate checklist on the screen.

Maybe some kind of augmented reality that lets you land in true 0 visibility, or automation that lets us reduce spacing on approach. Think things that improve safety, reliability, and ops tempo, therefore profits.

But, the stakes are just too high to ever remove a human override, though, imo.


The more automated planes get, the less often a pilot gets to actually fly. Automation is more likely to fail in uncommon situations that an inexperienced pilot cannot deal with anyway.

Even if a pilot is not completely useless and we value a human life at infinity, at some point having a pilot on board is not worth it. If a pilot can save the plane in less than 1/<number of passengers and non-pilot crew> of situations that would have ended in a fatal crash without them, we lose more pilots than we save non-pilots.
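The break-even arithmetic in that toy model is easy to make concrete. A quick sketch, with the model's (admittedly crude) assumptions spelled out in the comments:

```python
def expected_deaths(n_others, p_save, with_pilot=True):
    """Expected deaths per otherwise-fatal incident, under the comment's
    toy model: without a pilot, everyone aboard (n_others) dies; with a
    pilot, the pilot saves everyone with probability p_save, and
    otherwise everyone including the pilot dies."""
    if not with_pilot:
        return float(n_others)
    return (1 - p_save) * (n_others + 1)

# With 149 passengers/crew, break-even is p_save = 1/150 = 1/(n_others + 1):
print(expected_deaths(149, 1 / 150))  # ~149, same as flying pilotless
print(expected_deaths(149, 0.005))    # below break-even: the pilot's seat costs lives
print(expected_deaths(149, 0.01))     # above break-even: the pilot's seat saves lives
```

So the break-even save rate works out to 1/(N+1), roughly the 1/N the comment describes; below it, the model says carrying the pilot adds net deaths.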

Also, it's possible that an autopilot can handle a situation but the pilot erroneously overrides it and crashes.


Even if air taxis can be made to fly safely, there's nowhere for them to take off and land. For safety they would need an open space free of trees, tall buildings, and overhead wires. How many of those spaces do we have in dense urban areas?

And they're generally not going to land on rooftops either. A roof has to be designed with a helipad from the start. It's tough to retrofit one later due to structural issues, and use of roof space for antennas and machinery.


My take? The majority of HNers strongly support the second group and have an inflated view of the capability of even current automated systems.


> Autonomous app-based air taxis are coming together with automated urban air traffic control (Uber and others). Market estimated to be worth €1.5 trillion by Morgan Stanley analysis.

Those are a bad idea, independent of automation. At least until 100% of our energy is carbon-free.


> I would never fly an aircraft that would actively take over control

Should I read this to mean you don't support AGCAS?


Well, I can imagine the F16 g-limiting system messing with flight controls if the g-sensor is botched.

In my eyes the problem isn't so much that the systems are overly invasive, it's more that the failure modes of automation aren't well understood or signalled and cannot always be remedied.


There's a saying that for many years, planes have been able to fly unattended (except the taking off and landing part), but for psychological reasons, pilots are still required to be in control. How odd is that?


These design flaws in the software running on modern aircraft are symptoms of a more fundamental flaw in the philosophy with which modern software is designed: the idea that the authors of the software understand the application better than the user.

I think this is part of the reason people who like free software defend it almost religiously: they understand the dangers of the authors and users of software being legally segregated.


Some airlines do not require two pilots, or even two humans, to be in the flight deck at all times.


> Qantas Flight 72 is descending in a nose-low altitude for just over 15 seconds

Yes, aviation-ignorant editor, he really did mean to write attitude before you changed it to altitude.


Uhm, you’re arguing with an automaton, a spelling checker.

Apt and ironic, given the circumstances


More likely a human sub-editor on a paper like the Sydney Morning Herald, surely.


Yes I would think so too


why would a spell checker prefer altitude over attitude?


I would think that even a slightly sophisticated spell checker would catch this. So arguably a lack of automation. It could of course be a human-machine interaction problem, where the misguided human and machine make errors together.


Australian here. SMH is known for a generally poorer standard of English usage and editing compared to other news publications (e.g. The Australian).


You assume mainstream newspapers have editors, ones that actually edit, these days :)


Or it's a typo.


This is heartbreaking. Only a markedly superior flight officer would be affected the way he was, and it made him unable to continue. The flying public loses one of its best protectors.


Exactly -- "The flying public loses one of its best protectors."

And the loss is the direct result of badly designed and badly implemented software automation.

Yet, software continues to eat everything, and large swaths of the industry praise a "move fast and break things" attitude about it. Even life-critical industries consider it OK to make airframe changes rendering a passenger aircraft unstable in parts of the performance envelope and patch it over with software that supposedly compensates for those new flight characteristics, dependent on a single faulty sensor with no input sanity checking. After 346 people die in two incidents, they reconsider, and only after being forced by regulators.

This <it'll be OK, just patch it> attitude needs to be eradicated.


The thing is, the software automation wasn't badly designed and badly implemented. This isn't like Boeing's non-redundant MCAS system - a huge amount of engineering work was put into making every part of the system redundant and robust against erroneous data from a failing component and verifying that it worked as intended. It's just that some really weird, rare failure mode that no-one anticipated or ever found the cause of somehow created a data pattern which foiled all those checks.


"making every part of the system redundant"?

Really? From every account I've read, it depended entirely upon a single flight attitude sensor, and did no sanity checking whatsoever against other data, sensors, or inputs. There was an extra-cost-option for a second AOA sensor and an obscure cockpit light that would tell when the two AOA sensors disagreed, but those were not installed in either of the crashed airplanes.

It seems at the very least, such a critical system should have three primary sensors with full algorithms to check for disagreement, plus checking against the artificial horizon display inputs, airspeed, throttle, etc. to determine if they were actually in the part of the envelope where the MCAS would be useful.
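For illustration, the kind of median-voting disagreement check being described can be sketched in a few lines (the threshold and structure here are hypothetical, nothing like certified avionics code):

```python
def vote_sensors(readings, max_spread=2.0):
    """Toy triple-redundancy voter sketch (hypothetical threshold).

    Takes three redundant sensor readings, returns the median as the
    'voted' value, and flags any sensor that disagrees with the median
    by more than max_spread as suspect.
    """
    if len(readings) != 3:
        raise ValueError("expects exactly three sensor readings")
    median = sorted(readings)[1]
    suspect = [i for i, r in enumerate(readings) if abs(r - median) > max_spread]
    return median, suspect

# Two healthy AoA vanes near 2 degrees, one stuck at 45: the median
# rejects the outlier and the miscompare identifies which sensor failed.
print(vote_sensors([2.1, 2.3, 45.0]))  # (2.3, [2])
```

With only one sensor, as on the crashed MAXes, there is nothing to vote against: the system cannot even tell that its single input is wrong.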

So, unless there are a number of checks & features that have appeared in no account I've read, it's really bad design -- software.

Moreover, they could have decided to NOT design the airframe so that it would have deadly characteristics requiring a software patch to hide.

Again, really bad design - airframe.

Or, they could have decided that this is a potentially critical and deadly failure mode which properly required extra training and a new Type Rating for pilots to fly the new aircraft. But instead they decided to bury it in the same type rating and an hour of iPad training, so that their airline customers would see a lower overall cost for the new model.

Again, really bad design -- process.

Perhaps you can point me to some documentation I'm missing here, but this is what I've consistently gathered from the substantial number of articles I've read.


Boeing's disastrous MCAS system relied entirely on a single AOA sensor and did no real sanity checking, yes. The Airbus plane this article is about had three AOA sensors fed through three redundant ADIRU modules, with the data being checked and cross-checked by each of the three primary and two secondary flight control computers, each of which also had an internal monitoring channel running independently-written software on separate hardware checking its internal calculations.

The reason Qantas Flight 72 only nose-dived twice, if I'm understanding the incident report correctly, is that each nose-dive caused the internal monitoring to fail and the responsible part of the flight control computer to be faulted out for the rest of the flight. After the second nose-dive, all three of the primary flight computers had faulted, disabling the affected flight control features.


How about making him the new chief of the FAA?


Interesting how what is essentially a success in handling a crisis can still be traumatic. I seem to recall Chesley Sullenberger being on record stating that he and the rest of the crew of that "miracle on the Hudson" flight suffered from PTSD symptoms for weeks.


It's not that hard to get.

I hit a dog with my car and had flashbacks and intrusive thoughts about it for about three weeks.


A write-up of the technical side http://avherald.com/h?article=40de5374


This really resonates with me:

"One thing is certain: the computers blocked my control inputs. For a pilot, loss of control is the ultimate threat. It's our job to control the aircraft, and if computers and their software, by design, can remove that functionality from the pilot, then nothing good is going to come out of that."


The lesson is that automation can fail suddenly, even in domains like aerospace which are unusual in that they have an infrastructure of professionalism and safety. If an automatic system fails and then leaves the system in a position that humans can't recover from then the failure will be complete.

This is not real supervision; automatic systems must be designed to relinquish control before the situation is dangerous.


Note that New Zealand regulators were inspired by QF72 to provoke the following accident:

https://en.wikipedia.org/wiki/XL_Airways_Germany_Flight_888T

which reads like something right out of Perrow's "Normal Accidents" book in that it was an incredible confluence of hardware and human failures.


"Provoke"? They seem unrelated.


The wikipedia article doesn't explain it well compared to the accident reports.

The regulators were trying to test the envelope protection system under stressful conditions and it turned out that the AoA vanes were non-functioning which contributed to the crash.

Thus it is relevant to the later 737 MAX crashes.

It is a "normal accident" because of the human factors: e.g. no flight plan filed because it was a test flight, regulators ordering the pilots to defy the air traffic controllers, regulators ordering pilots to go forward with a test they didn't want to do... And on top of it all a maintenance error.


Aren't there standards on what is considered a safe flight maneuver? Small adjustments here and there are fine, but sudden pitch downs like those encountered on QF72 and with MCAS should be cross checked and analyzed by the computer systems. This is the whole point of triple redundancy, not just to maintain an accurate reading but also to detect when the reading accuracy has failed so you can respond accordingly.

Of course for every QF72 there are probably a bunch of injuries and deaths attributed to pilot error, which is why the computers are there, but at some point there has to be a middle ground. If you defer control to the computers in all situations, what's the point of the pilots?

If these computers and their software are smart enough to make life critical decisions, why can't they make the most important decision of all: should I stop?


Yes, there are standards. The full Australian accident report finds that the 3 times in 29M flying hours that this issue occurred were within the standard (for a non-catastrophic issue, as they found this was). The systems did indeed cross-check, but there was an algorithmic edge case where the incorrect data was inappropriately used anyway, as happened here.

It also did give control back to the pilots: after the second nose-down, the control law reverted as designed to 'alternate' (a degraded state) which removed the part of the envelope protection causing the nose downs. And the pilots also correctly switched to the backup ADIRU, which disconnected the faulty data from (most of) the aircraft's systems.

For me, the interesting part of the article is the severe consequences for the pilot's mental health. A serious, but non-catastrophic problem occurs, and is skillfully dealt with by the crew (whose training is designed for exactly this). There are a number of injuries, but none are life-threatening. But the pilot's sense of responsibility, the feeling that it could all have been much worse, and a loss of faith in the aircraft, results in long term disability despite the 'successful' result.


Yeah I found that pretty moving. It's worth pointing out he continued to fly for 8 more years between the incident in 2008 and his retirement in 2016, but seems like he was really struggling most of that time.


The flight computer responses were ones that help ensure that the airplane remains in safe maneuvering range when air movement suddenly shifts around.

For example, a sudden encounter with wind shear might push an otherwise well-trimmed aircraft into just that kind of move, and the FBW computer would be expected to compensate.

In this case, the "eyes and ears" of flight computers failed in ways that were unknown to designers and thus bypassed safety measures against their failure.


I don't really get the point of your statement.

There was still an extremely competent, ready and able pilot at the helm who should've been able to disengage the autopilot features of a plane with faulty sensors, take control and land safely, but couldn't. Seems like a ridiculous failure mode, to be honest.


It's not "autopilot" it is "flight envelope protection".


It's not only Boeing that's had issues with automation:

"I've learnt from these events, but none have generated the body response or trauma that this one has on October 7, 2008. This scenario involving computers, denial of control and potential mass casualties is at a different level. It seems we've survived a science-fiction scenario, a No Man's Land of automation failure on an unprecedented scale."


At least none of these incidents killed anyone. Boeing's automation (MCAS specifically) has a lot of blood on its hands.


Yes good point! But it seems by luck/good piloting that it didn't kill anyone.


Less luck, more good design. The accident investigation report reckons that this was close to the worst-case scenario: the first pitch-down was almost at the maximum possible, the other fault protection systems made it unlikely that more than two such uncommanded pitch-downs could happen in a flight, and it wasn't possible for this to happen anywhere near close enough to the ground that the pilots wouldn't be able to recover even if they panicked.


I think it's more than luck. The Airbus automation systems had sane failure modes. They realized they were dealing with malfunctioning input and returned manual control to the pilots. The 737 MAXes, on the other hand, trimmed the planes down all the way until they impacted into the ground.

Also, the Airbus autopilot could be disabled. The 737 MAX MCAS system couldn't be; all they could disable was all electronic control over the trim motors, which shut off MCAS but also shut off the system they needed to be able to recover the trim entirely.


Anyone know why autopilot cutoff switches aren't a thing? I know these aircraft aren't designed for pure manual operation, but I don't understand why there is not a way to downgrade the level of computer control, turn off all but the most simple autopilot functions. Anyone here in the know on this?


They are; even autopilots in general-aviation airplanes commonly have a disable button on the control yoke, a button on the panel, and a circuit breaker to cut George's power.


Ah, I was more alluding to the automated system that caused the crashes (MCAS, I believe), but after a bit of research it seems this is not an autopilot function, more of a flight-assist function. Still, it kind of blows my mind that any system that can forcibly counteract the pilot's control could come without an off switch. I'd feel uncomfortable if my car didn't have an off switch for traction control.


An automation failure is scary because nobody understands what is going on or what can be done about it. Mechanical failures also occur and the risk is accepted. What are the odds of a bird strike? What are the odds of a bird strike being dangerous or catastrophic? Although Boeing isn't helping, the odds or number of occurrences of mechanical failures with bad outcomes is probably way higher than for electronics or automation.

Is it a similar situation to coal and nuclear power? Better the devil you know.


Related to Boeing, old stats may be irrelevant, since a big change in the process happened: the MAX plane updates were rushed, corners were cut, and pilots were not informed of the changes. On top of that, after the first crash Boeing proved that they would rather risk further crashes than ground the planes until the fix was ready, and even then the pilots did not know that the disagree lights wouldn't turn on on some planes. It looks to me, and I hope the investigation finds the proof, that Boeing hoped they could push an update fast enough before the next crash, and that with some luck the first crash's cause would be inconclusive and they could blame the pilots or the airline.


I agree with you, but statistics don't excuse bad engineering practices like those we observed at Boeing. And if those practices continue, the stats might reverse.


This story brought to you by Boeing; please try out our new 737 Max at your next opportunity!



I believe the pilots should intimately understand the systems and the automation, and have the ability to understand what the plane does and why, especially when it is wrong. In this case, knowing the FBW envelope protections of Airbus, it was clear that the system was responding to a perceived high-angle-of-attack situation to avoid stalling. Of course it was wrong, and after disconnecting the autopilot the pilot should have immediately turned off 2 of the 3 ADIRUs to force the plane into alternate law, where those protections are off.


As we go through life, we place faith in things based on past experience or on what other people say. In this case, the pilot lost faith in the aircraft automation and could not ever really trust it again. He couldn't live in the illusion that hurtling along at 600mph with computers in control is inherently safe at all times. His new reality is that computers bear watching at all times. That sort of hyper vigilance has got to be extremely taxing.

It's important to not place faith in things.


What are the odds Boeing paid for this article to be published? This incident happened in 2008.

https://en.wikipedia.org/wiki/Qantas_Flight_72


How about very low?

At the end of the article it mentions that the article is mostly an extract from the captain's book, which is coming out in roughly two weeks' time.


I don't know about the odds of that, but the odds of someone unleashing a sophomorically cynical response on Hacker News seem pretty strong. As is quite obvious from the article, the person in question just published a book on his experiences (which might take a while to work through the system in every sense).


Seeing as the article doesn't specifically say anything negative about Airbus and it's an extract from a book released this month, I would find that incredibly unlikely.


Indeed, it reads as though, as the final bits of data streamed out of the recordings, Airbus realized what a lucky break they'd had when no one died from this f*ck-up, and rushed to fix the software and issue new procedures to turn off the dodgy computers.

In the more recent cases, with 100% death counts, I read unsubtle accusations that third-world pilots don't know how to fly planes.


Seems to be an ad for a book by said captain to come out soon, so unlikely to be Boeing related


I doubt that Boeing paid to have it published or anything like that, but I wouldn't be surprised if the publisher has rushed to get it ready for publication given how topical it seems right now. This article will be getting way more attention than it otherwise would have (if the article would have even existed without recent Boeing mishaps).



