
It doesn't at all. It's an explanation of why it might happen. They never said that this behavior won't be fixed or changed. It's not blaming the pilots.


There is a notion of blame wrapped up with "too quickly".


I'm having a hard time thinking up an alternative choice of words that is similarly clear and concise.

I also think that the phrase "too X" is polysemous. Depending on how it's used, it may imply a notion of blame. But it can also just be a way of describing an incompatibility. "This clearance is too low for that truck" and "This truck is too tall for that clearance" are entirely equivalent statements, IMO. Neither implies that the truck or the bridge is wrong, just that the driver would be wrong to try and drive under it.

Even further out there, describing a timing-based bug that isn't known to be 100% deterministic as "If X happens too quickly after Y, Z might happen" seems to me like just a much more straightforward way of saying "If X happens within some unspecified interval after Y, then Z might happen." Nine syllables shorter, same meaning.


>I'm having a hard time thinking up an alternative choice of words that is similarly clear and concise.

There's "beyond a certain speed", but still something of a mouthful.


There's a further implication that an instruction, but not a failsafe, exists to prevent the given condition. E.g., "Do not reverse thrust until ground mode has fully activated," but no check that actually prevents the crew from doing so.

I'm not a lawyer; couldn't tell you where the fault would split in that case, but if my hunch about the lack of a failsafe for a given instruction is correct... it's still a surprise to me. I'd expect existing avionics production procedures to catch this sort of thing.
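The instruction-vs-failsafe distinction above could be sketched as a software interlock: the precondition stops being documentation and becomes a check the system enforces. This is a hypothetical illustration only; all names are invented, and it's not how any real avionics code is structured.

```python
# Hypothetical sketch: the software rejects the command itself instead of
# relying on the crew to follow "do not reverse thrust until ground mode
# has fully activated." All names are invented for illustration.

class InterlockError(Exception):
    """Raised when a command violates a required precondition."""

def command_reverse_thrust(ground_mode_active: bool) -> str:
    # The failsafe: refuse the command unless the precondition holds.
    if not ground_mode_active:
        raise InterlockError("reverse thrust inhibited: ground mode not fully active")
    return "reverse thrust deployed"
```

With an interlock like this, "doing X too quickly" becomes impossible rather than merely forbidden.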


>> I'd expect existing avionics production procedures to catch this sort of thing.

The older I get, the more I believe your expectation is wrong. Lessons learned are rarely transferred to new people who were not present when the lesson was initially learned.

I've even worked at companies that try to compile a database of "lessons learned", but they never instruct anyone to read through the whole thing. Even if they did, when confronted with a large amount of material how much of it actually sticks?

Then we move on to more procedural methods like fault-tree analysis, FMEA, etc. That's great and can help a lot, but it's still a GIGO process, and new people need to learn how to do it well. There are always new people learning new things.


In software, we usually encode lessons learned as tests and static analysis, with a reasonable level of success.

Aviation usually encodes them in checklists. They have a much higher degree of success (probably because of culture, not medium), but failures sometimes happen too.
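A minimal sketch of encoding a lesson learned as a regression test, per the point above: once a timing bug of the form "X too quickly after Y causes Z" is understood, the safe interval gets pinned down so the lesson survives personnel turnover. The names and the interval are invented assumptions, not from any real system.

```python
# Hypothetical: the lesson "X must not follow Y within half a second"
# captured as code plus a test, instead of tribal knowledge.

MIN_INTERVAL_S = 0.5  # assumed minimum safe gap between Y and X

def is_sequence_safe(elapsed_since_y: float) -> bool:
    """True if it is safe for X to occur this long after Y."""
    return elapsed_since_y >= MIN_INTERVAL_S

def test_x_too_soon_after_y_is_rejected():
    # The regression test IS the recorded lesson: it fails loudly if
    # someone later relaxes the interval without understanding why.
    assert not is_sequence_safe(0.1)
    assert is_sequence_safe(0.5)
```

New hires never need to read a "lessons learned" database for this one; the test suite refuses to let the mistake recur.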


"Too quickly" appears to be the article's wording, not Boeing's, although I couldn't find a copy of the actual bulletin to confirm that.



