My take-away is a little different: as software grows, it becomes harder to quantify.
If you have several small, distinct systems, you can often quantify each one down to its inputs, outputs, and expected logic. Once a system gets too complex, it has too many possible states: bugs become harder to find, and the system can no longer be fully specified.
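As a toy illustration of what "quantifying down to inputs and outputs" buys you (a hypothetical Python sketch; `clamp_dose` and its range are mine, not from any real device):

    # A small, distinct component: its entire input space can be checked exhaustively.
    def clamp_dose(requested_mg: int) -> int:
        """Clamp a requested dose to the assumed safe range [0, 50] mg."""
        return max(0, min(requested_mg, 50))

    # Exhaustive verification over the whole (finite, small) domain of interest:
    # every input maps to a safe output, so the component's behavior is fully known.
    for dose in range(-1000, 1000):
        assert 0 <= clamp_dose(dose) <= 50

Compose a few components like this with shared mutable state and the number of reachable states multiplies; exhaustive checking quickly stops being possible, which is the complexity cliff I mean.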
I often go back to the famous Therac-25 accidents. The bug existed LONG before the accidents: the same flawed code ran in the earlier Therac-20, but there an independent hardware interlock checked the machine's output and blocked any invalid state. Only when the Therac-25 removed that independent check and relied on the software alone did the latent bug turn into an accident.
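The pattern at work, one component independently checking another's output and throwing away invalid state, might look like this (a minimal hypothetical sketch; `interlock`, `controller_compute_dose`, and the limit are illustrative, not the Therac design):

    # Assumed safety limit, for illustration only.
    MAX_SAFE_MGY = 200

    def controller_compute_dose(setting: str) -> int:
        # Stand-in for the buggy controller: on an unexpected input
        # (think race condition or overflow) it emits garbage.
        return {"low": 50, "high": 150}.get(setting, 99999)

    def interlock(dose_mgy: int) -> int:
        # Independent check: trusts nothing about the controller's internals,
        # only validates its observable output and rejects invalid state.
        if not (0 <= dose_mgy <= MAX_SAFE_MGY):
            raise RuntimeError("interlock tripped: unsafe dose rejected")
        return dose_mgy

    print(interlock(controller_compute_dose("low")))    # 50: passes the check
    # interlock(controller_compute_dose("oops"))        # would trip the interlock

With the checker in place the latent bug is caught at the boundary; remove it, and the same bug reaches the patient.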
If aircraft manufacturers stopped building monolithic software and instead built discrete parts that happen to run software, parts that could each be individually verified against their inputs and outputs, it would likely result in safer systems.