It's quite possible that software creates new failure modes (as programmers I'm sure we all feel that to be true some days) while at the same time preventing more failures than it causes.
When the software does fail, we assume it made things more dangerous, because we have a concrete example of what happened but not of what would have happened in its absence.
This is what may eventually stall self-driving cars for many years: the first time one does the PR equivalent of smashing through the local orphanage.
We as a society are notoriously poor at assessing systemic risks.
Now, to address your point about evidence: I'm curious how you could control for other confounding factors.
Airliners are safer today than ever before (measured per passenger mile), but how do you control for better engines, materials science, and procedures if you compare, say, 1979 to 2019?
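To make that concrete, here's a minimal sketch (Python, with entirely made-up numbers) of how you might try to separate an avionics effect from a correlated engine-generation effect: a Poisson rate regression with passenger miles as the exposure. Every variable and coefficient below is hypothetical, invented purely to illustrate the method, not real accident data.

```python
# Toy illustration of the confounding problem: regress accident counts
# on an "avionics" indicator plus an era-correlated covariate, with
# passenger miles as exposure. All numbers here are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # hypothetical fleet-years

avionics = rng.integers(0, 2, n)                      # 1 = modern computerised avionics
engine_gen = avionics * 0.7 + rng.normal(0, 0.3, n)   # confounder: newer avionics, newer engines
miles = rng.uniform(1e6, 1e8, n)                      # passenger-mile exposure

# Simulate accidents where BOTH factors reduce risk.
rate = np.exp(-13 - 0.5 * avionics - 0.8 * engine_gen)
accidents = rng.poisson(rate * miles)

X = sm.add_constant(np.column_stack([avionics, engine_gen]))
model = sm.GLM(accidents, X, family=sm.families.Poisson(), exposure=miles)
print(model.fit().summary())
```

Refit without engine_gen in X and the avionics coefficient absorbs the engine improvement, which is exactly the 1979-vs-2019 comparison problem: you can only control for a confounder you've measured.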
But I have a friend who's an airline pilot. He was telling me about hitting windshear while coming in for a landing. The plane has software that not only tells him he hit windshear, but also tells him what angle to put the nose at for best results. (The problem is airspeed. So you go to full power, but pitching the nose down also helps you gain airspeed. But the ground is down there, because you're coming in for landing. What angle is best? The computer can figure it out and tell the pilot what to do.)
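For what it's worth, here's a toy point-mass sketch of that trade-off: a lower pitch gains airspeed faster, but a higher pitch keeps you away from the ground. This is not the actual guidance law in any airliner's windshear system (those are far more involved); the numbers and the pitch-equals-flight-path-angle simplification are assumptions purely for illustration.

```python
# Toy point-mass model of the windshear pitch trade-off. NOT a real
# guidance law -- just shows how a "best pitch" could be searched for.
import math

G = 9.81             # m/s^2
THRUST_ACCEL = 3.0   # m/s^2 along the flight path at full power (assumed)
SPEED = 70.0         # m/s current airspeed (assumed)

def rates(pitch_deg):
    """Airspeed acceleration and climb rate at a given pitch, treating
    pitch as flight-path angle (a crude simplification)."""
    gamma = math.radians(pitch_deg)
    accel = THRUST_ACCEL - G * math.sin(gamma)  # gravity component along path
    climb = SPEED * math.sin(gamma)             # vertical speed, m/s
    return accel, climb

def best_pitch(min_climb):
    """Fastest airspeed gain among pitches that still hold the climb floor."""
    candidates = [p / 10 for p in range(0, 200)]  # 0.0 .. 19.9 degrees
    ok = [p for p in candidates if rates(p)[1] >= min_climb]
    return min(ok, key=lambda p: -rates(p)[0]) if ok else max(candidates)

# Near the ground we demand at least 2 m/s of climb (assumed floor).
print(f"suggested pitch: {best_pitch(min_climb=2.0):.1f} deg")
```

The interesting part is that the answer falls out of a constraint, not a formula: take the shallowest pitch that still satisfies the climb floor, because every degree above it costs acceleration.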
Aye. Things like the G3000 (Garmin's avionics suite for light aircraft) now do real-time 3D terrain generation so that less experienced pilots can avoid controlled flight into terrain (literally the term for when a pilot in control smashes into a mountain).
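A toy version of the look-ahead idea behind those terrain warnings: project the flight path forward over an elevation grid and alert when predicted clearance drops below a floor. Real TAWS/synthetic-vision implementations (Garmin's included) are vastly more sophisticated; the grid, thresholds, and alert text here are all invented for illustration.

```python
# Hedged sketch of look-ahead terrain alerting over an invented
# 1 km-resolution elevation grid (metres MSL). Not any real system.
TERRAIN = [
    [200, 250, 400,  900, 1400],
    [210, 300, 500, 1100, 1600],
    [220, 280, 450, 1000, 1500],
]

def cfit_alert(row, col, alt_m, cells_per_min, climb_m_per_min,
               lookahead_min=3, min_clearance_m=150):
    """Return the first minute at which projected clearance is lost,
    or None if the look-ahead window stays clear."""
    for t in range(1, lookahead_min + 1):
        c = col + round(cells_per_min * t)        # grid cell ahead on track
        if c >= len(TERRAIN[row]):
            break
        projected_alt = alt_m + climb_m_per_min * t
        if projected_alt - TERRAIN[row][c] < min_clearance_m:
            return t
    return None

# Level flight at 1,200 m heading toward rising terrain: alert expected.
t = cfit_alert(row=1, col=0, alt_m=1200, cells_per_min=1, climb_m_per_min=0)
print(f"TERRAIN, PULL UP (in ~{t} min)" if t else "clear")
```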
My suspicion is that modern computerised avionics are, on balance, a life saver, but I'd love to know by how much.
I ended up watching videos of modern avionics in light aircraft on YouTube a while back (as you do), and I find that kind of programming fascinating. I couldn't find much about the actual hardware/software side of things, though I did gather it runs a custom OS.