
Respectfully, I call bullshit on this.

We know some very good ways to build reliable software. It's just that most industries can't / don't want to pay the costs associated with doing so.

The don't-know / don't-want distinction is functionally irrelevant, but I have a minor tic about it, because asserting the former is often used to avoid admitting to the latter.

Legacy IBM and AT&T, for all their cluster%_$^!s, were amazing at engineering reliable systems (software and hardware). Maybe that's not practical nowadays, outside of heavily regulated industries like military / aerospace, because markets have greater numbers of competitors. But we do know how.

An accurate and modern truth would probably be "We don't know how to quickly build reliable software at low cost."



I mean, clearly, if we really wanted to, we could build all of our software with Coq and have it be formally verified. But we don’t. The Curry-Howard isomorphism shows that proofs and programs are equivalent, and since we can prove a bunch of stuff with math, we should also be able to prove that our software systems are bug free.
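To make the Curry-Howard point concrete, here's a minimal sketch in Lean 4 (a proof assistant in the same family as Coq; the theorem name is just illustrative). The term you write to implement the function *is* the proof of the proposition:

```lean
-- Under Curry-Howard, a proposition is a type and a proof is a program
-- inhabiting that type. Here the "program" add_comm_example is itself
-- the machine-checked proof that addition on Nat commutes.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If this compiles, the property is proved for *all* naturals, not just tested on a few. The catch, as discussed below, is that writing such proofs for a real system scales very badly.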

And so you’re right, maybe my statement is a little bit of a simplification, but not by much. Proving that the Linux kernel has no bugs relative to some specification would take an absolutely prohibitive number of man-hours. Formal verification is so expensive that only the most trivial systems could ever be proved correct.

This is pretty similar to saying that we don’t know how to build bug-free code. We need a paradigm change for things to get better, and I’m not sure it will happen or if it’s even possible.


I think of it in slightly different terms. Yes, we could build software that reliably. (Perhaps not totally bug free, but, say, with 1/100th of the bugs it currently has.) But that takes a lot more time and effort (say, 100 times as much, as a rough number). That means, because it would take 100 times as much effort to produce each piece of software, we'd only have 1/100th as much software.

Would that be a better world?

Sure, I hate it when my word processor crashes. On the other hand, I really like having a word processor at all. It has value for me, even in a somewhat buggy state, more value than a typewriter does. Would I give up having a word processor in order to have, say, a much less buggy OS? I don't think I would.

I hate the bugs. But the perfect may be the enemy of the "good enough to get real work done".


I look at that as the efficiency distinction.

It goes without saying that if we had more time, there would be fewer bugs.

But there are also things (tooling, automation, code development processes) that can decrease the amount of bugs without ballooning development time.

Things that decrease bugs_created per unit_of_development_time.

Automated test suites with good coverage, or KASAN [1, mentioned on here recently]. Code scanners, languages that prohibit unsafe development operations or guard them more closely, and automated fuzzing too.

[1] https://www.kernel.org/doc/html/v4.14/dev-tools/kasan.html
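To illustrate the "decrease bugs per unit of development time" idea, here is a toy fuzzing harness in Python. Everything in it (the `parse_hex_byte` function and the harness itself) is hypothetical, just a sketch of the technique: throw lots of semi-random inputs at a function and check that it either succeeds within its contract or fails in the one documented way.

```python
import random

def parse_hex_byte(s: str) -> int:
    """Toy function under test (hypothetical): parse a two-char hex byte."""
    if len(s) != 2:
        raise ValueError("expected exactly two characters")
    return int(s, 16)

def fuzz(iterations: int = 10_000) -> None:
    """Feed random short strings to the parser. The only acceptable
    outcomes are a value in range or a ValueError; any other exception
    or out-of-range result is a bug the fuzzer has found."""
    random.seed(0)  # fixed seed so failures are reproducible
    alphabet = "0123456789abcdefXYZ !"
    for _ in range(iterations):
        s = "".join(random.choice(alphabet)
                    for _ in range(random.randint(0, 4)))
        try:
            value = parse_hex_byte(s)
            assert 0 <= value <= 255, f"out of range for input {s!r}"
        except ValueError:
            pass  # rejected input: allowed by the contract

fuzz()
```

The appeal is exactly the efficiency argument above: the harness costs minutes to write, then explores the input space automatically without consuming developer time.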


It is not about tools. It is about culture and process. For a real example, look at how SQLite lives up to the requirements of aviation software: 100% branch coverage, extensive testing, multiple independent platforms in the test suite, etc.

You don’t need anything fancy – but you do need to understand your requirements and thoroughly simulate and verify the intended behavior.
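A minimal sketch of that spec-driven, "nothing fancy" style in Python (the `clamp` functions are hypothetical, not from SQLite): check the implementation exhaustively against a trivially-correct reference taken straight from the requirements, over the whole of a small input domain, rather than against a handful of hand-picked cases.

```python
def clamp(x: int, lo: int, hi: int) -> int:
    """Implementation under test."""
    return min(max(x, lo), hi)

def clamp_oracle(x: int, lo: int, hi: int) -> int:
    """Obviously-correct reference, transcribed directly from the spec:
    results below lo become lo, above hi become hi, else unchanged."""
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Exhaustive check over a small domain: every (lo, hi) with lo <= hi,
# and every x around and beyond that range. If implementation and
# oracle ever disagree, the assert pinpoints the exact inputs.
for lo in range(-5, 6):
    for hi in range(lo, 6):
        for x in range(-10, 11):
            assert clamp(x, lo, hi) == clamp_oracle(x, lo, hi), (x, lo, hi)
```

Exhaustive oracle testing only works when the domain is small, but the discipline behind it (state the intended behavior, then verify against it mechanically) is the point being made above.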



