
> if you're a well-known and liked author, you often get a pass on your papers, and they're not well checked for errors.

A friend who was advised by a "famous" (i.e., famous in his/her field) mathematician told me basically the same thing. This person has unquestionably had brilliant insights over the course of their career, most of which turned out to be correct. But (s)he has also published several false results, including one "major" one. Some mathematicians are known to be sloppy, and the field puts up with it, which seems weird to me.

Regarding Coq: I've never tried it, but here is my guess at what's going on. Coq seems to be embraced mainly by algebraists and people doing discrete math, and less so in more analytic fields (optimization, probability, functional analysis, PDEs, etc.). Proofs in the latter branches more commonly invoke geometric or "intuitive" arguments which, while completely rigorous, are harder to reduce to a sequence of logical statements. Take "convexity implies continuity": in 1d you can basically prove it by drawing a picture, whereas the full, gory, rigorous version, though elementary, requires a little effort. Proofs in modern algebra, otoh, involve a lot of symbol pushing, which computers excel at. Just a guess.
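To illustrate the symbol-pushing style (a toy example of my own, not from the thread): a polynomial identity over the integers that Coq's `ring` tactic discharges in a single step, with no hand-written reasoning at all.

```coq
(* A "symbol pushing" identity over Z, proved automatically
   by Coq's ring tactic via normalization of both sides. *)
Require Import ZArith.
Open Scope Z_scope.

Lemma square_of_sum :
  forall a b : Z, (a + b) * (a + b) = a * a + 2 * a * b + b * b.
Proof.
  intros a b. ring.
Qed.
```

By contrast, there is no comparable one-shot tactic for the "draw a picture" step in an analytic argument; the geometric intuition has to be unfolded into explicit inequalities first.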





