I'm no mathematician and this might be way out of my league, but I've wondered how Gödel's incompleteness theorems come into play if mathematicians are using proof assistants to verify their mathematics.
The incompleteness theorem says that there exist some (paradoxical and self-referencing) true theorems that you can't prove (i.e. "this sentence is unprovable"). Neither humans nor proof assistants should be able to prove them.
> there exist some *true* theorems that you can't prove.
Emphasis mine.
Are all unprovable theorems true? I googled "are all Gödel sentences true", but came up with nothing. Also: Are all unprovable theorems Gödel sentences or are there other kinds?
EDIT: Lots of good answers to my first question, thanks a lot to everyone. For the second one I think I also found an answer:
According to the Wikipedia article about the incompleteness theorems [1]: "The Gödel sentence is designed to refer, indirectly, to itself." So the question really is if there are non self-referential unprovable statements, and the answer seems to be yes[2].
I'm not a logician, but I think the question (a good one) is not quite well-posed. As I understand, in mathematical logic, a statement being provable means one can deduce it from a set of axioms. If you have a set of axioms A sufficiently powerful for Godel's construction to go through, the resulting Godel sentence s is (by construction) not provable from A, but one can always make it provable by appending s to the axioms A.
For a statement to be true, on the other hand, has a technical meaning: truth is relative to a model (in the sense of model theory, see https://en.wikipedia.org/wiki/Model_theory ), and one can ask whether there is a model in which both the axioms A and the sentence s hold. Godel's completeness theorem (https://en.wikipedia.org/wiki/G%C3%B6del%27s_completeness_th...) asserts that s is provable from A if and only if s is true in every model in which A holds.
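In symbols (writing ⊢ for derivability and ⊨ for truth in a model), the completeness theorem says:

```latex
A \vdash s \quad\Longleftrightarrow\quad \mathcal{M} \models s \ \text{ for every model } \mathcal{M} \text{ with } \mathcal{M} \models A
```

So "provable" and "true in every model of the axioms" coincide; the Gödel sentence is true in some models of A and false in others, which is exactly why it is neither provable nor refutable.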
Yet first order logic encompasses 99.9% of the mathematics anyone ever does, even professional mathematicians. The only people who consider anything but first order logic are people who have gone out of their way to study it for its own sake. When the general subject of discussion is proof assistants and proofs in normal mathematics, I don't think it's necessary to constantly mention "only in first order logic". It's like discussing the structural integrity of a building and constantly mentioning that all the calculations only hold in our reality, and that if we were living in an alternate dimension where the physical constants of the universe were different, the results wouldn't be applicable.
>Yet first order logic encompasses 99.9% of the mathematics anyone ever does, even professional mathematicians.
I'm way out of my depth here, but I'm curious about this statement. In first order logic you can't even assert that the natural numbers cannot be put into a one-to-one correspondence with the real numbers, which seems like a fairly basic mathematical fact.
Right, but you can't ensure that the only models of 'bijective functions' are actually bijective functions. To switch to a slightly simpler example, you can translate statements such as "x is finite" into a first order language (for example, set theory), but you can't ensure that x actually is finite in the models of this statement.
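One standard way to see why first-order axioms can't pin down finiteness is a compactness argument. Take your theory T, add a fresh constant c, and add the infinite list of axioms

```latex
T' \;=\; T \,\cup\, \{\, c \neq \underline{0},\;\; c \neq \underline{1},\;\; c \neq \underline{2},\;\; \dots \,\}
```

Every finite subset of T' has a model (just interpret c as a large enough numeral), so by compactness T' has a model too. In that model, c is a "number" internally but differs from every standard numeral externally, so no first-order theory can force its models to contain only the standard, "actually finite" elements.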
To your first point: yeah, some discomfort around it is valid, but I see it as a non-issue. Informally/philosophically, because a model's job is to capture everything a first-order theory can see about your object of study - so it's fine for (say) an "actually" countable set to model the reals, and not be up to the task of "actually being" the reals. Formally, you differentiate between internal and external statements and don't expect them to be the same (a rigorous version of my pretentious use of scare quotes above :).
It might be helpful to be a bit more precise here.
The expression "first-order logic" (FOL) is a bit vague. Do you mean pure FOL (i.e. all axioms are about equality) or FOL plus additional axioms? Pure FOL (without additional axioms) is indeed rather weak and cannot be used for much. That's why mathematicians build a foundation of mathematics on top of FOL by assuming the truth of additional axioms.

The most well known foundation of mathematics built on top of FOL is ZF set theory, which adds about a dozen additional axioms about how to construct new sets from old, and when two sets are equal. There are other FOL-based axiomatisations of mathematics such as Quine's NF, or category theory (which can be expressed as a theory on top of FOL). There are also foundations of mathematics that are not using FOL (Martin-Löf type theory and successors such as HoTT).

Goedel's first incompleteness theorem forces all those axiomatisations of mathematics to be incomplete, meaning there are formulae A such that neither A nor the negation of A can be derived.

A statement like "99.9% of the mathematics anyone ever does" is about FOL extended with foundations of mathematics like ZF, or a type-theoretic equivalent. However, mathematicians are getting ever better at producing interesting formulae whose truth (or otherwise) cannot be settled in ZF (or similar) [1]. In order to deal with such expressivity gaps, mathematicians investigate more powerful axioms, called "large cardinals", which claim the existence of certain extremely large sets. You can also think of them as more powerful induction principles.

The problem with inventing new large cardinals (i.e. more powerful induction principles) is to navigate between the Scylla of expressive weakness (the axiom is a consequence of other axioms) and the Charybdis of inconsistency (adding the axiom allows us to derive "false").
>So the question really is if there are non self-referential unprovable statements, and the answer seems to be yes
Some mathematicians believe that many popular conjectures are true but unprovable, like Goldbach's conjecture, the Riemann hypothesis, or perhaps even the Collatz conjecture.
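Each of these conjectures is easy to check for any particular number, which is part of the frustration. A minimal sketch of a Collatz check (the function name and step bound are my own, for illustration):

```python
def collatz_steps(n: int, limit: int = 10_000) -> int:
    """Iterate n -> n/2 (if even) or 3n+1 (if odd) until reaching 1; count the steps."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps > limit:
            raise RuntimeError("gave up: no convergence within the step limit")
    return steps

# Every n anyone has ever tried reaches 1 (e.g. 27 famously takes 111 steps),
# but no finite amount of checking amounts to a proof.
print(collatz_steps(27))
```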
We know these things are probably true. If the prime numbers are distributed randomly, there are probabilistic arguments showing that the probability of Goldbach's conjecture holding approaches 1, and there is a similar argument for the Riemann hypothesis. However, it would be just a "mathematical coincidence": there is no reason there can't be a counterexample at 10^500+1, it's just really improbable, so it's impossible to prove.
No, the point is that there are no counterexamples. The probability of any counterexamples existing decreases exponentially, but is nonzero. So there is a 99.999...% chance that no counterexamples exist, and therefore it is true. But it's impossible to ever prove that.
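The heuristic can be made concrete by counting Goldbach representations: the number of ways to split an even n into two primes tends to grow with n, which is why a counterexample looks ever more improbable. A rough sketch (my own naive trial-division primality test; fine for small n):

```python
def is_prime(n: int) -> bool:
    """Trial division; adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_count(n: int) -> int:
    """Number of ways to write even n as p + q with p <= q, both prime."""
    return sum(1 for p in range(2, n // 2 + 1) if is_prime(p) and is_prime(n - p))

# The counts creep upward on average -- evidence, not proof:
print([goldbach_count(n) for n in (10, 100, 1000)])
```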
"True" in this context means "conforms to my intuitive understanding of what true means".
Gödel sentences are an elaborate construction that express the concept "This sentence is unprovable from $AXIOMS" without using direct self-reference, but this construction only means what we think it means if the natural numbers are the "real" natural numbers.
Because no finite set of axioms can rule out all nonstandard models, you can't be sure that there isn't some "proof" corresponding to a nonstandard number.
If such a thing exists, the Gödel sentence is still unprovable (because no proof exists) but false (because it asserts that no "proof" exists, although there is a nonstandard one).
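The trick of expressing self-reference without direct self-reference has a programming analogue: a quine, a program that prints its own source without ever reading it. Gödel's diagonal lemma is essentially the same quote-and-substitute construction, carried out in arithmetic. A classic Python example:

```python
# A quine: the program's output is exactly its own source code.
# Self-reference is achieved by quoting and substituting a template into
# itself (%r inserts the repr of s), not by any "this program" primitive.
s = 's = %r\nprint(s %% s)'
print(s % s)
```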
No. The definition of "undecidable" is that you can't prove it true or false. This means that if P is undecidable then so is not-P. But exactly one of these is true and the other false.
This seems a little glib wrt the meaning of something being "true". We have undecidability results: the Axiom of Choice and its negation are both consistent with the Zermelo-Fraenkel axioms. The continuum hypothesis and its negation are both consistent with the ZFC axioms. The Parallel Postulate and its negation are both consistent with the other Euclidean axioms.
Are we justified in saying that for each of these, one must be true and the other false? Of course not. To resolve them one way or the other, you need additional axioms; none of those six propositions is true platonically. What you choose depends on the problem you're trying to describe.
Well, for one thing, you have to define 'true'. The incompleteness theorems don't have anything to do with what's 'true', they have to do with what's 'derivable'.
It's a consequence of the completeness theorem that every consistent theory has a model. Furthermore, in any incomplete system in which a Godel sentence can be constructed, there are necessarily models that differ on the satisfaction of the Godel sentence.
Slightly off topic, but your comment reminded me of Tarski's paper The Semantic Conception of Truth and the Foundations of Semantics (http://www.jfsowa.com/logic/tarski.htm) Basically, it's actually kind of hard to define what true means and Tarski goes through a long discussion trying to pin it down.
> IT IS VITALLY IMPORTANT THAT YOU DO NOT THINK OF A Prop AS BEING EITHER "TRUE" OR "FALSE". A Prop either has a proof or it does not have a proof. Godel shattered mathematics by showing that some true propositions can never be proven. Tarski went further and showed that some propositions cannot even be said to be true or false!!! Coq deals with these obstacles in modern mathematics by restricting Prop to being either proven or unproven, rather than true or false.
i'll use the first incompleteness theorem itself to explain this distinction between truth and proof
..the first because the second is essentially a stronger form of the first..
> First Incompleteness Theorem: "Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F." (o)
what's important is that the statements are incapable of being proved or disproved. to me this is confusing language, so let me try to show the possibilities
a statement can be:
o. proved : there exists at least one complete collection of validating evidence
i. disproved : there exists at least one complete collection of confuting evidence
ii. incapable of being proved : there only exists incomplete collections of validating evidence
iii. incapable of being disproved : there only exists incomplete collections of confuting evidence
..think of complete and incomplete in terms of russell's paradox(i): a set containing all sets; any proof would be incomplete because the evidence for the set containing all sets must also contain itself because it is a set; creating a subsequent requirement for evidence that this new self contained set is also contained in the set of all sets, ad infinitum, perpetually and subsequently incomplete..
the first incompleteness theorem is making a claim for the existence of ii and iii
if this statement is true how could you prove it?
if the statement is true, it means you would need to prove that specific statements in the language of F are incapable of being proved or disproved. but if the statements themselves cannot be proved or disproved, how can you express through evidence that your proof is even working on those same statements?
a theorem is proved or disproved when one can express through finite steps, or statements of evidence, a proof
if a theorem has an incomplete proof or disproof how could i prove said incomplete proof?
but we know the statement to be true because that is how it essentially self references
stating that there are statements incapable of being proved or disproved is itself a statement incapable of being proved or disproved
making the statement true, but without proof of evidence
..except some strange brain itching intuition :P ..
but now we have left the domain of arithmetic and entered the domain of semantics(ii)
and here is where tarski comes in with his undefinability theorem(iii)
..is it irony that undefinability is underlined by this spell checker?..
coq is stating here that it is itself a semantic domain only dealing with arithmetic statements that can be proved or disproved
the imperative being to avoid the incomplete and undefined, because they explicitly and by design exist outside of this proof assistant's restricted domain of interest and so will be inexpressible
naively: if you wrote a program restricted to only computing the addition of positive integers, and you wanted to add using a negative integer, you would have to write a new program
which going back to your original question:
> I've wondered how Gödel's incompleteness theorems come into play if Mathematicians are using proof assistants to verify their mathematics
they come into play only by being the very things that are disallowed in the spec from coming into play
in coq at least, you are required to work without them because it is written to only play with propositions that can be proved or disproved
and we showed above they are unable to be proved or disproved
There's a relationship between the incompleteness theorems and the halting problem, in that no Turing machine can decide whether an arbitrary statement in a logical system where the incompleteness theorems apply is derivable or not. Equivalently, there's no Turing machine that takes a statement in the logical system as input and outputs either a proof or "unprovable".
However, once you already have a proof, checking it is much easier (at the formal level). Modern mathematical proofs are not generally written in a fashion similar to how the formal system operates, though, so applying this in practice is non-trivial.
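To make the search/check asymmetry concrete, here's a toy "formal system" (entirely made up for illustration: axiom "A" and two string-rewrite rules). Checking a purported derivation is a single linear scan, while finding one is an open-ended search that, without an artificial bound, need not terminate:

```python
from collections import deque

AXIOM = "A"

def successors(s: str) -> list[str]:
    """The two inference rules of the toy system: append 'B', or double the string."""
    return [s + "B", s + s]

def check_proof(proof: list[str]) -> bool:
    """Proof checking is easy: verify each step follows from the previous one."""
    return proof[:1] == [AXIOM] and all(
        nxt in successors(cur) for cur, nxt in zip(proof, proof[1:])
    )

def search_proof(target: str, max_steps: int = 100_000):
    """Breadth-first proof search: halts with a proof if one exists within the
    bound; an unbounded search for an underivable statement would never halt."""
    queue = deque([[AXIOM]])
    steps = 0
    while queue and steps < max_steps:
        steps += 1
        proof = queue.popleft()
        if proof[-1] == target:
            return proof
        if len(proof[-1]) < len(target):  # both rules grow the string, so prune
            queue.extend(proof + [nxt] for nxt in successors(proof[-1]))
    return None
```

Proof assistants exploit exactly this asymmetry: the trusted kernel is a small proof *checker*, while proof *search* is left to tactics and the human.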
This problem actually goes back to Descartes's proofs of God: in order to solve some problems Descartes needed infinite computational power. Guess who has infinite computational power?
Here's one interpretation of Gödel's first incompleteness theorem that may help.
First, let's look at computability. The intuitive idea that some functions can be computed by an algorithm is very old (think Euclid's algorithm). The idea was formalized in the 20th century by the notion of a computable function. Several different definitions were given, but surprisingly, they turned out to be equivalent. Today, most programming languages are Turing complete. As far as pure computation is concerned, it is possible to translate between different programming languages.
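Euclid's algorithm itself is a nice specimen of the intuitive notion that later got formalized; a one-function sketch:

```python
def euclid_gcd(a: int, b: int) -> int:
    """Euclid's algorithm: the gcd is unchanged by replacing (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(euclid_gcd(1071, 462))
```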
The idea that a mathematical statement can be conclusively proven is also very old. Again, different formal definitions emerged: provable in PA, provable in ZFC, provable in ZFC + some large cardinal axiom, etc. These are not equivalent. And this, in a sense, is one way of framing Gödel's first incompleteness theorem. There is no absolute notion of provability: no equivalent of Turing completeness; no precisely definable concept that would deserve the name "provable".
In particular, this means that it is not always possible to translate proofs from one system A to system B. At least unless you are willing to add new axioms to system B.
Proof assistants are frequently used to prove the correctness of computer programs. And since a proof assistant is a computer program, and often quite a complex one that could contain bugs that render everything it does useless, you might want to use your proof assistant to prove that its code does what you hope it does.
Goedel's second incompleteness theorem says you can't do that. Ever. And more generally, it says that you will hit a roadblock as you try to prove what it is that your proof assistant does. This is really annoying.
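Schematically, the second incompleteness theorem says that for any consistent, recursively axiomatized theory T containing enough arithmetic:

```latex
T \nvdash \mathrm{Con}(T)
```

where Con(T) is the arithmetized statement "no proof of a contradiction exists in T". A proof assistant strong enough to formalize its own logic therefore cannot prove that logic consistent without stepping up to a strictly stronger system.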
But you can get pretty damn close. HOL Light has something of a self-consistency proof that is so close to the real thing that it's worth taking seriously.
But incompleteness, by itself, isn't a big deal for theorem provers. Russell himself worked as if the existence of infinities wasn't provable, which is why Goedel had little relevance to Russell. Hilbert on the other hand....