Suppose integer/1 does not raise such an error. So we have:
?- integer(X).
false.
Here, the system tells us: "There are no solutions whatsoever.", because that is what "false" means: no solutions whatsoever. But that's wrong, because in fact there are solutions, as we can also verify:
?- integer(0).
true.
So, in fact there are solutions! But that's not what the system said initially. This means that we can no longer rely on what the system is saying: in the above example, it tells us that there are no solutions, even though there are solutions.
By monotonicity, I expect that if a query succeeds unconditionally, then a generalization of it certainly does not fail (otherwise, adding a constraint would increase the set of solutions). But that is exactly what happens here, so monotonicity is violated: adding a constraint can make a query succeed whose generalization failed.
?- integer(X).
false.
?- X = 0, integer(X).
X = 0.
If an instantiation error is raised, then that clearly indicates that no decision can be made at this point, whereas "false" means that there are no solutions whatsoever, which in this case violates monotonicity, a core property we expect to hold when reasoning about logic programs.
If monotonicity is broken, then iterative deepening is no longer applicable, because increasing the depth of the search may now invalidate conclusions that were derived earlier. However, one of the foremost attractions of Prolog is the prospect of executing the same program with different evaluation strategies.
And, yes: the above behaviour of integer/1 is in fact prescribed by the ISO standard. Unfortunately, DEC 10 Prolog chose to replace instantiation errors by silent failures, and this has been perpetuated in the Edinburgh tradition of type tests, up to and including the standard.
A way to correct this is therefore to introduce a new family of type tests, the ..._si/1 family of predicates as available in library(si). They are what the standard type tests should have been in the first place to preserve these desirable logical properties.
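Concretely, the contrast looks like this (assuming library(si) as shipped with Scryer Prolog; the exact form of the error term may differ between systems):

?- integer_si(X).
error(instantiation_error,integer_si/1).
?- integer_si(0).
true.
?- integer_si(a).
false.

So integer_si/1 fails only for arguments that are definitely not integers, and raises an instantiation error when no decision can be made yet.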
>> By monotonicity, I expect that if a query succeeds unconditionally, then a
>> generalization certainly does not fail (otherwise, adding a constraint would
>> increase the set of solutions).
Thank you for clarifying. If I understand correctly you are saying that the
success set of a query should increase monotonically with the generality of
the query. If so, I don't think that's an unreasonable demand, but I do think
it is a bit ideological, given that normal _programs_ are non-monotonic anyway
(because negation as finite failure is non-monotonic).
Now, I have to point out that both type-checking and errors are extra-logical
features of Prolog (there are no types in first-order logic and of course
there are no exceptions). In that sense, it's a bit arbitrary how a predicate
"should" be defined. The decision must be made in the context of Prolog, not
logic, and the needs of Prolog as a programming language.
In that context, the problem that I see with raising an error on "integer(X)"
is that integer/1 is performing a type-check in the context of the datatypes
recognised by Prolog. So it makes sense that it should succeed when its
argument is of the "integer" datatype and fail if its argument is any other
datatype. In this context "X" is not an uninstantiated term, but a term of type
"variable", where variable is a Prolog datatype, like "integer".
I think of it as a membership test. If the argument of integer/1 belongs to
the set of integers, then the query should succeed. If not, it should fail. A
variable is not in the set of integers, so the query should fail.
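On this reading, var/1 is simply the membership test for the "variable" datatype, and the standard type tests partition terms among themselves:

?- var(X).
true.
?- X = 0, var(X).
false.
?- integer(X).
false.

A fresh variable is in the set tested by var/1, so it is consistently outside the set tested by integer/1.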
I also suspect that iterative deepening and, indeed, any kind of
nondeterministic program, would be made a lot more complicated if type checks
raise errors left and right (a problem you've highlighted with respect to
Prolog meta-interpreters and instantiation errors raised by clause/2, in another discussion we had here, I think).
Personally I dislike errors: not only are they extra-logical, they force the
programmer to add all sorts of boilerplate to her program to handle them and
make for code that is harder to read and parse at a glance. The automatically
caught exception in Scryer Prolog that you list above is even worse to my
mind: it enshrines the boilerplate in the behaviour of the language (if I
understand correctly what is being done). So now errors are a first-class
citizen of the language. That's awful! I would much prefer a language that
helps the programmer identify and eliminate programming errors _before_ the
program gets to the point where it has to exit abnormally.
There are already many built-ins that behave in exactly the same manner as integer_si/1 does. Take T =.. [F|Args], which produces an instantiation error; likewise functor(T,F,1).
The nice thing when you have (such) errors is that if they do not happen you can be sure that the result is correct (in an appropriate pure subset). Without errors, you never know.
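For instance (the exact error term is implementation-specific):

?- T =.. [F|Args].
error(instantiation_error,(=..)/2).
?- functor(T,F,1).
error(instantiation_error,functor/3).
?- T =.. [foo,X].
T = foo(X).

When the arguments are sufficiently instantiated, both construct the term; when they are not, they raise an error rather than failing silently.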
With univ, it makes sense to raise an error because its first argument becomes the predicate symbol of an atom and you can't have an atom with a free variable as a predicate symbol in a first order language (because it wouldn't be a first-order variable anymore).
What I mean is, if:
A =.. [P,X,Y]
Did not raise an error, it would produce an atom:
A = P(X,Y)
Which is not a first order atom. So that would violate core assumptions of the language and the syntax of first-order logic to boot.
(I use "atom" in the non-Prolog sense, to mean a predicate symbol followed by terms in parentheses).
Edit: that said, Win-LPA Prolog allows P(X,Y), but that is a quirk of the interpreter and not standard at all.
(I do not agree, but let us take a sharper example instead.) Consider atomic(T) on the one hand and T =.. [_] or functor(T,_,0) on the other: they describe exactly the same thing, except that atomic(T), like integer(T), silently fails, whereas T =.. [_] and functor(T,_,0) produce an instantiation error.
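To see the asymmetry directly (the precise error terms vary between systems):

?- atomic(T).
false.
?- T =.. [_].
error(instantiation_error,(=..)/2).
?- functor(T,_,0).
error(instantiation_error,functor/3).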
There are two different notions of type here. One is determined by the abstract syntax; in this context a variable is a type, and that is what you are referring to. With this type, no desirable algebraic properties hold, not even commutativity, since atomic(T), T = a and T = a, atomic(T) differ: the first says no, the second yes. As a consequence, the meaning of a program is fixed by the very precise control of Prolog, with no way to change this in some context. Many consider this a central weakness of Prolog. Say, Peter Norvig: "However, Prolog's depth-first control structure is just too inflexible for me." https://www.quora.com/What-do-Lisp-Haskell-programmers-think...
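The non-commutativity is easy to reproduce:

?- atomic(T), T = a.
false.
?- T = a, atomic(T).
T = a.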
The other meaning of type is the pure, declarative one. Here, variables do not belong to the valid types. In that context, atomic_si/1 and the other predicates fit in. Here, commutativity (modulo errors) holds. And thus alternative search procedures may be used.
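With atomic_si/1, reordering can at most turn an answer into an error, never a success into a failure (again assuming library(si); the error term shown is Scryer Prolog's):

?- atomic_si(T), T = a.
error(instantiation_error,atomic_si/1).
?- T = a, atomic_si(T).
T = a.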
(There is a third meaning similar to the second, but it is only standard related.)
That's correct, but both of the things you bring up are a matter of taste. Regarding the ordering of goals: as far as I know there is no straightforward way to make it not matter that is also efficient to run on a modern computer (but I might be wrong about this because I haven't looked recently). With all due respect to Peter Norvig, who is one of my AI heroes, that bit about "not knowing whether a particular ordering of goals is essential or arbitrary" is just picking at nits.
Regarding pure, declarative types: those again are extra-logical and so, in the context of a first-order logic language, they are arbitrary and conventional. You can say "it's better this way", someone else can say "no, it's better this other way", but at the end of the day there's nothing set in stone. As far as I can tell.
Anyway, I really think all those little complaints about Prolog's lack of absolute, pure perfection are always bordering on nitpicky, and from the wrong side of the border, at that. Peter Norvig says Prolog is not a general-purpose relational solver, but a general-purpose programming language. Well, yes, because it was, from the start, designed to be a general-purpose programming language! That is, a language that can be used for, you know, actual programming in first-order logic. Of course it's not perfect.

That has always created a rift in the community, with some people taking the issue more seriously than others. Some have always been happy that they (we) have a language that's 90% of the way to pure, declarative perfection and that lets you express yourself in code in ways that would be really painful in any other language (just think of DCGs vs. the pain that compiler writing is in anything else). Others are always unhappy because they're missing that idealistic 10% that remains on the road to clean and pure declarative perfection.

Well, we're not in a pure world. We don't have perfect computers. In fact, our computers just suck. We have to accept compromises if we want to get anything done. Prolog is almost perfectly balanced between shitty languages that abuse imperfections in computer architectures and mathematical purity. Personally, I'm grateful to Colmerauer and Kowalski for creating it. I can only imagine the mess I'd make of it if I had to do it myself.