From: Ken Kubota <mail@kenkubota.de>
Dear Makarius Wenzel, dear Lawrence C. Paulson,
dear Members of the Research Community,
Catching up with the email discussion of the past few weeks, I feel the need to
comment on some statements by Makarius Wenzel and by Lawrence C. Paulson, e.g.,
Wenzel's claim:
"The assumption that any of the LCF-style provers is 100% correct is
wrong -- that was never the case in last decades. If your work depends
on that assumption, you need to rethink what you are doing."
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-December/msg00075.html
In other contributions, similar statements are made. For example, in the
German-language presentation at
https://youtu.be/R8pyFwil9KY
from minute 30:18 onwards, Wenzel argues that John Harrison's HOL Light has a
provably safe kernel but is implemented in the unsafe programming
language/environment OCaml, claiming more or less that all HOL implementations
are subject to logical failures in the same way.
Paulson adds the "point of view of many mathematicians, [that] reliance on
computers introduces too much uncertainty", also because of hardware failures
caused by physical influences ("cosmic rays and the like").
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-December/msg00080.html
This is not convincing to me, not only because the human brain is subject to
physical influences too, but also because such effects can be minimized by
repeating the proof verification, which drastically reduces the likelihood of
such errors (with exponential decay in the number of independent systems, I
believe).
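To give a rough idea of the arithmetic (my own back-of-the-envelope figure, not
a measured value): if a single verification run is silently corrupted by a
hardware fault with probability p, then k independent runs are all corrupted
with probability at most p^k, and all corrupted in a mutually consistent way
with far smaller probability still. For p = 10^-6 and k = 3 this is already at
most 10^-18.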
In general, the aim of these arguments seems highly problematic to me, as they
appear to confuse violations of the LCF approach itself with minor technical
questions.
Of course, depending on the choice of the programming language/environment,
there may be ways to bypass the restrictions guarding the LCF logical kernel.
But these are technical implementation details. Not every logician wants to
design and implement a new language for proof verification (though I did so
with the R0 implementation), and therefore many have to rely upon ML, Standard
ML, OCaml, or other software meta-languages, with their disadvantages, such as
offering the means to freely manipulate the kernel via some hack.
But when carrying out a proof, the author clearly knows whether he is using
regular proof commands or hacks in the software meta-language (e.g., ML or
OCaml); moreover, such hacks can easily be identified in the proof script.
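To make concrete what such a meta-language hack looks like, here is a schematic
sketch of my own in OCaml (it is not code from any actual HOL system): the
module abstraction that protects an LCF-style theorem type can be defeated with
unsafe primitives such as Obj.magic.

  (* A toy LCF-style kernel: the abstract type thm can only be built
     through its single inference rule. *)
  module Kernel : sig
    type thm
    val refl : string -> thm        (* the only way to create a theorem *)
    val dest_thm : thm -> string
  end = struct
    type thm = Thm of string
    let refl t = Thm (t ^ " = " ^ t)
    let dest_thm (Thm s) = s
  end

  let legitimate = Kernel.refl "x"  (* the theorem "x = x" *)

  (* The hack: Obj.magic defeats the abstraction barrier and forges a
     "theorem" the kernel never derived.  (A ref cell happens to have the
     same runtime representation as the one-constructor type above.) *)
  let forged : Kernel.thm = Obj.magic (ref "False")

  let () =
    print_endline (Kernel.dest_thm legitimate);   (* prints: x = x *)
    print_endline (Kernel.dest_thm forged)        (* prints: False *)

The point is exactly the one made above: the forgery is plainly visible in the
script (the use of Obj.magic), and it is a property of the implementation
environment, not of the logic.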
These implementation details are a completely different issue from a flaw in
the logical kernel, such as the one recently discovered in Isabelle/HOL.
For the same reason, the reference to Hugo Herbelin's email at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-December/msg00077.html
is inappropriate, since he refers only to weaknesses that are technical
implementation details, as described above, or to weaknesses where improvement
attempts are already in progress (and which obviously have little to do with
the formal language, i.e., the logic, itself).
The inconsistency in Isabelle/HOL found by Kuncar and Popescu in
http://doi.org/10.1007/978-3-319-22102-1_16
http://andreipopescu.uk/pdf/ITP2015.pdf
clearly is a flaw in the logical kernel, and not only a matter of some
technical hack (exploiting some programming language/environment).
Tobias Nipkow expressed it very clearly: "This is not a minor issue with fonts
but concerns and endangers the very core of what we are doing."
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-September/msg00083.html
If you believe that other HOL systems are affected in the same way, please
provide references to papers on inconsistencies in those systems.
To my knowledge, rather the opposite is the case, e.g.,
HOL Light provides some self-verification:
http://www.cl.cam.ac.uk/~jrh13/papers/holhol.pdf
and HOL Zero offers a reward for finding soundness-related flaws:
http://www.proof-technologies.com/holzero/unsoundnesses.html
But relativizing a flaw in the logical kernel itself by placing it in the same
category as a hack permitted by the technical implementation more or less
amounts to giving up the LCF approach as a whole.
Similarly, I disagree with some statements of Lawrence C. Paulson:
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00075.html
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00089.html
I can understand that Paulson is not very happy about such an incident, but
denying that the flaw is an inconsistency is not helpful, since it very
obviously has the two properties of a classical paradox: self-reference
(circularity) and negativity (negation).
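To recall the shape of the counterexample (this is my paraphrase of the
construction in the ITP 2015 paper cited above): a constant c :: bool is
declared, a type T is then defined as the subset {True, c} of bool, and c is
afterwards overloaded with the definition "c is true iff not all elements of T
are equal". If c is True, then T has exactly one element, so c must be False;
if c is False, then T has two elements, so c must be True. The definition of c
refers back to itself through the type T (circularity) and negates what it
finds there (negation), and from this False is derivable.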
Moreover, the source code should of course be subject to scrutiny; otherwise
we cannot speak of a system conforming to the LCF approach.
Basically, I agree with the replies of Popescu and Kuncar:
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00096.html
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00097.html
In the same manner, I somewhat disagree with the following statement by Tobias
Nipkow from 2015, which I recently stumbled across:
"The length of this dicussion puzzles me. Typedef can introduce
inconsistencies, nobody wants this behaviour and nobody needs this behaviour.
There is a perfectly clear way to rule out this whole class of unwanted
behaviours: do not allow certain cycles. This solution, including code, has
been on the table for a year now. By now not just mere users but also
certification agencies are worried. This is a pressing issue and it is not
rocket science."
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-September/msg00115.html
My impression is that, within the Isabelle group, the attitude is merely that
self-reference (circularity) is a problem, but that no further analysis is
carried out: neither a comparison with other systems (by trying to express the
same phenomenon within them), nor a search for the exact reason for the
paradox, which goes beyond mere self-reference.
Self-reference in formal logic shows that some dependency is not properly
resolved.
The proper way to resolve these dependencies, and thus to avoid paradoxes, is
by means of the formal language itself (in the tradition of Russell and
Church).
For this reason, I consider the approach by Kuncar and Popescu a rather
preliminary solution.
As far as I have looked at it, it seems to keep track of the dependencies in
the background.
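To illustrate what such dependency tracking amounts to, here is a conceptual
sketch of my own in OCaml (it is not the code of Kuncar and Popescu, whose
actual check works with a more refined, type-instance-aware dependency
relation): each accepted definition records which constants and types its
right-hand side uses, and a new definition is rejected if it would close a
cycle.

  module S = Set.Make (String)
  module M = Map.Make (String)

  (* dependency graph: each defined name maps to the names its definition uses *)
  type graph = S.t M.t

  let deps g x = try M.find x g with Not_found -> S.empty

  (* is [target] reachable from the dependencies of [start]? *)
  let reaches g start target =
    let rec go visited frontier =
      if S.is_empty frontier then false
      else if S.mem target frontier then true
      else
        let visited = S.union visited frontier in
        let next =
          S.fold (fun x acc -> S.union (deps g x) acc) frontier S.empty
        in
        go visited (S.diff next visited)
    in
    go S.empty (deps g start)

  (* accept the definition [name == ...uses...] only if it closes no cycle *)
  let add_definition g name uses =
    let g' = M.add name (S.of_list uses) g in
    if reaches g' name name then Error "cyclic definition rejected" else Ok g'

  (* e.g., first defining T in terms of c and then c in terms of T is rejected *)
  let () =
    match add_definition M.empty "T" ["c"] with
    | Error msg -> print_endline msg
    | Ok g1 ->
        (match add_definition g1 "c" ["T"] with
         | Ok _ -> print_endline "accepted"
         | Error msg -> print_endline msg)   (* prints: cyclic definition rejected *)

Such bookkeeping lives entirely in the implementation, outside the formal
language itself.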
It may work, but the logician's desire for a natural expression of formal
logic and mathematics would be to use the means of the formal language itself
to keep track of the dependencies (in order to avoid paradoxes). The same
critique applies to the type correctness conditions (TCCs) of PVS, where
correctness is checked only afterwards, instead of the language directly
preventing the expression of incorrect terms (non-well-formed formulae /
non-wffs).
In R0, the same problem was solved using syntactical means only: type
abstraction, i.e., the binding of type variables with lambda.
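Schematically (this is only meant to convey the idea, not R0's concrete
syntax): the type variable becomes an ordinary bound variable of the language,
so the polymorphic identity is written as something like λt. λx:t. x, and every
use must instantiate t explicitly, as in (λt. λx:t. x) applied to a concrete
type. Because the type variable is bound and substituted like any other
variable, the dependencies it carries are part of the term itself rather than
of bookkeeping in the implementation.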
In my opinion, the inconsistency in Isabelle/HOL created by axiomatic type
classes (ad hoc overloading of constants) has three reasons:
Clearly, a strict distinction between the kernel and the rest is missing in the
documentation, not to speak of the distinction between the inner kernel and its
extensions. In the original HOL system, the logic is a separate part (Part III:
The HOL Logic [Gordon and Melham, 1993, pp. 191–232]), and in current HOL4 it
is a separate file
(http://freefr.dl.sourceforge.net/project/hol/hol/kananaskis-10/kananaskis-10-logic.pdf).
Within the logic, the extensions are easily identifiable (e.g., in the online
version as section "2.5 Extensions of theories"). I discussed this earlier, in
October 2016:
"If one takes the LCF approach of a small trusted logical kernel for serious,
the logic has to be described in a separate and clearly identifiable part, as
the kernel should be a verified and therefore trusted, not a blindly trusted
kernel.
Further, the extensions are clearly marked as 'extensions' in the table of
contents, and the use of axioms ('Asserting an axiom', section 2.5.4 Extension
by type definition) is clearly named"
https://sourceforge.net/p/hol/mailman/message/35437586/
As in the documentation, the distinction between the kernel and the rest seems
to be missing in the source code; otherwise the introduction of axiomatic type
classes (ad hoc overloading of constants) could not have happened, as discussed
in the following section.