Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] Type definitions in Isabelle; article "A Consi...


view this post on Zulip Email Gateway (Aug 22 2022 at 13:52):

From: Andrei Popescu <A.Popescu@mdx.ac.uk>
Dear Ken,

My suspicion is that declarations separated from definitions generally might cause inconsistency.

Actually, this is precisely what we prove in our paper: that, under the described checks, delayed ad hoc overloaded definitions of constants (performed via declarations followed by definitions of instances at later times) and type definitions are jointly consistent under HOL deduction.

In order to experiment, I would need some hint as to how declarations of
constants (accompanied by a later definition) work in Isabelle 2016, as asked for at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00093.html
as I couldn't find any information in the manuals about it.

In Isabelle 2016, you have to be more explicit about overloading -- see section 11.3 (ad hoc overloading for constants) in the reference manual: http://isabelle.in.tum.de/doc/isar-ref.pdf

Your noncircular example should be written as follows:

consts c :: 'a

typedef T = "{True}" by blast

overloading
c_bool ≡ "c :: bool"
begin
definition c_bool :: bool where
"c_bool = (if (EX (x::T). ALL y. x = y) then False else True)"
end
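
Roughly speaking, the definition inside the overloading block pins down c at the type bool only; instances of c at other types remain unspecified until they are defined later.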

By contrast, Ondra's circular example fails in Isabelle 2016:

consts cc :: 'a

typedef (overloaded) TT = "{True,cc}" by blast

overloading
cc_bool ≡ "cc :: bool"
begin
definition cc_bool :: bool where
"cc_bool = (if (EX (x::TT). ALL y. x = y) then False else True)"
end

The theory is also attached.

Andrei
Defs.thy

view this post on Zulip Email Gateway (Aug 22 2022 at 13:52):

From: Ken Kubota <mail@kenkubota.de>
From a logical point of view, there is no reason to go below the HOL standard,
according to which (in proof mode) "only consistency-preserving actions
(namely valid proof) can be performed" [Gordon/Melham, 1993, p. 258 f.].

Andrei Popescu found a quite adequate formulation, saying that the (consistent) logic exists
independently of a particular implementation. Hence, if Isabelle/HOL is supposed to be a
manifestation of a consistent logic, it must be consistent, too. "Nonsensical" user data
should be allowed only in the form of hypotheses, which can be refuted by the logic (and,
hence, dealt with in a reasonable way), as shown previously for R0.

If we allowed logical rigor to be weakened by enabling (user) definitions to make the
logical system inconsistent, we could step directly back to Cantor's naive set theory,
and accuse Russell of being the malicious one "who has studied the source code" of set theory,
claiming that Russell's paradox makes use of an "overtly nonsensical" definition.
Of course it is nonsensical (in formal logic and mathematics), but precisely for that reason it
shouldn't be expressible at all in the language of mathematics. This insight resulted,
as is well known, in type theory.

My suspicion is that declarations separated from definitions generally might cause
inconsistency. In order to experiment, I would need some hint as to how declarations of
constants (accompanied by a later definition) work in Isabelle 2016, as asked for at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00093.html
as I couldn't find any information in the manuals about it.


Ken Kubota
doi: 10.4444/100
http://dx.doi.org/10.4444/100

view this post on Zulip Email Gateway (Aug 22 2022 at 13:52):

From: Makarius <makarius@sketis.net>
I just want to point out explicitly that I am no longer paying attention
to this very strange thread.

At the level of abstraction of the discussion, Isabelle is indeed
"inconsistent" and that is not going to change in the foreseeable future.

Makarius

view this post on Zulip Email Gateway (Aug 22 2022 at 13:52):

From: Andrei Popescu <A.Popescu@mdx.ac.uk>
Makarius wrote:

I just want to point out explicitly that I am no longer paying attention
to this very strange thread.
At the level of abstraction of the discussion, Isabelle is indeed
"inconsistent" and that is not going to change in the foreseeable future.

This is indeed a very strange thread -- and your backwards conclusion contributes to its strangeness. At the level of abstraction discussed here, that is, speaking of the logical system implemented in Isabelle/HOL, we are now (in 2016) talking about a logic proved to be consistent, not inconsistent. This level of abstraction assumes that we ignore what you call the back doors, and only focus on the purely logical mechanisms: type and constant definitions and HOL deduction.

Btw, even though they also deal with a very complex implementation, the Coq people do not have a problem speaking at this level of abstraction -- and in particular acknowledging any consistency violations arising from their definitional mechanism (as happened recently). Needless to say, they also don't have a problem reporting when these violations have been addressed. And they do use words like 'inconsistency' and 'fix'.

You show contempt for people's efforts to address such issues using proper and well-established logical concepts and terminology. And since you are speaking with the (deserved) authority of Isabelle's release manager, this is a big strategic mistake for Isabelle.

Andrei


view this post on Zulip Email Gateway (Aug 22 2022 at 14:00):

From: Ken Kubota <mail@kenkubota.de>
Dear Isabelle Developers,

Concerning the type definitions in Isabelle and the article by Ondřej Kunčar and Andrei Popescu, I have some questions.

  1. Which restriction was introduced in Isabelle (2016) in order to avoid the inconsistency of earlier versions revealed by Kunčar/Popescu? See
    http://www21.in.tum.de/~kuncar/kuncar-popescu-jar2016.pdf (p. 3)
    The definition of the constant c contains a free variable y.
    Constant definitions in Gordon's original HOL do not allow free variables, but only "a closed term" [Gordon/Melham, 1993, p. 220; or
    http://sourceforge.net/projects/hol/files/hol/kananaskis-10/kananaskis-10-logic.pdf (p. 33)].

  2. Are all types still constructed via the "typedef" mechanism?
    The following formulation allows two interpretations: a mechanism on top of "typedef",
    or a new mechanism at the same (basic) level as and in addition to "typedef":
    "In the post–Isabelle 2012 development version, we have converted the numeric types int, rat, and real to use Lifting and Transfer. (Previously they were constructed as quotients with typedef, in the style of Paulson [6].)"
    http://www21.in.tum.de/~kuncar/documents/huffman-kuncar-itp2012.pdf (p. 17)

  3. The main question is whether, in Isabelle, all new types are still (besides being tested for non-emptiness)
    introduced with an axiom stating a mapping, as in Gordon's original HOL,
    where for each new type an axiom is asserted saying that the new type is isomorphic to the defined (sub)set, cf.
    http://sourceforge.net/projects/hol/files/hol/kananaskis-10/kananaskis-10-logic.pdf (p. 39).

Let me remark that the section "8.5.2 Defining New Types" in
http://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/Isabelle2016/doc/tutorial.pdf (pp. 173 ff.)
explaining "typedef" is quite hidden.
It would be desirable to have it in a separate logic manual, like the HOL4 logic documentation linked above.

  4. Why should "dependent types" be "to a large extent incompatible with the HOL philosophy of keeping all types non-empty" as claimed at
    http://www21.in.tum.de/~kuncar/kuncar-popescu-jar2016.pdf (p. 4)?
    My R0 implementation is a dependent type theory, and there are no empty types in it.

Kind regards,

Ken Kubota


Ken Kubota
doi: 10.4444/100
http://dx.doi.org/10.4444/100

view this post on Zulip Email Gateway (Aug 22 2022 at 14:00):

From: Andrei Popescu <A.Popescu@mdx.ac.uk>
Dear Ken,

Please see *** below.

  1. Which restriction was introduced in Isabelle (2016) in order to avoid the inconsistency of earlier versions revealed by Kunčar/Popescu? See
    http://www21.in.tum.de/~kuncar/kuncar-popescu-jar2016.pdf (p. 3)

*** As explained in the paper, now the definitional dependency is tracked through types as well.
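
Concretely, in Ondra's circular example from earlier in this thread, the typedef of TT mentions cc::bool (via the set {True, cc}), while the definition of cc::bool quantifies over TT. Writing '~>' informally for 'depends on', the tracked dependencies form a cycle:

cc::bool ~> TT ~> cc::bool

A check that tracked dependencies only through constants would miss this cycle, because it passes through the type TT; tracking through types as well makes it visible, so the second definition is rejected.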

The definition of the constant c contains a free variable y.
Constant definitions in Gordon's original HOL do not allow free variables, but only "a closed term" [Gordon/Melham, 1993, p. 220; or
http://sourceforge.net/projects/hol/files/hol/kananaskis-10/kananaskis-10-logic.pdf (p. 33)].

*** No: y is also bound there, by a forall quantifier.

  2. Are all types still constructed via the "typedef" mechanism?
    The following formulation allows two interpretations: a mechanism on top of "typedef",
    or a new mechanism at the same (basic) level as and in addition to "typedef":
    "In the post-Isabelle 2012 development version, we have converted the numeric types int, rat, and real to use Lifting and Transfer. (Previously they were constructed as quotients with typedef, in the style of Paulson [6].)"
    http://www21.in.tum.de/~kuncar/documents/huffman-kuncar-itp2012.pdf (p. 17)

*** All these types, including int, rat and real, as well as all the (co)datatypes, are ultimately compiled into typedefs.

  3. The main question is whether, in Isabelle, all new types are still (besides being tested for non-emptiness)
    introduced with an axiom stating a mapping, as in Gordon's original HOL,
    where for each new type an axiom is asserted saying that the new type is isomorphic to the defined (sub)set, cf.
    http://sourceforge.net/projects/hol/files/hol/kananaskis-10/kananaskis-10-logic.pdf (p. 39).

*** Yes: One can also declare types (which are assumed non-empty). But all the defined types come with the two functions
representing the back-and-forth isomorphism with a subset of the host type.
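
For illustration, a minimal sketch (the type name three is made up here; Rep_three and Abs_three are the names that typedef generates by default):

typedef three = "{0::nat, 1, 2}" by blast

(* typedef introduces Rep_three :: "three ⇒ nat" and Abs_three :: "nat ⇒ three",
   together with theorems such as
   Rep_three: "Rep_three x ∈ {0::nat, 1, 2}" and
   Rep_three_inverse: "Abs_three (Rep_three x) = x",
   stating that three is isomorphic to the subset {0, 1, 2} of nat. *)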

Let me remark that the section "8.5.2 Defining New Types" in
http://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/Isabelle2016/doc/tutorial.pdf (pp. 173 ff.)
explaining "typedef" is quite hidden.
It would be desirable to have it in a separate logic manual, like the HOL4 logic documentation linked above.

  4. Why should "dependent types" be "to a large extent incompatible with the HOL philosophy of keeping all types non-empty" as claimed at
    http://www21.in.tum.de/~kuncar/kuncar-popescu-jar2016.pdf (p. 4)?
    My R0 implementation is a dependent type theory, and there are no empty types in it.

*** Having all types non-empty is incompatible with the mainstream dependent type approach, where one uses the Curry-Howard isomorphism
to model propositions as types, with their elements representing proofs -- then the existence of empty types simply means that not everything is provable.
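
For instance (a standard illustration, not specific to the paper): under propositions-as-types, the proposition False is modeled as the empty type, and a proof of a proposition P is a closed term of type P. A blanket requirement that every type be inhabited would thus amount to asserting that every proposition, including False, has a proof.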

Best,
Andrei

view this post on Zulip Email Gateway (Aug 22 2022 at 14:00):

From: Lawrence Paulson <lp15@cam.ac.uk>
This question is a perfect example of why I dislike the use of the word "inconsistency" in connection with the work of Kunčar and Popescu. Probably most people associate that word with some sort of incorrect logical reasoning. However, their work is concerned with Isabelle's automatic detection and rejection of circular definitions. Obvious ones such as x == x+1 and i==j, j==i+1 have been rejected for many years, but they found some cases where a very devious user could sneak a circular definition through. While it's right that they have fixed this problem, nobody had actually exploited it, and the fix had no effect on the millions of lines of Isabelle proofs in existence.

And it’s worth stressing again: even consistent definitions can be wrong.

Larry Paulson

view this post on Zulip Email Gateway (Aug 22 2022 at 14:00):

From: Makarius <makarius@sketis.net>
Indeed, this misleading tale told around the work by Kuncar / Popescu
greatly detracts from otherwise fine technical improvements.

I have already pointed out many times that it was mainly a change of the
documented situation in Isabelle/HOL. Before, there was a section in the
manual saying clearly that typedefs are not fully checked. Afterwards,
the officially supported typedefs were (1) extended to typedefs
depending on overloaded consts (which was previously not covered), and
(2) the extended scheme is now checked by the system.

Thus the isar-ref manual entry of HOL typedef could be changed
accordingly: the system has become more user-friendly, since the user is
no longer blamed implicitly for not having read the manual when
something goes wrong; instead the system complains explicitly. In
practice, such a situation was never seen, though, because the
counter-example from the paper by Kuncar / Popescu was constructed
synthetically to illustrate a theoretical point.

Makarius

view this post on Zulip Email Gateway (Aug 22 2022 at 14:00):

From: Ondřej Kunčar <kuncar@in.tum.de>
That's a typo, that type should be tau, the type that was defined on the
previous line. I've already uploaded an updated version of the draft.

Btw, we usually use alpha, beta, gamma for type variables, not sigma.
But this is not mentioned until page 7.

Ondrej

view this post on Zulip Email Gateway (Aug 22 2022 at 14:00):

From: Ondřej Kunčar <kuncar@in.tum.de>
On 08/21/2016 12:21 PM, Lawrence Paulson wrote:

This question is a perfect example why I dislike the use of the word
“inconsistency" in connection with the work of Kunčar and Popescu.

Which word would you use then? I'm open to suggestions.

Probably most people associate that word with some sort of incorrect
logical reasoning. However, their work is concerned with Isabelle's
automatic detection and rejection of circular definitions.

I see. I guess the whole discrepancy boils down to the question of whether
definitions are part of logical reasoning. For me, this is the case,
since you can view definitions as the following inference rule:

T
----------- [side conditions]
T' |- c = t

And if "c = t" allows you to prove False, then I call it an inconsistency.

I don't find distinguishing between theorems stemming from axioms (and
"pure" logical reasoning) and between theorems stemming from definitions
particularly enlightening.

While it’s right that they have
fixed this problem, nobody had actually exploited it, and the fix had
no effect on the millions of lines of Isabelle proofs in existence.

You are right. As far as I know, this was the case for any problems
(either implementation bugs or more substantial problems) that were
found in other provers. But does this mean we should give up on
inspecting and improving our kernels/logics? I hope not.

Best,
Ondrej

view this post on Zulip Email Gateway (Aug 22 2022 at 14:01):

From: Jasmin Blanchette <jasmin.blanchette@inria.fr>

As far as I know, this was the case for any problems (either implementation bugs or more substantial problems) that were found in other provers.

I believe the soundness bug found in the Coq termination checker in December 2013 was hard to fix without breaking libraries -- I'm not sure what the exact resolution was and how much of an impact it had, but I believe it took them over one year to sort this out precisely because of the tension between soundness, expressiveness, and compatibility. Perhaps somebody with more knowledge about this should comment, but regardless it's dangerous to draw the conclusion that all soundness bugs can be fixed with no impact on real, legitimate applications.

Jasmin

view this post on Zulip Email Gateway (Aug 22 2022 at 14:01):

From: Lawrence Paulson <lp15@cam.ac.uk>
A number of bugs in PVS had this effect too. As with your example, even the most careful user could not escape. And that’s precisely my point: soundness bugs often have consequences.

Larry Paulson

view this post on Zulip Email Gateway (Aug 22 2022 at 14:01):

From: Lawrence Paulson <lp15@cam.ac.uk>
You can see definitions that way, but then almost everything can be codified as an inference system.

Of course I am not criticising your work, it's a positive contribution. Everything we can do to catch user errors is constructive. Note also how sledgehammer warns you if it can prove your theorem directly from an inconsistency in your assumptions. And we also have nitpick to catch user errors in theorem statements. But there is a difference between failing to catch a user error and performing incorrect deductions.

Larry

view this post on Zulip Email Gateway (Aug 22 2022 at 14:01):

From: Makarius <makarius@sketis.net>
On 21.08.2016 21:41, Ondřej Kunčar wrote:

I guess the whole discrepancy boils down to the question if
definitions are part of logical reasoning. For me, this is the case
since you can view definitions as the following inference rule:

T
----------- [side conditions]
T' |- c = t

And if "c = t" allows you to prove False, then I call it an
inconsistency.

This is an important observation. In classic Cambridge HOL systems,
adding definitions is an inference, but in Isabelle it is not: it is a
certain extension of the theory with certain properties that might or
might not be checked separately. That distinction of genuine inferences
vs. specifications is important, because explicit theory contexts are
central to Isabelle.

I don't find distinguishing between theorems stemming from axioms (and
"pure" logical reasoning) and between theorems stemming from
definitions particularly enlightening.

Maybe, but this is how Isabelle works. Note that Isabelle follows
conventional logic textbooks in that respect. It was Cambridge HOL that
made specifications an inference, probably for technical reasons:
there is only one big implicit theory in the ML environment that is
augmented by rules of inference.

It is always possible to discuss design decisions of existing systems,
but it needs to be made clear in the text that this is done. Calling
something broken (or "inconsistent"), while actually talking about a
different system is leading to endless confusion.

While it’s right that they have
fixed this problem, nobody had actually exploited it, and the fix had
no effect on the millions of lines of Isabelle proofs in existence.

This is because the lack of checks was known for many years to be
hardly relevant in practice. The addition of that extra check is
theoretically nice, nonetheless.

You are right. As far as I know, this was the case for any problems
(either implementation bugs or more substantial problems) that were
found in other provers. But does this mean we should resign on
inspecting and improving our kernels/logics? I hope not.

No, but here the situation was different: You started to look at the
implementation for other reasons, found things that were surprising, and
called them a "bug". Then I pointed to the documentation, showing that this
"bug" was an official feature. That caused a lot of noise and confusion
that still seems unsettled even today, even though "typedef" has
been officially upgraded from an axiomatic specification to a
definitional one in the implementation and documentation. (Both were
consistent before and after that feature addition.)

Makarius

view this post on Zulip Email Gateway (Aug 22 2022 at 14:01):

From: Ken Kubota <mail@kenkubota.de>
Dear Andrei,

Thank you for the quick answer, and also thanks to Larry Paulson and Makarius Wenzel.

I had read the abbreviated notation with the forall quantifier too quickly, but the third restriction specified at
http://sourceforge.net/projects/hol/files/hol/kananaskis-10/kananaskis-10-logic.pdf (p. 33) or at [Gordon/Melham, 1993, p. 220]
still seems to be violated, as the type variable sigma occurs in the definiens, but not in (the type of) the definiendum.
Your example
c:bool = ~(AL x:sigma AL y:sigma . x = y)
is very similar to the standard example on the next page of the logic manual with type variable alpha not occurring in type of the definiendum "c:bool":
c:bool = EX f: alpha -> alpha . One_One f /\ ~(Onto f)
It seems to me that a remedy simply would have been the restriction of Gordon's HOL (the third restriction).

Larry Paulson's argument at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00075.html
seems to aim at a strict distinction between reasoning (inference) and circular definitions (language).
I doubt that this distinction can be made that strictly, since what restriction (iii) in Gordon's HOL tries to prevent
is prevented in Andrews' logic by the restrictions on Rule R' [cf. Andrews, 2002, p. 214], where an inference from
H > A = B and H > C
to
H > C [A/B]
('H' denotes a set of hypotheses, '>' the turnstile), substituting one occurrence of A in C by B, is subject to a certain restriction
concerning variables, namely the case where a variable x is free in a member of H and free in (A = B).
If one steps from polymorphic to dependent type theory, type variables are also subject to binding (quantification),
such that type variable dependencies are preserved.
For example, from
{} > ~(AL x:sigma AL y:sigma . x = y)
one could infer by the derived Rule of Universal Generalization (5220) [cf. Andrews, 2002, p. 222]
{} > AL sigma:tau ~(AL x:sigma AL y:sigma . x = y)
(tau denotes the type of types) and then
{} > F
because some types are singletons.
In other words, your c:bool would be F (false), because free (type) variables are implicitly universally quantified.
In contrast, if there were a condition upon sigma, for example, being bool: H = { sigma = bool }, then from
{ sigma = bool } > ~(AL x:sigma AL y:sigma . x = y)
one couldn't use the derived Rule of Universal Generalization (5220) to infer
{ sigma = bool } > AL sigma:tau ~(AL x:sigma AL y:sigma . x = y)
since (type) variable sigma occurs free in the set of hypotheses H, which violates the condition of rule 5220,
a condition inherited from Rule R', since 5220 (like all rules in Q0 except Rule R/R') is a derived rule.
Hence, circular definitions are prevented by the inference rules (the reasoning) in Q0/R0 in a very simple manner.

Best wishes,

Ken Kubota


Ken Kubota
doi: 10.4444/100
http://dx.doi.org/10.4444/100

view this post on Zulip Email Gateway (Aug 22 2022 at 14:01):

From: Makarius <makarius@sketis.net>
Yes, such extreme measures would greatly simplify the story from a
theoretical point of view, but we are talking here of implemented logic:
it needs to provide extra provisions to make it usable in practice.

Abbreviations that are always expanded would render the system
impractical due to huge terms in expanded internal form.

Ruling out declarations that are separated from definitions would
greatly simplify things, but would prevent important applications like
type class instantiation.

There is always a tradeoff between theoretical simplicity and the
complexity required for practical applications of logic.
Without that tradeoff, making a usable theorem proving environment like
Isabelle would be a trivial exercise, and not an effort of 30 years.

Makarius

view this post on Zulip Email Gateway (Aug 22 2022 at 14:01):

From: Andrei Popescu <A.Popescu@mdx.ac.uk>
Regardless of whether we consider definitions as part of the inference kernel, the following is self-evident:

A logical system, if it takes the trouble to guarantee anything, is supposed to guarantee that False cannot be inferred via the canonical 'safe' user interaction with the system -- which consists of performing definitions plus logical deduction.

Since nobody does bare deduction without (building on top of) definitions, having the former consistent is pointless without having the latter consistent. So what are we talking about here?

Of course, Makarius, you are right to remind us that Isabelle/HOL is a complex software system and its reliability has many facets. Of course, Larry, you are right to remind us that often user definitions can fail to be faithful to the intended concepts. But please let me remind you (again) that these are both different points.

Andrei

view this post on Zulip Email Gateway (Aug 22 2022 at 14:01):

From: Makarius <makarius@sketis.net>
I am talking about how Isabelle works. You are talking about a different
system. That is very dangerous, because people might pick up wrong ideas
about the system, and then produce real nonsense.

The whole "typedef is axiomatic" versus "typedef is definitional in the
sense of HOL set theory" affair from last year was about adding a
feature in order to amend such social problems and misunderstandings.
There was little practical relevance from a logical standpoint.

In Isabelle we have yet more ways to produce bad theory content and
then derive results that look wrong.

Makarius

view this post on Zulip Email Gateway (Aug 22 2022 at 14:01):

From: Andrei Popescu <A.Popescu@mdx.ac.uk>
I find it very dangerous for the theoretical progress in our area to refuse to accept the existence of an Isabelle/HOL logical system -- namely, a system that can be described mathematically by:

(A) a mechanism for constant and type definitions

(B) a set of rules of deduction

This system exists as a mathematical entity independently of the implementation. (All my contributions to Isabelle/HOL so far, admittedly modest, have been based on this view -- and they have led to additions to the real system.)

As soon as you accept this view (regardless of how you choose to model (A) and (B) in a reasonable way), you will see the following:

You have incorporated the additional check that we suggested in the paper not only for 'adding a feature in order to amend such social problems and misunderstandings', but also for transforming an inconsistent logical system into a consistent one -- or, if you take the view that type definitions used to come with no consistency guarantees, for transforming a logical system that officially allowed inconsistency into one that does not. And I am very happy about this.

A recurrent objection that you bring against the above view is that Isabelle/HOL is built on top of a logical framework, which, according to you, makes the situation so complex that it cannot be depicted by mortals. As I wrote before, this is a matter that is settled separately, by an adequacy proof -- so that we can ignore the logical framework when making theoretical advancements.

Finally, let me note that, as far as I see, Larry's point is a distinct one: He considers that the consistency of all definitions, be they constant or type definitions, is in principle entirely the user's responsibility -- although he admits that it would be nice if the system 'nitpicked' them here and there.

On the way, he reveals something that had never crossed my mind: that what, in our paper, we naively call a consistency theorem is in fact an enhancement of Nitpick. :-) My last email was addressing Larry's point. And note that I was not talking about 'a different system', but about a logical system in general.

Andrei

view this post on Zulip Email Gateway (Aug 22 2022 at 14:01):

From: Ken Kubota <mail@kenkubota.de>
But then, looking at the revised version, the problem still seems to be caused by the language.
The first line ("consts" …) seems to be a declaration, and the third line ("defs" …) a definition.
Declarations like this allow the declared constant to behave practically like a variable, since the constant can be specified (defined) much later.

In Q0/R0 there are no such declarations.
If one would try to express the same in the language of R0, only one possibility would be left.
The definition ("defs" …) could not be placed first, since it requires tau, which would be introduced by the latter type definition ("typedef" …) not available at the beginning.
Hence first type tau has to be defined (being a mere abbreviation) on the basis of a variable "c" as:
tau := [\x:bool . x=T \/ x=c:bool]
Proving tau_T (i.e., that T is an element of tau; '_' denotes lambda application) then makes tau a type (being non-empty).
Finally,
c = ~(AL x,y:tau . x=y)
could be introduced as a hypothesis. Since it leads to a contradiction, it is shown to be false (hence, c != ~(AL x,y:tau . x=y) ), which is the desired result.
In summary, the solution would be not allowing declarations in the mathematical language, but only definitions, like in Q0/R0.

Concerning the question of definitions, they shouldn't be part of the kernel, since non-logical axioms should be avoided as discussed in section 3 at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00069.html
Also, implementing definitions within the logical core contradicts the LCF philosophy of a small trusted kernel.
For this reason, definitions in R0 are mere abbreviations located outside of the logical kernel.
They also can be removed at any time (and re-introduced again) without modifying the theorems already obtained.

Ken Kubota


Ken Kubota
doi: 10.4444/100
http://dx.doi.org/10.4444/100

view this post on Zulip Email Gateway (Aug 22 2022 at 14:02):

From: Lawrence Paulson <lp15@cam.ac.uk>
We are talking about the danger – almost certainly realised – of misleading people.

The various issues involving Isabelle, Coq, PVS and other systems are all complicated in their details. Outside observers are probably not going to invest the time and effort necessary to investigate precisely what is going on in each case. They will regard these "inconsistencies" as roughly the same. So we are drawing a false equivalence between

I. The prover incorrectly handles correct and meaningful definitions. Users, who have done nothing wrong, cannot complete their projects.

II. Somebody who has studied the source code has devised a definition that is not meaningful and the prover fails to detect this. Nobody has ever used or wanted such a definition, because it is overtly nonsensical.

It’s wrong and misleading to give the impression that I and II are the same.

Larry

view this post on Zulip Email Gateway (Aug 22 2022 at 14:02):

From: Ken Kubota <mail@kenkubota.de>
Using the current Isabelle 2016, it is not possible for me to reproduce the problem.

If I understand Andrei Popescu and Makarius Wenzel correctly,
declarations of constants earlier than and separated from their definitions should still be possible,
but circular dependencies are no longer accepted in the 2016 version of Isabelle.
Nevertheless, the error message reports a problem with the definition of the constant,
even if circularity is avoided by removing 'c' from tau ('T').

Could you please provide a working example (for the 2016 version of Isabelle)
by correcting my attempt (shown with the results below)?

Please note that line "consts c:α" ("consts c:alpha") at
http://www21.in.tum.de/~kuncar/kuncar-popescu-jar2016.pdf (p. 3)
still differs from line "consts c :: bool" in
http://www21.in.tum.de/~kuncar/documents/False.tar.gz
available via
http://www21.in.tum.de/~kuncar/documents/patch.html

Ken Kubota

(* using 'a, "(overloaded)", "definition", and c is removed from T *)

consts c :: 'a

typedef (overloaded) T = "{True}" by blast

definition c_bool_def:
"c::bool ≡ if (∀(x::T) y. x = y) then False else True"
(ERROR: Bad head of lhs: existing constant "c")

(* using 'a, "(overloaded)" and "definition" *)

consts c :: 'a

typedef (overloaded) T = "{True, c}" by blast

definition c_bool_def:
"c::bool ≡ if (∀(x::T) y. x = y) then False else True"
(ERROR: Bad head of lhs: existing constant "c")

(* using "(overloaded)" and "definition" *)

consts c :: bool

typedef (overloaded) T = "{True, c}" by blast

definition c_bool_def:
"c::bool ≡ if (∀(x::T) y. x = y) then False else True"
(ERROR: Bad head of lhs: existing constant "c")

(* original version by Ondrej Kuncar, with "defs" *)

consts c :: bool

typedef T = "{True, c}" by blast

defs c_bool_def:
"c::bool ≡ if (∀(x::T) y. x = y) then False else True"
(* several errors in current Isabelle 2016 *)
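
For reference, the variant from Andrei's earlier message in this thread, which Isabelle2016 does accept, uses an overloading block instead of a bare "definition":

consts c :: 'a

typedef T = "{True}" by blast

overloading
c_bool ≡ "c :: bool"
begin
definition c_bool :: bool where
"c_bool = (if (EX (x::T). ALL y. x = y) then False else True)"
end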


Ken Kubota
doi: 10.4444/100
http://dx.doi.org/10.4444/100

view this post on Zulip Email Gateway (Aug 22 2022 at 14:02):

From: Andrei Popescu <A.Popescu@mdx.ac.uk>
Dear Larry,

We are talking about the danger - almost certainly realised - of misleading people.

I don't see how severely (and artificially) lowering the standard of what we want from a logical system can help against the potential confusion. Let me restate the obvious: We want consistency not only of bare deduction (from the HOL axioms), but of deduction from the definitional theories (including constant and type definitions). This is not a complicated statement, and most users of this system and of similar systems will agree on it.

We did not have that in Isabelle/HOL -- and this was hinted at in the reference manual, but not in very clear terms, and few people knew about it. Today we have achieved this goal, and it is documented in quite clear terms in the same reference manual, and people should know about it. What does not help is refusing to accept what we have already accomplished, even as a goal in principle for a logical system.

It should also be stressed, of course, that the modification needed to achieve this goal -- a stricter check on definitions -- did not affect the previous Isabelle developments in any way, which is great, and indeed different from similar problems in other systems.

The various issues involving Isabelle, Coq, PVS and other systems are all complicated in their details. Outside observers are probably not going to invest the time and effort necessary to investigate precisely what is going on in each case. They will regard these "inconsistencies" as roughly the same. So we are drawing a false equivalence between

I. The prover incorrectly handles correct and meaningful definitions. Users, who have done nothing wrong, cannot complete their projects.

II. Somebody who has studied the source code has devised a definition that is not meaningful and the prover fails to detect this. Nobody has ever used or wanted such a definition, because it is overtly nonsensical.

It's wrong and misleading to give the impression that I and II are the same.

There is no a priori notion of a 'meaningful' definition, apart from what the system accepts. So yes, from the point of view of the logical system these situations must be treated in the same way, in particular taken equally seriously. And this while ignoring the unnecessary reference to that malicious 'somebody who has studied the source code' -- who happens to be named Ondrej and happens to have been working on something else when he discovered this. But you have known the story behind this discovery and its ordeals very well from day one.

Andrei

view this post on Zulip Email Gateway (Aug 22 2022 at 14:03):

From: Jeremy Dawson <Jeremy.Dawson@anu.edu.au>
How about III:
a user devises a definition that is meaningful but mistakenly omits a
word (e.g. "primrec") somewhere, and can "complete his project", or so it
seems, by virtue of the fact that his definitions are nonsense, but not
obviously so.

Jeremy

view this post on Zulip Email Gateway (Aug 22 2022 at 14:03):

From: Ken Kubota <mail@kenkubota.de>
Dear Andrei,

Thank you very much for your kind support at
https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00099.html
which I really appreciate.

I was wondering whether it would be possible to create declarations separated
from definitions without (!) overloading in the current Isabelle version
(2016). Such attempts were rejected by the current Isabelle version (see
attached theory CircularDefs). So it seems that overall your approach is
successful in preventing the kind of inconsistency described in your article.

Also, I tried to express your (Ondřej's) definitions in the R0 implementation
and obtained interesting results on the methods preventing such paradoxes. I
shall write more about it in a separate e-mail soon.

Kind regards,

Ken Kubota


Ken Kubota
doi: 10.4444/100
http://dx.doi.org/10.4444/100
CircularDefs.thy

view this post on Zulip Email Gateway (Aug 22 2022 at 17:25):

From: Ken Kubota <mail@kenkubota.de>
Dear Andrei,

Answering a private communication, I referred to your paper on Isabelle/HOL
as well as to Peter Homeier’s HOL-Omega TUTORIAL in a reply at
http://owlofminerva.net/kubota/r0-faq/

Regards,

Ken


Ken Kubota
http://doi.org/10.4444/100

