Stream: Mirror: Isabelle Users Mailing List

Topic: [isabelle] Using Types_To_Sets


view this post on Zulip Email Gateway (Nov 16 2022 at 19:08):

From: Dominique Unruh <cl-isabelle-users@lists.cam.ac.uk>
Hello,

[tl;dr: I have worked with Types_To_Sets; below I briefly report my
experience and ask for feedback on whether there are easier mechanisms
for these kinds of tasks.]

I have just used Types_To_Sets to translate a theorem stated for a
Hilbert space structure on a whole type into one stated for a subspace
of that type.

Specifically, I'm translating this (the existence of an orthonormal basis):

[source]
<https://github.com/dominique-unruh/afp/blob/5ceb065bd5f9f4469717a11a73d894b0711b44f0/thys/Complex_Bounded_Operators/Complex_Inner_Product.thy#L1905>

into this:

[source]
<https://github.com/dominique-unruh/afp/blob/5ceb065bd5f9f4469717a11a73d894b0711b44f0/thys/Registers/Tmp_Move.thy#L758>

It turns out that this is a lot of effort. I needed to do the
following things before I could even start translating the theorem:

* I had to make unoverloaded definitions (e.g., closure.with
<https://github.com/dominique-unruh/afp/blob/5ceb065bd5f9f4469717a11a73d894b0711b44f0/thys/Registers/Tmp_Move.thy#L695>)
of many constants (those used in the theorem, and recursively
those in their definitions). This is fortunately quite easy using the
command unoverload_definition. (A sketch of the whole pipeline follows
this list.)

* For each of those constants I had to define a related one "on sets".
(E.g., definition ‹nhds_on A open a = (⨅ (principal ` {S. S ⊆ A ∧
open S ∧ a ∈ S})) ⊓ principal A›
<https://github.com/dominique-unruh/afp/blob/5ceb065bd5f9f4469717a11a73d894b0711b44f0/thys/Registers/Tmp_Move.thy#L475>
as an analogue to the constant nhds.)

* For each of those constants I had to write a transfer rule relating
the one on sets to the original (unoverloaded) one. (E.g.,
nhds_transfer
<https://github.com/dominique-unruh/afp/blob/5ceb065bd5f9f4469717a11a73d894b0711b44f0/thys/Registers/Tmp_Move.thy#L478>.)
Interestingly, in various cases I had to unfold definitions using
the definitional axiom (e.g.,
Topological_Spaces.topological_space.nhds_def_raw
<https://github.com/dominique-unruh/afp/blob/5ceb065bd5f9f4469717a11a73d894b0711b44f0/thys/Registers/Tmp_Move.thy#L484>)
because the unoverloaded definition ends up using "internal" constants
such as topological_space.nhds instead of nhds.

* I needed to relate those constants on sets to existing definitions
on sets. (E.g., ‹nhds_on (topspace T) (openin T) x = nhdsin T x› if
‹x ∈ topspace T›
<https://github.com/dominique-unruh/afp/blob/5ceb065bd5f9f4469717a11a73d894b0711b44f0/thys/Registers/Tmp_Move.thy#L493>
to relate the nonstandard nhds_on to the existing nhdsin on the
type topology. I cannot directly write transfer rules from the
unoverloaded constants to those existing definitions on sets because
the transfer mechanism leads to definitions that behave differently
from the built-in ones in "bad" situations, e.g., when a function maps
to values outside its supposed range.) In some cases, the
definitions do not match up nicely, and one needs to do some
non-trivial proofs to relate them (i.e., actual mathematics, not just
translation of terms; e.g., here
<https://github.com/dominique-unruh/afp/blob/5ceb065bd5f9f4469717a11a73d894b0711b44f0/thys/Registers/Tmp_Move.thy#L202>).

* Additionally, I had to (again recursively) write transfer theorems
and "definitions on sets" for the type class chilbert_space
<https://github.com/dominique-unruh/afp/blob/5ceb065bd5f9f4469717a11a73d894b0711b44f0/thys/Registers/Tmp_Move.thy#L617>
and all its ancestors.

* And I had to relate those definitions on sets to existing ones again
(e.g., ‹chilbert_space_on V (*⇩R) (*⇩C) (+) 0 (-) uminus dist norm
sgn (uniformity_on V) (openin (top_of_set V)) (∙⇩C)› if ‹complete V›
<https://github.com/dominique-unruh/afp/blob/5ceb065bd5f9f4469717a11a73d894b0711b44f0/thys/Registers/Tmp_Move.thy#L637>).
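To make these steps concrete, the ingredients have roughly the
following shapes (schematic only: the relators and side conditions are
indicative, names follow the linked theory, and the proofs are omitted
with sorry):

    (* Step 1: unoverloading; HOL-Types_To_Sets generates closure.with.
       The exact defining theorem to pass may need adjustment. *)
    unoverload_definition closure_def

    (* Step 2: a set-based analogue of nhds, as quoted above
       (the "open" parameter is renamed opn here to avoid shadowing) *)
    definition nhds_on :: ‹'a set ⇒ ('a set ⇒ bool) ⇒ 'a ⇒ 'a filter› where
      ‹nhds_on A opn a = (⨅ (principal ` {S. S ⊆ A ∧ opn S ∧ a ∈ S})) ⊓ principal A›

    (* Step 3: a transfer rule relating the set-based constant to the
       unoverloaded one; the carrier on the set-based side is the domain
       of T, and nhds.with is the name assumed by analogy with closure.with *)
    lemma nhds_transfer [transfer_rule]:
      includes lifting_syntax
      assumes [transfer_rule]: ‹right_total T› ‹bi_unique T›
      shows ‹((rel_set T ===> (=)) ===> T ===> rel_filter T)
               (nhds_on (Collect (Domainp T))) nhds.with›
      sorry

    (* Step 4: relate the ad-hoc set-based constant to an existing one *)
    lemma nhds_on_nhdsin:
      assumes ‹x ∈ topspace T›
      shows ‹nhds_on (topspace T) (openin T) x = nhdsin T x›
      sorry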

All of this is a lot of effort. In fact, the Isabelle code just for
translating the theorem is over 800 lines (available here
<https://raw.githubusercontent.com/dominique-unruh/afp/5ceb065bd5f9f4469717a11a73d894b0711b44f0/thys/Registers/Tmp_Move.thy>,
needs Isabelle2022 with AFP). Of these, about 750 are just setup.

So I am wondering: Are there better ways to do this? Are there
automations for this that I am not aware of? Other shortcuts? Etc.?

(And if not, then I leave this out here as an instructive example of how
to do a nontrivial transfer with Types_To_Sets.)

Best wishes,
Dominique.

view this post on Zulip Email Gateway (Nov 16 2022 at 19:26):

From: Andrei Popescu <andrei.h.popescu@gmail.com>
Hi Dominique,

Mihails Milehins has a tool in the AFP that automates much of the process:
https://www.isa-afp.org/entries/Types_To_Sets_Extension.html
He also describes this in his CPP paper:
https://gitlab.com/user9716869/etts_preprint/-/raw/master/ETTS_Preprint.pdf

Best wishes,
Andrei

view this post on Zulip Email Gateway (Nov 16 2022 at 19:42):

From: Andrei Popescu <andrei.h.popescu@gmail.com>
And just to advertise some further developments on the topic: In
recent work (with Dmitriy Traytel)
https://www.andreipopescu.uk/pdf/types2pers_POPL2023.pdf
we perform truly pervasive types-to-sets relativization without
structural restrictions, in the more general form of "types to PERs".
The tool linked from the paper is currently in a very prototypical
stage, and adds axioms -- even though we show meta-theoretically that
the axioms can be replaced by proved statements.

Best wishes,
Andrei

view this post on Zulip Email Gateway (Nov 21 2022 at 16:34):

From: Dominique Unruh <cl-isabelle-users@lists.cam.ac.uk>
Hi,

thanks for the pointer. I tried out ETTS (i.e., tried porting my code to
use it instead). Here are my experiences (and further thoughts). Note
that these are experiences from a smaller project (1000 loc), so they
may or may not be generally accurate.

* ETTS (and the connected project CTR) provide automation for defining
unoverloaded and set-based constants and for transferring type-based
theorems to set-based theorems. If everything works smoothly, that
means that one can transfer both constants and theorems with minimal
boilerplate.

* I feel that in practice, it is rare that everything works smoothly.
In many cases, one simply gets an answer such as "couldn't find a
transfer rule". This may be due either to the need to first generate
transfer rules for other dependent constants, or simply to the terms
in question not being well suited to transferring. (E.g., they contain
constants that do not admit strong parametricity or transfer rules.)

* These problems also occur when directly writing transfer theorems,
but there they are easier to debug. (E.g., by replacing apply
transfer_prover with apply transfer_prover_start apply
transfer_step+.) So I often found myself proving the required
transfer theorems by hand (and first defining the unoverloaded
constants by hand), and then replacing the result with a CTR call.

* An additional difficulty is that to understand how to use (and
tweak) CTR/ETTS, one needs to understand the TTS and transfer
mechanisms quite well. So ETTS does not help users who could not use
TTS manually anyway. (ETTS may lead to shorter code, but the effort
to produce this code does not seem smaller to me.)

* I did not get the tts_lemmas command itself to work. (But I did not
try for long after getting an error, because the number of theorems
that I wanted to transfer is quite low, so I stuck to the manual
approach there.)

* In the end, my code length actually increased, to my surprise.
(Point to note: I transferred classes involving filters, such as
metric_space etc. Filters are quite fiddly when trying to do
transfers because rel_filter is a complicated beast.)
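For the record, rel_filter can be characterized as follows (this is the
form from the lifting/transfer literature; the definition in the current
Isabelle/HOL sources may be stated differently):

    rel_filter R F G ⟷
      (∃Z. eventually (λ(x, y). R x y) Z ∧
           filtermap fst Z = F ∧ filtermap snd Z = G)

which gives an idea of why filter transfers are fiddly: every proof has
to produce a witness filter Z on the product type.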

Altogether, I would say that there is still a lot of research potential
in making TTS more user-friendly. (But ETTS is already an important and
impressive step. It is just that TTS is by its nature complicated.)

In the end, I came to follow this approach in my own project
(maybe these guidelines are of use to others):

* For each involved class X, do the following (a consolidated sketch
of these steps appears after this list):
   o Define a locale class_X_ow with one additional parameter U
     (the carrier set) but otherwise analogous to the definition of
     the predicate class.X. (E.g., metric_space_ow
     <https://github.com/dominique-unruh/afp/blob/ba7d9eb8dbe6e877c195c8077b65c6fae9d792cc/thys/Registers/Tmp_Move.thy#L638>.)
     Try to write the definitions already in such a way that
     parametricity proofs are easy. In most cases, this is done by
     simply replacing unbounded "∀x" by "∀x∈U", but in some cases a
     little more care is needed.
   o Prove a parametricity result lemma class_X_ow_parametricity:
     assumes "bi_unique T" shows "(relator involving T)
     class_X_ow class_X_ow". (Here I deviate from the approach used
     by ETTS. There one would prove a transfer rule relating the
     unoverloaded definition with the set-based definition, under the
     assumptions bi_unique T and right_total T. My approach avoids the
     definition of the intermediate unoverloaded but type-based
     constant.) Sometimes the ctr parametricity command manages this.
     (E.g., <ctr parametricity in metric_space_ow_def[unfolded
     metric_space_ow_axioms_def]> or here
     <https://github.com/dominique-unruh/afp/blob/ba7d9eb8dbe6e877c195c8077b65c6fae9d792cc/thys/Registers/Tmp_Move.thy#L517>.)
   o Prove an unoverloading result of the form lemma class_ud:
     "class.X = class_X_ow UNIV". In most cases, this works quite
     easily by unfolding the definitions and using simp. E.g., here
     <https://github.com/dominique-unruh/afp/blob/ba7d9eb8dbe6e877c195c8077b65c6fae9d792cc/thys/Registers/Tmp_Move.thy#L645>.
   o Note that this approach needs fewer definitions than the ETTS
     approach because we do not first define an unoverloaded version
     and then a transferred version. We still get everything that we
     have in ETTS: class_X_ow UNIV is the same as the unoverloaded
     (but not yet set-based) constant.
   o Prove various helper lemmas for standard cases, such as
     class_metric_space_ow_typeclass
     <https://github.com/dominique-unruh/afp/blob/ba7d9eb8dbe6e877c195c8077b65c6fae9d792cc/thys/Registers/Tmp_Move.thy#L648>,
     which serves as an introduction rule for the _ow predicate when
     we do use the type class metric_space on the set-based side.

* For each involved constant X, do the following:
   o Define X_ow, the unoverloaded set-based constant (e.g.,
     nhds_ow
     <https://github.com/dominique-unruh/afp/blob/ba7d9eb8dbe6e877c195c8077b65c6fae9d792cc/thys/Registers/Tmp_Move.thy#L656>).
   o Prove parametricity (e.g., <ctr parametricity in
     nhds_ow_def[folded transfer_bounded_filter_Inf_def, unfolded
     make_parametricity_proof_friendly]>) and an unoverloading result
     (e.g., here
     <https://github.com/dominique-unruh/afp/blob/ba7d9eb8dbe6e877c195c8077b65c6fae9d792cc/thys/Registers/Tmp_Move.thy#L664>)
     like with the class. And auxiliary lemmas (e.g., here
     <https://github.com/dominique-unruh/afp/blob/ba7d9eb8dbe6e877c195c8077b65c6fae9d792cc/thys/Registers/Tmp_Move.thy#L667>,
     one relating nhds_ow to the existing nhdsin based on the
     type 'a topology).

* Finally, all of this can be used for transferring theorems; I give
one example here
<https://github.com/dominique-unruh/afp/blob/ba7d9eb8dbe6e877c195c8077b65c6fae9d792cc/thys/Registers/Tmp_Move.thy#L959>.
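Here is the promised consolidated sketch of the recipe, using a
deliberately small class (semigroup_add rather than metric_space, to
keep the statements short) and nhds as the example constant. The
relators are indicative only, and the proofs are omitted with sorry:

    (* Class: a locale with an explicit carrier U, analogous to
       class.semigroup_add, plus closedness of the operation (which has
       no counterpart in the type class but is needed set-based) *)
    locale semigroup_add_ow =
      fixes U :: ‹'a set› and pls :: ‹'a ⇒ 'a ⇒ 'a›
      assumes pls_closed: ‹⟦a ∈ U; b ∈ U⟧ ⟹ pls a b ∈ U›
        and pls_assoc:
          ‹⟦a ∈ U; b ∈ U; c ∈ U⟧ ⟹ pls (pls a b) c = pls a (pls b c)›

    (* Parametricity: the _ow predicate is related to itself,
       assuming only bi_unique T *)
    lemma semigroup_add_ow_parametricity:
      includes lifting_syntax
      assumes ‹bi_unique T›
      shows ‹(rel_set T ===> (T ===> T ===> T) ===> (=))
               semigroup_add_ow semigroup_add_ow›
      sorry

    (* Unoverloading: on the full carrier, the locale coincides with
       the class predicate (closedness is trivial on UNIV) *)
    lemma semigroup_add_ud: ‹class.semigroup_add = semigroup_add_ow UNIV›
      sorry

    (* Constant: define the set-based version directly, then prove
       parametricity and an unoverloading equation for it as well *)
    definition nhds_ow :: ‹'a set ⇒ ('a set ⇒ bool) ⇒ 'a ⇒ 'a filter› where
      ‹nhds_ow U opn a = (⨅ (principal ` {S. S ⊆ U ∧ opn S ∧ a ∈ S})) ⊓ principal U›

    lemma nhds_ud: ‹nhds_ow UNIV open = nhds›
      sorry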

Note the difference in approach: we do not define two unoverloaded
constants but one. As a consequence, only one transfer theorem is used
per constant. Also, in my opinion, things become more regular this way:
a type-based constant can often be defined in numerous equivalent ways,
but once unoverloaded, the definitions become non-equal. And the ones
that arise directly from unoverloading may then not be suitable for a
transfer theorem (because the transfer theorem cannot assume that the
laws of the original type class are satisfied). (E.g., the sum over
an abelian monoid can choose any order in which to add, or even require
by definition that all ways of adding give the same result. But when
I unoverload sum, I get something that involves THE or SOME, which is
very unsuitable for transferring.) My approach imposes some discipline
that avoids falling into this trap. Whether it is preferable for
transferring large libraries I cannot tell, but I thought I would put
it out there for inspiration.
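To spell out the "sum" example: in the main library, sum ultimately
reduces to Finite_Set.fold, which is defined (roughly; details vary
across Isabelle versions) along these lines:

    definition fold :: ‹('a ⇒ 'b ⇒ 'b) ⇒ 'b ⇒ 'a set ⇒ 'b› where
      ‹fold f z A = (if finite A then (THE y. fold_graph f z A y) else z)›

The THE is harmless on the type-based side, where fold_graph is shown
to be deterministic for the function classes actually used, but THE has
no useful parametricity rule, so the constant obtained by unoverloading
sum inherits the problem.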

Best wishes,
Dominique.

view this post on Zulip Email Gateway (Nov 25 2022 at 15:44):

From: Mihails Milehins <mihailsmilehins@gmail.com>
Dear Dominique Unruh,

Thank you for your feedback about the ETTS. I would like to make several
remarks with regard to your feedback:

  1. The UD/CTR provide auxiliary/complementary commands. Neither one is a
    prerequisite for using the ETTS. Neither one of them is universally
    applicable. They merely implement the algorithms that are described in the
    user manuals, hopefully, faithfully. Indeed, your assumption is correct,
    neither creates the constants/transfer rules recursively. Thus, for
    example, for the operation of the CTR, the constants that occur as subterms
    in the definitions need to be relativized explicitly. Also, the CTR suffers
    from the same fundamental limitations that the algorithm used in the
    transfer_prover suffers from, as the CTR merely provides an interface to
    it. However, most of this is already stated in the manuals/paper or, at
    least, can be easily inferred from the descriptions of the algorithms.

  2. The ETTS was built around an implementation of a variant of the
    types-to-sets algorithm that is described in the manual/paper. The
    implementation is available via the command tts_lemmas. This algorithm is
    largely based on the algorithm from [1], with some additional "features"
    based on [2]. In a sense, the ETTS is meant to be applied in a manner
    similar to how one would use traditional classical reasoners/simp-based
    methods (like auto/force) with additional settings (trial and error) and,
    in my view, provides a similar level of automation over proving each step
    manually using the rule-based tactics. Indeed, there are times when I feel
    like it is easier to use rule-based tactics explicitly, than making
    auto/force work implicitly. A good rhetorical question to ask here would
    be: are auto/force tools inferior in comparison to rule application or am
    I a "bad" user of auto/force? Perhaps, the answer is neither. With
    experience we all learn when methods like auto/force are more suitable, and
    when the preference should be given to direct rule application. However,
    like with the types-to-sets-related-technologies, there is much scope for
    the improvement of traditional proof methods (the only difference being
    that types-to-sets is still quite novel and specialized, whereas the
    traditional proof methods and their implementations have been refined over
    decades by a large number of users/developers).

  3. The most common problem for the operation of the ETTS is the
    unavailability of suitably stated transfer rules. It has an in-built "proof
    debugger" that can be invoked by adding "!" to the command tts_lemmas. This
    shows the steps invoked by the algorithm, and it is meant to be used
    extensively during the "setup" phase of the ETTS, while looking for
    suitable transfer rules for the constants associated with a given theory.
    The full unconstrained automation of the synthesis of arbitrary transfer
    rules was never claimed anywhere in the paper/manual, merely the automation
    of the application of a variant of a types-to-sets algorithm, if the
    prerequisites for its application are satisfied.

  4. I have to admit, subjectively your feedback feels quite similar to the
    feedback that new (hypothetical) users of Isabelle often give to the method
    "auto" on Stack Overflow when trying it for the first time: "it just gives
    me an error (failed to finish proof) and it does not prove the theorems
    that I want!". I was expecting a more in-depth feedback from experienced
    users of Isabelle before applying a certain amount of smear to someone
    else’s work :-). However, of course, words are cheap. As you can see from
    the ETTS AFP entry, I performed the relativization of significant parts of
    the main library using the ETTS, including essentially all of the
    relativization work performed by Fabian Immler and Bohua Zhan in [2].
    Furthermore, thus far, I only reported on a fraction of all of the theorems
    that I relativized while trying the tool. For example, I performed the
    relativization of almost all of the theorems about topological spaces,
    including the ones that required filters (the latter, indeed, was not
    trivial and required some additional ad-hoc manual proof effort related to
    the transfer infrastructure). Unfortunately, these efforts are now lost,
    but I can recreate them within a reasonable time frame.

  5. I would be happy to take on the challenge of performing the
    relativization of the theorems that you would like to relativize using the
    ETTS in my spare time. Feel free to point me to the specific theories for
    which you would like the relativization to be performed using the ETTS.
    However, presently, I have little time to dedicate to such matters (the
    time around Christmas would probably be the best time for me). Therefore, I
    cannot guarantee a very quick response time.

Yours Sincerely,
Mihails Milehins
(he/him/his)

  1. Kunčar O, Popescu A. From Types to Sets by Local Type Definition in
    Higher-Order Logic. Journal of Automated Reasoning. 2019;62(2):237–60.

  2. Immler F, Zhan B. Smooth Manifolds and Types to Sets for Linear Algebra
    in Isabelle/HOL. In: Mahboubi A, Myreen MO, editors. Proceedings of the 8th
    ACM SIGPLAN International Conference on Certified Programs and Proofs, CPP
    2019, Cascais, Portugal. New York, NY, USA: ACM; 2019. p. 65–77. (CPP
    2019).

view this post on Zulip Email Gateway (Dec 15 2022 at 08:04):

From: Dominique Unruh <cl-isabelle-users@lists.cam.ac.uk>
Hi,

sorry for the late answer...

view this post on Zulip Email Gateway (Dec 25 2022 at 14:02):

From: Mihails Milehins <mihailsmilehins@gmail.com>
Dear Dominique Unruh/All,

Thank you for your email.

Since I already lifted the theorems using my own approach at the time of
writing, I do not necessarily need a relativization using ETTS for my
use-case. But if you think it would be interesting as a case study, I would
at least be curious to see how easy or hard it is to do with ETTS.

...
But as I said – only if you find this interesting yourself. I will
probably not include it in my theories because I already have a solution.

If I understood correctly, there were three theorems for which the
relativization needed to be performed:

1. on_closure_eqI
2. orthonormal_basis_exists
3. has_sum_comm_additive_general

I felt that it could be beneficial for me to demonstrate that the ETTS can
be used for the relativization of the aforementioned theorems, because your
previous comments did cast doubt on the applicability of the ETTS to these
theorems. However, given that the results are not needed for any practical
purpose and (as I explain below) I have little personal interest in the
ETTS, I provide only the relativization of the theorems on_closure_eqI and
has_sum_comm_additive_general (
https://gitlab.com/user9716869/etts_sml_extra/ - this is meant to be used
with Isabelle2022 and the associated release version of the AFP). This was
achieved without any alterations of the ETTS or the methodology for its
application proposed in the SML examples suite. I hope that it is apparent
that orthonormal_basis_exists can also be lifted using the ETTS in a
similar manner, with some further effort on the side of the relativization
of constants.

I will use this opportunity to make several general remarks:

1. As I mentioned before, I believe that it can be easy to conflate
several interrelated topics when assessing the UD/CTR/ETTS. The UD/CTR/ETTS
provide an implementation/automation/infrastructure for the application of
several algorithms that were suggested in the published literature (e.g.,
[2] and [3]). The relativization algorithm implemented as part of the ETTS
requires every constant that occurs in the input theorem to be relativized
(in a predefined format) prior to its application. AFAIK, no algorithm for
fully automated relativization of an arbitrary constant in Isabelle/HOL was
proposed before/while I was working on the ETTS. While the frameworks
UD/CTR/parametricity provide a partial solution for the relativization of
constants, it is expected that some manual effort will also be (inevitably)
required. Essentially all of the manual proofs in
https://gitlab.com/user9716869/etts_sml_extra/ are somehow related to
the relativization of constants, not the application of the relativization
algorithm. An entirely different topic is the presentation of the
relativized results in a specific format desired by the user. Here, the
ETTS provides a range of utilities up to the application of certain
attributes for post-processing of the results of the relativization at a
large scale. Nonetheless, it is difficult to disagree with your original
criticism that there is some scope for the improvement of the ETTS/UD/CTR:
a common saying in commercial software development is "software can only be
released, never finished". My own backlog for the ETTS was growing
continuously throughout its development, and its release was based on the
time I was happy to dedicate to its development, rather than any other
factor.

2. https://gitlab.com/user9716869/etts_sml_extra/ provides many
additional results, not directly related to the theorems in
https://github.com/dominique-unruh/afp/blob/306ef85bea71cea76b5794c76ef260ab1ffa46ff/thys/Hilbert_Space_Tensor_Product/Misc_Tensor_Product_TTS.thy#L944.
These results are not needed for the relativization of the theorems in
question. However, their creation required almost no extra effort, and I
thought that it does not hurt to provide them given the opportunity.

3. Overall, I largely abandoned my work on the ETTS in 2019 due to the
lack of personal interest in this subject. Nonetheless, I felt a
necessity to refine this work to the extent that it could be considered
close to production level and make an attempt to publish it because much of
the community's time was invested in it (in fact, I allowed this
development to be largely driven by the desires of the community, providing
merely an implementation of the ideas suggested to me) and because I
was encouraged to do so by several members of the community. Therefore, I
was very pleased to find out that this line of work is being taken in a
different direction [1], as I felt that it would be a chore for me to
respond to feedback about the ETTS or improve the ETTS in any way (I became
aware of the other line of work some time before the publication of my own
work at the CPP). For these two reasons, I also minimized the effort on
investigating the current issue, and only provided a sketch of a solution
using the ETTS, not a refined production level code immediately suitable
for your specific application.

4. Finally, from a personal perspective, I do not recommend the use of
the original framework types-to-sets (not only the ETTS) for the
formalization of mathematics at large. The reasons for this are largely
subjective, but go beyond the use of the additional axioms. Although, as
a courtesy to those who invested time in helping me, I tried not to
publicly criticize my own work on types-to-sets and its predecessors
prior to the publication of the ETTS, I became a staunch opponent of the
library design patterns that rely on the conversion of type-based results
to any form of set-based results (or vice versa, or any other form of duplication)
towards the end of 2019. I believe that maintaining entire "duplicate
libraries" of constants and results is exceptionally inefficient: the
priority in the area of library design should be the elimination of
duplication, not finding ways to generate more "semantic duplicates"
automatically (of course, this is not an attempt to criticize the transfer
tools at large, merely their use for the production of
"semantic duplicates" in the context of the design of libraries of
formalized mathematics).

Lastly, Merry Christmas and a Happy New Year!

Yours Sincerely,
Mihails Milehins
(he/him/his)

  1. Popescu A, Traytel D. Admissible Types-To-PERs Relativization in
    Higher-Order Logic. POPL 2023 (to appear).

  2. Kunčar O, Popescu A. From Types to Sets by Local Type Definition in
    Higher-Order Logic. Journal of Automated Reasoning. 2019;62(2):237–60.

  3. Immler F, Zhan B. Smooth Manifolds and Types to Sets for Linear Algebra
    in Isabelle/HOL. In: Mahboubi A, Myreen MO, editors. Proceedings of the 8th
    ACM SIGPLAN International Conference on Certified Programs and Proofs, CPP
    2019, Cascais, Portugal. New York, NY, USA: ACM; 2019. p. 65–77. (CPP
    2019).

On Thu, Dec 15, 2022 at 10:03 AM Dominique Unruh <unruh@ut.ee> wrote:

Hi,

sorry for the late answer...

  1. I would be happy to take on the challenge of performing the
    relativization of the theorems that you would like to relativize using the
    ETTS in my spare time. Feel free to point me to the specific theories for
    which you would like the relativization to be performed using the ETTS.
    However, presently, I have little time to dedicate to such matters (the
    time around Christmas would probably be the best time for me). Therefore, I
    cannot guarantee a very quick response time.

Since I already lifted the theorems using my own approach at the time of
writing, I do not necessarily need a relativization using ETTS for my
use-case. But if you think it would be interesting as a case study, I would
at least be curious to see how easy or hard it is to do with ETTS.

You can find the relativizations that I was proving here:
https://github.com/dominique-unruh/afp/blob/306ef85bea71cea76b5794c76ef260ab1ffa46ff/thys/Hilbert_Space_Tensor_Product/Misc_Tensor_Product_TTS.thy#L944
starting from line 944.

But as I said – only if you find this interesting yourself. I will
probably not include it in my theories because I already have a solution.

Best wishes,
Dominique.

On 2022-11-22 5:19, Mihails Milehins wrote:

Dear Dominique Unruh,

Thank you for your feedback about the ETTS. I would like to make several
remarks with regard to your feedback:

  1. The UD/CTR provide auxiliary/complementary commands. Neither one is a
    prerequisite for using the ETTS. Neither one of them is universally
    applicable. They merely implement the algorithms that are described in the
    user manuals, hopefully, faithfully. Indeed, your assumption is correct,
    neither creates the constants/transfer rules recursively. Thus, for
    example, for the operation of the CTR, the constants that occur as subterms
    in the definitions need to be relativized explicitly. Also, the CTR suffers
    from the same fundamental limitations that the algorithm used in the
    transfer_prover suffers from, as the CTR merely provides an interface to
    it. However, most of this is already stated in the manuals/paper or, at
    least, can be easily inferred from the descriptions of the algorithms.

  2. The ETTS was built around an implementation of a variant of the
    types-to-sets algorithm that is described in the manual/paper. The
    implementation is available via the command tts_lemmas. This algorithm is
    largely based on the algorithm from [1], with some additional "features"
    based on [2]. In a sense, the ETTS is meant to be applied in a manner
    similar to how one would use traditional classical reasoners/simp-based
    methods (like auto/force) with additional settings (trial and error) and,
    in my view, provides a similar level of automati
    [message truncated]

view this post on Zulip Email Gateway (Dec 26 2022 at 23:16):

From: Dominique Unruh <cl-isabelle-users@lists.cam.ac.uk>
Dear Mihails Milehins,

thanks for the example theories. It is definitely interesting to see how
different approaches compare on this problem.

I believe that maintaining entire "duplicate libraries" of constants
and results is exceptionally inefficient: the priority in the area of
library design should be the elimination of duplication, not finding
ways to generate more "semantic duplicates" automatically (of course,
this is not an attempt to criticize the transfer tools at large, merely
their use for the production of "semantic duplicates" in the context of
the design of libraries of formalized mathematics).

I do agree that having libraries of duplicates is probably undesirable,
and that the TTS approach (at least at its current level of maturity)
leads to a lot of boilerplate and duplication. However, I fail to see an
alternative to the occasional use of it. For example, the three theorems
I needed to convert were not converted simply out of a wish to
generalize the library. Instead, I needed those theorems, in their
set-based form, in a subproof somewhere in further developments (a
certain already proven lemma needed to be applied to a subset of UNIV),
and I do not see any way to avoid the TTS conversion in such a use-case.
(The alternative, redoing the proof of the already proven lemma, would
seem much worse, especially since that lemma uses a lot of other lemmas
along the way, and I would have to reprove large parts of existing
developments for that.)

Therefore, my approach is to develop libraries in a type-based
presentation, while keeping the option open to do a TTS translation in
the rare cases where this is needed. But I am open to thoughts about
alternatives to this approach. (From the Isabelle community at large,
not just from Mihails.)

Best wishes,
Dominique.

view this post on Zulip Email Gateway (Dec 26 2022 at 23:57):

From: Andrei Popescu <andrei.h.popescu@gmail.com>
Hi Dominique and Mihails,

Therefore my approach is to develop libraries in a type-based presentation, but keeping the option open to do a TTS translation in the rare cases where this is needed. But I am open to thoughts about alternatives to this approach. (From the Isabelle community at large, not just from Mihails.)

In the case of libraries that have already been developed type-based,
I also do not see an alternative other than the two that were
mentioned: (1) restating the results and redoing the proofs set-based
or (2) TTS. And yes, this is exactly how we originally envisioned TTS:
translate only when needed.

Moreover, there is also a case to be made for TTS for libraries that
are yet to be developed. While a set-based development would avoid the
duplication mentioned by Mihails, a type-based development is of
course easier -- which can make a big difference in productivity. Not
to mention that for large enough libraries, the duplication introduced
by (E)TTS might be dwarfed by the reduction in the overall size of the
lemmas and proofs.

Best wishes,
Andrei

view this post on Zulip Email Gateway (Dec 27 2022 at 19:13):

From: Mihails Milehins <mihailsmilehins@gmail.com>
Dear All,

Thank you for your replies.

view this post on Zulip Email Gateway (Dec 27 2022 at 20:31):

From: Andrei Popescu <andrei.h.popescu@gmail.com>
Dear Mihails,

Doubting the need for set-based developments leads to a useful
discussion, which is important IMO. I have not done a systematic
empirical study, but based on what I've seen I believe such
developments are common in Isabelle/HOL (even beyond what is in the
library), and perhaps in the other HOL-based provers as well. The
practice seems mostly motivated by the desire to instantiate the
results flexibly. Otherwise at instantiation time one pays the price
of having to define some types and transfer some structure.

My favourite simple example is the following: For instantiating
results about groups to the set of bijections between 'a and 'a, you
may not want to define a custom type -- especially if the results
about bijections are used as part of a bigger whole, involving
arbitrary functions as well (employing results that may instantiate
other libraries as well).
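This is, incidentally, exactly the shape of HOL-Algebra's set-based
treatment of bijections (quoted from memory, so names and details may
differ slightly):

    (* HOL-Algebra, theory Bij: the bijections on a set S form a group *)
    definition Bij :: ‹'a set ⇒ ('a ⇒ 'a) set›
      where ‹Bij S = extensional S ∩ {f. bij_betw f S S}›

    definition BijGroup :: ‹'a set ⇒ ('a ⇒ 'a) monoid›
      where ‹BijGroup S = ⦇carrier = Bij S, mult = compose S, one = λx ∈ S. x⦈›

    theorem group_BijGroup: ‹group (BijGroup S)›

No new type needs to be defined, and the result instantiates to any S.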

Another example: Say you develop some general results about labelled
transition systems and their induced traces. Working with the
collections of labels, events, states etc. in a type-based fashion
(i.e., having entire types to model these collections) will be painful
when instantiating the theory to the semantics of particular
programming languages, where any well-formedness conditions would need
to be baked into the type.
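Schematically, the set-based style here is just a locale over carrier
sets (the names below are illustrative, not from an existing library):

    locale lts =
      fixes states :: ‹'s set›
        and labels :: ‹'l set›
        and step :: ‹('s × 'l × 's) set›
      assumes step_closed: ‹step ⊆ states × labels × states›

Instantiating this to the semantics of a concrete programming language
only requires discharging the well-formedness assumption, not defining
new types.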

In conclusion, I don't think that set-based results are a legacy issue
that we should aim to replace.

As for your remark that the type-based version of a theorem is
essentially semantically equivalent to its set-based counterpart: I
agree this is the case, just that the HOL logic cannot "see" this --
which is why in our initial paper we advocated the addition of the
Local Typedef rule as "a gentle eye surgery" to HOL. Our last result
(which you also cited in your message) shows that this rule is in fact
(usually) admissible. So the set-based and type-based versions are
indeed very close, but still converting one to the other needs an
extra push.

Finally, I do not have anything to add concerning the set theory
versus type theory debate. Everything that could have been said, and
even more, has probably been said over the years. :-) I can see the
relative advantages of both, and I think HOL is somewhere in the
middle -- a sweet spot perhaps, with some touches of bitterness.

Best wishes,
Andrei

view this post on Zulip Email Gateway (Dec 27 2022 at 23:18):

From: Mihails Milehins <mihailsmilehins@gmail.com>
Dear Andrei Popescu/All,

Thank you for your comments. I will try to provide concluding remarks from
my side. It seems that there are 3 possible evolution paths for
Isabelle/HOL core libraries:

  1. Both type-based and set-based libraries will continue to be maintained
    and developed in parallel, now with the aid of tools like
    types-to-sets/types-to-PERs.

  2. Migration to a single type-based library.

  3. Migration to a single set-based library.

I doubt that it will be possible to "prove" objectively that one path
should be preferred over another. However, having tried all of them, even
with the aid of the state-of-the-art types-to-sets, my personal preference
would be 2 or 3 (despite having to define additional types in 2 or the
inefficiencies associated with 3). Therefore, I guess, my initial comment
that started this debate was prompted primarily by my concern about the
structure of the core library (and its extensions), rather than tools like
types-to-sets. Indeed, if 1 is to be followed, having tools like
types-to-sets and its successor(s) is better than not having them.
Nonetheless, I still hope that one day, somehow, these distinct libraries
will coalesce into a homogeneous whole, with a single definition for each
structure/concept, one statement and one proof of each result.

Also, I agree that much has been said already about the use of
alternative foundations and there is no reason to return to this topic. My
initial remark was prompted by my observation that the duplication problem
seems to be significantly easier to avoid in some of these alternative
systems. So, I hope that it was not entirely off-topic.

Yours Sincerely,
Mihails Milehins
(he/him/his)

view this post on Zulip Email Gateway (Dec 28 2022 at 01:04):

From: Andrei Popescu <andrei.h.popescu@gmail.com>

  1. Both type-based and set-based libraries will continue to be maintained and developed in parallel, now with the aid of tools like types-to-sets/types-to-PERs.
  2. Migration to a single type-based library.
  3. Migration to a single set-based library.

Well, variant 2 could be done in conjunction with exporting set-based
results on demand via types-to-sets. This is the main appeal here:
develop type-based and export set-based.

Also, I agree that much has been said already about the use of alternative foundations and there is no reason to return to this topic. My initial remark was prompted by my observation that the duplication problem seems to be significantly easier to avoid in some of these alternative systems. So, I hope that it was not entirely off-topic.

Not at all off-topic, I didn't mean to suggest that. Of course, working
in ZF one would not face these problems, but others instead.

Best wishes,
Andrei

view this post on Zulip Email Gateway (Dec 28 2022 at 16:25):

From: Ken Kubota <mail@kenkubota.de>
This is due to the restrictions of simple type theory, which lacks the possibility of quantifying over type variables; it is not a matter of type theory in general.
I have shown, using an example from group theory, that with lambda binding type variables, too, it is possible to carry out abstract proofs and instantiate them later.
See the section "Type abstraction" here: https://owlofminerva.net/kubota/software-implementation-of-the-mathematical-logic-r0-available-for-download/
Also: https://sympa.inria.fr/sympa/arc/coq-club/2022-06/msg00025.html

Kind regards,

Ken Kubota


Ken Kubota
https://doi.org/10.4444/100

view this post on Zulip Email Gateway (Dec 28 2022 at 16:26):

From: Ken Kubota <mail@kenkubota.de>
The whole approach of set theory (generally restricting sets by a rather arbitrary list of non-logical axioms in order to avoid inconsistency) is not systematic.
Mathematics should be expressed naturally, where the restrictions are a matter of the syntax (type theory).
However, simple type theory like HOL or Isabelle/HOL is, of course, too weak. Without introducing some kind of abstraction/quantification over type variables, these kinds of problems will persist.


Ken Kubota
https://doi.org/10.4444/100

view this post on Zulip Email Gateway (Dec 28 2022 at 17:39):

From: Mihails Milehins <mihailsmilehins@gmail.com>
Dear Ken Kubota,

Thank you for your remarks.

view this post on Zulip Email Gateway (Dec 28 2022 at 19:10):

From: Dominique Unruh <cl-isabelle-users@lists.cam.ac.uk>
Maybe a variant of variant 2 would be what works in the long run:
libraries will contain both set- and type-based definitions (with proper
linking via transfer rules or some related mechanism), but only provide
lemmas in the type-based language. If the definitions are properly
linked, set-based lemmas could be created on the fly (maybe using an
attribute) without much extra effort. (Because it seems to me that a lot
of the effort that is not fully automatable in the TTS world goes into
matching up the already existing, and somewhat differently flavored,
definitions in the set-world and the type-world.)

A good example of a place where not thinking of TTS at definition time
leads to trouble is "sum". The existing definition uses THE, and before
lifting it, one needs to come up with a completely different definition
(and Mihails's and my writeups use different ones, too). Generally, THE
and SOME should be avoided (and maybe we should have something like
the_default that returns a specified value when the predicate does not
have a unique witness). Maybe "undefined" is also bad; I am not sure.

Best wishes,
Dominique.

view this post on Zulip Email Gateway (Dec 29 2022 at 00:12):

From: Ken Kubota <mail@kenkubota.de>
Dear Mihails Milehins,

It is correct that none of the major HOL systems, unfortunately, have moved to type quantification yet, despite the fact that Mike Gordon suggested this as early as 2001.
This includes HOL4: https://sourceforge.net/p/hol/mailman/message/37215715/

HOL-Omega should have enough expressiveness (although I consider the stratification of types problematic):
"using the HOL-Omega system to model monads and prove theorems about their general properties, as well as concepts from category theory such as functors and natural transformations."
http://www.trustworthytools.com/id17.html

However, unless HOL4 and Isabelle/HOL move to quantification over types, users will either have to live with duplication, or have to resort to other software or to set theory, or spend an increasing amount of time on workarounds such as conversion between type theory and set theory.

It should be noted that an implementation of quantification over types should use lambda as the type quantifier (as suggested by Mike Gordon, and, independently of that, implemented in my logic R0), and not introduce type quantifiers as new primitive symbols (as described in Tom Melham's paper, which heavily draws on Peter B. Andrews' PhD thesis).

Kind regards,

Ken Kubota


Ken Kubota
https://doi.org/10.4444/100

view this post on Zulip Email Gateway (Dec 29 2022 at 22:14):

From: Mihails Milehins <mihailsmilehins@gmail.com>
Dear All,

Thank you for further remarks.

