Stream: Mirror: Isabelle Users Mailing List

Topic: [isabelle] Performance considerations of locales


view this post on Zulip Email Gateway (Jul 23 2021 at 06:12):

From: Norbert Schirmer via Cl-isabelle-users <cl-isabelle-users@lists.cam.ac.uk>
Hi,

In our applications we have a widespread use of locales, both on the Isar toplevel
as well as in Isabelle/ML, where locales are auto-generated and certain definitions are made and
theorems are proved within these locales.

Recently we hit the tipping point where more CPU time is actually spent on the
declaration of locales than on the actual proof load that is performed within the locales.
I did some analysis to get a better understanding of what actually happens and
want to initiate a discussion on how this situation might be improved in the
future.

First, a high-level description of our use of locales, which triggered the tipping point
(a schematic Isar sketch follows the list):

  1. Locales are used to structure and modularize our work.

  2. We stay abstract: our clear focus is on declaring new locales on the basis of
     existing locales rather than interpreting locales.

  3. Most of the time an abstract entity within a locale, i.e. a "fixes x::'a", stays the same in
     the complete locale hierarchy. It is neither renamed nor is its type changed when it is imported into
     another locale. It might be shared among multiple locales.

  4. Within a locale we make definitions and prove theorems, which are noted.
     A typical right-hand side (rhs) of a definition can be a sizeable term containing polymorphic constants.
     For example, think of a sequence of about 100 monadic binds in a state monad.

  5. We use attributes on definitions as well as theorems to maintain our data for proof automation.
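
To make this concrete, here is a minimal Isar sketch of that usage pattern; all names and the
tiny right-hand side are invented for illustration (in reality the rhs and the locale hierarchy
are much larger):

theory Usage_Sketch
  imports Main
begin

locale machine =
  fixes step :: "'s ⇒ 's"

locale program = machine step for step :: "'s ⇒ 's" +
  fixes init :: 's
begin

definition run :: "'s ⇒ 's" where
  "run s = step (step (step s))"  (* stands in for a much larger rhs, e.g. ~100 monadic binds *)

lemma run_init [simp]: "run init = step (step (step init))"  (* attribute feeds proof automation *)
  by (simp add: run_def)

end

end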

Reading and digesting the excellent background papers on locales [1] and local theories [2] and
studying and profiling the actual code, I come to the conclusion that the actual
hot spot is the omnipresent instantiation of
locales. This happens in the roundup of locale expressions, both when defining new locales and
when entering the context of a locale. Profiling highlights low-level functions like Term.fold_atyps,
Term_Subst.generalizeT_same, Term_Subst.inst_same...

This is not surprising, as the implementation of locales and local theories always starts from the
global theory, where all definitions/theorems (notes) are generalised: the 'fixes' become
schematic variables and the 'assumes' become explicit preconditions. When entering a target context,
the (term and type) variables get instantiated and the preconditions become hypothetical assumptions.
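
Continuing the hypothetical sketch above (placed before its final 'end'), the two views of the
same fact can be inspected directly. The exact printed form depends on the Isabelle version, but
the background-theory version is schematic in the former fixes, while the version seen inside the
target is instantiated with the fixed frees (for a locale with assumes, the assumptions would
additionally appear as premises resp. hypotheses):

thm program.run_init  (* background-theory form: schematic in ?step, ?init *)

context program
begin
thm run_init          (* target-context form: instantiated with the fixed step, init *)
end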

This process becomes so costly in our application because of (4) and (5), and it is especially unsatisfying
for the user because of (3): from a user perspective, nothing has to be instantiated.
It turns out that by using attributes (5) the "lazy notes" optimisation is unfortunately
ineffective (in contrast, a theorem with no attributes is represented as a lazy value and hence
only instantiated when it is actually used, which speeds up entering a locale or composing locales).

I guess there is a huge design space to improve the situation. Here are some of my thoughts:

I. Keep the terms small.
   a) Manual workaround: instead of a local definition, we could manually make a global definition of the
      large rhs, and then only make a local definition having the (folded) global one as its rhs
      (sketched after this list). This is more or less the first half of what a local definition does anyway.
   b) Infrastructure: have an option for the local definition that only introduces the abbreviation
      but does not note the instantiated equation in the target context.
II. Improve the use case of the "canonical" locale instances (points 2 + 3) by some
    sort of caching or pre-evaluated instances.
III. Extend the lazy notes idea to "lazy attributes", giving user-level control over what is already relevant
     for the declaration of new locales and what is only relevant for doing proofs later on.
IV. Make instantiation cheaper, e.g. by changing the low-level representation of cterms (like an
    environment + term skeleton instead of a fully instantiated term; ML antiquotations to
    match on cterms and construct cterms might bring the convenience of programming with
    terms to cterms, without having to fully convert between terms and cterms all the time).
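
Here is a hedged sketch of workaround I.a, again continuing the hypothetical theory above: the
large rhs moves into a global definition, and the local definition merely folds it, so the
equation noted in the locale stays small.

definition run_impl :: "('s ⇒ 's) ⇒ 's ⇒ 's" where
  "run_impl step s = step (step (step s))"  (* the big rhs, outside any locale *)

context program
begin

definition run' :: "'s ⇒ 's" where
  "run' = run_impl step"  (* local definition with a small, folded rhs *)

end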

Manually keeping terms small (I.a) or avoiding attributes feels like breaking the
nice and convenient abstraction and falling back to bare-metal, error-prone tool development.
I hope that some of the ideas above (perhaps in combination), or some other ideas, materialise as a rescue.

Find attached three benchmark theories illustrating some aspects:

Benchmark_poly.thy: large terms within locale using polymorphic constants
Benchmark_mono.thy: monomorphic variant
Benchmark_global.thy: Workaround (I.a), keeping terms in definitions and theorems (notes) small.

[1] Clemens Ballarin. Locales: a module system for mathematical theories. Journal of Automated Reasoning, 52(2):123–153, 2014.
[2] Florian Haftmann and Makarius Wenzel. Local theory specifications in Isabelle/Isar. In Stefano Berardi,
Ferruccio Damiani, and Ugo de Liguoro, editors, Types for Proofs and Programs, TYPES 2008,
volume 5497 of Lecture Notes in Computer Science. Springer, 2009.

Regards,
Norbert

--

Norbert Schirmer (nschirmer@apple.com)
 SEG Formal Verification
Benchmark_global.thy
Benchmark_mono.thy
Benchmark_poly.thy

view this post on Zulip Email Gateway (Sep 22 2021 at 11:35):

From: Makarius <makarius@sketis.net>
On 23/07/2021 08:11, Norbert Schirmer via Cl-isabelle-users wrote:

In our applications we have a widespread use of locales, both on the Isar toplevel
as well as in Isabelle/ML, where locales are auto-generated and certain definitions are made and
theorems are proved within these locales.

Recently we hit the tipping point where more CPU time is actually spent on the
declaration of locales than on the actual proof load that is performed within the locales.

In ancient times, we did have the understanding that proofs require most of
the time, and that the surrounding setup is negligible. In recent years, our
theory infrastructure has become so rich that it often requires more time to
build than the proofs.

So we need to take more care to trim that down again.

I did some analysis to get a better understanding of what actually happens and
want to initiate a discussion on how this situation might be improved in the
future.

First, a high-level description of our use of locales, which triggered the
tipping point:

  1. Locales are used to structure and modularize our work.
  2. We stay abstract: our clear focus is on declaring new locales on the basis of
     existing locales rather than interpreting locales.
  3. Most of the time an abstract entity within a locale, i.e. a "fixes x::'a", stays the same in
     the complete locale hierarchy. It is neither renamed nor is its type changed when it is imported into
     another locale. It might be shared among multiple locales.

Points 1 + 2 + 3 seem to suggest that most of the locale interpretation
add-ons from the past 10-15 years are not required. They are nonetheless
central to how locales work today. "Shortcuts" might be possible, but it is
difficult to say what really can or should be done: we need to resist
"improvements" that move towards a terminal state where nobody understands how
things work.

  4. Within a locale we make definitions and prove theorems, which are noted.
     A typical right-hand side (rhs) of a definition can be a sizeable term containing
     polymorphic constants. For example, think of a sequence of about 100 monadic binds in a state monad.
  5. We use attributes on definitions as well as theorems to maintain our data
     for proof automation.

Points 4 + 5 are more particular to your application, and look like the true
focus of performance improvements.

I come to the conclusion that the actual
hot spot is the omnipresent instantiation of
locales. This happens in the roundup of locale expressions, both when defining
new locales and when entering the context of a locale. Profiling highlights
low-level functions like Term.fold_atyps, Term_Subst.generalizeT_same,
Term_Subst.inst_same...

Instantiations --- especially of types within terms --- often show up
prominently in profiles. Over the years, we have made a lot of performance
tuning in this respect. My most recent attempt is here
https://isabelle-dev.sketis.net/phame/post/view/44/scalable_operations_for_thm.instantiate_and_thm.generalize
although it had hardly any measurable impact on your examples or other
applications. In the worst case, it can even become a bit slower, due to more
heap garbage produced by the underlying 2-3 trees, in contrast to plain lists.

In recent years, I have come to the conclusion that we could remove excessive
redundancy of types within terms, via the "typargs" representation of Consts.
E.g. instead of the overloaded Const "plus" with type instance "T => T => T", it
would be only "plus" applied to a single typarg T. We already have support
for this view within Consts.T of the theory (or Proof.Context).
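
As a hedged illustration of that redundancy, using the standard addition constant (API names as
I read them from the Isabelle2021-era sources):

ML ‹
  (* Gordon-style Const: the full type instance is stored, mentioning nat three times *)
  val plus_nat = Const ("Groups.plus_class.plus", @{typ "nat ⇒ nat ⇒ nat"});

  (* typargs view: the same constant is determined by a single type argument *)
  val args =
    Consts.typargs (Sign.consts_of @{theory})
      ("Groups.plus_class.plus", @{typ "nat ⇒ nat ⇒ nat"});
  (* args = [@{typ nat}] *)
›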

What remains is a small exercise in cleaning up Isabelle/ML tools after > 35
years of Gordon-style Const term constructors, with their explicit type
instances. The following ML antiquotations may help to get to that point
within a few years:
https://isabelle-dev.sketis.net/phame/post/view/46/ml_antiquotations_for_type_constructors_and_term_constants

This process becomes so costly in our application because of (4) and (5) and
is especially unsatisfying
for the user because of (3): From a user perspective nothing has to be
instantiated.
It turns out, that by using attributes (5) the optimisation of "lazy notes" is
unfortunately 
ineffective (in contrast a theorem with no attributes is represented as a lazy
value and hence 
only instantiated when it is actually used, this speeds up entering a locale
or composing locales).

Note that attributes are just a special case of Local_Theory.declaration in
Isabelle/ML or 'declaration' in Isabelle/Isar: after applying a morphism, you
add your own data to the context; your tools pick up the data later on.

An attribute always has a visible "thm" parameter, which is transformed
by the morphism on the spot.

In contrast, a declaration can control precisely what happens when. Here
https://isabelle-dev.sketis.net/source/isabelle/browse/default/src/Pure/ex/Def.thy;f79dfc7656ae
is an example where primitive definitions are stored by their LHS term, while
the thm LHS = RHS is only materialized via the morphism when required in a
corresponding simproc.

This demonstrates how much flexibility the regular local theory user space
offers, but it requires some understanding of the big picture.
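
For illustration, here is a hedged Isabelle/ML sketch of that contrast; all names are invented
and the signatures are the Isabelle2021-era ones (they may differ in later releases):

ML ‹
  (* hedged sketch (invented names): context data for proof automation *)
  structure My_Rules = Generic_Data
  (
    type T = thm list;
    val empty = [];
    val extend = I;
    fun merge (xs, ys) = xs @ ys;  (* naive merge, good enough for a sketch *)
  );

  (* attribute: the visible thm argument has already been transformed by the
     morphism by the time this function runs *)
  val my_rule_att =
    Thm.declaration_attribute (fn thm => My_Rules.map (cons thm));

  (* declaration: the tool receives the morphism phi itself and decides what
     to transform, and when *)
  fun my_rule_decl thm =
    Local_Theory.declaration {syntax = false, pervasive = false}
      (fn phi => My_Rules.map (cons (Morphism.thm phi thm)));
›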

I guess there is a huge design space to improve the situation. Here are some
of my thoughts:

I. Keep the terms small.
   a) Manual workaround: instead of a local definition, we could manually make a global
      definition of the large rhs, and then only make a local definition having the (folded)
      global one as its rhs. This is more or less the first half of what a local definition does anyway.
   b) Infrastructure: have an option for the local definition that only introduces the
      abbreviation but does not note the instantiated equation in the target context.
II. Improve the use case of the "canonical" locale instances (points 2 + 3) by some
    sort of caching or pre-evaluated instances.
III. Extend the lazy notes idea to "lazy attributes", giving user-level control over what
     is already relevant for the declaration of new locales and what is only relevant for
     doing proofs later on.
IV. Make instantiation cheaper, e.g. by changing the low-level representation of cterms
    (like an environment + term skeleton instead of a fully instantiated term; ML
    antiquotations to match on cterms and construct cterms might bring the convenience
    of programming with terms to cterms, without having to fully convert between terms
    and cterms all the time).

My explanations and experiments should already correlate with most of these
ideas, while avoiding huge upheavals in the underlying infrastructure.

Your point IV sounds very ambitious: cterms could certainly benefit from many
reforms, like open bounds as frees (to simplify e.g. Thm.dest_abs). But most
of this will introduce a lot of extra complexity, and might never materialize.

Right now, I hope that Const-typargs within terms will be sufficient for all
kinds of conceptual simplifications and performance improvements everywhere. I
leave it as an exercise for Isabelle/ML tool builders to update their stuff
after the next release, to use Type/Const antiquotations everywhere.

Find attached three benchmark theories illustrating some aspects:

Benchmark_poly.thy: large terms within locale using polymorphic constants
Benchmark_mono.thy: monomorphic variant
Benchmark_global.thy: Workaround (I.a), keeping terms in definitions and theorems (notes) small.

I have experimented with these benchmarks a bit: often they did not even work
with the regular 16 GB heap model of x86_64_32.

Maybe you can rephrase most of this directly with the 'def' command from
https://isabelle-dev.sketis.net/source/isabelle/browse/default/src/Pure/ex/Def.thy;f79dfc7656ae
(it should also work with current Isabelle2021).

Makarius

view this post on Zulip Email Gateway (Sep 22 2021 at 11:40):

From: Makarius <makarius@sketis.net>
See also the adjacent examples:
https://isabelle-dev.sketis.net/source/isabelle/browse/default/src/Pure/ex/Def_Examples.thy;f79dfc7656ae

Makarius

view this post on Zulip Email Gateway (Sep 27 2021 at 08:52):

From: Norbert Schirmer via Cl-isabelle-users <cl-isabelle-users@lists.cam.ac.uk>
Thanks a lot for the explanations and examples, and for the work in the direction of improving performance in the mid-term.

I guess that with some combination of hand-crafted declarations (instead of attributes) and an implementation of the 'def' idea above I can improve the performance of our applications.
The tradeoff is between simplicity/readability and performance. The especially neat thing about attributes (compared to declarations) is that you can already provide them to almost all relevant functions, like definitions, derived specifications (such as the function package) and 'notes'. Note that we not only use plain definitions with large terms but also derived concepts like (partial) functions, which besides the defining equation also result in a bunch of theorems (like induction) with large terms.

My main workhorse is actually the 'named_theorems' attribute, which is only relevant in the body of the locale when doing proofs, and not in the declaration of new locales. So I would reimplement this functionality with a declaration instead of the existing attribute, explicitly postponing the application of the morphism. That's why I mentioned the idea of "lazy attributes": the goal would be an attribute like "lazy_named_theorems" that can be provided to all the already existing functions in Isabelle/ML.
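
To make the intended reimplementation concrete, here is a hedged Isabelle/ML sketch of such a
lazy, named-theorems-like collection (invented names, Isabelle2021-era signatures): the morphism
is captured in the declaration but only applied when the facts are actually requested.

ML ‹
  (* hedged sketch (invented names): named-theorems-like data, forced lazily *)
  structure Lazy_Rules = Generic_Data
  (
    type T = thm lazy list;
    val empty = [];
    val extend = I;
    fun merge (xs, ys) = xs @ ys;
  );

  fun lazy_rule_decl thm =
    Local_Theory.declaration {syntax = false, pervasive = false}
      (fn phi =>
        Lazy_Rules.map (cons (Lazy.lazy (fn () => Morphism.thm phi thm))));

  (* the morphism is only applied here, when a proof tool asks for the rules *)
  fun get_lazy_rules ctxt =
    map Lazy.force (Lazy_Rules.get (Context.Proof ctxt));
›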

Regards,
Norbert

view this post on Zulip Email Gateway (Sep 27 2021 at 09:50):

From: Lawrence Paulson <lp15@cam.ac.uk>
Just a general comment: locales have been around for a long time, but we are still figuring out how to use them effectively in large developments. Papers on the locale methodology itself ought to be publishable, certainly at the Isabelle workshop (which we hope to hold next summer at FLoC 2022).

Larry

