Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] Storing theorems on the fly


view this post on Zulip Email Gateway (Aug 19 2022 at 09:14):

From: Ognjen Maric <ognjen.maric@gmail.com>
Hi all,

I'm writing some Isabelle/ML code which, given a term, generates
theorems about it. This is essentially a recursive function which
generates theorems for the subterms and then combines them. A single
subterm might appear many times, and I'd like to store the generated
theorems to avoid repeating the work. Clearly, they could be manually
threaded through the recursive calls, but I'd like to avoid doing the
plumbing myself, especially since I'd like to be able to retrieve them
later (outside of the function) as well.

So I assume I should be using one of the context.*_Data structures to
store the theorems in an Item_Net (indexed by the terms).
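
Roughly, the kind of thing I have in mind (an untested sketch; the structure and
function names are made up, and the data functor signature follows Isabelle2012):

ML {*
  (* hypothetical store: theorems indexed by the terms they are about *)
  structure Generated_Thms = Generic_Data
  (
    type T = (term * thm) Item_Net.T;
    val empty = Item_Net.init (eq_fst (op aconv)) (single o fst);
    val extend = I;
    val merge = Item_Net.merge;
  );

  (* record a result via the implicit ML context -- the behaviour asked about in question 1 *)
  fun store t th = Context.>> (Generated_Thms.map (Item_Net.update (t, th)));

  (* retrieve previously stored theorems for a given term *)
  fun lookup context t =
    Item_Net.retrieve (Generated_Thms.get context) t
    |> filter (fn (t', _) => t' aconv t) |> map snd;
*}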

Questions:

  1. Right now I'm trying to do this from an ML {* *} block within Isar
    text, using Generic_Data and then storing the generated theorems with
    Context.>>. However, it appears that the theorems only actually get
    stored after the whole ML {* *} block is executed. Is this a feature,
    and if so, how do I get around it?

  2. Could this be done within a proof tactic, and how?

Thanks,
Ognjen

view this post on Zulip Email Gateway (Aug 19 2022 at 09:14):

From: Makarius <makarius@sketis.net>
On Wed, 28 Nov 2012, Ognjen Maric wrote:

A single subterm might appear many times, and I'd like to store the
generated theorems to avoid repeating the work.

This part in isolation sounds like an application of Thm.cterm_cache or
Thm.thm_cache, i.e. it is semantically plain function application with
operational tuning for re-using earlier values (in a thread-safe way, with a
little overhead).
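
For example (a minimal sketch, with a trivial stand-in for the actual
theorem-producing function):

ML {*
  (* stand-in for the real, expensive theorem-producing function *)
  fun make_thm (ct: cterm) : thm = Thm.reflexive ct;

  (* cached version: each distinct cterm is processed at most once *)
  val make_thm_cached = Thm.cterm_cache make_thm;
*}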

Clearly, they could be manually threaded through the recursive calls,
but I'd like to avoid doing the plumbing myself, especially since I'd
like to be able to retrieve them later (outside of the function) as
well.

So I assume I should be using one of the context.*_Data structures to
store the theorems in an Item_Net (indexed by the terms).

Questions:
1. Right now I'm trying to do this from an ML {* *} block within Isar
text, using Generic_Data and then storing the generated theorems with
Context.>>. However, it appears that the theorems only actually get
stored after the whole ML {* *} block is executed. Is this a feature,
and if so, how do I get around it?

Sounds all too much like implicit destructive stateful programming to me,
i.e. not what you normally do in Isabelle/ML.

What are you trying to achieve concretely?

The normal way is to "thread through" values, usually using the
Isabelle/ML combinators for that (variations on |> and fold/map/fold_map
as explained in the implementation manual). This also avoids the seeming
Context.>> problem above (ML commands have their own linear context that
can be updated implicitly -- at compile-time, not run-time).
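
For instance, a term-indexed table of already-computed results can be threaded
through the recursion with fold_map (a sketch; subterms_of and the combination
of sub-results are mere placeholders):

ML {*
  (* placeholder: immediate subterms *)
  fun subterms_of (t $ u) = [t, u]
    | subterms_of (Abs (_, _, b)) = [b]
    | subterms_of _ = [];

  (* thread the table of results through the recursive calls *)
  fun process (t: term) (tab: thm list Termtab.table) =
    (case Termtab.lookup tab t of
      SOME ths => (ths, tab)
    | NONE =>
        let
          val (thss, tab') = fold_map process (subterms_of t) tab;
          val ths = flat thss;  (* placeholder for combining the sub-results *)
        in (ths, Termtab.update (t, ths) tab') end);
*}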

  2. Could this be done within a proof tactic, and how?

Probably not. Depends what you actually need. Tactical proof steps
cannot change the context. (I am very glad for that, now that I know how
messy this is in other major provers on the market.)

Makarius

view this post on Zulip Email Gateway (Aug 19 2022 at 09:14):

From: Ognjen Maric <ognjen.maric@gmail.com>
On 11/28/2012 01:44 PM, Makarius wrote:

This part in isolation sounds like an application of Thm.cterm_cache or
Thm.thm_cache, i.e. it is semantically plain function application with
operational tuning for re-using earlier values (in a thread-safe way, with
a little overhead).

I'm not immediately certain if they will work in my scenario, but they
look very useful, thanks. I didn't know they were there.

Sounds all too much like implicit destructive stateful programming to
me, i.e. not what you normally do in Isabelle/ML.

What are you trying to achieve concretely?

The normal way is to "thread through" values, usually using the
Isabelle/ML combinators for that (variations on |> and fold/map/fold_map
as explained in the implementation manual). This also avoids the seeming
Context.>> problem above (ML commands have their own linear context that
can be updated implicitly -- at compile-time, not run-time).

They can of course be threaded through, but I haven't been able to come
up with an elegant way to express it in this case. A stateful solution
would be straightforward, though -- that is, once you offload all the
worries about potential concurrency issues and whatnot to somebody else
:) At any rate, it's just an annoyance, not a showstopper.

Concretely, I prove refinement between monadic programs (I'm aware of
Peter's AFP entry). In my particular case, the data refinement typically
affects a few primitive "procedures", for which I prove refinement
(manually). More complex procedures are then built by combining these
with monadic operations. Their refinement proofs are straightforward
(and tedious). Basically unfold their definitions, build refinement
theorems for the components, combine them appropriately and fold the
definitions inside the resulting theorems. It is these theorems I'd like
to cache (since the "intermediate" procedures can appear many times),
and eventually also export into the context, so that they can be used
later on in other "top-level" proofs.
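
Schematically, one such step looks roughly like this (a sketch; "combine"
stands for whatever rule composition my refinement calculus actually needs):

ML {*
  fun refine_proc ctxt (proc_def: thm) (combine: thm list -> thm)
      (component_thms: thm list) : thm =
    component_thms
    |> combine                           (* refinement thm for the unfolded body *)
    |> Local_Defs.fold ctxt [proc_def];  (* fold the definition back in *)
*}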

Probably not. Depends what you actually need. Tactical proof steps
cannot change the context. (I am very glad for that, now that I know
how messy this is in other major provers on the market.)

Thanks, this isn't such an important point for my application anyway.

Ognjen

view this post on Zulip Email Gateway (Aug 19 2022 at 09:14):

From: Makarius <makarius@sketis.net>
On Wed, 28 Nov 2012, Ognjen Maric wrote:

They can of course be threaded through, but I haven't been able to come
up with an elegant way to express it in this case. A stateful solution
would be straightforward, though -- that is, once you offload all the
worries about potential concurrency issues and whatnot to somebody else
:) At any rate, it's just an annoyance, not a showstopper.

I wouldn't use "stateful" and "straightforward" in the same sentence.
Even without the omnipresent parallelism in Isabelle, it is better to say
explicitly how you fiddle with your values and updated environment.

See again section 0.3 "Canonical argument order" in
http://isabelle.in.tum.de/dist/Isabelle2012/doc/implementation.pdf -- it
is one of the success stories of Isabelle/ML. We gradually elaborated
this purely functional combinator style for aesthetic reasons over the
years, and later we found ourselves in the lucky situation to be able to
get on the multicore train early, with minimal cleanup of the code base.

Sequentialism is a very strong assumption that is better not imposed on
any serious program today.

Concretely, I prove refinement between monadic programs (I'm aware of
Peter's AFP entry). In my particular case, the data refinement typically
affects a few primitive "procedures", for which I prove refinement
(manually). More complex procedures are then built by combining these
with monadic operations. Their refinement proofs are straightforward
(and tedious). Basically unfold their definitions, build refinement
theorems for the components, combine them appropriately and fold the
definitions inside the resulting theorems. It is these theorems I'd like
to cache (since the "intermediate" procedures can appear many times),
and eventually also export into the context, so that they can be used
later on in other "top-level" proofs.

I've seen the AFP entry Refine_Monadic occasionally from a distance, but
am not familiar with it.

Generally, it depends a lot on how you want to organize things, or rather how
you need to organize things.

Within a tactical expression or Isar proof method, you can access the
Proof.context only as a local immutable value (of course it can be updated
in a functional manner). The context might contain funny quasi-functional
elements, such as lazy values or caches of values, but that is
semantically still functional. You cannot have a tactic do some ad-hoc
poking in a non-monotonic manner.

Within a proof, you can have commands updating the context according to
its block structure. The most basic approach is to make an attribute (see
the 'attribute_setup' command in the isar-ref manual) and use that with
'note' or other fact-producing commands in the proof. Whatever you do, it
will be local to the proof. Proofs are formally irrelevant, i.e. you
cannot export anything from them. The main result is specified beforehand,
and already determined before commencing the proof.
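
A minimal sketch of that approach (made-up names; the data functor and
'attribute_setup' syntax follow Isabelle2012):

ML {*
  structure My_Thms = Generic_Data
  (
    type T = thm Item_Net.T;
    val empty = Item_Net.init Thm.eq_thm_prop (single o Thm.full_prop_of);
    val extend = I;
    val merge = Item_Net.merge;
  );
*}

attribute_setup my_thm = {*
  Scan.succeed (Thm.declaration_attribute (My_Thms.map o Item_Net.update))
*} "maintain theorems within the generic context"

A fact can then be stored from within a proof via "note foo [my_thm]", subject
to the usual block structure.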

Within the theory context, you can have commands of the form "theory ->
theory" or "local_theory -> local_theory". You can store whatever you
want in the theory context, as long as it is plain functional data in any
of the senses above.
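
For example, a function of that form which records a derived theorem under a
given name (again a sketch with made-up names):

ML {*
  (* local_theory -> local_theory: note one derived theorem *)
  fun store_refinement (b: binding) (th: thm) : local_theory -> local_theory =
    Local_Theory.note ((b, []), [th]) #> snd;
*}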

You could try to move results you discover within a proof back to the
toplevel context, by exporting them in some logically obvious way.

In a trivial manner, this happens all the time when you work with
Isabelle:

lemma B
.
.
.
fix x
assume "A x"
.
txt {* Oh, I need "lemma C x" *}

At that point you merely go back in the text and prepend this before it:

lemma
fixes x
assumes "A x"
shows "C x"

with its proof. So you re-assemble the text in the editor, without funny
tricks about implicit context extension in the middle of the other proof.

In more specific or more complex situations, one could consider having some
tool support this assembly of properly ordered proof documents.
It is up to your imagination.

Makarius

view this post on Zulip Email Gateway (Aug 19 2022 at 09:15):

From: Peter Lammich <lammich@in.tum.de>
Sounds interesting. I have a similar problem, when trying to
automatically refine terms to more executable versions (replacing sets
by red-black trees, etc.). Once the basic operations are proven,
refining more complex terms is straightforward. However, in order to
refine a term that contains defined constants (procedures in your
terminology), one either has to prove a separate refinement lemma for each
constant manually, or unfold all such constants beforehand (one then
gets one big term and loses the structure), or do something like you
propose, i.e., extract those constants from the term first, and
automatically generate refined versions.

We have experimented with the first two options already, and they are
not too bad for our use-cases. For the third option, I thought about
implementing it as an outer syntax command, something like
"refine constant as name", which would generate the refinement of
constant (and, recursively, of all constants used in the term that need
to be unfolded), then define a new constant name, and generate a
refinement lemma name.refine: "new-constant refines constant".

Note the similarity of this problem to code generation: the code generator
also collects code theorems for constants and, recursively, for constants
that are used in those code theorems.


Last updated: Mar 29 2024 at 08:18 UTC