Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] Structured induction with custom tactics


view this post on Zulip Email Gateway (Aug 18 2022 at 14:41):

From: Palle Raabjerg <palle.raabjerg@it.uu.se>
I am currently trying to write custom tactics to resolve different
instances of some induction proofs. With some help from Christian Urban
on [nominal-isabelle], I found out how to invoke nominal_induct_tac from
the ML level, which has been a great help so far (thanks Christian!).

But while it is possible to do much simply by combining an induction
tactic with a number of other tactics, I increasingly find that in many
cases it would be useful (possibly even necessary) to structure the
tactics so that

  1. The base case and the inductive cases are separated, such that
    different tactics can be used for each.

  2. We have access to the induction term in each case.

Example. If we have the following datatype:
nominal_datatype trm = Name name | Pair trm trm

The Isar proof might be structured like this:
proof (induct T rule: trm.strong_inducts)
  case (Name a)
  ...
next
  case (Pair N M)
  ...
qed

So we want to keep the cases separate in the tactic, or at least the
base case separate from the inductive steps.
And whenever we do a nested proof in the base case, for instance, we can
refer directly to 'a' as the induction term. This might not work well in
general in a tactic if we cannot assume knowledge about the datatype we
do the induction on, but in most cases at least, the base case would be
the same.
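Spelled out, the kind of structure I have in mind (for some hypothetical
property P, ignoring the freshness/avoidance details of the strong
induction rule) would be:

lemma "P T"
proof (induct T rule: trm.strong_inducts)
  case (Name a)
  (* base case: the name 'a' is directly available here *)
  then show ?case sorry
next
  case (Pair N M)
  (* inductive case, with induction hypotheses for N and M *)
  then show ?case sorry
qed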
Any pointers on how this might be achieved, if it is possible?

Best Regards,
Palle Raabjerg

view this post on Zulip Email Gateway (Aug 18 2022 at 14:42):

From: Palle Raabjerg <palle.raabjerg@it.uu.se>
It occurs to me that I may not have been very clear or specific in my
previous mail on this subject. I am also wondering: Would this be more
appropriate for the [isabelle-dev] list, as we are dealing with Isabelle/ML?

But in short, what I want to do is automate/manipulate Isar style
structured proofs in ML.

Since the last mail I have been looking around to find out more. The
Isabelle Programming Tutorial has a section intended for the subject of
structured proofs in ML. But that section is marked TBD, and there is
only a very short example on a single page.
The Isabelle/Isar Implementation Manual has a chapter intended for Isar
(supposedly to explain how to manipulate Isar style proofs on the ML
level). It contains the empty sections "Proof commands", "Proof methods"
and "Attributes".

If we look at the source files, it would seem that Pure/Isar/proof.ML
corresponds to "Proof commands", Pur/Isar/method.ML corresponds to
"Proof methods" and Pure/Isar/attrib.ML to "Attributes". And indeed,
many of the functions look interesting to me. I have been experimenting
a bit, but much of it still feels like a bit of a mystery.

But am I on the right track at least?
I may well be on the wrong track, in which case most of the questions
below are probably pointless.

It seems clear that most of the functions in proof.ML manipulate a proof
state. 'Proof.init @{context}' will give me a new (empty) proof state,
which looks promising. I can populate the state with facts using
Proof.put_facts and so on, but have so far been unable to give the state
a goal.

Using 'Proof.theorem_i' as follows:
Proof.theorem_i NONE (K I) [[(@{prop "A ==> A"}, [])]] @{context}
seems to yield a state with a stack of 3 nodes, the _second_ node
including the goal "A ==> A", the other two empty. Puzzling. Maybe
theorem_i is not what I am looking for.

So a few questions on this:
What is the intention of Proof.theorem/theorem_i?
How do we get an initial state with a goal we want to prove?
Is state-manipulation a recommended way of doing this in ML?
I am under the impression that the node stack corresponds to the levels
(scopes) of an Isar proof. Is that correct?

Moving on to possibly the most puzzling subject: Methods. Or Proof
methods, as the implementation manual seems to call it.
There are two prevalent Method types seen in method.ML and proof.ML:
Method.method and Method.text

A Method.method is simply a function type: thm list -> cases_tactic
There are a number of basic methods in method.ML that seem to correspond
to some basic tactics (assumption, erule, drule...), and Method.text can
be a 'Basic of (Proof.context -> method) * Position.T'.
Method.text can also be 'Source of src', which seems to indicate that we
may be able to include Isar code in a Method.text.

Intuitively, it seems we should be able to use Proof.apply to apply
elements of type Method.text to states, and perform proofs that way. But
the 'Position.T' element indicates that this is not the whole story.

So a few more questions:
What are Method.method and Method.text really intended for?
Are there functions for creating proof methods?
Can Isar scripts be included in a Method.text element?

Best Regards,
The inquisitive
Palle Raabjerg

view this post on Zulip Email Gateway (Aug 18 2022 at 14:43):

From: Christian Urban <urbanc@in.tum.de>
Hi Palle,

I am afraid I cannot give an answer to all your questions.

Palle Raabjerg writes:

> But in short, what I want to do is automate/manipulate Isar style
> structured proofs in ML.
>
> Since the last mail I have been looking around to find out more. The
> Isabelle Programming Tutorial has a section intended for the subject of
> structured proofs in ML. But that section is marked TBD, and there is
> only a very short example on a single page.

Sorry. Writing tutorials takes time and it is usually
a thankless task (in terms of paper counts etc.). What
I planned to describe in this section is how to
deal with manipulations of contexts. If I understood
your original question correctly, this will not be of
much help to you.

> Using 'Proof.theorem_i' as follows:
> Proof.theorem_i NONE (K I) [[(@{prop "A ==> A"}, [])]] @{context}
> seems to yield a state with a stack of 3 nodes, the _second_ node
> including the goal "A ==> A", the other two empty. Puzzling. Maybe
> theorem_i is not what I am looking for.

Usually, if you want to set up a goal to be proved, then you
need to use the function Goal.prove. For this you need a tactic
that completely solves the goal. Not sure whether this is what
you want.
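
A minimal sketch (using the interfaces of Isabelle2009-1, where
assume_tac takes just a subgoal number; later versions differ slightly):

ML {*
  (* no fixed variables, no assumptions; the tactic must close
     the goal completely, here simply by assumption *)
  val th = Goal.prove @{context} [] [] @{prop "A ==> A"}
             (fn _ => assume_tac 1)
*}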

> What is the intention of Proof.theorem/theorem_i?

The function theorem is for "outer layers" where the terms
are given as strings. Theorem_i is the "internal" interface
where parsing has already been done and terms are given
as "real" terms. Having said that, forget this convention
quickly, as it is already obsolete. I think today, functions
for "outer layers" are called foo_cmd, while the internal
functions are just called foo.

To give more help, maybe it would be useful if you described
exactly what you would like to do. In your last email you described
your problem in abstract terms. Judging from the absence of
replies, it seems nobody was able to give an appropriate
answer. Maybe a concrete example or even a small theory
file would be more helpful... though no promises ;o)

Best wishes,
Christian

view this post on Zulip Email Gateway (Aug 18 2022 at 14:43):

From: Makarius <makarius@sketis.net>
Fine-grained control over the "structure" of proof states is usually very
delicate, but it can be done. The proof methods "induct" and
"nominal_induct" are the most sophisticated examples for that.

As a starting point, I recommend looking at the relatively new FOCUS
combinators in Isabelle2009-1/src/Pure/subgoal.ML -- the plain
Subgoal.focus versions together with Isar.goal() in interactive mode
should give you some idea how to get hold of the basic fix/assume/show
layout of a goal state.
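
For instance (a sketch against the Isabelle2009-1 interfaces; the exact
signature of Subgoal.focus and the record fields may differ in other
versions):

lemma "!!x. P x ==> P x"
ML_prf {*
  val {context = ctxt, goal, ...} = Isar.goal ();
  (* focus on subgoal 1: the parameter x is fixed and the
     assumption "P x" becomes a local theorem in prems *)
  val ({params, prems, concl, ...}, _) = Subgoal.focus ctxt 1 goal;
*}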

Makarius

view this post on Zulip Email Gateway (Aug 18 2022 at 14:43):

From: Makarius <makarius@sketis.net>
On Fri, 12 Feb 2010, Palle Raabjerg wrote:

> It occurs to me that I may not have been very clear or specific in my
> previous mail on this subject. I am also wondering: Would this be more
> appropriate for the [isabelle-dev] list, as we are dealing with
> Isabelle/ML?

Isabelle/ML is definitely part of the "user space" of Isabelle. As long
as we speak about official releases for end-users, this is the correct
mailing list.

> If we look at the source files, it would seem that Pure/Isar/proof.ML
> corresponds to "Proof commands", Pure/Isar/method.ML corresponds to
> "Proof methods" and Pure/Isar/attrib.ML to "Attributes". And indeed,
> many of the functions look interesting to me. I have been experimenting
> a bit, but much of it still feels like a bit of a mystery.
>
> But am I on the right track at least?
> I may well be on the wrong track, in which case most of the questions
> below are probably pointless.

I am impressed how far you've got in figuring out the sources. These
modules implement the main concepts of the Isar proof language. Chapter 2
of the Isar reference manual provides some high-level explanations of all
this. You do not need to understand all these details to implement
advanced proof methods, though.

> It seems clear that most of the functions in proof.ML manipulate a proof
> state. 'Proof.init @{context}' will give me a new (empty) proof state,
> which looks promising. I can populate the state with facts using
> Proof.put_facts and so on, but have so far been unable to give the state
> a goal.
>
> Using 'Proof.theorem_i' as follows:
> Proof.theorem_i NONE (K I) [[(@{prop "A ==> A"}, [])]] @{context}
> seems to yield a state with a stack of 3 nodes, the _second_ node
> including the goal "A ==> A", the other two empty. Puzzling. Maybe
> theorem_i is not what I am looking for.

It is unlikely that you will need any of this. The proof state is already
given by the user -- you can access it via Isar.state() in interactive ML.

The Proof.theorem_i function is the most basic entry point for producing a
toplevel goal; the 'theorem' command does this in regular user space (with
lots of extra sophistication). The stack implements Isar block structure
-- just an internal detail.

> How do we get an initial state with a goal we want to prove?

lemma "A ==> A"
ML_prf {* Isar.goal() *}

You can also say this:

apply (tactic {* ... *})

but you need to give a closed ML expression of type tactic on the spot, so
it is advisable to play around with ML_prf or ML_val first.

> Is state-manipulation a recommended way of doing this in ML?

No, unless you want to come up with new primary Isar commands, which
usually requires several years of intensive study of the machinery.

> I am under the impression that the node stack corresponds to the levels
> (scopes) of an Isar proof. Is that correct?

Yes. User code normally only needs the top-of-stack, though.

> Moving on to possibly the most puzzling subject: Methods. Or Proof
> methods, as the implementation manual seems to call it. There are two
> prevalent Method types seen in method.ML and proof.ML: Method.method and
> Method.text

Method.method is what you need; the Method.text merely represents source
text.

> Are there functions for creating proof methods?

Method.METHOD etc. You should also grep for method_setup or Method.setup
in the existing theory/ML sources to get an idea how it is usually done
(but there are occasionally old forms that should not be copied blindly).

Wrapping up a tactic as a proof method involves some standard
boilerplate, but the hard part is the logic behind the tactic itself.
Maybe we should go back to some concrete examples from your nominal
application.
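
The boilerplate typically looks like this (the method name is made up
for illustration; with later interfaces the tactic wrapping differs
slightly):

method_setup solve_by_assumption = {*
  (* no arguments are parsed; the context is ignored via K *)
  Scan.succeed (K (SIMPLE_METHOD' assume_tac))
*} "solve the current subgoal by assumption"

After this, 'apply solve_by_assumption' can be used like any other
proof method.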

> Can Isar scripts be included in a Method.text element?

Not really. You can probably ignore Method.text altogether for your
purposes.

Makarius

view this post on Zulip Email Gateway (Aug 18 2022 at 14:43):

From: Makarius <makarius@sketis.net>
According to Stefan Berghofer, this is how to implement sophisticated
proof tools:

(1) Make some typical examples (in plain Isar source language).
(2) You see the general idea.
(3) Implement the general idea in ML.

Steps 2 and 3 are not so trivial. I recommend taking the concrete Isar
proofs and inspecting intermediate situations with Isar.goal(),
Isar.state() etc. in ML. Don't try to reconstruct situations in ML by
hand; let Isar produce them and merely look at the results.

For the actual goal manipulation part, combinators like SUBPROOF or
Subgoal.FOCUS can be helpful, because they represent Isar structure quite
closely at the ML level.
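
For example, a sketch in the style of that release (signatures may have
changed since):

apply (tactic {*
  Subgoal.FOCUS (fn {prems, ...} =>
    (* the assumptions of the focused subgoal arrive as the
       theorem list prems; solve the goal with one of them *)
    resolve_tac prems 1) @{context} 1 *})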

Makarius

