From: Daniel Raggi <danielraggi@gmail.com>
I'm having trouble understanding something. Let me explain:
When folding a definition, why would someone, in some cases, need to
specify instantiations of variables using [where "X = ..."]?
I can see how it's useful to do it in some cases to avoid having to
backtrack.
I've defined function fadd as follows:
definition "fadd M N = (λa. M a + N a)"
purely as a test, I have the following goal to prove:
lemma "(λx. 0 + 0) = (λx. 0)"
However, when I try this:
lemma "(λx. 0 + 0) = (λx. 0)"
  apply (fold fadd_def)
I get (λx. fadd (λa. 0) (λa. a) 0) = (λx. 0) as a result. However, if I try:
lemma "(λx. 0 + 0) = (λx. 0)"
  apply (fold fadd_def[where M = "λx. 0" and N = "λx. 0"])
I get what I want, which is fadd (λx. 0) (λx. 0) = (λx. 0).
I'm a bit confused, because in the case of apply (fold fadd_def) I'm not
even getting what I want, even when using back. Can anyone point out
why my desired instantiation of variables is not tried?
Thanks a lot to anyone who minds taking a look at this.
Best,
Daniel
From: Makarius <makarius@sketis.net>
(This old thread still looks unsettled.)
On Fri, 29 May 2015, Daniel Raggi wrote:
When folding a definition, why would someone, in some cases, need to
specify instantiations of variables using [where "X = ..."]?
I can see how it's useful to do it in some cases to avoid having to
backtrack.
First note that (fold eq) is the same as (unfold eq [symmetric]).
For more than one equation, there is a slight difference in the initial
order, although the details are unclear to me; see
src/Pure/raw_simplifier.ML:
val rev_defs = sort_lhs_depths o map Thm.symmetric;
Larry might be able to explain that code "from the depths of time", as he
usually says; see
http://isabelle.in.tum.de/repos/isabelle/rev/e7588b53d6b0
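[The (fold eq) ≡ (unfold eq [symmetric]) correspondence can be checked directly on the definition from this thread; a minimal sketch, in which both commands below produce the same proof state:]

```isabelle
definition "fadd M N = (λa. M a + N a)"

lemma "(λx. (0::nat) + 0) = (λx. 0)"
  apply (fold fadd_def)
  oops

(* the same proof state results from: *)
lemma "(λx. (0::nat) + 0) = (λx. 0)"
  apply (unfold fadd_def [symmetric])
  oops
```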
Back to "unfold". That is just the Simplifier with its usual policies,
see also the isar-ref manual section 9.3 "The Simplifier". The main
strategy is bottom-up rewriting with higher-order patterns.
There is no support for back-tracking in the rewrite engine, so the
command 'back' normally does not make any sense with applications of
"unfold", "simp" etc. (In principle some wrapper tactics of the
simplifier can produce more than one result for back-tracking.)
I've defined function fadd as follows:
definition "fadd M N = (λa. M a + N a)"
purely as a test, I have the following goal to prove:
lemma "(λx. 0 + 0) = (λx. 0)"
However, when I try this:
lemma "(λx. 0 + 0) = (λx. 0)"
  apply (fold fadd_def)
I get (λx. fadd (λa. 0) (λa. a) 0) = (λx. 0) as a result. However, if I try:
lemma "(λx. 0 + 0) = (λx. 0)"
  apply (fold fadd_def[where M = "λx. 0" and N = "λx. 0"])
I get what I want, which is fadd (λx. 0) (λx. 0) = (λx. 0).
I'm a bit confused, because in the case of apply (fold fadd_def) I'm not
even getting what I want, even when using back. Can anyone point out
why my desired instantiation of variables is not tried?
Looking quickly over the example, it looks like the expected result of
bottom-up rewriting.
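[For what it's worth, the unexpected instantiation is still a correct fold: β-reducing the folded term recovers the original body. A check by hand:]

```
fadd (λa. 0) (λa. a) 0
  = (λa. (λa. 0) a + (λa. a) a) 0    (by fadd_def)
  = (λa. 0 + a) 0                    (β-reduction)
  = 0 + 0                            (β-reduction)
```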
What you want might be something else. Depending on the application, you
can try a simproc to determine instantiations in ML, or you can
implement your own replacement strategy as a conversion in ML.
The latter is actually easier than writing common tactics, although less
well known. The starting point is src/Pure/conv.ML, and examples may be
found in existing Isabelle sources, although structure Conv is
exceptional in being used "open" in Isabelle/ML, i.e. finding its uses
requires a hypersearch with unqualified names.
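[As a rough illustration of the conversion approach, the sketch below (hypothetical name fold_fadd_tac; not part of any library) applies fadd_def right-to-left at the first position where it matches, instead of letting the Simplifier pick redexes bottom-up:]

```isabelle
ML ‹
  (* Hypothetical sketch: rewrite with fadd_def read right-to-left, at
     the outermost position where it matches.  Conv.rewr_conv performs
     a single higher-order pattern rewrite step with the given (here
     reversed) equation; Conv.top_sweep_conv searches the term
     top-down for a position where that step succeeds. *)
  fun fold_fadd_tac ctxt =
    CONVERSION
      (Conv.top_sweep_conv
        (fn _ => Conv.rewr_conv (Thm.symmetric @{thm fadd_def})) ctxt)
›

(* usage, e.g.:  apply (tactic ‹fold_fadd_tac @{context} 1›) *)
```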
Makarius
From: Larry Paulson <lp15@cam.ac.uk>
I don’t have anything to explain about the code, other than to note that the comments refer to critical pairs. This is a reminder that a set of rewrite rules can be ambiguous. Instantiating variables is one way to cope with ambiguity, and it also helps if folding the right-hand side would involve higher-order matching.
Larry Paulson
Last updated: Nov 21 2024 at 12:39 UTC