From: Makarius <makarius@sketis.net>
De facto it was an Isabelle/ML users workshop, with a little bit of an
outlook towards Isabelle/Scala (but in 2009 the latter was still very
thin). Maybe we should make some "Isabelle proof development workshop"
next time.
Anyway, I've just returned from one that was very interesting: mostly
French Coq users learning about HOL specifications and Isar proofs. See
also http://www.lri.fr/~wenzel/Isabelle_Orleans_2012/ and feel free to
re-use material you might want for your own courses next time.
Makarius
From: Peter Lammich <lammich@in.tum.de>
Hi,
my understanding of conversions is that "conv ct" either throws an
exception or returns a theorem of the form "ct == ...". Here is a
minimal example where rewr_conv actually returns a theorem that does not
satisfy this condition, and thus a subsequent then_conv fails
unexpectedly.
definition "I x \<equiv> x"
definition "A f x \<equiv> f x"
ML_val {*
  let
    open Conv
    val cv1 = fun_conv (rewr_conv @{thm I_def})
    val cv2 = rewr_conv (@{thm A_def[symmetric]})
  in
    (cv1 then_conv cv2) @{cterm "I (\<lambda>s::bool. s) True"}
  end
  (*
    *** exception THM 0 raised (line 835 of "thm.ML"):
    *** transitive: middle term
    *** I (\<lambda>s\<Colon>bool. s) True \<equiv> True
    *** True \<equiv> A (\<lambda>s\<Colon>bool. s) True
  *)
*}
The reason is that the term after applying cv1 is not beta-reduced, and
cv2 returns a theorem with a beta-reduced left-hand side. The exception
is thrown when then_conv tries to put the two theorems together.
The issue can be observed when instrumenting then_conv, e.g. as attached
to the end of this mail.
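(For completeness, a workaround sketch: interposing an explicit
beta-normalization step between the two conversions makes the composition go
through, since the middle terms then agree literally.)
ML_val {*
  let
    open Conv
    val cv1 = fun_conv (rewr_conv @{thm I_def})
    val cv2 = rewr_conv (@{thm A_def[symmetric]})
    (* beta-normalize the intermediate term before handing it to cv2 *)
    val cv = cv1 then_conv Thm.beta_conversion true then_conv cv2
  in
    cv @{cterm "I (\<lambda>s::bool. s) True"}
  end
*}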
I consider this a bug, because it is nowhere documented that the user of
conversions has to take care of beta-normalization manually. My proposal
for a solution is as follows:
What is the status of beta-equality within Isabelle?
Alternative 1) Beta-equivalent terms are considered equal: then_conv
should just work modulo beta-equivalence.
Alternative 2) They are not considered equal at this low level:
rewr_conv should be forced to return an equation whose left-hand side is the
input term.
Anyway, if neither of the two alternatives is appropriate, the rules for
composing conversions should be CLEANLY documented.
Regards,
Peter
ML_val {*
  let
    open Conv
    (* Instrumenting then_conv to make the reason for the error visible: *)
    fun (cv1 then_conv cv2) ct =
      let
        val eq1 = cv1 ct;
        val eq2 = cv2 (Thm.rhs_of eq1);
      in
        if Thm.is_reflexive eq1 then eq2
        else if Thm.is_reflexive eq2 then eq1
        else (
          tracing ("RHS1: " ^ PolyML.makestring (Thm.rhs_of eq1 |> term_of));
          tracing ("LHS2: " ^ PolyML.makestring (Thm.lhs_of eq2 |> term_of));
          Thm.transitive eq1 eq2)
      end;
  in
    let
      val cv1 = fun_conv (rewr_conv @{thm I_def})
      val cv2 = rewr_conv (@{thm A_def[symmetric]})
    in
      (cv1 then_conv cv2) @{cterm "I (\<lambda>s::bool. s) True"}
    end
  end
  (* TRACE:
    RHS1: Abs ("s", "bool", Bound 0) $ Const ("HOL.True", "bool")
    LHS2: Const ("HOL.True", "bool")
  *)
*}
From: Makarius <makarius@sketis.net>
On Mon, 22 Oct 2012, Peter Lammich wrote:
my understanding of conversions is that "conv ct" either throws an
exception or returns a theorem of the form "ct == ...". Here is a
minimal example where rewr_conv actually returns a theorem that does not
satisfy this condition, and thus a subsequent then_conv fails
unexpectedly.
definition "I x \<equiv> x"
definition "A f x \<equiv> f x"
ML_val {*
  let
    open Conv
    val cv1 = fun_conv (rewr_conv @{thm I_def})
    val cv2 = rewr_conv (@{thm A_def[symmetric]})
  in
    (cv1 then_conv cv2) @{cterm "I (\<lambda>s::bool. s) True"}
  end
  (*
    *** exception THM 0 raised (line 835 of "thm.ML"):
    *** transitive: middle term
    *** I (\<lambda>s\<Colon>bool. s) True \<equiv> True
    *** True \<equiv> A (\<lambda>s\<Colon>bool. s) True
  *)
*}
The reason is that the term after applying cv1 is not beta-reduced, and
cv2 returns a theorem with a beta-reduced left-hand side. The exception
is thrown when then_conv tries to put the two theorems together.
(ML raises exceptions, other languages like Java throw them.)
What happens here is that cv2 gets a cterm with a beta redex, but it is
rather well-known in Isabelle practice that this can cause all kinds of
trouble. So one could argue that the behaviour is correctly undefined.
I consider this a bug, because it is nowhere documented that the user of
conversions has to take care of beta-normalization manually.
Despite tons of manuals, which are often hard to keep an overview of anyway,
the real documentation is the ML source:
(* single rewrite step, cf. REWR_CONV in HOL *)
fun rewr_conv rule ct =
  let
    val rule1 = Thm.incr_indexes (#maxidx (Thm.rep_cterm ct) + 1) rule;
    val lhs = Thm.lhs_of rule1;
    val rule2 = Thm.rename_boundvars (Thm.term_of lhs) (Thm.term_of ct) rule1;
  in
    Drule.instantiate_normalize (Thm.match (lhs, ct)) rule2
      handle Pattern.MATCH => raise CTERM ("rewr_conv", [lhs, ct])
  end;
This says explicitly that your resulting rule instance will be produced by
Drule.instantiate_normalize, and will thus be in beta-normal form. So you lose
if you try to make a plain then_conv step based on something that is not
in beta-normal form.
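(For illustration, assuming the definitions from Peter's example are in scope,
the effect in isolation looks roughly as follows.)
ML_val {*
  (* rewr_conv applied to a beta-redex: the resulting equation has a
     beta-normalized left-hand side, which is not the input cterm *)
  Conv.rewr_conv @{thm A_def[symmetric]} @{cterm "(\<lambda>s::bool. s) True"}
  (* expected result, cf. the error trace above:
     "True \<equiv> A (\<lambda>s\<Colon>bool. s) True" *)
*}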
Whether this is good or bad, or a "bug", is a completely different question.
Further judgement would require careful study of how a change would impact
existing applications, and of potential interactions with the Simplifier and
the rule composition mechanisms (RS etc.). I can't say much on the spot,
apart from quoting the well-known "Mahabharata" software development
principle: "Dharma eva hato hanti / Dharmo rakshati rakshitah". In other
words, it might be easier to change your expectations about what the system
should do for you.
Seriously: I am now working a lot with Scala on the JVM platform. What
these industrial-strength guys usually do is to declare odd behaviour
official rather quickly, because too many applications depend on it.
What is the status of beta-equality within Isabelle?
Alternative 1) Beta-equivalent terms are considered equal: then_conv
should just work modulo beta-equivalence.
Alternative 2) They are not considered equal at this low level:
rewr_conv should be forced to return an equation whose left-hand side is the
input term.
The system knows alpha, beta, eta conversion.
Some layers use alpha (kernel).
Some layers use alpha + beta + eta (simplifier, resolution) but
normalize in different directions.
Insisting too much in one way or the other causes trouble.
It is possible not to insist too much.
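(As a small illustration of the first point, a sketch: the kernel's aconv
identifies terms only up to alpha-conversion, so a beta-redex is distinct from
its contractum.)
ML_val {*
  (* sketch: alpha-convertibility does not identify a beta-redex with its
     contractum, while alpha-renamed abstractions are identified *)
  (@{term "(\<lambda>s::bool. s) True"} aconv @{term "True"},      (* false *)
   @{term "\<lambda>x::bool. x"} aconv @{term "\<lambda>y::bool. y"})  (* true *)
*}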
What is your concrete application anyway?
Makarius
From: Lawrence Paulson <lp15@cam.ac.uk>
This module was written long ago for a specific, internal purpose and was never intended for general use, so it's hard to say what is a bug and what isn't. The beauty of conversions is that they are highly modular, so you can write your own primitives that do exactly what you want.
Larry Paulson
From: Thomas Sewell <Thomas.Sewell@nicta.com.au>
An additional beauty of the Isabelle library is how often you get to
build your own primitives because none of the existing ones do what you
would expect.
I have this sitting around somewhere:
fun fix_conv conv ct =
  let
    val thm = conv ct
    val eq = Logic.mk_equals (term_of ct, term_of ct) |> head_of
  in
    if term_of (Thm.lhs_of thm) aconv term_of ct
    then thm
    else thm RS trivial
      (Thm.mk_binop (cterm_of (theory_of_cterm ct) eq) ct (Thm.rhs_of thm))
  end
That fixes the problem with rewr_conv, so fix_conv Conv.rewr_conv does
what you would expect Conv.rewr_conv to do.
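(In Peter's minimal example, the intended usage would presumably be along
these lines.)
ML_val {*
  let
    open Conv
    val cv1 = fun_conv (rewr_conv @{thm I_def})
    (* wrap the second step, whose instantiated lhs would otherwise be
       beta-normalized away from the input term *)
    val cv2 = fix_conv (rewr_conv (@{thm A_def[symmetric]}))
  in
    (cv1 then_conv cv2) @{cterm "I (\<lambda>s::bool. s) True"}
  end
*}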
Yours,
Thomas.
From: Brian Huffman <huffman@in.tum.de>
We should not interpret Peter's original post as the question, "how
can I make my concrete application work"; rather, we should treat it
as a proposal to refine the behavior of the Isabelle conversion
library.
Peter neatly stated the behavior that we all expect of conversions:
On Mon, 22 Oct 2012, Peter Lammich wrote:
my understanding of conversions is that "conv ct" either throws an
exception or returns a theorem of the form "ct == ...".
The current situation is this: Conv.rewr_conv only satisfies this
property if a side-condition is met, namely that the input cterm must
be already in beta-normal form.
The proposal is to modify rewr_conv to remove the side-condition: It
should satisfy the basic property also for non-beta-normal cterms. On
beta-normal cterms, rewr_conv should behave exactly as it did before.
(Thomas's fix_conv would be one possible implementation of this
proposal.)
So far, I haven't seen any good arguments against this change.
On Mon, Oct 22, 2012 at 9:27 PM, Makarius <makarius@sketis.net> wrote:
What happens here is that cv2 gets a cterm with a beta redex, but it is rather
well-known in Isabelle practice that this can cause all kinds of trouble.
So one could argue that the behaviour is correctly undefined.
Makarius argues that it is reasonable to have low expectations of the
conversion library. Yes, but this is no argument against making the
library exceed his low expectations of it.
Despite tons of manuals, which are often hard to keep an overview of anyway,
the real documentation is the ML source:
Reading the ML source, we also find various low-level conversions in thm.ML:
val beta_conversion: bool -> conv
val eta_conversion: conv
val eta_long_conversion: conv
Their existence clearly indicates that conversions are not intended to
work modulo beta- or eta-equivalence.
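(For illustration, a sketch of how these behave: they produce explicit
equations rather than identifying terms up to beta/eta.)
ML_val {*
  (* beta_conversion true normalizes all beta-redexes; eta_conversion
     contracts eta-redexes *)
  (Thm.beta_conversion true @{cterm "(\<lambda>s::bool. s) True"},   (* "(\<lambda>s. s) True \<equiv> True" *)
   Thm.eta_conversion @{cterm "\<lambda>x::bool. Not x"})             (* "(\<lambda>x. Not x) \<equiv> Not" *)
*}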
If this is good or bad, or a "bug" is a completely different question.
Further judgement would require careful studies how a change would impact
existing application, and potential interaction with the Simplifier and the
rule composition mechanisms (RS etc.)
[...]
Seriously: I am now working a lot with Scala on the JVM platform. What
these industrial-strength guys usually do is to declare odd behaviour
official rather quickly, because too many applications depend on it.
A quick search reveals that very little code within Isabelle depends
on rewr_conv; in particular, the simplifier and rule composition
mechanisms like RS predate the conversion library.
Furthermore, remember that we are only proposing to change the
behavior of rewr_conv on non-beta-normal input. I seriously doubt
that any existing code actually depends on the current behavior of
rewr_conv on non-beta-normal cterms. In any case, it is easy enough to
run the usual test suite before committing a change.
The system knows alpha, beta, eta conversion.
Some layers use alpha (kernel).
Some layers use alpha + beta + eta (simplifier, resolution) but
normalize in different directions.
As I pointed out above, the existence of Thm.{beta,eta}_conversion
makes it clear that conversions do not use beta + eta.
Insisting too much in one way or the other causes trouble.
It is possible not to insist too much.
What is your concrete application anyway?
This is good advice for users of Isabelle/ML libraries: Don't expect
too much. For developers of Isabelle/ML libraries, this is no
argument against making improvements.
On 23/10/12 07:40, Lawrence Paulson wrote:
This module was written long ago for a specific, internal purpose and was
never intended for general use, so it's hard to say what is a bug and what
isn't. The beauty of conversions is that they are highly modular, so you can
write your own primitives that do exactly what you want.
If the module was never intended for general use, then we shouldn't be
so worried about breaking user code by modifying the module. I would
read this as an argument in favor of updating the conversion library
(or at least an argument against not updating it).
From: Peter Lammich <lammich@in.tum.de>
I consider this a bug, because it is nowhere documented that the user of
conversions has to take care of beta-normalization manually.
Despite tons of manuals, which are often hard to keep an overview of anyway,
the real documentation is the ML source:
(* single rewrite step, cf. REWR_CONV in HOL *)
fun rewr_conv rule ct =
  let
    val rule1 = Thm.incr_indexes (#maxidx (Thm.rep_cterm ct) + 1) rule;
    val lhs = Thm.lhs_of rule1;
    val rule2 = Thm.rename_boundvars (Thm.term_of lhs) (Thm.term_of ct) rule1;
  in
    Drule.instantiate_normalize (Thm.match (lhs, ct)) rule2
      handle Pattern.MATCH => raise CTERM ("rewr_conv", [lhs, ct])
  end;
This says explicitly that your resulting rule instance will be produced by
Drule.instantiate_normalize, and will thus be in beta-normal form. So you lose
if you try to make a plain then_conv step based on something that is not
in beta-normal form.
So what normalizations does instantiate_normalize perform (beta? eta?
beta-eta? how deep?)? Looking at the source code does not really help
here:
fun instantiate_normalize instpair th =
  Thm.adjust_maxidx_thm ~1 (Thm.instantiate instpair th COMP_INCR asm_rl);
Even looking 4 levels deep from here (COMP_INCR -> COMP -> compose ->
bicompose), one finds no comment mentioning normal forms, nor any
function whose name would suggest any particular kind of normalization.
Whether this is good or bad, or a "bug", is a completely different question.
Further judgement would require careful study of how a change would impact
existing applications, and of potential interactions with the Simplifier and
the rule composition mechanisms (RS etc.). I can't say much on the spot,
apart from quoting the well-known "Mahabharata" software development
principle: "Dharma eva hato hanti / Dharmo rakshati rakshitah". In other
words, it might be easier to change your expectations about what the system
should do for you.
As Brian already mentioned: If there is an easy change that does not
break too much and makes the system arguably more elegant, why not just
apply it?
- Insisting too much in one way or the other causes trouble.
As my example shows, I have to insist on one way or the other (either
using something like fix_conv, or manually inserting beta_conv after all
conversions that may produce beta-redexes).
It is possible not to insist too much.
What is your concrete application anyway?
Something similar to the minimal example. I need to do some
controllable rewriting in a term, replacing patterns like
"fun_upd f k (Some v)" by "op_map_update$k$v$f" if certain
side-conditions on f, k, and v are met (the infixr $ is a custom constant
defined as f$x == f x). Another pattern was replacing
"insert" by "\lambda x s. op_insert$x$s", which caused the beta-redex,
as I first replaced it by "\lambda x s. op_insert x s" and then wanted
to insert the $.
In my concrete case, it was easy to fix by applying beta_conv at the
right places. However, I wondered why conversions do not meet the simple
expectation that most of us seem to have (see Brian's mail). Indeed, it
took me more than an hour to track down why my code did not work, as
I implicitly assumed that the standard conversions behave as expected,
and therefore searched in the wrong places first.
From: Tobias Nipkow <nipkow@in.tum.de>
Peter's point is that the given conversions are not as modular as one may wish
them to be and that it would be nice to improve on that. And as Thomas pointed
out, improvement in user space is suboptimal.
Tobias
From: Ondřej Kunčar <kuncar@in.tum.de>
Peter has already mentioned it but I want to do it more explicitly
because this thread clearly shows how the canonical "look at the ML
source"-approach fails.
ML source is not the real documentation!
How should I know that instantiate_normalize does beta-normalization?
By looking at this?
fun bicompose_aux flatten match (state, (stpairs, Bs, Bi, C), lifted)
(eres_flg, orule, nsubgoal) =
let val Thm (sder, {maxidx=smax, shyps=sshyps, hyps=shyps, ...}) = state
and Thm (rder, {maxidx=rmax, shyps=rshyps, hyps=rhyps,
tpairs=rtpairs, prop=rprop,...}) = orule
(*How many hyps to skip over during normalization*)
and nlift = Logic.count_prems (strip_all_body Bi) + (if eres_flg
then ~1 else 0)
val thy = Theory.deref (merge_thys2 state orule);
(*Add new theorem with prop = '[| Bs; As |] ==> C' to thq*)
fun addth A (As, oldAs, rder', n) ((env, tpairs), thq) =
let val normt = Envir.norm_term env;
(*perform minimal copying here by examining env*)
val (ntpairs, normp) =
if Envir.is_empty env then (tpairs, (Bs @ As, C))
else
let val ntps = map (pairself normt) tpairs
in if Envir.above env smax then
(*no assignments in state; normalize the rule only*)
if lifted
then (ntps, (Bs @ map (norm_term_skip env nlift) As, C))
else (ntps, (Bs @ map normt As, C))
else if match then raise COMPOSE
else (*normalize the new rule fully*)
(ntps, (map normt (Bs @ As), normt C))
end
val th =
Thm (deriv_rule2
((if Envir.is_empty env then I
else if Envir.above env smax then
(fn f => fn der => f (Proofterm.norm_proof' env
der))
else
curry op oo (Proofterm.norm_proof' env))
(Proofterm.bicompose_proof flatten Bs oldAs As A n
(nlift+1))) rder' sder,
{tags = [],
maxidx = Envir.maxidx_of env,
shyps = Envir.insert_sorts env (Sorts.union rshyps
sshyps),
hyps = union_hyps rhyps shyps,
tpairs = ntpairs,
prop = Logic.list_implies normp,
thy_ref = Theory.check_thy thy})
in Seq.cons th thq end handle COMPOSE => thq;
val (rAs,B) = Logic.strip_prems(nsubgoal, [], rprop)
handle TERM _ => raise THM("bicompose: rule", 0, [orule,state]);
(*Modify assumptions, deleting n-th if n>0 for e-resolution*)
fun newAs(As0, n, dpairs, tpairs) =
let val (As1, rder') =
if not lifted then (As0, rder)
else
let val rename = rename_bvars dpairs tpairs B As0
in (map (rename strip_apply) As0,
deriv_rule1 (Proofterm.map_proof_terms (rename K) I) rder)
end;
in (map (if flatten then (Logic.flatten_params n) else I) As1,
As1, rder', n)
handle TERM _ =>
raise THM("bicompose: 1st premise", 0, [orule])
end;
val env = Envir.empty(Int.max(rmax,smax));
val BBi = if lifted then strip_assums2(B,Bi) else (B,Bi);
val dpairs = BBi :: (rtpairs@stpairs);
(*elim-resolution: try each assumption in turn*)
fun eres [] = raise THM ("bicompose: no premises", 0, [orule, state])
| eres (A1 :: As) =
let
val A = SOME A1;
val (close, asms, concl) = Logic.assum_problems (nlift +
1, A1);
val concl' = close concl;
fun tryasms [] _ = Seq.empty
| tryasms (asm :: rest) n =
if Term.could_unify (asm, concl) then
let val asm' = close asm in
(case Seq.pull (Unify.unifiers (thy, env, (asm',
concl') :: dpairs)) of
NONE => tryasms rest (n + 1)
| cell as SOME ((_, tpairs), _) =>
Seq.it_right (addth A (newAs (As, n, [BBi,
(concl', asm')], tpairs)))
(Seq.make (fn () => cell),
Seq.make (fn () => Seq.pull (tryasms rest
(n + 1)))))
end
else tryasms rest (n + 1);
in tryasms asms 1 end;
(*ordinary resolution*)
fun res () =
(case Seq.pull (Unify.unifiers (thy, env, dpairs)) of
NONE => Seq.empty
| cell as SOME ((_, tpairs), _) =>
Seq.it_right (addth NONE (newAs (rev rAs, 0, [BBi], tpairs)))
(Seq.make (fn () => cell), Seq.empty));
in
if eres_flg then eres (rev rAs) else res ()
end;
From: Makarius <makarius@sketis.net>
I think you refer to an even older version of the module, before my time
working on the Isabelle code base. The current src/Pure/conv.ML is a
reconstruction from historic sources and the HOL counterparts that was
initiated by Amine Chaieb some years ago, and in which I also engaged
myself, doing some research on the old LCF/HOL sources. Then we tried to push
it forward in time into contemporary Isabelle use.
Makarius
From: Makarius <makarius@sketis.net>
You should explain what "_ RS trivial" does. It is the conventional way
to let Larry's rule composition calculus do its normalization for you.
Thinking in terms of "broken" and "fixed" is unwise. Which is actually my
main complaint on this thread.
Makarius
From: Tobias Nipkow <nipkow@in.tum.de>
This is the essence: rewr_conv only satisfies
"A conversion is any function that maps a term t to a theorem |-t==u."
LCP, 1983
up to beta-equivalence. Since conversions are low-level proof methods which are
sensitive to the precise term structure, this is a wart that causes
non-modularity. This non-modularity only shows up (to refine Brian's analysis)
if the lhs of the rewrite rule has a free variable F in an applied position: F
t. This does not happen very often, but if it does, rewr_conv should still
behave nicely. It should do a non-normalizing instantiation followed by a
beta-normalization.
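(One possible reading of that suggestion, as a rough sketch only and under the
hypothetical name rewr_conv': keep the matching of Conv.rewr_conv, instantiate
with plain Thm.instantiate, and beta-normalize just the right-hand side of the
resulting equation.)
ML {*
  (* rough sketch, not an actual patch: a variant of Conv.rewr_conv that
     instantiates without normalization and beta-normalizes only the rhs *)
  fun rewr_conv' rule ct =
    let
      val rule1 = Thm.incr_indexes (#maxidx (Thm.rep_cterm ct) + 1) rule;
      val lhs = Thm.lhs_of rule1;
      val rule2 = Thm.rename_boundvars (Thm.term_of lhs) (Thm.term_of ct) rule1;
      val rule3 =
        Thm.instantiate (Thm.match (lhs, ct)) rule2
          handle Pattern.MATCH => raise CTERM ("rewr_conv", [lhs, ct]);
    in Conv.fconv_rule (Conv.arg_conv (Thm.beta_conversion true)) rule3 end;
*}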
Tobias
From: Makarius <makarius@sketis.net>
On Tue, 23 Oct 2012, Brian Huffman wrote:
On Mon, 22 Oct 2012, Peter Lammich wrote:
my understanding of conversions is, that "conv ct" either throws an
exception or returns a theorem of the form "ct == ...".The current situation is this: Conv.rewr_conv only satisfies this
property if a side-condition is met, namely that the input cterm must
be already in beta-normal form.The proposal is to modify rewr_conv to remove the side-condition: It
should satisfy the basic property also for non-beta-normal cterms. On
beta-normal cterms, rewr_conv should behave exactly as it did before.
(Thomas's fix_conv would be one possible implementation of this
proposal.)
This is the point where one would have to start thinking and looking
carefully. So far the attitudes towards the system have been far too crude
to even think of it.
Makarius argues that it is reasonable to have low expectations of the
conversion library. Yes, but this is no argument against making the
library exceed his low expectations of it.
Misunderstanding again: wrong attitudes towards what "good or bad", "wrong or
right", "broken or fixed" mean. The incident is rather trivial here, but
the general principles are very important.
Reading the ML source, we also find various low-level conversions in thm.ML:
val beta_conversion: bool -> conv
val eta_conversion: conv
val eta_long_conversion: conv
Their existence clearly indicates that conversions are not intended to
work modulo beta- or eta-equivalence.
Here you also need to look at the history. When the type conv was
re-introduced after a long time, I merely tried to re-integrate many
existing functions into the framework, without assuming too much semantics
behind it.
Furthermore, remember that we are only proposing to change the behavior
of rewr_conv on non-beta-normal input. I seriously doubt that any
existing code actually depends on the current behavior of rewr_conv on
non-beta-normal cterms. In any case, it is easy enough to run the usual
test suite before committing a change.
It will probably not be too hard, but such things have to be done with the
proper attitude and care. The "bug" faction on this thread lacks that.
This is good advice for users of Isabelle/ML libraries: Don't expect
too much.
I never said that. You should get your expectations right and realistic.
The code base is generally not so bad, but after > 25 years it defines its
own rules. So as long as you are standing outside somewhere and
pretending to know how to "fix" it, it won't work for you.
Makarius
From: Makarius <makarius@sketis.net>
You also need to include its history.
It is a matter of long practice and endurance to make sense out of all
that. It is important to make a serious start, by giving up what you
think about it on first impulse.
Makarius
From: Makarius <makarius@sketis.net>
Alternatively one could do the normalization beforehand, on the input term.
Empirically, higher-order matching can produce surprises if applied to
terms with beta redexes; it is normally never used like that in the
existing tools. So it is better not to give non-normal stuff to the match
operation.
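(A sketch of that alternative, with the hypothetical name norm_rewr_conv:
beta-normalize the input first, then apply the single rewrite step; then_conv
chains the two equations so the result still starts at the original term.)
ML {*
  local open Conv in
    (* sketch: normalize the input term before the single rewrite step, so
       higher-order matching never sees a beta redex *)
    fun norm_rewr_conv rule = Thm.beta_conversion true then_conv rewr_conv rule
  end
*}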
Again, I don't expect too many fundamental problems in this rather small
incident, but one has to approach it with the proper mindset about how the
Isabelle code base (and its history) works.
(Next time I will tell a story how an efficient and fully verified
merge-sort function included in the core sources caused several days of
worries.)
Makarius
From: Tobias Nipkow <nipkow@in.tum.de>
I just tried my suggestion and it seems to break HOL. I'll investigate more.
Tobias
From: Makarius <makarius@sketis.net>
Larry should be able to tell you.
It is again the same technique to let rule composition perform some
normalization. The above function goes back to Stefan Berghofer, IIRC.
I merely changed its name Drule.instantiate ~> Drule.instantiate_normalize
recently, to make it a bit more explicit what was meant here.
Makarius
From: Makarius <makarius@sketis.net>
I would have given it a > 50% chance of not breaking immediately, since
rewrite_conv is relatively new back in Isabelle/HOL.
Anyway, whoever thinks something needs to be changed in a certain way, the
first move is to make some empirical studies to test the hypothesis
against historical reality.
Makarius
From: Lawrence Paulson <lp15@cam.ac.uk>
My guess (I don't have time to study the code) is that this will perform beta-eta normalisation.
As a historical remark, I'm quite attached to conversions, which were the topic of my very first journal article:
L. C. Paulson.
A higher-order implementation of rewriting. Science of Computer Programming 3 (1983), 119–149.
They made a lot of sense in the context of LCF and HOL, where users routinely wrote code as part of the verification process. Although I included this code in Isabelle, it didn't have a central role and I thought it'd got deleted. Obviously it would be sensible to correct any deficiencies or omissions. But I'm still intrigued regarding what sort of application you could have for them.
Larry
From: Makarius <makarius@sketis.net>
They found again some uses in relatively small-scale tinkering with
sub-term structure, things that would be hard with the Simplifier engine
and its builtin strategies. This is not so much compared to the
ancient times and past glory of LCF conversions.
What I found quite interesting was the presentation by Stefan Berghofer of
Isabelle conversions at the 2009 isabelle-dev workshop at Munich. He
first showed how to make a simplifier with a few conversions and
conversion combinators, then showed how to make it a little faster using a
"Boultonized" version of the same (like "Q" conversions in HOL), and then
showed how to make it really fast using the Isabelle Simplifier techniques
that came after conversions so many years ago.
Makarius
From: Brian Huffman <huffman@in.tum.de>
I can't speak for Peter, but I am currently using conversions in a few places.
My main application is for writing simprocs: See e.g.
HOL/Tools/group_cancel.ML or nat_arith.ML. Rewriting these
cancellation simprocs to use conversions made the code significantly
shorter, simpler, more efficient, and more reliable. There used to be
a few simprocs that would sometimes return an equation whose
left-hand-side did not match the input term; using conversions ensures
that this will not happen.
I would definitely recommend conversions to anyone thinking about
writing a simproc.
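(The general shape of this, as a rough sketch rather than a quote from those
files, and assuming the usual simproc interface morphism -> simpset -> cterm ->
thm option: the simproc body just runs a conversion and reports reflexive or
failing results as "no rewrite".)
ML {*
  (* sketch: wrapping a conversion as a simproc-style procedure; by
     construction the returned equation's lhs is the simproc's input term *)
  fun simproc_of_conv conv _ _ ct =
    let val eq = conv ct
    in if Thm.is_reflexive eq then NONE else SOME eq end
    handle CTERM _ => NONE | THM _ => NONE;
*}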
I am also using conversions inside the transfer package, to do some
preprocessing steps. For this purpose, conversions are more
predictable and more customizable than using the simplifier.
From: Lawrence Paulson <lp15@cam.ac.uk>
That application is very interesting! And one can argue that a simproc is a very similar concept to a conversion, anyway.
Larry
From: Tobias Nipkow <nipkow@in.tum.de>
The problem comes from my matching code, which eta-expands even when it does
not need to. Substituting such a matcher back into the pattern creates new
beta-redexes that were not there beforehand. This is more subtle than expected.
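(Schematically, with hypothetical names, the effect is something like this: an
eta-expanded assignment for ?F, substituted into the applied position ?F x,
produces a redex that was not in the pattern.)
ML_val {*
  let
    (* hypothetical pattern  ?F x  with  ?F :: bool => bool *)
    val F = Var (("F", 0), @{typ "bool => bool"});
    val pat = F $ Free ("x", @{typ bool});
    (* an eta-expanded matcher  ?F := (\<lambda>y. f y)  instead of just  ?F := f *)
    val matcher = Abs ("y", @{typ bool}, Free ("f", @{typ "bool => bool"}) $ Bound 0);
  in
    Term.subst_atomic [(F, matcher)] pat   (* = (\<lambda>y. f y) $ x, a new beta-redex *)
  end
*}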
In contrast, it seems that Thomas' solution (fix_conv) does the job, but it is a
bit on the brutal side. I'll see if I can come up with something less brutal.
Tobias
From: Thomas Sewell <Thomas.Sewell@nicta.com.au>
fun fix_conv conv ct =
  let
    val thm = conv ct
    val eq = Logic.mk_equals (term_of ct, term_of ct) |> head_of
  in
    if term_of (Thm.lhs_of thm) aconv term_of ct
    then thm
    else thm RS trivial
      (Thm.mk_binop (cterm_of (theory_of_cterm ct) eq) ct (Thm.rhs_of thm))
  end
You should explain what "_ RS trivial" does. It is the conventional way
to let Larry's rule composition calculus do its normalization for you.
Admittedly this is not beautiful code. I just happened to have it.
What "thm RS trivial P" does is constructs the trivial theorem "P ==> P"
and then discharges its assumption using thm.
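(For illustration, assuming trivial here is Thm.trivial, with an arbitrary
equation between hypothetical frees x and y as the P.)
ML_val {*
  (* Thm.trivial turns a cterm P of type prop into the theorem "P ==> P" *)
  Thm.trivial @{cprop "x \<equiv> (y::bool)"}
*}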
The point here is to specify the conclusion we want (Thm.mk_binop ...)
with the original cterm on the left hand side, construct a theorem with
that conclusion, and then "prove" it by resolution with a theorem we
have (which is alpha-beta-eta-equivalent). Yes, that is a big hammer to
apply, but it won't be applied all that often. The major cost here is
checking whether it is needed all the time.
Thinking in terms of "broken" and "fixed" is unwise. Which is actually my
main complaint on this thread.
Some things are neither broken nor fixed; they are just surprising, but
surprising for good reason. This is broken. As others have pointed out,
there is a clear contract for convs to follow so that then_conv can
chain them using Thm.transitive. This contract is not followed by rewr_conv.
This is one solution. An alternative would be to broaden the contract by
having then_conv catch the exception from Thm.transitive and switch to a
more general mechanism (possibly by doing the above). This might create
problems for some other consumers of convs, which I have not taken time
to investigate.
Yours,
Thomas.
From: Christian Sternagel <c-sterna@jaist.ac.jp>
Does anybody know whether slides/thy-files of this talk are still
around? (I did not manage to find a website for any of the Isabelle
Users Workshops besides 2012.) - cheers chris
From: Andreas Lochbihler <andreas.lochbihler@kit.edu>
Hi Christian,
the website still has the theory files.
http://isabelle.in.tum.de/nominal/activities/tphols09/idw.html
Andreas
From: Christian Sternagel <c-sterna@jaist.ac.jp>
Thanks!
No wonder that searching for "Isabelle Users Workshop" did not give the
desired result ;)... I forgot that in 2009 it was a "Developers Workshop".
cheers
chris
From: Christian Urban <christian.urban@kcl.ac.uk>
I do not have the slides of this talk it seems, but
the theory Stefan presented in 2009 is here:
http://isabelle.in.tum.de/nominal/activities/tphols09/IDW/Conversions.thy
The workshops in 2009 and 2010 have their webpages here:
http://isabelle.in.tum.de/nominal/activities/tphols09/idw.html
http://isabelle.in.tum.de/nominal/activities/idw10/idw.html
Christian
From: Mathieu Giorgino <Mathieu.Giorgino@irit.fr>
It seems most of the links on this page are broken. An easy fix is to
replace all occurrences of
"tphols.in.tum.de"
by
"isabelle.in.tum.de/nominal/activities/tphols09/"
in the html file.
Thanks for the link.