From: Mathias Fleury <mathias.fleury12@gmail.com>
Hello list,
there are several simprocs that are able to cancel terms, i.e. simplify
"a + b + c = b + d" to "a + c = d".
Current problem: none of the simprocs is activated on
cancel_comm_monoid_add (i.e., the earliest point where the cancellation
is possible).
Here is the situation as far as I can tell:
1. Larry Paulson's cancellation simproc on nat/int/... (LPS in this email)
   - files: ~~/src/Provers/Arith/cancel_numerals.ML and
     ~~/Provers/Arith/cancel_numeral_factor.ML
   - works on: nat / int / ...
   - special feature: supports the product

2. Brian Huffman's group cancellation simproc (GrS)
   - file: ~~/src/HOL/Tools/group_cancel.ML
   - works on: cancel_comm_monoid_add, but activated only on groups

3. My multiset cancellation simproc (MS)
   - files (2016-1 only): ~~/src/HOL/Library/multiset*.ML
   - works on: cancel_comm_monoid_add, but activated only on multisets
   - special feature: supports repeat_mset, replicate_mset, ...

4. My generalised multiset cancellation simproc (GeMS)
   - files (devel only): ~~/src/HOL/Library/Cancellation.thy and
     ~~/src/HOL/Library/Cancellation/cancel*.ML
   - works on: cancel_comm_monoid_add, but activated only on multisets
   - special feature: supports an iterate_add operator (for multisets:
     repeat_mset, replicate_mset, ...; for natural numbers, it would be
     the product); a sketch of what iterate_add means follows below
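To make the iterate_add point concrete, here is a small sketch of what such an operator is about (an illustrative definition, not the actual one from Cancellation.thy): repeated addition, of which repeat_mset on multisets and multiplication by a numeral on nat are instances.

  primrec iterate_add :: "nat \<Rightarrow> 'a::comm_monoid_add \<Rightarrow> 'a" where
    "iterate_add 0 a = 0"
  | "iterate_add (Suc n) a = a + iterate_add n a"

  (* the property the cancellation machinery needs: iterate_add
     distributes over addition of the counters *)
  lemma iterate_add_add:
    "iterate_add (m + n) a = iterate_add m a + iterate_add n a"
    by (induct m) (simp_all add: add.assoc)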
There are some points to notice:
(1) The aim behind generalising the MS to GeMS is to support multiset
variants with constructors and special operators (like signed multisets
in $AFP/Nested_Multisets_Ordinals/Signed_Multiset.thy). There might be
other types that have an operator that behaves like a multiplication by nat.
(2) We cannot simply activate the simprocs on cancel_comm_monoid_add:
otherwise, tactics like linarith fail, because linarith relies on the
additional simplification done by LPS (like replacing ‹-1 * x› by ‹- x›).
(3) I am not sure that GrS and GeMS do the exact same thing on the same
formula.
To solve the problem, I have locally activated GeMS where I have
hard-coded that the simproc does not apply on types where LPS applies.
It worked without problem. However, I am not sure that testing the AFP
and the Isabelle distribution is sufficient to declare that (3) does not
cause problems.
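The restriction itself is nothing deep; roughly (hypothetical ML, the names lps_handles and restrict_to_non_lps are only illustrative, not the actual code):

  (* wrap a cancellation procedure so that it never fires on the types
     that LPS already takes care of *)
  fun lps_handles T = (T = @{typ nat} orelse T = @{typ int})

  fun restrict_to_non_lps proc ctxt ct =
    if lps_handles (Thm.typ_of_cterm ct) then NONE else proc ctxt ct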
Does somebody have an opinion on the matter? Or should I just proceed
and activate my GeMS simproc on all cancel_comm_monoid_add types (except
those where LPS applies)?
Best,
Mathias
From: Mathias Fleury <mathias.fleury12@gmail.com>
Hello Florian,
On 02/03/2017 10:47, Florian Haftmann wrote:
Hi Mathias,
To solve the problem, I have locally activated GeMS where I have
hard-coded that the simproc does not apply on types where LPS applies.
It worked without problem. However, I am not sure that testing the AFP
and the Isabelle distribution is sufficient to declare that (3) does not
cause problems.
Does somebody have an opinion on the matter? Or should I just proceed
and activate my GeMS simproc on all cancel_comm_monoid_add types (except
those where LPS applies)?
operationally this should be OK. But the overall design seems not very
convincing. I have no ready-to-proceed suggestion at hand, but a few thoughts:
- LPS is the only one to operate on nat / int directly rather than on an
algebraic structure. Can this be lifted, and what would the overall
picture then look like?
For nat: it cannot be lifted. The replacement from Suc to +1 is done
while iterating over the term. So no simple way to do lifting.
For int: it already works on linordered_idom.
I have to think about the general picture if we really want to reduce
duplication (with the risk of breaking proofs).
- Can there be one implementation of GeMS and LPS with a case
distinction or something like that? That would still be better than two
concurring implementations.
I tried that. It should work but getting full compatibility is very hard:
- the code is non-trivial (e.g., whether Suc is replaced by +1 is
  handled internally).
- there is some normalisation done after cancellation (I don't really
  know why).
- the simprocs on natural numbers are crucial for linarith, meaning
  that even small differences can have huge consequences.
- LPS does not always do what people would expect, e.g. in
  ~~/HOL/ex/Simproc_Tests.thy:
    (* FIXME "Suc (i + 3) \<equiv> i + 4" *)
    assume "4 * Suc 0 + i = u" have "Suc (i + 3) = u"
      by (tactic \<open>test @{context} [@{simproc nat_combine_numerals}]\<close>) fact
If we want to replace LPS, should we aim at full compatibility to avoid
breaking proofs or try to fix the FIXMEs from Simproc_Tests?
- Note that technically the cancellation simproc implementations are
notoriously outdated; the state of the art is to implement such a
simproc in a locale and then interpret it onto the desired instances.
Details can be found in Amine Chaieb and Makarius Wenzel: Context aware
Calculation and Deduction --- Ring Equalities via Gröbner Bases in
Isabelle, and in src/HOL/Semiring_Normalization.thy.
That should make the implementation a bit cleaner. I'll look into that.
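To make the idea concrete, here is a very rough sketch of how such a locale could start (the names and the exact form are only illustrative, not the actual setup from Semiring_Normalization.thy); the cancellation lemmas are proved once and become available, together with a simproc built from them, in every interpretation:

  locale cancel_monoid =
    fixes pls :: "'a \<Rightarrow> 'a \<Rightarrow> 'a" (infixl "\<oplus>" 65)
    assumes assoc: "(a \<oplus> b) \<oplus> c = a \<oplus> (b \<oplus> c)"
        and comm: "a \<oplus> b = b \<oplus> a"
        and cancel_left: "a \<oplus> b = a \<oplus> c \<longleftrightarrow> b = c"
  begin

  (* the key cancellation fact; a simproc would be built from lemmas like this *)
  lemma cancel_middle: "a \<oplus> b \<oplus> c = b \<oplus> d \<longleftrightarrow> a \<oplus> c = d"
    by (metis assoc comm cancel_left)

  end

  interpretation nat_cancel: cancel_monoid "(+) :: nat \<Rightarrow> nat \<Rightarrow> nat"
    by unfold_locales (simp_all add: ac_simps)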
Best,
Mathias
From: Lawrence Paulson <lp15@cam.ac.uk>
Historical remark: this code was written possibly before axiomatic type classes even existed and certainly before they were used to streamline our formalisation of arithmetic theories. And it’s still possible that the natural numbers are sufficiently distinctive to need their own dedicated simproc.
I don’t remember for sure, quite possibly, rewriting i+4 to Suc(i+3) is intentional, with the idea that a subsequent rewrite step will involve something of the form f(Suc k) = …k….
Streamlining and generalising this code will be a big job for somebody.
Larry Paulson
From: Mathias Fleury <mathias.fleury12@gmail.com>
Hello all,
On 03/03/2017 12:39, Lawrence Paulson wrote:
And it’s still possible that the natural numbers are sufficiently
distinctive to need their own dedicated simproc.
Not really. It should be possible to have a simproc that replaces the
Suc by ‹+1› where needed. Then the normal cancellation simproc should be
able to do the job.
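For instance (an untested sketch, only to make the idea concrete; in practice one would restrict when it fires):

  (* rewrite every ‹Suc t› to ‹t + 1›, so that the generic cancellation
     machinery only ever sees ‹+› *)
  simproc_setup suc_to_plus1 ("Suc n") = \<open>
    fn _ => fn _ => fn ct =>
      SOME (Conv.rewr_conv @{thm Suc_eq_plus1 [THEN eq_reflection]} ct)
  \<close>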
I don’t remember for sure, quite possibly, rewriting i+4 to Suc(i+3)
is intentional, with the idea that a subsequent rewrite step will
involve something of the form f(Suc k) = …k….
I think you misunderstood the comment: the simproc simplifies ‹Suc (i +
3)› into ‹4 * Suc 0 + i›. It does not do anything with ‹i+4›.
However, what you said explains why ‹Suc (i + 1)› is not simplified to
‹i+2›.
Streamlining and generalising this code will be a big job for somebody.
I agree.
Mathias
From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
Hi Mathias,
- the simprocs on natural numbers are crucial for linarith, meaning
  that even small differences can have huge consequences.
- LPS does not always do what people would expect, e.g.
  ~~/HOL/ex/Simproc_Tests.thy:
    (* FIXME "Suc (i + 3) \<equiv> i + 4" *)
    assume "4 * Suc 0 + i = u" have "Suc (i + 3) = u"
      by (tactic \<open>test @{context} [@{simproc nat_combine_numerals}]\<close>) fact
If we want to replace LPS, should we aim at full compatibility to avoid
breaking proofs or try to fix the FIXMEs from Simproc_Tests?
linarith is indeed a world on its own and IMHO it would be OK if
linarith retained its current simproc, even if that is not used anywhere
else.
For a generalized simproc it would be fine if some proofs have to be
amended, as long as the number of breaking proofs does not rise too high;
this happens routinely when maintaining proof automation. The usual
approach to figure this out is: build as much as possible from the distro
and the AFP, and iterate over the breaking proofs.
Over time you get an idea what the impact of the change is:
- How big is the amount of »awkward« breaking proofs compared to
  »streamlined« breaking proofs?
- Is there a recurring pattern for how proofs can be repaired?
- Is there yet another thing to change, e.g. concerning the simplifier
  setup, to get to a cleaner situation again?
- …
- Note that technically the cancellation simproc implementations are
notoriously outdated; the state of the art is to implement such a
simproc in a locale and then interpret it onto the desired instances.
Details can be found in Amine Chaieb and Makarius Wenzel: Context aware
Calculation and Deduction --- Ring Equalities via Gröbner Bases in
Isabelle, and in src/HOL/Semiring_Normalization.thy.
That should make the implementation a bit cleaner. I'll look into that.
I'd appreciate that.
Cheers,
Florian
From: Lawrence Paulson <lp15@cam.ac.uk>
And preferably, take the opportunity to clean up untidy proofs that are difficult to maintain.
Larry
From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
Hi Mathias,
operationally this should be OK. But the overall design seems not very
convincing.
I have no ready-to-proceed suggestion at hand, but a few thoughts:
- LPS is the only one to operate on nat / int directly rather than on an
algebraic structure. Can this be lifted, and what would the overall
picture then look like?
- Can there be one implementation of GeMS and LPS with a case
distinction or something like that? That would still be better than two
concurring implementations.
- Note that technically the cancellation simproc implementations are
notoriously outdated; the state of the art is to implement such a
simproc in a locale and then interpret it onto the desired instances.
Details can be found in Amine Chaieb and Makarius Wenzel: Context aware
Calculation and Deduction --- Ring Equalities via Gröbner Bases in
Isabelle, and in src/HOL/Semiring_Normalization.thy.
Best,
Florian