From: Manuel Eberl <eberlm@in.tum.de>
Hello,
I occasionally run into problems where a simplification rule leads to
infinite reflexive rewriting, e.g. if I have a constant f and a rule
like "f ?x 1 = f 0 1", then the simplifier will loop on goals like "f x
1 = f y 1", since "f 0 1" will get rewritten to itself infinitely often.
I know how to fix this with NO_MATCH, but my question is this: why do we
not introduce a check in the simplifier that prevents it from rewriting
a term to itself? I don't know much about the simplifier, but I should
be surprised if such a check were difficult to implement, and I do not
think there would be a significant loss of performance. It also should
not break any existing proofs or code.
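[For reference, the NO_MATCH fix mentioned above can be sketched as follows. This is a hypothetical example: the constant f, its type, and the lemma name are made up for illustration, and the proof is omitted since f is uninterpreted.]

```isabelle
(* Hypothetical uninterpreted constant, as in the example above. *)
consts f :: "nat ⇒ nat ⇒ nat"

(* The naive rule "f x 1 = f 0 1" matches its own right-hand side,
   so the simplifier would rewrite "f 0 1" to itself forever.

   Guarding it with NO_MATCH makes the rule fire only when x is not
   already the literal 0, so "f 0 1" is left alone and rewriting
   terminates. *)
lemma f_to_zero [simp]: "NO_MATCH (0 :: nat) x ⟹ f x 1 = f 0 1"
  sorry  (* proof omitted: f is a hypothetical constant *)
```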
Cheers,
Manuel
From: Makarius <makarius@sketis.net>
I am not an expert on the Simplifier myself either, but by default such
blind assumptions need to be inverted. Just empirically, any tiny change
anywhere will cause lots of unexpected problems in obscure places.
This does not mean that reforms should not be done -- we do them all the time.
But it usually takes really long and hard work to push a change through.
Makarius
From: Tobias Nipkow <nipkow@in.tum.de>
I agree with Makarius, although I suspect there wouldn't be many broken proofs.
On the other hand I am sure it would impact performance noticeably. Now that we
have NO_MATCH, I am reluctant to add a special case to the simplifier code.
Tobias
Last updated: Nov 21 2024 at 12:39 UTC