From: Hanno <hannobecker@posteo.de>
Hi,
In the context of a very large development, I am using some custom notation
and noticed that in a specific instance, removing the notation via no_notation
and then immediately adding it back again — with the same syntax and priorities — leads to a very significant (>10x) slowdown of Isabelle, even in files that do not use the notation.
Until now I was under the impression that no_notation
and notation
are inverses of each other when given the same arguments.
Can someone help me understand, and hopefully circumvent, the observed performance degradation?
My real goal is of course not to disable and immediately re-enable the same notation, but to disable it and replace it with something else — which, however, leads to the same performance drop.
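For reference, the pattern in question looks roughly as follows. This is only a minimal sketch: the constant myop and its mixfix are hypothetical stand-ins for the actual custom notation in my development.

```isabelle
theory Notation_Roundtrip
  imports Main
begin

(* Hypothetical operator standing in for the real custom notation. *)
definition myop :: "bool ⇒ bool ⇒ bool"  (infixl "+++" 60)
  where "myop a b ⟷ a ∨ b"

(* Remove the notation and immediately re-install it with the same
   syntax and priority — naively, one would expect this to be a no-op. *)
no_notation myop  (infixl "+++" 60)
notation    myop  (infixl "+++" 60)

end
```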
Thank you for your time and help,
Hanno
From: Makarius <makarius@sketis.net>
Mathematically they are normally inverse, but physically many side-conditions
apply.
Do you have a concrete example where the performance drop can be observed?
Or even better, do you have an abstract example where the problem can be
reproduced in isolation?
(If this is a bigger piece of source text, don't post it here, but provide a
suitable link for download.)
Makarius
From: Manuel Eberl <manuel@pruvisto.org>
I would just like to interject that there was that performance problem
with the p-adic numbers entry in the AFP by Aaron Crighton, where
entering and exiting locale contexts was exceptionally slow (on the
order of minutes).
I don't understand the underlying mechanisms here well enough to really
say anything substantial, but I do wonder if this might be related.
Cheers,
Manuel
Last updated: Jan 04 2025 at 20:18 UTC