From: "Mulder, Ike" <cl-isabelle-users@lists.cam.ac.uk>
Hello,
We’ve been seeing performance problems where just the theory … imports … begin
header is taking anywhere from 5-20 seconds.
The second bottleneck we’re seeing is in the merging of elimination rules of the Claset of two theories.
The merging happens here<https://isabelle.sketis.net/repos/isabelle/file/tip/src/Provers/classical.ML#l604> – merging safe elimination rules specifically turns out to be slow in our case. There are two reasons for that, I think:
1. The complexity of merging the rules seems higher than it should be
2. We apparently have a lot (about 20000) safe elimination rules. Most of these are automatically added when declaring datatypes.
The safe elimination rules added by new datatypes have the form caseI = caseJ ==> R for all i != j. Therefore, this mainly becomes problematic when defining enums with a lot of cases: the number of safe elimination rules added is quadratic in the number of cases in the enum. The addition of these elimination rules happens here<https://isabelle.sketis.net/repos/isabelle/file/tip/src/HOL/Tools/Ctr_Sugar/ctr_sugar.ML#l1083>.
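To make the growth concrete, here is a small back-of-the-envelope sketch (not part of the original message; the function name is made up) counting one rule per ordered pair of distinct constructors:

```python
def safe_elim_rule_count(num_constructors: int) -> int:
    """Rules of the form "caseI = caseJ ==> R" exist for every ordered
    pair of distinct constructors i != j: n * (n - 1) of them, i.e.
    quadratic in the number of cases."""
    n = num_constructors
    return n * (n - 1)

# A 60-case enum (like the example later in this thread) already
# produces thousands of such rules:
print(safe_elim_rule_count(60))  # 3540
```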
As for 1), I think the complexity could be improved, because merging simpsets still seems to be fast, even in the presence of simp rules that scale quadratically in the number of cases in an enum.
For now, I have written a small scrappy plugin that deletes these safe elimination rules for newly defined datatypes. That seems to improve performance a lot.
Best, Ike
From: Manuel Eberl <manuel@pruvisto.org>
I recall that there were some big performance problems in one of the AFP
entries on p-adic numbers as well. Switching in and out of locale
contexts sometimes takes a full minute. I wonder if this is related to
some of the issues you mention.
Manuel
On 20/06/2025 14:17, "Mulder, Ike" (via cl-isabelle-users Mailing List) wrote:
From: Lawrence Paulson <lp15@cam.ac.uk>
Plenty of people have large datatype declarations. We need to find a way of handling them that doesn't generate a quadratic amount of anything. As a stopgap, having the option of just relying on the (quadratically many) simprules would be good.
The alternative of generating a simproc is probably good only for large datatypes, since having lots of simprocs floating around causes its own issues.
Larry
From: cl-isabelle-users-request@lists.cam.ac.uk <cl-isabelle-users-request@lists.cam.ac.uk> on behalf of "Mulder, Ike" <cl-isabelle-users@lists.cam.ac.uk>
Sent: 20 June 2025 13:17
To: cl-isabelle-users@lists.cam.ac.uk <cl-isabelle-users@lists.cam.ac.uk>
Subject: [isabelle] Merging elimination rules of Claset is slow - performance issue loading multiple theories
From: Jasmin Blanchette <jasmin.blanchette@ifi.lmu.de>
I believe the behavior of mutually recursive datatypes might be worse than quadratic. (For codatatypes, that's definitely the case.) Perhaps Dmitriy could comment on that.
I cannot resist inserting an anecdote here. Last week, I tried out LeanHammer on the simplest goals I could imagine: the subgoals emerging from an induction proving "xs ~= x # xs". LeanHammer failed to prove "[] ~= [] # xs", because the quadratic distinct rules are not generated as theorems in Lean (presumably because of their quadratic nature).
Lesson: Thinking about "simp" is good, but we should also keep Sledgehammer in mind. :)
Jasmin
--
Prof. Dr. Jasmin Blanchette
Chair of Theoretical Computer Science and Theorem Proving
Ludwig-Maximilians-Universität München
Oettingenstr. 67, 80538 München, Germany
Tel.: +49 (0)89 2180 9341
Web: https://www.tcs.ifi.lmu.de/mitarbeiter/jasmin-blanchette_de.html
On 30. Jun 2025, at 14:20, Lawrence Paulson <lp15@cam.ac.uk> wrote:
From: Fabian Huch <huch@in.tum.de>
What about generating a function that enumerates the cases of the datatype, and having a single distinctness theorem for that function? Then only the proof of that theorem is expensive, and it can be used by Sledgehammer (but proof search might take a bit longer).
Fabian
On 6/30/25 15:20, Jasmin Blanchette wrote:
From: Jasmin Blanchette <jasmin.blanchette@ifi.lmu.de>
You mean something like
myfunc Nil = 0
myfunc (Cons _ _) = 1
? If you want to investigate this, be my guest. :) Otherwise, I'll keep it as a project idea for a BSc thesis.
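A minimal Lean sketch of the encoding under discussion (the type and names here are hypothetical; the thread's setting is Isabelle, but the idea carries over): one linear-size function from constructors to numbers, plus a single generic lemma, can stand in for the quadratic family of distinctness theorems.

```lean
-- Hypothetical 3-case enum standing in for a large datatype.
inductive T | A | B | C

-- One linear-size "enumeration" function (Fabian's myfunc / HOL4's ord_t).
def T.ord : T → Nat
  | .A => 0
  | .B => 1
  | .C => 2

-- A single distinctness principle for the whole type:
theorem T.ne_of_ord_ne {x y : T} (h : T.ord x ≠ T.ord y) : x ≠ y :=
  fun e => h (congrArg T.ord e)

-- Any individual distinctness fact now follows by evaluating ord:
example : T.A ≠ T.C := T.ne_of_ord_ne (by decide)
```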
Jasmin
On 30. Jun 2025, at 15:44, Fabian Huch <huch@in.tum.de> wrote:
From: Fabian Huch <huch@in.tum.de>
Exactly. This sounds like a good thesis project.
Fabian
On 6/30/25 15:56, Jasmin Blanchette wrote:
From: Tobias Nipkow <nipkow@in.tum.de>
I have the vague memory that that's what the HOL4 guys do: an "ord_t" function, and a threshold for ord_t to kick in and replace the quadratically many rules (which, as Jasmin pointed out, are much better for Sledgehammer).
Tobias
On 30/06/2025 15:44, Fabian Huch wrote:
From: Dmitriy Traytel <cl-isabelle-users@lists.cam.ac.uk>
Hi Jasmin, all,
It is correct that the behavior of the internal type construction for (co)datatypes is worse than quadratic in the number of mutual types and type variables. However, the number of constructors does not influence the internal construction much (there one works with a single constructor anyway). For example, for the type below the internal construction takes 56 milliseconds, whereas producing the rest (including 3540 high-level distinctness theorems) takes 27 seconds on my machine (with parallelism disabled for more precise time measurements).
ML ‹Multithreading.parallel_proofs := 0›
declare [[bnf_timing]]
datatype a =
A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | A10 |
B1 | B2 | B3 | B4 | B5 | B6 | B7 | B8 | B9 | B10 |
C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 |
D1 | D2 | D3 | D4 | D5 | D6 | D7 | D8 | D9 | D10 |
E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 | E9 | E10 |
F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10
But distinctness is only one of the culprits (and as discussed might be useful for proof automation): disabling the plugins brings the time down to 8 seconds:
datatype (plugins only:) a = …
Depending on the use case, one can also try to selectively disable plugins.
Dmitriy
On 30 Jun 2025, at 15.20, Jasmin Blanchette <jasmin.blanchette@ifi.lmu.de> wrote:
From: Makarius <makarius@sketis.net>
On 21/06/2025 15:35, Manuel Eberl wrote:
I recall that there were some big performance problems in one of the AFP
entries on p-adic numbers as well. Switching in and out of locale contexts
sometimes takes a full minute. I wonder if this is related to some of the
issues you mention.
Performance problems need constructive existence proofs, otherwise they don't exist.
Can you point to specific locations in AFP?
Makarius
From: Makarius <makarius@sketis.net>
On 20/06/2025 14:17, "Mulder, Ike" (via cl-isabelle-users Mailing List) wrote:
We’ve been seeing performance problems where just the
theory … imports … begin
header is taking anywhere from 5-20 seconds.
I would not call this slow ...
Many years ago we've seen theory merges > 30 seconds occasionally, without
getting worried.
The second bottleneck we’re seeing is in the merging of elimination rules of the Claset of two theories. The merging happens here <https://isabelle.sketis.net/repos/isabelle/file/tip/src/Provers/classical.ML#l604> – merging safe elimination rules specifically turns out to be slow in our case. There are two reasons for that, I think:
- The complexity of merging the rules seems higher than it should be
- We apparently have a lot (about 20000) safe elimination rules. Most of these are automatically added when declaring datatypes
Whenever there is a performance bottleneck, it needs to be "proven
constructively" by some significant examples. This is important to reproduce
the problem, and cross-check any attempts to improve the situation.
Anything else is just speculation ...
The safe elimination rules added by new datatypes have the form caseI = caseJ ==> R for all i != j. Therefore, this mainly becomes problematic when defining enums with a lot of cases: the number of safe elimination rules added is quadratic in the number of cases in the enum. The addition of these elimination rules happens here <https://isabelle.sketis.net/repos/isabelle/file/tip/src/HOL/Tools/Ctr_Sugar/ctr_sugar.ML#l1083>.
As for 1), I think the complexity could be improved because merging simpsets still seems to be fast, even in the presence of simp rules that scale quadratically in the number of cases in an enum.
For now, I have written a small scrappy plugin that deletes these safe elimination rules for newly defined datatypes. That seems to improve performance a lot.
Now the focus has changed to datatypes. This raises a few more questions:
* Is the claset merge operation really slow, or are the involved clasets
just very big?
* Could the datatype package do something differently, with significant
improvements and without too much extra complexity?
* Could your application do something different? Where are the many
datatypes coming from, actually? Can you avoid the expensive merge operations,
e.g. by making the theory graph more linear?
Makarius
From: Makarius <makarius@sketis.net>
On 30/06/2025 19:36, Makarius wrote:
* Could your application do something different? Where are the many
datatypes coming from, actually?
And more: Are the datatypes genuine datatypes (disjoint sums), or just record
types done with the somewhat bulky datatype_record package?
Makarius
From: Manuel Eberl <manuel@pruvisto.org>
The entry in question is "Padic_Field", and the theory that is affected
most severely is "Padic_Field_Powers". We talked about this entry quite
a bit when it was first submitted three years ago. Back then the
performance problems were so severe I was wondering whether we could
even accept it.
The entire entry is still pretty slow for what it is. But it's not as
excessive as it once was. I asked the author to try to reduce the
context switches as much as possible and that already helped a lot. But
I also think some of the other performance tuning you did to Isabelle
since then also helped.
As it stands now, there are still two "context" commands in there that
take about 5 s (elapsed) on my machine, namely in line 41 and line
10404. The "lemma" command on line 2837 and the subsequent "blast" call
also take about 2 seconds each, presumably due to the context switch
induced by "(in padic_fields)". As I recall, this used to be almost an
order of magnitude worse.
Of course, 2 or 5 seconds is not dramatic. But in a development that
uses locales quite a bit, these numbers accumulate to an extent that
makes working with such a development interactively (especially during
refactoring) very annoying.
Cheers,
Manuel
On 30/06/2025 19:38, Makarius wrote:
On 21/06/2025 15:35, Manuel Eberl wrote:
I recall that there were some big performance problems in one of the
AFP entries on p-adic numbers as well. Switching in and out of locale
contexts sometimes takes a full minute. I wonder if this is related
to some of the issues you mention.
Performance problems need constructive existence proofs, otherwise they don't exist.
Can you point to specific locations in AFP?
Makarius
From: Michael Norrish <cl-isabelle-users@lists.cam.ac.uk>
For what it’s worth, it’s only with enumerated types (consisting of only nullary constructors) that we use a conversion to solve equations between them. The conversion works by mapping the constructors into numbers.
Michael
On 1/7/2025, 01:30, "cl-isabelle-users-request@lists.cam.ac.uk" <cl-isabelle-users-request@lists.cam.ac.uk> wrote:
I have the vague memory that that's what the HOL4 guys do. An "ord_t" function.
And a threshold for ord_t to kick in and replace the quadratically many rules
(which, as Jasmin pointed out, are much better for s/h).
Tobias
On 30/06/2025 15:44, Fabian Huch wrote:
What about generating a function that enumerates cases of the datatype, and
having a single distinctiveness theorem for that function? Then only the proof
of that theorem is expensive, and it can be used by Sledgehammer (but proof
search might take a bit longer).
Fabian
On 6/30/25 15:20, Jasmin Blanchette wrote:
I believe the behavior of mutually recursive datatypes might be worse than
quadratic. (For codatatypes, that's definitely the case.) Perhaps Dmitriy
could comment on that.
I cannot resist inserting an anecdote here. Last week, I tried out the
LeanHammer on the simplest goals I could imagine: the subgoals emerging from
an induction proving "xs ~= x # xs". LeanHammer failed to prove "[] ~= [] #
xs", because the quadratic distinct rules are not generated as theorems in Lean
(presumably because of their quadratic nature).
Lesson: Thinking about "simp" is good, but we should also keep Sledgehammer in
mind. :)
Jasmin
--
Prof. Dr. Jasmin Blanchette
Chair of Theoretical Computer Science and Theorem Proving
Ludwig-Maximilians-Universität München
Oettingenstr. 67, 80538 München, Germany
Tel.: +49 (0)89 2180 9341
Web: https://www.tcs.ifi.lmu.de/mitarbeiter/jasmin-blanchette_de.html
On 30. Jun 2025, at 14:20, Lawrence Paulson <lp15@cam.ac.uk> wrote:
Plenty of people have large datatype declarations. We need to find a way of
handling them that doesn't generate a quadratic amount of anything. As a
stopgap, having the option of just relying on the (quadratically many)
simprules would be good.
The alternative of generating a simproc is probably good only for large
datatypes, since having lots of simprocs floating around causes its own issues.
Larry
From: "\"Becker, Hanno\"" <cl-isabelle-users@lists.cam.ac.uk>
Can this be put into the user’s hand? Allow them to replace the autogeneration of (in)equality related lemmas by a simproc (perhaps through the existing plugin mechanism). This helps applications dealing with large datatypes (esp. enumerated types), while not enforcing change where the current approach works fine.
Hanno
From: Tobias Nipkow <nipkow@in.tum.de>
Hi Michael,
Thanks for the correction. You are using a conversion. One could also just
simplify with the thm
ord_t x = ord_t y ==> x = y
The conversion is for efficiency? In particular because it only fires if x and y
are constructors?
Tobias
On 01/07/2025 01:12, Michael Norrish wrote:
For what it’s worth, it’s only with enumerated types (consisting of only nullary
constructors), where we use a conversion to solve equations between them. The
conversion works by mapping the constructors into numbers.
Michael
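The ord_t scheme Tobias and Michael describe can be modelled outside the provers (a Python sketch; the constructor names and the solve_equation function are illustrative stand-ins, not HOL4's actual API):

```python
# Instead of quadratically many distinctness rules, map each nullary
# constructor to a number once; an equation between two constructors is
# then decided via the single fact "ord_t x = ord_t y <=> x = y"
# (ord_t is injective on constructors).

CONSTRUCTORS = ["A1", "A2", "A3", "B1", "B2"]       # illustrative enum
ORD_T = {c: i for i, c in enumerate(CONSTRUCTORS)}  # one linear-size table

def solve_equation(x: str, y: str) -> bool:
    """Decide x = y for two constructor terms by comparing ordinals."""
    return ORD_T[x] == ORD_T[y]

print(solve_equation("A1", "A1"))  # True
print(solve_equation("A2", "B1"))  # False
```

This also answers the efficiency question in spirit: the table costs linear space and each equation is decided in constant time, but the conversion must first check that both sides really are constructors before comparing ordinals.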
From: Michael Norrish <cl-isabelle-users@lists.cam.ac.uk>
To be honest, I’m not sure of the historic motivation (it was me who wrote the code, but in 2002, and the commit message just says that I preferred this to the unilateral use of your theorem as an iff, which is what had been done before).
Of course, we could make the conversion fire on just the quadratically many constructor combinations (making the term-net bigger and grosser), but in fact we match against any equality on the type and then check to see that both sides are constants. Only then do we apply the analogue of your ord_t. Looking at the code I see we only do this for cases when #constructors > 15, a number that I’m sure was pulled out of nowhere. For smaller types, we just generate all the rewrites.
Best wishes,
Michael
On 1/7/2025, 16:36, "Tobias Nipkow" <nipkow@in.tum.de> wrote:
Hi Michael,
Thanks for the correction. You are using a conversion. One could also just
simplify with the thm
ord_t x = ord_t y ==> x = y
The conversion is for efficiency? In particular because it only fires if x and y
are constructors?
Tobias
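Michael's description amounts to a size-based policy, which can be sketched as follows (Python stand-in; THRESHOLD = 15 is the constant he quotes from the HOL4 source, the rest of the names are made up):

```python
# For small enums, generate all pairwise rewrites up front; past the
# threshold, install a single conversion that matches any equality on the
# type, checks that both sides are constructors, and compares ordinals.

THRESHOLD = 15  # "#constructors > 15" in the HOL4 code, per Michael

def distinctness_setup(constructors):
    n = len(constructors)
    if n <= THRESHOLD:
        # one rewrite "Ci = Cj == False" per ordered pair i != j
        rewrites = [(x, y) for x in constructors for y in constructors if x != y]
        return ("rewrites", len(rewrites))
    # one ordinal-based decision procedure; only O(1) rules stored
    return ("conversion", 1)

print(distinctness_setup([f"C{i}" for i in range(10)]))   # ('rewrites', 90)
print(distinctness_setup([f"C{i}" for i in range(100)]))  # ('conversion', 1)
```

The trade-off matches Jasmin's point earlier in the thread: explicit rewrites are friendlier to tools like Sledgehammer, while the conversion keeps the rule set small.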
From: Florian Haftmann <florian.haftmann@cit.tum.de>
(This thread covers a lot of topics which are not obviously covered by
the subject. I am replying to the message which opened this particular
thread, in the hope that it won’t add more confusion.)
Plenty of people have large datatype declarations. We need to find a way
of handling them that doesn't generate a quadratic amount of anything.
As a stopgap, having the option of just relying on the
(quadratically many) simprules would be good.
The alternative of generating a simproc is probably good only for large
datatypes, since having lots of simprocs floating around causes its own
issues.
Concerning the quadratic blowup of distinctness rules: there was once a
mechanism to generate those rules for code generation lazily, and as
early as 2009, we decided to drop it (very likely after a discussion
between Stefan Berghofer and me):
hg log -r cda9a931a46b -p
This seems to indicate that generating those rules is not a serious
issue, although different tools might still have their pitfalls to apply
those rules efficiently.
Cheers,
Florian
From: "\"Mulder, Ike\"" <cl-isabelle-users@lists.cam.ac.uk>
The datatypes are genuine datatypes. Using Dmitriy's example from the other thread, the slowdown we are seeing can be reproduced with three simple files:
File Scratch1.thy:
theory Scratch1
imports
Main
begin
datatype (plugins only:) enum1 =
A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | A10 |
B1 | B2 | B3 | B4 | B5 | B6 | B7 | B8 | B9 | B10 |
C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 |
D1 | D2 | D3 | D4 | D5 | D6 | D7 | D8 | D9 | D10 |
E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 | E9 | E10 |
F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 |
G1 | G2 | G3 | G4 | G5 | G6 | G7 | G8 | G9 | G10 |
H1 | H2 | H3 | H4 | H5 | H6 | H7 | H8 | H9 | H10 |
I1 | I2 | I3 | I4 | I5 | I6 | I7 | I8 | I9 | I10 |
J1 | J2 | J3 | J4 | J5 | J6 | J7 | J8 | J9 | J10
datatype (plugins only:) enum2 =
A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | A10 |
B1 | B2 | B3 | B4 | B5 | B6 | B7 | B8 | B9 | B10 |
C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 |
D1 | D2 | D3 | D4 | D5 | D6 | D7 | D8 | D9 | D10 |
E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 | E9 | E10 |
F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 |
G1 | G2 | G3 | G4 | G5 | G6 | G7 | G8 | G9 | G10 |
H1 | H2 | H3 | H4 | H5 | H6 | H7 | H8 | H9 | H10 |
I1 | I2 | I3 | I4 | I5 | I6 | I7 | I8 | I9 | I10 |
J1 | J2 | J3 | J4 | J5 | J6 | J7 | J8 | J9 | J10
end
In file Scratch2.thy, copy everything from Scratch1.thy and just replace the theory name with Scratch2.
File Scratch3.thy:
theory Scratch3
imports
Scratch1
Scratch2
begin
(* loading the imports here is slower than I'd expect *)
ML‹
val safe_elims_of_ctx =
Context.Proof #>
Classical.get_cs #>
Classical.rep_cs #> #safeEs #>
Item_Net.content #> List.map #1 ;
writeln ("I see " ^ (@{context} |> safe_elims_of_ctx |> List.length |> Int.toString) ^ " elim rules\n")
›
end
Best, Ike
From: Florian Haftmann <florian.haftmann@cit.tum.de>
Hi Ike,
I don’t know your particular applications, but depending on the logical
needs, such »enumerative« datatype declarations can be constructed
similarly to type char in theory HOL.String. Or, if you don't need pattern
matching at all, specified as a finite abstract type over natural numbers.
Cheers,
Florian
From: Makarius <makarius@sketis.net>
On 01/07/2025 11:54, Mulder, Ike wrote:
The datatypes are genuine datatypes. Using Dmitriy's example from the other thread, the slowdown we are seeing can be reproduced with three simple files:
Datatypes with 100 constructors are not very realistic. We need an actual
"constructive proof for performance problems", one that resembles a proper
application.
I looked around in the Isabelle distribution, but did not find any. Even a
merge of the theories from session HOL-Datatype_Benchmark does not exhibit
anything that looks too slow.
Makarius
From: Lawrence Paulson <lp15@cam.ac.uk>
Believe me, these big datatypes exist, even if they are not in the public domain. Isabelle scales extraordinarily well to huge developments. It would be great if it could do so here as well, rather than forcing users into awkward workarounds.
I was wondering how HOL4 and HOL Light coped. In the source files of the latter I found the following:
(* ------------------------------------------------------------------------- *)
(* Also add a cached rewrite of distinctness and injectivity theorems. Since *)
(* there can be quadratically many distinctness clauses, it would really be *)
(* preferable to have a conversion, but this seems OK up 100 constructors. *)
(* ------------------------------------------------------------------------- *)
Larry
From: Roland Lutz <rlutz@hedmen.org>
On Thu, 3 Jul 2025, Makarius wrote:
Datatypes with 100 constructors are not very realistic.
This is an actual problem I ran into. I wanted to prove some properties
of a particular state machine, but a datatype with ~50 states would never
finish compiling.
Roland
From: Dominic Mulligan <cl-isabelle-users@lists.cam.ac.uk>
Huge datatypes also arise naturally in modelling the semantics of CPU
instruction sets.
From: i n <cl-isabelle-users@lists.cam.ac.uk>
I would like to mention that for HOL4, some parts of the CakeML compiler proofs face performance issues when case splitting on datatypes with many constructors. Work on refactoring the datatypes to reduce the number of constructors sped up some proofs by quite a lot. Taking the previous example of enum1, one could define the datatype like this instead:
datatype A =
A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | A10
datatype B =
B1 | B2 | B3 | B4 | B5 | B6 | B7 | B8 | B9 | B10
datatype C =
C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10
datatype D =
D1 | D2 | D3 | D4 | D5 | D6 | D7 | D8 | D9 | D10
...
datatype J =
J1 | J2 | J3 | J4 | J5 | J6 | J7 | J8 | J9 | J10
datatype enum1 =
A A | B B | C C | D D | E E | F F | G G | H H |
I I | J J
Irvin
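The payoff of such nesting can be quantified with a quick count (Python sketch; it assumes one distinctness fact per ordered pair of distinct constructors within each type):

```python
# Flat enum with 100 constructors vs. the nested version above:
# ten inner types of 10 constructors each, wrapped by one outer type
# with 10 unary constructors.

def pairwise(n: int) -> int:
    """Ordered distinct constructor pairs within one type."""
    return n * (n - 1)

flat = pairwise(100)                       # 9900 facts
nested = 10 * pairwise(10) + pairwise(10)  # 990: inner types + wrapper
print(flat, nested)
```

Cross-type disequalities such as A A1 ~= B B1 then follow from the outer wrapper's distinctness plus injectivity, rather than each needing its own rule.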
From: "wolff@lmf.cnrsfr" <cl-isabelle-users@lists.cam.ac.uk>
My standard example is the C11 abstract syntax - a mutually dependent beast with around 60 constructors or so …
We had to adopt several simplifications to get it through, but at the cost of compatibility ...
Best
Burkhart
On 3 Jul 2025, at 15:33, Makarius <makarius@sketis.net> wrote:
On 01/07/2025 11:54, Mulder, Ike wrote:
The datatypes are genuine datatypes. Using Dmitriy's example from the other thread, the slowdown we are seeing can be reproduced with three simple files:
Datatypes with 100 constructors are not very realistic. We need an actual "constructive proof for performance problems", one that resembles a proper application.
I looked around in the Isabelle distribution, but did not find any. Even a merge of the theories from session HOL-Datatype_Benchmark does not exhibit anything that looks too slow.
Makarius
From: Makarius <makarius@sketis.net>
On 03/07/2025 16:35, wolff@lmf.cnrsfr wrote:
My standard example is the C11 abstract syntax - a mutually dependent beast with around 60 constructors or so …
We had to adopt several simplifications to get it through, but at the cost of compatibility ...
So can you point to concrete locations in Isabelle/AFP to look at?
Makarius
From: Makarius <makarius@sketis.net>
On 03/07/2025 16:36, Dominic Mulligan (via cl-isabelle-users Mailing List) wrote:
Huge datatypes also arise naturally in modelling the semantics of CPU
instruction sets.
This sounds like bad modelling: too concrete, too many flat cases. Has anybody
done it in any other proof assistant?
Makarius
From: Makarius <makarius@sketis.net>
On 03/07/2025 15:58, Lawrence Paulson wrote:
Believe me, these big datatypes exist, even if they are not in the public domain. Isabelle scales extraordinarily well to huge developments. It would be great if it could do so here as well, rather than forcing users into awkward workarounds.
I was asking for "constructive existence proofs of performance problems", in
order to be able to reproduce the problem and see where it really happens (and
if something can be done about it).
People who have non-public applications need to do extra homework to produce
realistic examples that can be made public.
I was wondering how HOL4 and HOL Light coped. In the source files of the latter I found the following:
(* ------------------------------------------------------------------------- *)
(* Also add a cached rewrite of distinctness and injectivity theorems. Since *)
(* there can be quadratically many distinctness clauses, it would really be *)
(* preferable to have a conversion, but this seems OK up 100 constructors. *)
(* ------------------------------------------------------------------------- *)
An interesting comment. I still wonder how to do proofs with so many cases
emerging over and over again ...
Makarius
From: Kevin Kappelmann <kevin.kappelmann@tum.de>
There are several examples in CompCert, e.g.
https://github.com/AbsInt/CompCert/blob/master/x86/Op.v#L73
https://github.com/AbsInt/CompCert/blob/master/x86/Asm.v#L108
Kevin
On 04.07.25 14:29, Makarius wrote:
On 03/07/2025 16:36, Dominic Mulligan (via cl-isabelle-users Mailing
List) wrote:
Huge datatypes also arise naturally in modelling the semantics of CPU
instruction sets.
This sounds like bad modelling: too concrete, too many flat cases. Has
anybody done it in any other proof assistant?
Makarius
From: Frédéric Tuong <cl-isabelle-users@lists.cam.ac.uk>
On Monday, July 7th, 2025 at 9:39 AM, Makarius <makarius@sketis.net> wrote:
>
My standard example is the C11 abstract syntax - a mutually dependent beast with around 60 constructors or so …
We had to adopt several simplifications to get it through, but at the cost of compatibility ...
>
So can you point to concrete locations in Isabelle/AFP to look at?
Unfortunately, we refrained from submitting it to the AFP because the evaluation of the generated datatypes started to take too much time (we probably had to interrupt it at some point):
https://gitlab.lisn.upsaclay.fr/ftuong/citadelle-devel/-/blob/9dc12a1704f18dde1e493736875474d50404f5d0/doc/Meta_C_generated.thy#L775
which was generated from an ancestor version of this file:
https://github.com/visq/language-c/blob/2f7df0f9cbd48f6fa084b967c6fc0ac92c75eccc/src/Language/C/Syntax/AST.hs#L74
Best,
Frédéric
From: Dmitriy Traytel <cl-isabelle-users@lists.cam.ac.uk>
Thanks, Kevin. Not Isabelle, but still interesting. Without any change, these examples, especially the second one with ~180 constructors, might be challenging in Isabelle. In this case, I would follow Irvin’s suggestion from this thread: split this type into palatable pieces. There are even comments in the CompCert type grouping the operations logically, so these groups could be formalized as separate datatypes.
Dmitriy
On 4 Jul 2025, at 14.49, Kevin Kappelmann <kevin.kappelmann@tum.de> wrote:
There are several examples in CompCert, e.g.
https://github.com/AbsInt/CompCert/blob/master/x86/Op.v#L73
https://github.com/AbsInt/CompCert/blob/master/x86/Asm.v#L108
Kevin
On 04.07.25 14:29, Makarius wrote:
On 03/07/2025 16:36, Dominic Mulligan (via cl-isabelle-users Mailing List) wrote:
Huge datatypes also arise naturally in modelling the semantics of CPU instruction sets.
This sounds like bad modelling: too concrete, too many flat cases. Has anybody done it in any other proof assistant?
Makarius
From: Dmitriy Traytel <cl-isabelle-users@lists.cam.ac.uk>
Thanks Frédéric, that’s an interesting example, although a bit off-topic if the topic is datatypes with many constructors. But at least it is an Isabelle example, and moreover an interesting type from the type-structure perspective.
What kills this example is the type being highly mutual. Mutuality is a different dimension, and as I have mentioned earlier, the performance is certainly worse than quadratic in the number of mutual types. Already the type of the recursor here will be monstrous.
There is a workaround: replace mutual recursion by nested recursion. Below I paste an isomorphic collection of types. (Modulo some trivial changes, such as defining the Either type as sum rather than using your definition, and using the standard option type instead of the one from your library.) My thought process was as follows: 'a Expression was used by most types, so I moved it out to a separate type and replaced it by the extra type variable 'e wherever it was used. Then the datatype command told me that the remaining types are not fully mutual and suggested that I split the types in the way that you see in the theory. Of these types, none takes more than 10 seconds. Of course, one now has to write ('a, 'a Expression) CDeclaration instead of 'a CDeclaration from your version, but this can easily be hidden behind type synonyms.
At some point Jasmin and I were working on automating such a mutual-to-nested reduction to alleviate performance problems with mutual types. This failed because we had the ambition to fully hide the nested view behind the mutual interface, but the mutual interface itself became a bottleneck. So now we advocate that people use nested types instead: they are beneficial not only for performance but also for abstraction, since most of the time one does not want to reason about the entire language. Likely, there are things that can be said about a language's declarations without knowing precisely what the language's expressions are.
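As a toy illustration of this mutual-to-nested recipe (the classic tree/forest example, not taken from the C11 development):

```isabelle
(* Mutually recursive version: two types must be defined together,
   and each gets entangled with the other's recursor. *)
datatype 'a tree = Node 'a "'a forest"
and 'a forest = FNil | FCons "'a tree" "'a forest"

(* Nested version: the forest structure is expressed through the
   existing list type, so only one datatype remains. *)
datatype 'a tree' = Node' 'a "'a tree' list"
```

The old vocabulary can be recovered with a type synonym such as `type_synonym 'a forest' = "'a tree' list"`, just as the ('a, 'a Expression) CDeclaration form can be hidden behind synonyms in the C11 case.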
Dmitriy
theory Scratch
imports Main
begin
datatype Position = Position "int" "string" "int" "int"
| NoPosition
| BuiltinPosition
| InternalPosition
type_synonym PosLength = "(Position, int) Product_Type.prod"
datatype Name = Name "int"
datatype NodeInfo = OnlyPos "Position" "PosLength"
| NodeInfo "Position" "PosLength" "Name"
datatype Ident = Ident "string" "int" "NodeInfo"
datatype SUERef = AnonymousRef "Name"
| NamedRef "Ident"
datatype CChar = CChar "char" "HOL.bool"
| CChars "char List.list" "HOL.bool"
datatype CIntRepr = DecRepr
| HexRepr
| OctalRepr
datatype CIntFlag = FlagUnsigned
| FlagLong
| FlagLongLong
| FlagImag
datatype CFloat = CFloat "string"
datatype ClangCVersion = ClangCVersion "string"
datatype CString = CString "string" "HOL.bool"
datatype 'f Flags = Flags "int"
datatype CInteger = CInteger "int" "CIntRepr" "CIntFlag Flags"
datatype CAssignOp = CAssignOp
| CMulAssOp
| CDivAssOp
| CRmdAssOp
| CAddAssOp
| CSubAssOp
| CShlAssOp
| CShrAssOp
| CAndAssOp
| CXorAssOp
| COrAssOp
datatype CBinaryOp = CMulOp
| CDivOp
| CRmdOp
| CAddOp
| CSubOp
| CShlOp
| CShrOp
| CLeOp
| CGrOp
| CLeqOp
| CGeqOp
| CEqOp
| CNeqOp
| CAndOp
| CXorOp
| COrOp
| CLndOp
| CLorOp
datatype CUnaryOp = CPreIncOp
| CPreDecOp
| CPostIncOp
| CPostDecOp
| CAdrOp
| CIndOp
| CPlusOp
| CMinOp
| CCompOp
| CNegOp
datatype 'a CStorageSpecifier = CAuto "'a"
| CRegister "'a"
| CStatic "'a"
| CExtern "'a"
| CTypedef "'a"
| CThread "'a"
type_synonym CStorageSpec = "NodeInfo CStorageSpecifier"
datatype 'a CFunctionSpecifier = CInlineQual "'a"
| CNoreturnQual "'a"
type_synonym CFunSpec = "NodeInfo CFunctionSpecifier"
datatype CStructTag = CStructTag
| CUnionTag
datatype 'a CConstant = CIntConst "CInteger" "'a"
| CCharConst "CChar" "'a"
| CFloatConst "CFloat" "'a"
| CStrConst "CString" "'a"
type_synonym CConst = "NodeInfo CConstant"
type_synonym ('a, 'b) Either = "'a + 'b"
datatype 'a CStringLiteral = CStrLit "CString" "'a"
datatype ('a, 'e) CAttribute = CAttr "Ident" "'e List.list" "'a"
datatype ('a, 'e) CTypeQualifier = CConstQual "'a"
| CVolatQual "'a"
| CRestrQual "'a"
| CAtomicQual "'a"
| CAttrQual "('a, 'e) CAttribute"
| CNullableQual "'a"
| CNonnullQual "'a"
datatype ('a, 'e) CAssemblyOperand = CAsmOperand "Ident option" "'a CStringLiteral" "'e" "'a"
datatype ('a, 'e) CAssemblyStatement = CAsmStmt "('a, 'e) CTypeQualifier option" "'a CStringLiteral" "('a, 'e) CAssemblyOperand List.list" "('a, 'e) CAssemblyOperand List.list" "'a CStringLiteral List.list" "'a"
datatype ('a, 'e) CEnumeration = CEnum "Ident option" "(Ident, 'e option) Product_Type.prod List.list option" "('a, 'e) CAttribute List.list" "'a"
datatype ('a, 'e) CPartDesignator = CArrDesig "'e" "'a"
| CMemberDesig "Ident" "'a"
| CRangeDesig "'e" "'e" "'a"
datatype ('a, 'e) CInitializer = CInitExpr "'e" "'a"
| CInitList "(('a, 'e) CPartDesignator List.list, ('a, 'e) CInitializer) Product_Type.prod List.list" "'a"
datatype ('a, 'e) CArraySize = CNoArrSize "HOL.bool"
| CArrSize "HOL.bool" "'e"
datatype ('a, 'e) CDeclaration = CDecl "('a, 'e) CDeclarationSpecifier List.list" "((('a, 'e) CDeclarator option, ('a, 'e) CInitializer option) Product_Type.prod, 'e option) Product_Type.prod List.list" "'a"
| CStaticAssert "'e" "'a CStringLiteral" "'a"
and ('a, 'e) CDeclarator = CDeclr "Ident option" "('a, 'e) CDerivedDeclarator List.list" "'a CStringLiteral option" "('a, 'e) CAttribute List.list" "'a"
and ('a, 'e) CDerivedDeclarator = CPtrDeclr "('a, 'e) CTypeQualifier List.list" "'a"
| CArrDeclr "('a, 'e) CTypeQualifier List.list" "('a, 'e) CArraySize" "'a"
| CFunDeclr "(Ident List.list, (('a, 'e) CDeclaration List.list, HOL.bool) Product_Type.prod) Either" "('a, 'e) CAttribute List.list" "'a"
and ('a, 'e) CDeclarationSpecifier = CStorageSpec "'a CStorageSpecifier"
| CTypeSpec "('a, 'e) CTypeSpecifier"
| CTypeQual "('a, 'e) CTypeQualifier"
| CFunSpec "'a CFunctionSpecifier"
| CAlignSpec "('a, 'e) CAlignmentSpecifier"
and ('a, 'e) CTypeSpecifier = CVoidType "'a"
| CCharType "'a"
| CShortType "'a"
| CIntType "'a"
| CLongType "'a"
| CFloatType "'a"
| CDoubleType "'a"
| CSignedType "'a"
| CUnsigType "'a"
| CBoolType "'a"
| CComplexType "'a"
| CInt128Type "'a"
| CSUType "('a, 'e) CStructureUnion" "'a"
| CEnumType "('a, 'e) CEnumeration" "'a"
| CTypeDef "Ident" "'a"
| CTypeOfExpr "'e" "'a"
| CTypeOfType "('a, 'e) CDeclaration" "'a"
| CAtomicType "('a, 'e) CDeclaration" "'a"
and ('a, 'e) CAlignmentSpecifier = CAlignAsType "('a, 'e) CDeclaration" "'a"
| CAlignAsExpr "'e" "'a"
and ('a, 'e) CStructureUnion = CStruct "CStructTag" "Ident option" "('a, 'e) CDeclaration List.list option" "('a, 'e) CAttribute List.list" "'a"
datatype ('a, 'e) CFunctionDef = CFunDef "('a, 'e) CDeclarationSpecifier List.list" "('a, 'e) CDeclarator" "('a, 'e) CDeclaration List.list" "('a, 'e) CStatement" "'a"
and ('a, 'e) CStatement = CLabel "Ident" "('a, 'e) CStatement" "('a, 'e) CAttribute List.list" "'a"
| CCase "'e" "('a, 'e) CStatement" "'a"
| CCases "'e" "'e" "('a, 'e) CStatement" "'a"
| CDefault "('a, 'e) CStatement" "'a"
| CExpr "'e option" "'a"
| CCompound "Ident List.list" "('a, 'e) CCompoundBlockItem List.list" "'a"
| CIf "'e" "('a, 'e) CStatement" "('a, 'e) CStatement option" "'a"
| CSwitch "'e" "('a, 'e) CStatement" "'a"
| CWhile "'e" "('a, 'e) CStatement" "HOL.bool" "'a"
| CFor "('e option, ('a, 'e) CDeclaration
[message truncated]
From: Dmitriy Traytel <cl-isabelle-users@lists.cam.ac.uk>
On 3 Jul 2025, at 15.44, Roland Lutz <rlutz@hedmen.org> wrote:
On Thu, 3 Jul 2025, Makarius wrote:
Datatypes with 100 constructors are not very realistic.
This is an actual problem I ran into. I wanted to prove some properties of a particular state machine, but a datatype with ~50 states would never finish compiling.
Was that a datatype with 50 constructors (which should finish relatively quickly—if not, I’d like to see the example) or 50 mutually recursive datatypes (which has no chance)? If the latter, you could consider following the recipe from my response to Frédéric in this thread.
Dmitriy
From: Roland Lutz <rlutz@hedmen.org>
On Fri, 11 Jul 2025, Dmitriy Traytel wrote:
On 3 Jul 2025, at 15.44, Roland Lutz <rlutz@hedmen.org> wrote:
On Thu, 3 Jul 2025, Makarius wrote:
Datatypes with 100 constructors are not very realistic.
This is an actual problem I ran into. I wanted to prove some
properties of a particular state machine, but a datatype with ~50
states would never finish compiling.
Was that a datatype with 50 constructors (which should finish relatively
quickly—if not, I’d like to see the example) or 50 mutually recursive
datatypes (which has no chance)? If the latter, you could consider
following the recipe from my response to Frédéric in this thread.
It was something along the lines of
datatype state = Start | Identifier | Comment | ...
The threshold between "usable" and "unusable" performance lay between 11
and 12 constructors on my machine; I would have needed significantly more.
This was with Isabelle 2022, if I recall correctly.
This isn't just about this particular case, though. I'd like to be able
to use Isabelle to automate and "forget about" things, similar to a test
suite. So what I'd really wish for is that I could refactor my code and
have temporarily, say, 200 states, and the proof would still complete in
reasonable time.
Roland
From: Makarius <makarius@sketis.net>
On 30/06/2025 19:36, Makarius wrote:
Now the focus has changed to datatypes. This raises a few more questions:
* Is the claset merge operation really slow, or are the involved clasets
just very big?
To answer this question without inflating this thread once again, see now the
isabelle-dev repository
https://isabelle.sketis.net/repos/isabelle/rev/ca600cbfd4bf
changeset: 82833:ca600cbfd4bf
user: wenzelm
date: Thu Jul 10 17:29:25 2025 +0200
files: src/Provers/classical.ML
description:
more accurate "next" counter for each insert operation: subtle change of
semantics wrt. Item_Net.length, due to delete operation;
avoid costly Item_Net.length, which is linear in size;
That avoids a rather wasteful (and slightly wrong) allocation of a fresh index
for new rule declarations.
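The idea behind the change can be sketched as follows. This is a hypothetical simplification for illustration only, not the actual classical.ML code; the names `rules` and `insert_rule` are made up:

```isabelle
ML ‹
(* Hypothetical sketch: keep an explicit "next" counter alongside the
   rule net.  Each insert uses the current counter as a fresh index and
   increments it in O(1), instead of recomputing Item_Net.length, which
   is linear in the size of the net (and slightly wrong after deletes). *)
type rules = {net: thm Item_Net.T, next: int};

fun insert_rule (th: thm) ({net, next}: rules) : rules =
  {net = Item_Net.update th net, next = next + 1};
›
```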
The change could probably be applied to the isabelle-release version, but that
is not my job. Isabelle development has only one linear line of history, no
branches, and no actual merges.
After that change (and in the coming release at the end of 2025), it still
remains a bad idea to introduce large flat datatypes without proper
substructure. Tons of individual cases will show up in proofs over and over
again. No performance tuning of basic data structures can avoid that complexity.
Makarius
Last updated: Jul 26 2025 at 12:43 UTC