Stream: Mirror: Isabelle Users Mailing List

Topic: [isabelle] Extreme processing time for function definition


view this post on Zulip Email Gateway (Jul 07 2021 at 14:51):

From: "Eugene W. Stark" <isabelle-users@starkeffect.com>
The 45-line definition of function "resid" in the attached theory takes between 30 and 40 minutes
to process. This time increased very rapidly from a previous value of about 3 minutes when I added
four clauses (the four preceding the final default clause). The original definition was stated with
"fun" but I split it up using "function" to try to see which part was consuming the time.
About one minute is used to process the "function (sequential)" declaration, with the remainder of
the time going to the proof of 46057 subgoals "by pat_completeness auto". The termination proof
does not take significant time. I note that the time-consuming portion appears to be single-threaded.
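[Editor's note: a minimal, hypothetical skeleton of the workflow described above — a definition given via "function (sequential)" so that the pattern-completeness proof becomes a separately timeable step. The datatype and function names are illustrative, not from the attached theory:]

```isabelle
datatype tm = Var nat | App tm tm

function (sequential) resid :: "tm => tm => tm"
  where
    "resid (Var i) (Var j) = Var i"
  | "resid (App t1 t2) (App u1 u2) = App (resid t1 u1) (resid t2 u2)"
  | "resid t u = t"  (* default clause *)
  by pat_completeness auto  (* the step that produced the 46057 subgoals *)
termination by lexicographic_order
```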

Is there anything I can do, or am I out of luck? The 30-40 minute processing time is probably
prohibitive for what I was trying to do, so it looks like I will have to abandon the generalization
I was trying to make and revert to a simpler version that processes in reasonable time.

Thanks for any suggestions.
Barf.thy

view this post on Zulip Email Gateway (Jul 07 2021 at 20:02):

From: Tobias Nipkow <nipkow@in.tum.de>
Attached you find a version that works in reasonable time. I reduced the size of
the patterns by replacing all "Var 0" by "vn" and testing "vn ~= Var 0 \/ ..."
instead. This got the number of cases to just below 10,000. (I believe this did
not change the meaning of the equations.)
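[Editor's note: a toy sketch of the transformation Tobias describes, with hypothetical names. Matching the constant pattern "Var 0" in several argument positions multiplies the completeness cases; matching a variable and testing for "Var 0" on the right-hand side collapses them:]

```isabelle
datatype tm = Var nat | App tm tm

(* Deep constant patterns: pattern compilation must enumerate
   every way an argument can fail to be "Var 0". *)
fun chk :: "tm => tm => bool"
  where
    "chk (Var 0) (Var 0) = True"
  | "chk _ _ = False"

(* Variable plus test: a single, trivially complete pattern. *)
fun chk' :: "tm => tm => bool"
  where
    "chk' v w = (v = Var 0 & w = Var 0)"
```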

Note that the problem of minimizing the number of cases is hard:
https://www21.in.tum.de/~krauss/publication/2008-patterns/

Tobias
Barf.thy

view this post on Zulip Email Gateway (Jul 08 2021 at 01:28):

From: "Eugene W. Stark" <isabelle-users@starkeffect.com>
Thanks! The ideas for reducing the processing time are very helpful.

I think there is something else going on besides the sheer number of subgoals: removing
the last four clauses in my original example still leaves about 18,000 subgoals, but the processing
time is only about 3 minutes. I guess that expanding the patterns to the next level of depth
applies a multiplicative factor to the time taken to discharge the subgoals. One gets so used to
the function package working well that it is quite startling to see an exponential jump in
processing time from relatively small changes to the definitions.

Right now I am confused, though, and I have to go back and re-do my homework on this definition.
I hit a blocking point in the series of lemmas I was trying to generalize and I jumped to the conclusion
that I had missed some critical pairs (hence the additional four clauses), but at this moment I am not
so sure any more.

Anyway, your ideas at least provide me with some control over my fate. Thanks again!

view this post on Zulip Email Gateway (Jul 08 2021 at 08:10):

From: Manuel Eberl <eberlm@in.tum.de>
Hmm, I wonder if some of the non-mandatory features of the function
package (such as the generation of elimination rules) might affect the
performance negatively here. Perhaps one could try to put an option into
the function package to disable the generation of these and see what
happens.

(Full disclosure: I wrote the code that generates the pelims/elims rules
back in the day, so if this is the issue, it's my fault. :D )

Manuel

view this post on Zulip Email Gateway (Jul 08 2021 at 08:24):

From: Tobias Nipkow <nipkow@in.tum.de>
Note that up to the point where pat_completeness generates 46057 subgoals,
things are not so bad. Proving all those (simple!) subgoals is the killer.

Tobias

view this post on Zulip Email Gateway (Jul 08 2021 at 09:50):

From: Manuel Eberl <eberlm@in.tum.de>
I'm not so sure. It's difficult to assess because I for one cannot get
it to build at all.

Even if I replace the "by pat_completeness auto" with "sorry", Isabelle
just crashes after a few minutes. Both with x86_32 and x64. And that's
on my beefy 32 GB machine.

If I replace it with "apply pat_completeness apply auto oops", it runs
for something like 20 minutes, then freezes up my PC and crashes
(possibly killed by the OOM killer).

The function package does a lot of work after pattern completeness and
non-overlappingness have been proven. This is the stuff that happens
when you write "done". I for one think there is a good chance that this
is just as expensive as the user-contributed proofs, if not more so.

Manuel

view this post on Zulip Email Gateway (Jul 08 2021 at 10:13):

From: Tobias Nipkow <nipkow@in.tum.de>
On 08/07/2021 11:50, Manuel Eberl wrote:

> I'm not so sure. It's difficult to assess because I for one cannot get
> it to build at all.
>
> Even if I replace the "by pat_completeness auto" with "sorry", Isabelle
> just crashes after a few minutes. Both with x86_32 and x64. And that's
> on my beefy 32 GB machine.
>
> If I replace it with "apply pat_completeness apply auto oops", it runs
> for something like 20 minutes and then freezes up my PC and crashes
> (possibly killed by the OOM killer).

Eugene and I could get up to "apply auto" in a matter of a very few minutes. It
is the auto (I also tried simp_all) that kills us.

> The function package does a lot of work after pattern completeness and
> non-overlappingness have been proven. This is the stuff that happens
> when you write "done". I for one think there is a good chance that this
> is just as expensive as the user-contributed proofs, if not more so.

Maybe so, but we never got to "done".

With the simplified version (with 10,000 cases), "auto" takes 30+ secs; "done"
takes 3 secs.

Tobias



Last updated: Dec 08 2021 at 08:24 UTC