Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] Proof document to span multiple sessions


view this post on Zulip Email Gateway (Aug 18 2022 at 18:38):

From: Gerwin Klein <gerwin.klein@nicta.com.au>
That is probably the best solution. It only works for Poly/ML, but JinjaThreads won't work in SML/NJ anyway.

When I made multiple images for Jinja (to speed up development, not so much because of memory problems), we had exactly this problem: there was no longer a single document, and we introduced a separate session just to build the document. That would defeat the point here, of course.

Cheers,
Gerwin

view this post on Zulip Email Gateway (Aug 18 2022 at 18:44):

From: Andreas Lochbihler <andreas.lochbihler@kit.edu>
Florian suggested splitting JinjaThreads in the AFP into multiple sessions. He
hopes that this will reduce the overall memory consumption, because Isabelle
forces Poly/ML to share objects on the heap only when the image is written. I
am currently testing whether this is a promising approach. However, I am
wondering how multiple sessions would fit in with the proof document and proof
outline in the AFP. Ideally, there would be one document of each kind that
covers all JinjaThreads theories, even though they are processed in different
sessions. Is there any support for this?

I guess that I could dump the generated files of all sessions into some
directory and string them together appropriately in root.tex, but that only
gives me the proof document. How can I get the sources for the proof outline?

Cheers,
Andreas

view this post on Zulip Email Gateway (Aug 18 2022 at 18:45):

From: Makarius <makarius@sketis.net>
I can't say anything specific about the intention of the AFP setup, although
I remember a time when there was a 500 MB heap limit (in Poly/ML 4.x) and
Gerwin split up the original Jinja session to get the compaction effect when
heaps are dumped and reloaded.

In Poly/ML 5.x the compaction phase has become available separately, without
dumping. So in principle you could just split the invocations of use_thys in
ROOT.ML and insert an intermediate share_common_data () call in raw ML in
between.
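
For illustration, a split ROOT.ML along these lines might look as follows (a
sketch only; the theory names are made up, and the sharing step is written
using the raw Poly/ML primitive PolyML.shareCommonData):

    (* ROOT.ML -- a sketch: load the theories in two batches instead
       of one, sharing common immutable heap data in between *)
    use_thys ["Basic_Types", "Semantics"];

    (* raw Poly/ML 5.x call: merge equal immutable objects on the
       heap, i.e. the compaction effect without dumping an image *)
    PolyML.shareCommonData PolyML.rootFunction;

    use_thys ["Compiler", "Correctness"];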

I have already experimented with this occasionally. It basically works
(reducing the heap size at runtime from several GB to a few hundred MB). It
also has some limitations: doing it too aggressively can make the process
dump core.

Makarius
