I have many "by auto"-proofs that use up a lot of memory. As soon as ML reaches about 8GB (I have 16GB RAM), a "ML cleanup" is performed after every single line in my theory, which in total takes up far more time than the actual proofs. Is there a way to adjust the memory limits or GC frequency?
In theory, you can adjust the minimum heap size of PolyML. Add the following to $ISABELLE_HOME/etc/settings:
ML_OPTIONS="--minheap 10G"
restart Isabelle, and try it. That could backfire, however (with a larger heap, each GC run becomes slower)
Not sure if that helps though
Other alternatives are splitting the file or changing the proofs…
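For reference, a sketch of what such a settings entry could look like on a 16GB machine — the concrete numbers are only guesses you would need to tune, and --maxheap is the companion Poly/ML run-time option that caps heap growth so the process does not push the machine into swapping:

# in $ISABELLE_HOME/etc/settings (or $ISABELLE_HOME_USER/etc/settings)
# --minheap: heap is never shrunk below this size, so full GCs run less often
# --maxheap: hard cap on heap growth, keeps the process from swapping
ML_OPTIONS="--minheap 4G --maxheap 12G"

After restarting, isabelle getenv ML_OPTIONS shows the value that is actually in effect, and (if I remember right) poly --help lists the other run-time options such as --gcthreads.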
More generally, I have always found that the garbage collectors behave strangely in SML (both PolyML and MLton): not eager enough to grow the heap past 50% of physical memory (even in cases where GC takes up most of the runtime…), but also not eager enough to release memory when usage is low as a percentage of RAM (again in percentage terms: Isabelle/polyml can use 50GB if the server has 1TB of RAM… completely useless)
Thank you! Adjusting the limit this way worked. And, just as you said, raising it too much backfires immediately. I will experiment with different values to find what works best in my case.