From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
This refers to fa80d47c6857.
When building the whole distribution once and then once again (build
-a), the build process blows up after excessive machine resource
consumption (something like java.lang.OutOfMemoryError: Java heap
space). I did not try to diagnose it by putting hands on the code, but I
am quite sure that this happens due to the log file analysis in
Pure/Tools/build.scala (Build/build_results/load_timings).
Maybe the mechanism itself simply does not scale to the sheer number of
log files. Is there any option to skip the log parsing?
My build runs take place on lxbroy10.
Relevant settings might include:
ISABELLE_POLYML=true
ML_IDENTIFIER=polyml-5.5.1_x86-linux
ML_OPTIONS=--heap 500
ISABELLE_SCALA_BUILD_OPTIONS=-nowarn -target:jvm-1.5 -Xmax-classfile-name 130
ISABELLE_BUILD_JAVA_OPTIONS=-Xmx1024m -Xss1m
ISABELLE_JAVA_EXT=/home/haftmann/.isabelle/contrib/jdk-7u40/x86_64-linux/jre/lib/ext
ISABELLE_JAVA_SYSTEM_OPTIONS=-Dfile.encoding=UTF-8 -server
Florian
From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
When building the whole distribution once and then once again (build
-a)
»whole distribution« is misleading. It is the whole distribution plus
the AFP.
Florian
From: Stefan Berghofer <berghofe@in.tum.de>
Florian Haftmann wrote:
Hi Florian,
I can confirm this problem. I just tried to rebuild the AFP, and admin/testall
crashed after issuing several error messages like
### Ignoring bad log file: "$ISABELLE_OUTPUT/log/Ramsey-Infinite.gz"
java.lang.OutOfMemoryError: GC overhead limit exceeded
Manually deleting the old AFP log files solved the problem.
Greetings,
Stefan
From: Gerwin Klein <Gerwin.Klein@nicta.com.au>
I've just observed this effect below on the old Isabelle2013 on MacOS (Mavericks).
I even nondeterministically got "out of heap memory" and "too small initial heap size" while fiddling with the settings. The machine has 32G memory, and there were on the order of 10 log files only, so there's no way it actually ran out of memory parsing them.
Long story short: I still have no idea what got java so upset, but after a plain reboot everything worked just fine as usual with the default values.
Cheers,
Gerwin
The information in this e-mail may be confidential and subject to legal professional privilege and/or copyright. National ICT Australia Limited accepts no liability for any damage caused by this email or its attachments.
From: David Greenaway <david.greenaway@nicta.com.au>
I have had to manually increase the Java heap size for jEdit for larger
proofs. One of the commented-out examples in "src/Tools/jEdit/etc/settings"
is:
JEDIT_JAVA_OPTIONS="-Xms512m -Xmx4096m -Xss8m -Dactors.corePoolSize=4 -Dactors.enableForkJoin=false"
which, added to my "~/.isabelle/etc/settings", helps me avoid the "out
of heap memory" errors.
Cheers,
David
From: Makarius <makarius@sketis.net>
Last year or so, some people running really big JVM jobs on x86_64 had
problems that were not present on x86. That was on Linux, and the solution
was to install 32-bit libraries and use the x86 version of the JVM, but
this does not work on Mac OS X, since the JDK there is x86_64 only.
Another possibility is to use the more recent jdk-7u40 from Isabelle2013-1
for Isabelle2013, which still has jdk-7u13. Oracle actually did some
significant work in the meantime, so the situation might be generally
better. (Some people have already noticed that the current jdk-7u40 no
longer works with Snow Leopard.)
Makarius
From: Makarius <makarius@sketis.net>
On Fri, 25 Oct 2013, Florian Haftmann wrote:
When building the whole distribution once and then once again (build
-a), the build process blows up after excessive machine resource
consumption (something like java.lang.OutOfMemoryError: Java heap
space). I did not try to diagnose it by putting hands on the code but I
am quite sure that this happens due to the log file analysis in
Pure/Tools/build.scala Build/build_results/load_timings.
My build runs take place on lxbroy10.
Relevant settings might include:
ISABELLE_POLYML=true
ML_IDENTIFIER=polyml-5.5.1_x86-linux
ML_OPTIONS=--heap 500
ISABELLE_SCALA_BUILD_OPTIONS=-nowarn -target:jvm-1.5 -Xmax-classfile-name 130
ISABELLE_BUILD_JAVA_OPTIONS=-Xmx1024m -Xss1m
ISABELLE_JAVA_EXT=/home/haftmann/.isabelle/contrib/jdk-7u40/x86_64-linux/jre/lib/ext
ISABELLE_JAVA_SYSTEM_OPTIONS=-Dfile.encoding=UTF-8 -server
That is a huge 64-bit machine with 128 GB RAM, but the above are mostly
defaults to have the system run almost everywhere. Something like
ISABELLE_BUILD_JAVA_OPTIONS="-Xmx4096m -Xss2m" should make the JVM more
comfortable.
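[Editorial note: Makarius's suggestion amounts to a single line in the user settings file; a minimal sketch, using the exact values he names above and the "~/.isabelle/etc/settings" path from David's message earlier in this thread:]

```shell
# ~/.isabelle/etc/settings
# Raise the JVM maximum heap (-Xmx) and thread stack size (-Xss) used by
# "isabelle build"; values taken from Makarius's suggestion above.
ISABELLE_BUILD_JAVA_OPTIONS="-Xmx4096m -Xss2m"
```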
I don't know any better ways to configure JVM resources. Maybe one needs
to pay Oracle for that. It is a bit like Mac OS in the 1980s/1990s:
before starting the program you need to give a limit for its resource
requirements, but guessing wrong causes severe problems.
Maybe the mechanism itself simply does not scale to the sheer number of
log files. Is there any option to skip the log parsing?
Just delete the log files manually, as Stefan did.
Makarius
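[Editorial note: the manual cleanup that Stefan and Makarius describe can be sketched as follows. The "log" directory name comes from the "Ignoring bad log file" message quoted above; this sketch operates on a throwaway directory rather than a live installation, so the final `rm` line is the only part to adapt for real use.]

```shell
#!/bin/sh
# Sketch: delete stale per-session log files so the build tool need not
# parse them. A real run would target "$ISABELLE_OUTPUT/log"; here we
# simulate that directory safely with mktemp.
LOG_DIR="$(mktemp -d)/log"
mkdir -p "$LOG_DIR"
touch "$LOG_DIR/Ramsey-Infinite.gz" "$LOG_DIR/HOL.gz"   # fake old logs

rm -f "$LOG_DIR"/*.gz    # the actual cleanup step

ls -A "$LOG_DIR"         # prints nothing: all logs removed
```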
Last updated: Nov 21 2024 at 12:39 UTC