Stream: Mirror: Isabelle Users Mailing List

Topic: [isabelle] High CPU usage on PolyML subprocess (Pure/HOL)...


view this post on Zulip Email Gateway (Apr 27 2021 at 08:48):

From: Bo Gan <gan.bo@columbia.edu>
Hi Isabelle users,

I'm fairly new to Isabelle and provers in general. Currently I'm trying to
follow the Programming and Proving tutorial by Tobias Nipkow and having
fun with some very simple theories. However, I did notice the jEdit GUI
getting more and more sluggish when left almost idle for a day or two. The
PolyML subprocess (Pure/HOL heap) eats up ~2 CPU cores. Further debugging
reveals that the main PolyML thread and the Event_Timer.manager_loop thread
each consume 80-90% of the CPU time. Details on the PolyML process:

cmdline=
"/home/***/.isabelle/contrib/polyml-test-f86ae3dc1686/x86_64_32-linux/poly"
"-q" "--minheap" "500" "--gcthreads" "0" "--exportstats" "--eval"
"(PolyML.SaveState.loadHierarchy
["/home/***/.isabelle/heaps/polyml-5.8.2_x86_64_32-linux/Pure",
"/home/***/.isabelle/heaps/polyml-5.8.2_x86_64_32-linux/HOL"];
PolyML.print_depth 0) handle exn => (TextIO.output (TextIO.stdErr,
General.exnMessage exn ^ ": HOL\n"); OS.Process.exit OS.Process.failure)"
"--eval" "Options.load_default ()" "--eval" "Isabelle_Process.init ()"

PID     USER PR NI VIRT    RES    SHR  S %CPU %MEM TIME+     COMMAND
1325109 ***  20 0  23.236g 696280 3888 R 99.9 4.3  1558:23   poly
1325071 ***  20 0  23.236g 696280 3888 R 86.7 4.3  1229:05   poly
1325101 ***  20 0  23.236g 696280 3888 S 20.0 4.3  107:43.02 poly
1325102 ***  20 0  23.236g 696280 3888 S 13.3 4.3  105:41.89 poly
1325103 ***  20 0  23.236g 696280 3888 S 13.3 4.3  105:39.40 poly
1325108 ***  20 0  23.236g 696280 3888 S 13.3 4.3  101:19.50 poly
1325068 ***  20 0  23.236g 696280 3888 S 6.7  4.3  33:40.44  poly
1325104 ***  20 0  23.236g 696280 3888 S 6.7  4.3  107:10.92 poly
1325107 ***  20 0  23.236g 696280 3888 S 6.7  4.3  99:25.42  poly
1325062 ***  20 0  23.236g 696280 3888 S 0.0  4.3  40:50.67  poly
1325066 ***  20 0  23.236g 696280 3888 S 0.0  4.3  0:00.00   poly
1325067 ***  20 0  23.236g 696280 3888 S 0.0  4.3  33:41.02  poly
1325069 ***  20 0  23.236g 696280 3888 S 0.0  4.3  33:39.01  poly
1325070 ***  20 0  23.236g 696280 3888 S 0.0  4.3  33:41.40  poly
1325072 ***  20 0  23.236g 696280 3888 S 0.0  4.3  0:00.00   poly
1325078 ***  20 0  23.236g 696280 3888 S 0.0  4.3  0:11.03   poly
1325100 ***  20 0  23.236g 696280 3888 S 0.0  4.3  0:48.37   poly
1325105 ***  20 0  23.236g 696280 3888 S 0.0  4.3  104:58.21 poly
1325106 ***  20 0  23.236g 696280 3888 S 0.0  4.3  102:55.48 poly

Here's a snapshot of their ML call stacks (collected with my own gdb Python
scripts):

Main Poly thread (1325071):
#0 0x7fc57b9064fd in Multithreading.synchronized(3)(1)+0x3e5,
sp=0x7fc67bf54870
#1 0x7fc57b905be0 in Multithreading.synchronized(3)+0x68, sp=0x7fc67bf54888
#2 0x7fc57b8e7456 in Command.fork_print(3)+0x2b6, sp=0x7fc67bf548a8
#3 0x7fc57b8f101f in List.mapPartial(2)()+0x16f, sp=0x7fc67bf548e8
#4 0x7fc57b8eff4a in Document.start_execution(1)(4)(1)+0x20a,
sp=0x7fc67bf54918
#5 0x7fc57b8efcdf in Document.start_execution(1)(4)+0x77, sp=0x7fc67bf54958
#6 0x7fc57b8efc15 in Document.start_execution(1)+0x3d, sp=0x7fc67bf54968
#7 0x7fc57b337e0e in o(2)(1)+0x1e, sp=0x7fc67bf549a8
#8 0x7fc57b939167 in Synchronized.timed_access(3)(1)try_change(1)+0x15f,
sp=0x7fc67bf549b0
#9 0x7fc57b90654b in Multithreading.synchronized(3)(1)+0x433,
sp=0x7fc67bf549e0
#10 0x7fc57b905be0 in Multithreading.synchronized(3)+0x68, sp=0x7fc67bf54a18
#11 0x7fc57b337d57 in Protocol._-(1)+0x87, sp=0x7fc67bf54a38
#12 0x7fc57b93f6bf in Thread_Data.setmp(4)(1)+0x1a7, sp=0x7fc67bf54a40
#13 0x7fc57b93f6bf in Thread_Data.setmp(4)(1)+0x1a7, sp=0x7fc67bf54a80
#14 0x7fc57b9ad319 in Future.future_job(4)job(1)+0x89, sp=0x7fc67bf54ac8
#15 0x7fc57b93fb00 in Future.worker_exec(2)ok-(1)(1)+0x30, sp=0x7fc67bf54af0
#16 0x7fc57b93f6bf in Thread_Data.setmp(4)(1)+0x1a7, sp=0x7fc67bf54b10
#17 0x7fc57b93ab0a in Task_Queue.update_timing(3)(1)+0x6a, sp=0x7fc67bf54b50
#18 0x7fc57b93e62e in Future.worker_exec(2)+0xce, sp=0x7fc67bf54b98
#19 0x7fc57b599156 in Future.task_context(4)(1)+0x13e, sp=0x7fc67bf54bd0
#20 0x7fc57b3241d3 in Protocol_Command.run(2)+0x9b, sp=0x7fc67bf54c20
#21 0x7fc57b32345e in
Isabelle_Process.init_protocol(1)(1)(2)protocol_loop(1)+0x50e,
sp=0x7fc67bf54c50
#22 0x7fc57b95457d in Unsynchronized.setmp(4)(1)+0x3d, sp=0x7fc67bf54c78
<Identical frames discarded>
#35 0x7fc57b31ff2a in Isabelle_Process.init_protocol(1)(1)(2)+0xcaa,
sp=0x7fc67bf54ee0
#36 0x7fc57b905e23 in Thread_Attributes.with_attributes(2)+0xab,
sp=0x7fc67bf54f40
#37 0x7fc57bd02517 in <top level>+0x17, sp=0x7fc67bf54f68
#38 0x7fc57c0354f1 in CODETREE().genCode(3)(1)+0x19, sp=0x7fc67bf54f70
#39 0x7fc57c057f91 in COMPILER_BODY().baseCompiler(4)executeCode(1)+0x19,
sp=0x7fc67bf54f80
#40 0x7fc57c07e8e1 in polyCompiler(2)defaultCompilerResultFun(2)(1)+0x19,
sp=0x7fc67bf54f90
#41 0x7fc57bd5c5c6 in PolyML.rootFunction(1)tryUseFileArguments(1)+0x4c6,
sp=0x7fc67bf54fa0
#42 0x7fc57bd55e67 in PolyML.rootFunction(1)+0xd57, sp=0x7fc67bf54fd0
#43 0x7fc57bd5503b in PolyML.runFunction(1)(1)+0x33, sp=0x7fc67bf54fe8

Event_Timer (1325109):
#0 0x7fc57c0abd70 in VectorSliceOperations().subslice(3)+0x0,
sp=0x7fc67c149958
#1 0x7fc57bd10578 in convStats(2)parseStatistic(2)+0x68, sp=0x7fc67c149960
#2 0x7fc57bd1049f in convStats(2)parseStatistics(1)+0x1f, sp=0x7fc67c149990
#3 0x7fc57bd104b1 in convStats(2)parseStatistics(1)+0x31, sp=0x7fc67c1499b0
<Identical frames discarded>
#31 0x7fc57bd0f412 in convStats(2)+0x15a, sp=0x7fc67c149e10
#32 0x7fc57bd0f0ec in PolyML.Statistics.getLocalStats(1)+0x74,
sp=0x7fc67c149e28
#33 0x7fc57b7a9aab in Event_Timer.manager_loop(1)(1)+0x63, sp=0x7fc67c149ef8
#34 0x7fc57b939167 in Synchronized.timed_access(3)(1)try_change(1)+0x15f,
sp=0x7fc67c149f30
#35 0x7fc57b90654b in Multithreading.synchronized(3)(1)+0x433,
sp=0x7fc67c149f60
#36 0x7fc57b905be0 in Multithreading.synchronized(3)+0x68, sp=0x7fc67c149f98
#37 0x7fc57b7a98ef in Event_Timer.manager_loop(1)+0x57, sp=0x7fc67c149fb8
#38 0x7fc57b9aaec0 in Isabelle_Thread.fork(2)(1)(1)+0x30, sp=0x7fc67c149fc8
#39 0x7fc57b9aae02 in Thread.Thread.fork(2)threadFunction(1)+0x32,
sp=0x7fc67c149fe8

I suspect a lot of time is being wasted invoking the GC in the PolyML runtime
while collecting statistics, but I could be totally wrong. This issue is
reproducible and quite annoying. Has anyone encountered the same?
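(The original gdb scripts are not shown here; as a hedged sketch of the same kind of debugging on Linux, per-thread CPU time can also be read directly from procfs, without attaching gdb at all. The helper below is hypothetical, not part of the Isabelle or PolyML tooling: it reads `/proc/<pid>/task/<tid>/stat` and returns the accumulated user+system clock ticks per thread, so that sampling it twice and diffing identifies which TIDs, such as 1325109 and 1325071 above, are burning CPU.)

```python
# Hypothetical helper (Linux only): per-thread CPU ticks for a process,
# read from /proc/<pid>/task/<tid>/stat. Not Bo's actual gdb script.
import os


def thread_cpu_ticks(pid):
    """Return {tid: utime + stime in clock ticks} for every thread of pid."""
    ticks = {}
    task_dir = "/proc/%d/task" % pid
    for tid in os.listdir(task_dir):
        try:
            with open("%s/%s/stat" % (task_dir, tid)) as f:
                stat = f.read()
        except OSError:
            continue  # thread exited between listdir() and open()
        # Field 2 (comm) may contain spaces/parens, so split after the
        # last ')'; the remaining fields start at field 3 ("state").
        rest = stat.rsplit(")", 1)[1].split()
        # utime is stat field 14, stime field 15; relative to "state"
        # (field 3) they sit at 0-based indices 11 and 12.
        ticks[int(tid)] = int(rest[11]) + int(rest[12])
    return ticks
```

Sampling `thread_cpu_ticks(pid)` a second apart and subtracting the two snapshots gives a per-thread CPU rate; the hot TIDs can then be matched against the LWP ids that gdb shows when taking the stack traces above.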

view this post on Zulip Email Gateway (May 03 2021 at 13:15):

From: Dominique Unruh <unruh@ut.ee>
Hi,

in my experience (and I have heard similar things mentioned on this
mailing list before), Isabelle sometimes becomes very sluggish after a
while. (I find it happens especially after a session with a lot of
re-execution of theories and heavy sledgehammer use.) I have not found
a better way than restarting Isabelle when things have become "stuck".
(An alternative to restarting jEdit entirely is to disable and re-enable
the plugin in the plugin manager; then your opened files stay open, but
it takes more clicks.)

Best wishes,
Dominique.

view this post on Zulip Email Gateway (May 03 2021 at 13:23):

From: Simon Wimmer <wimmersimon@gmail.com>
Hi,

if Isabelle eats up two CPU cores while idling, it might be that something
is running forever.
For beginners this is usually a non-terminating proof tactic: typically
blast, fastforce, or force,
but simp and auto can also loop if you give them the wrong rules.
You can identify such non-terminating tactics by looking for the purple
background color, which indicates that something is still running.

Kind regards,
Simon

view this post on Zulip Email Gateway (May 03 2021 at 13:42):

From: Alexander Kogtenkov via Cl-isabelle-users <cl-isabelle-users@lists.cam.ac.uk>
Is there an option in Isabelle/jEdit to treat proof tactics as failing after a specified timeout?

That would make the environment much more convenient and user-friendly. Purple does indeed indicate diverging tactics in the proof text, but 1) if there are several occurrences, one has to find them by hand, and 2) the coloring engine sometimes becomes very unresponsive as well, so it's not that easy to find these occurrences.

Best regards,
Alexander Kogtenkov

view this post on Zulip Email Gateway (May 04 2021 at 09:48):

From: Lawrence Paulson <lp15@cam.ac.uk>
Here is a related suggestion that would be useful for the same reason: to modify the behaviour of “Go to next error” so that it regards any still-running proof method as an error. Then one could quickly jump to non-terminating proof steps.

Larry


Last updated: Dec 05 2021 at 23:19 UTC