From: "Eugene W. Stark" <isabelle-users@starkeffect.com>
The quoted question triggered something that I have been meaning to ask:
In people's experience, what is a useful number of cores to have in a machine
to run Isabelle and what limits the amount of exploitable concurrency?
My main machine has been a 4-core i5-4670K @ 3.40GHz. Though I don't have money
just to throw around for new toys, I have been considering upgrading it to a machine
with more cores. I did some experiments using cloud servers on Digital Ocean, which can be
reconfigured relatively quickly from a cheap 1vCPU/1GB RAM $5/month maintenance level
(just to maintain the configuration so I don't have to reinstall each time I want to use it)
up to a 32vCPU/192GB RAM configuration at $960/mo (or $1.429/hr). The idea is to spend a
couple of minutes reconfiguring up to the expensive configuration at the start of a session,
use the machine for a while, then reconfigure back down to the maintenance level.
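For concreteness, that resize round-trip can be scripted with DigitalOcean's doctl client. This is only a sketch: the droplet ID and size slugs below are placeholders, and the exact subcommands and flags should be checked against the current doctl documentation.

  # Hypothetical droplet ID; substitute your own.
  DROPLET=12345678

  # Scale up at the start of a working session (the droplet has to be
  # powered off for a CPU/RAM resize; leaving the disk alone keeps the
  # resize reversible).
  doctl compute droplet-action power-off $DROPLET --wait
  doctl compute droplet-action resize $DROPLET --size s-32vcpu-192gb --wait
  doctl compute droplet-action power-on $DROPLET --wait

  # Scale back down to the cheap maintenance level afterwards.
  doctl compute droplet-action power-off $DROPLET --wait
  doctl compute droplet-action resize $DROPLET --size s-1vcpu-1gb --wait
  doctl compute droplet-action power-on $DROPLET --wait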
The vCPUs identify as follows:
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
stepping : 4
microcode : 0x1
cpu MHz : 2294.608
cache size : 25344 KB
bogomips : 4589.21
address sizes : 40 bits physical, 48 bits virtual
I tried doing "isabelle build" of my projects using configurations with various numbers of vCPUs
and watching the load average, CPU utilization, number of threads, and so on. I found it
difficult to find situations where more than about 8 CPUs were in use, and the overall average
speedup factor was only about 3x. So it is hard to justify paying for something larger
than the 8vCPU/32GB RAM configuration at $0.238/hr or maybe the 12vCPU/48GB configuration at $0.357/hr.
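For reference, an invocation along these lines (MySession is a placeholder session name; threads is the Isabelle system option for the number of ML worker threads) looks roughly like:

  # Build one session with an explicit thread count and verbose output.
  isabelle build -v -o threads=8 -d . MySession

  # Same build with fewer threads, for comparing the speedup.
  isabelle build -v -o threads=4 -d . MySession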
Also, the 2.30GHz Xeon CPUs on the cloud server are somewhat slower than my 3.40GHz i5 CPU at home,
so it actually turns out that I still get stuff done faster on my home system than if I use the
cloud server with double the number of cores. The cloud server is still useful when traveling,
though (assuming a low-latency internet connection so that the GUI is sort of usable) because
my old i3 laptop no longer has the horsepower to do anything useful with the projects that I have
been working on.
A question that comes to mind is: what is it that usually limits the amount of concurrency when
using Isabelle? Although obviously the source has to be parsed and type-checked before anything else
can be done, and that seems to occur in an initial pass over the code that runs ahead of the actual
proof checking, once that has been done it seems like as many of the basic proof steps as you like
ought to be doable in parallel. So I wonder why that doesn't happen.
From: Makarius <makarius@sketis.net>
On 17/06/2019 17:00, Eugene W. Stark wrote:
My main machine has been a 4-core i5-4670K @ 3.40GHz.
Currently the high end for home machines is something like 8-core i7 /
i9 @ 3.60 GHz. It also helps to have extra-fast RAM and SSD.
I did some experiments using cloud servers on Digital Ocean [...] reconfigure back down to the maintenance level.
The vCPUs identify as follows:
model name : Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
cpu MHz : 2294.608
cache size : 25344 KB
I tried doing "isabelle build" of my projects using configurations with various numbers of vCPUs
and watching the load average, CPU utilization, number of threads, and so on. I found it
difficult to find situations where more than about 8 CPUs were in use, and the overall average
speedup factor was only about 3x.
For a single session, 8 cores is indeed the optimal number for most
applications. Typical speedup is in the range of 3..7.
For full-scale maintenance of many (parallel) sessions, notably AFP, it
makes sense to have 80 cores around. Usually these are arranged in a
non-uniform manner. Often, server hardware is actually just a
single-board cluster of 8-core nodes.
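As a sketch of that multi-session setting (the AFP path and the numbers below are placeholders): several sessions can be built concurrently, each with its own group of worker threads.

  # Up to 10 sessions built in parallel, each using 8 ML worker threads,
  # which can keep roughly 80 cores busy when enough independent
  # sessions are available.
  isabelle build -D /path/to/afp/thys -j 10 -o threads=8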
Also, the 2.30GHz Xeon CPUs on the cloud server are somewhat slower than my 3.40GHz i5 CPU at home,
so it actually turns out that I still get stuff done faster on my home system than if I use the
cloud server with double the number of cores.
Yes, I guess this is normal for most clouds.
A question that comes to mind is: what is it that usually limits the amount of concurrency when
using Isabelle? Although obviously the source has to be parsed and type-checked before anything else
can be done, and that seems to occur in an initial pass over the code that runs ahead of the actual
proof checking, once that has been done it seems like as many of the basic proof steps as you like
ought to be doable in parallel. So I wonder why that doesn't happen.
Processing of Isabelle document sources (not "code") has many aspects.
It depends a lot on the structure of the application how much implicit
parallelism is in there by default. Scalable parallel execution is one
of the big challenges of computer science. Isabelle is already quite
advanced in this respect, but we now also see the limits of hardware:
parallel shared-memory access is a typical bottleneck.
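For those who want to experiment with the available knobs, the relevant system options can be set per build. A sketch, with option names as found in recent Isabelle releases (consult the "isabelle options" tool for the exact set in your version; MySession again stands for a concrete session):

  # threads          -- number of ML worker threads within one session
  # parallel_proofs  -- degree of proof forking (0 = sequential checking,
  #                     higher values fork toplevel proofs, and nested
  #                     proofs where possible)
  isabelle build -v -o threads=8 -o parallel_proofs=2 -d . MySession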
Makarius