Does anyone have a rough idea of the speedup obtained by having Thm.bicompose and Thm.biresolution in the kernel?
And is the slower performance outside the kernel a natural consequence of tactics being implemented via thms, compared to HOL4's term list * term and HOL Light's (string * thm) list * term goal representations?
I think the most expensive step would be the certification of the unifier.
I once tried to move the unifier out of the trusted code base of bicompose_aux, resulting in a slowdown factor of 1.5 (when the kernel merely checks that the unifier returns convertible results) up to 1.9 (when the unifier returns an equality theorem explicitly).
But those were not thorough experiments. Maybe you can do much better with some engineering.
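To make the "check that the unifier returns convertible results" variant concrete, here is a minimal sketch of the idea in Python. This is a hypothetical toy first-order encoding, not Isabelle's actual data structures or its higher-order unifier: constants are plain strings, schematic variables are strings starting with "?", and application is a tuple. The kernel does not trust the substitution the untrusted unifier computes; it re-applies it to both terms and compares the results.

```python
def walk(t, s):
    """Chase variable bindings in substitution s (occurs check omitted for brevity)."""
    while isinstance(t, str) and t.startswith("?") and t in s:
        t = s[t]
    return t

def unify(t1, t2, s=None):
    """Untrusted first-order unification, returning a substitution dict."""
    s = dict(s or {})
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if isinstance(t1, str) and t1.startswith("?"):
        s[t1] = t2
        return s
    if isinstance(t2, str) and t2.startswith("?"):
        s[t2] = t1
        return s
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            s = unify(a, b, s)
        return s
    raise ValueError("clash")

def subst(t, s):
    """Apply substitution s to term t."""
    t = walk(t, s)
    if isinstance(t, tuple):
        return tuple(subst(a, s) for a in t)
    return t

def certified_unify(t1, t2):
    """Kernel-side certification: instead of trusting the unifier, re-apply
    its substitution and check both sides became identical (the analogue of
    the 'convertible results' check; the ~1.9x variant would instead demand
    an explicit equality theorem and replay it through kernel rules)."""
    s = unify(t1, t2)
    if subst(t1, s) != subst(t2, s):
        raise ValueError("untrusted unifier returned a non-unifier")
    return s

# Example: unify f(?x, b) with f(a, ?y), then certify the result.
print(certified_unify(("f", "?x", "b"), ("f", "a", "?y")))
```

The point of the sketch is where the trust boundary sits: only `certified_unify`'s final comparison needs to be in the kernel, and that re-traversal of both instantiated terms is the source of the measured overhead.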
Was that a measurement of just the raw performance, or of building a bunch of Isabelle theories? I would think the bi-net will filter out most candidates anyway.
Total build time of session HOL
Didn't Simon Wimmer and his student investigate this thoroughly like 8 years ago or so?
And aren't flex-flex pairs another issue?
Yes, there was a BSc thesis attempting to move bicompose_aux out of the kernel. The thesis isn't publicly available, I believe. The final CPU build time increased by a factor of 2.4, and indeed flex-flex pairs were smashed as a simplifying step (as a result, some sessions no longer built).
Related work on flex-flex pairs by Simon: https://home.cit.tum.de/~wimmers/papers/Smashing_Isabelle.pdf
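For readers unfamiliar with the issue: a flex-flex pair such as ?F x =?= ?G y has no most general unifier, and "smashing" commits to the trivial solution that instantiates both heads to a constant function on a fresh variable. A toy sketch under an assumed encoding (schematic higher-order variables modelled as Python lambdas; not Isabelle's representation):

```python
# A flex-flex pair ?F x =?= ?G y, where x and y are distinct variables.
x, y = "x", "y"

# Smashing: instantiate both ?F and ?G to the constant function returning a
# fresh schematic variable ?h, which trivially solves the constraint.
F = lambda _: "?h"
G = lambda _: "?h"
assert F(x) == G(y)

# But this loses generality: other, incomparable unifiers exist as well,
# e.g. both heads mapped to the constant function returning "c".
F2 = lambda _: "c"
G2 = lambda _: "c"
assert F2(x) == G2(y)
```

This is why smashing is only a simplification, not a faithful replacement for carrying the constraints: solutions that the smashed instantiation excludes may be exactly the ones a later proof step needs, consistent with some sessions failing to build.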
Last updated: Jan 16 2026 at 08:33 UTC