From: "\"Mulder, Ike\"" <cl-isabelle-users@lists.cam.ac.uk>
Hello,
When using Word_Lib.More_Word (or anything that imports it), proving inequalities between concrete words can take an unacceptable amount of time. Consider
theory Scratch
  imports Word_Lib.Word_Lib_Sumo
begin

lemma slow_ineq:
  shows ‹(0xC00000 :: 64 word) ≠ 0xC10000›
  by simp

end
The cause seems to be an unfortunate interaction between the unsigned_norm simprocs and the simplification rule word_eq_numeral_iff_iszero from the standard HOL-Library.Word theory. That rule is essentially (a = b) <-> (a - b = 0). When the subtraction is then simplified further, the word normalization procedure kicks in n times for words of length n, which causes the slowdown.
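To see the shape of that rewrite: the underlying fact is just the group cancellation law (the library statement itself is phrased via iszero and numerals), so something like the following should hold, assuming HOL's generic eq_iff_diff_eq_0:

(* Illustration only, not the library rule: the equality-via-subtraction
   shape follows from the group laws alone. *)
lemma word_eq_via_diff:
  fixes a b :: ‹'a::len word›
  shows ‹a = b ⟷ a - b = 0›
  by (fact eq_iff_diff_eq_0)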
We have used two possible ‘fixes’ in our proofs so far (see the sketches after this list):
* Disable the simprocs with notes [[simproc del: unsigned_norm unsigned_norm_neg0 unsigned_norm_neg1]].
* Swap out word_eq_numeral_iff_iszero for word_unat_eq_iff [where ?v=‹numeral _› and ?w=‹numeral _›]. This avoids the subtraction: it normalizes the words, then compares them as naturals.
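Applied to the example above, the first fix reads roughly as follows (how the declaration is passed, e.g. via using, notes or supply, may need adjusting to taste):

lemma fast_ineq:
  shows ‹(0xC00000 :: 64 word) ≠ 0xC10000›
  using [[simproc del: unsigned_norm unsigned_norm_neg0 unsigned_norm_neg1]]
  by simp

and the second one as:

lemma fast_ineq':
  shows ‹(0xC00000 :: 64 word) ≠ 0xC10000›
  by (simp del: word_eq_numeral_iff_iszero
           add: word_unat_eq_iff [where ?v=‹numeral _› and ?w=‹numeral _›])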
For 64-bit words, these options seem to be about 34x and 13x faster than the default on my machine, respectively. The latter option keeps the normalization procedure around, so I think it has the least impact on existing proofs, apart from the possible speedup.
I have also tried to modify the simpset used by unsigned_word to a completely determined one. This speeds up the normalization procedure itself about 3x, but it is still very slow in conjunction with word_eq_numeral_iff_iszero. For reference, I ended up with something like

val ss =
  simpset_of (put_simpset HOL_ss @{context} addsimps
    @{thms bintrunc_numeral len_bit0 len_num1 One_nat_def mult_Suc_right
       mult_0_right add.right_neutral mult_num_simps numeral_times_numeral
       pred_numeral_simps bin_last_numeral_simps of_bool_eq Num.BitM.simps
       numeral_div_numeral divmod_cancel divmod_trivial prod.case prod.sel
       div_0 take_bit_0 times_int_code mult_1_right add.left_neutral
       one_plus_numeral add_num_simps minus_numeral_div_numeral
       minus_one_div_numeral Parity.adjust_div_eq odd_neg_numeral1
       numeral_One one_neq_zero bintrunc_Suc_numeral of_int_numeral})
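As a side note, if one wants to invoke such a frozen simpset from ML, something along these lines should work with the ss above in scope (the helper name is made up; simpset_of, put_simpset and Simplifier.rewrite are the standard interfaces):

ML ‹
  (* Hypothetical helper: rewrite a cterm with the fixed simpset ss,
     so that later simp declarations in ctxt cannot influence the result. *)
  fun rewrite_with_fixed_ss ctxt ct =
    Simplifier.rewrite (put_simpset ss ctxt) ct
›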
If anyone else knows of a better solution, we’d be interested to hear it. A downside to both of our fixes is that I don’t think they propagate across theory files, since ‘simp del’ does not persist.
Best, Ike
From: Florian Haftmann <florian.haftmann@cit.tum.de>
Hi Ike,
thanks for figuring that out.
As far as I can see, the best solution is to disregard word_eq_numeral_iff_iszero [simp] and to calculate equality on word values in the same manner as <= and < are calculated by default in HOL-Library.Word.
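At the user level, this can already be approximated now, before any library change, by making Ike's second fix persistent at theory level, e.g. (the collection name below is purely illustrative):

declare word_eq_numeral_iff_iszero [simp del]

lemmas word_numeral_eq_via_unat [simp] =
  word_unat_eq_iff [where ?v = ‹numeral _› and ?w = ‹numeral _›]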
@list Any objections?
* Swap out word_eq_numeral_iff_iszero for word_unat_eq_iff [where ?v=‹numeral _› and ?w=‹numeral _›]. This avoids the subtraction: it normalizes the words, then compares them as naturals.
I have also tried to modify the simpset used by unsigned_word to a completely determined one. This speeds up the normalization procedure itself about 3x, but is still very slow in conjunction with word_eq_numeral_iff_iszero. For reference, I ended up with something like val ss = simpset_of (put_simpset HOL_ss @{context} addsimps @{thms bintrunc_numeral len_bit0 len_num1 One_nat_def mult_Suc_right mult_0_right add.right_neutral mult_num_simps numeral_times_numeral pred_numeral_simps bin_last_numeral_simps of_bool_eq Num.BitM.simps numeral_div_numeral divmod_cancel divmod_trivial prod.case prod.sel div_0 take_bit_0 times_int_code mult_1_right add.left_neutral one_plus_numeral add_num_simps minus_numeral_div_numeral minus_one_div_numeral Parity.adjust_div_eq odd_neg_numeral1 numeral_One one_neq_zero bintrunc_Suc_numeral of_int_numeral}).
By »completely determined« you mean static, i.e. unaffected by later simp declarations? That would indeed be interesting to include by default.
@list Opinions?
Cheers,
Florian
From: Gerwin Klein <cl-isabelle-users@lists.cam.ac.uk>
On 15 Apr 2025, at 17:53, Florian Haftmann <florian.haftmann@cit.tum.de> wrote:
As far as I can see, the best solution is to disregard word_eq_numeral_iff_iszero [simp] and to calculate equality on word values in the same manner as <= and < are calculated by default in HOL-Library.Word.
@list Any objections?
Yes, I think that makes sense. Generally word_eq_numeral_iff_iszero as [simp] has popped up as inconvenient a few times for us as well.
* Swap out word_eq_numeral_iff_iszero for word_unat_eq_iff [where ?v=‹numeral _› and ?w=‹numeral _›]. This avoids the subtraction: it normalizes the words, then compares them as naturals.

I have also tried to modify the simpset used by unsigned_word to a completely determined one. This speeds up the normalization procedure itself about 3x, but is still very slow in conjunction with word_eq_numeral_iff_iszero. For reference, I ended up with something like val ss = simpset_of (put_simpset HOL_ss @{context} addsimps @{thms bintrunc_numeral len_bit0 len_num1 One_nat_def mult_Suc_right mult_0_right add.right_neutral mult_num_simps numeral_times_numeral pred_numeral_simps bin_last_numeral_simps of_bool_eq Num.BitM.simps numeral_div_numeral divmod_cancel divmod_trivial prod.case prod.sel div_0 take_bit_0 times_int_code mult_1_right add.left_neutral one_plus_numeral add_num_simps minus_numeral_div_numeral minus_one_div_numeral Parity.adjust_div_eq odd_neg_numeral1 numeral_One one_neq_zero bintrunc_Suc_numeral of_int_numeral}).

By »completely determined« you mean static, i.e. unaffected by later simp declarations? That would indeed be interesting to include by default.
Inside the unsigned_word procedure? Yes, having the simp set explicit there as above sounds like a good idea in any case — at least if there are good enough test cases to spot it breaking down if something changes in the background implementation (which hopefully is rare these days). It might even be useful to give this list of lemmas a name outside the procedure (just with lemmas .. = ..).
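A minimal sketch of that, reusing the theorem list from Ike's snippet (the collection name is of course just a placeholder):

lemmas word_numeral_norm_simps =
  bintrunc_numeral len_bit0 len_num1 One_nat_def mult_Suc_right mult_0_right
  add.right_neutral mult_num_simps numeral_times_numeral pred_numeral_simps
  bin_last_numeral_simps of_bool_eq Num.BitM.simps numeral_div_numeral
  divmod_cancel divmod_trivial prod.case prod.sel div_0 take_bit_0
  times_int_code mult_1_right add.left_neutral one_plus_numeral add_num_simps
  minus_numeral_div_numeral minus_one_div_numeral Parity.adjust_div_eq
  odd_neg_numeral1 numeral_One one_neq_zero bintrunc_Suc_numeral of_int_numeral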
Cheers,
Gerwin