From: Lawrence Paulson via isabelle-dev <isabelle-dev@mailman.proof.cit.tum.de>
Neural_Networks is badly broken. I took a look and the failure is in a proof that tests whether a certain generated construction is equal to an explicit construction. They are supposed to be literally identical and are not. I think we should get back to the authors to find out why it failed and how it can be made robust.
Larry
From: "Achim D. Brucker" <adbrucker@0x5f.org>
Hi,
I did a first investigation. Overall, this part of the theory is
intended to serve as a regression test. Thus, it worked as it should:
acting as an early warning for subtle changes that would otherwise go
undetected, potentially creating "bit rot".
The change is actually due to a difference in the Poly/ML version used
by Isabelle 2025 vs the current development version (and 2025-1, for
that matter). Namely:
ML\<open>
PackReal32Little.fromBytes (Word8Vector.fromList [0wxC5, 0wx10,
0wxE8, 0wx3B])
\<close>
returns a different value, namely:
- Isabelle 2025: val it = 0.007082077209: PackReal32Little.real
- Isabelle development/2025-1: val it = 0.007082077: PackReal32Little.real
As the rule of thumb for the precision of 32-bit IEEE 754 floating-point
numbers is around 7-8 decimal digits, the new behaviour is OK.
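For readers without a Poly/ML session at hand, the same decoding can be reproduced with Python's standard struct module. This is an equivalent sketch in another language, not the Isabelle/Poly/ML code under discussion:

```python
import math
import struct

# The four bytes from the theory, in little-endian order, decode to an
# IEEE 754 single-precision (binary32) value.
raw = bytes([0xC5, 0x10, 0xE8, 0x3B])
value = struct.unpack("<f", raw)[0]  # '<f' = little-endian binary32

# Rounded to 9 decimal places, this matches the new Poly/ML rendering.
print(f"{value:.9f}")  # 0.007082077

# Packing the value again reproduces the original bytes exactly: the
# bit pattern is stable, only the printed decimal form differs.
assert struct.pack("<f", value) == raw

# The "7-8 decimal digits" rule of thumb comes from the 24-bit
# significand of binary32: log10(2**24) is about 7.2.
assert 7 < math.log10(2 ** 24) < 8
```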
Hence, the fix to get the AFP entry working again is to update the
theories to reflect the new values.
We will take care of this tomorrow.
Best,
Achim (and Amy)
On 19/11/2025 16:25, Lawrence Paulson via isabelle-dev wrote:
Neural_Networks is badly broken. I took a look and the failure is in a
proof that tests whether a certain generated construction is equal to
an explicit construction. They are supposed to be literally identical
and are not. I think we should get back to the authors to find out why
it failed and how it can be made robust.
Larry
From: Lawrence Paulson via isabelle-dev <isabelle-dev@mailman.proof.cit.tum.de>
I find it surprising that the meaning of a specific bit pattern as a real number according to an established floating-point standard can change from year to year. How do we know it won't change again?
Larry
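Larry's concern can be checked directly: the two printed values both denote the same underlying binary32 bit pattern, which suggests only Poly/ML's decimal rendering changed, not the IEEE 754 decoding itself. A small independent check in Python using the standard struct module (not the Poly/ML internals):

```python
import struct

def to_bits(s: str) -> int:
    """Parse a decimal string and return the bit pattern of the
    nearest IEEE 754 binary32 value."""
    packed = struct.pack("<f", float(s))  # round to nearest float32
    return int.from_bytes(packed, "little")

old_print = "0.007082077209"  # as printed by Isabelle 2025
new_print = "0.007082077"     # as printed by development / 2025-1

# Both strings round back to the same bit pattern, which is exactly the
# byte sequence [0xC5, 0x10, 0xE8, 0x3B] read little-endian.
assert to_bits(old_print) == to_bits(new_print) == 0x3BE810C5
```

About 9 significant decimal digits always suffice to identify a binary32 value uniquely, so any rendering that is correct to at least 9 digits round-trips to the same bits.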
On 19 Nov 2025, at 22:37, Achim D. Brucker <adbrucker@0x5f.org> wrote:
Hi,
I did a first investigation. Overall, this part of the theory is intended to serve as regression test. Thus, it worked as it should: acting as early warning for subtle changes that otherwise would go undetected, potentially creating "bit rot". The change is actually due to a difference in Poly/ML used in Isabelle 2025 vs the current development version (and 2025-1, for that matter). Namely:
ML\<open>
PackReal32Little.fromBytes (Word8Vector.fromList [0wxC5, 0wx10, 0wxE8, 0wx3B])
\<close>
returns a different value, namely:
- Isabelle 2025: val it = 0.007082077209: PackReal32Little.real
- Isabelle development/2025-1: val it = 0.007082077: PackReal32Little.real
As the rule of thumb for the precision of 32-bit IEEE 754 floating point numbers is around 7-8 decimal digits, the new behaviour is OK. Hence, the fix to get the AFP entry working again is to update the theories to reflect the new values.
We will take care of this tomorrow.
Best,
Achim (and Amy)
On 19/11/2025 16:25, Lawrence Paulson via isabelle-dev wrote:
Neural_Networks is badly broken. I took a look and the failure is in a proof that tests whether a certain generated construction is equal to an explicit construction. They are supposed to be literally identical and are not. I think we should get back to the authors to find out why it failed and how it can be made robust.
Larry