From: Andreas Lochbihler <andreas.lochbihler@inf.ethz.ch>
Dear all,
We noticed that the code setup for the theory Fraction_Field in HOL/Library is broken, at
least since Isabelle2013. We would like to use code generation in Prathamesh's formalisation
of knot theory, where the elements of the fraction fields are polynomials over the integers.
I had a brief look at the theory Fraction_Field and noticed that there is a smart
pseudo-constructor Fract, which is declared as code_datatype. So it feels as if code
generation was working at some time in the distant past and got broken at some point.
Does anyone know about the status of Fraction_Field?
I believe that it would be fairly easy to "fix" the problem of the code generator setup by
deriving a few code equations from the existing lemmas, but this will clearly result in
suboptimal code, for two reasons. First, we would need to test whether the denominator is 0
all over the place. Second, the elements of the fraction fields will not be normalised
automatically. For our application to polynomials, this means that we would need some
Euclidean algorithm for cancelling common factors of polynomials. I dimly remember that
there is some support for the Euclidean algorithm in the pipeline. What is the status there?
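To make the first point concrete, here is a rough, untested sketch of the kind of code
equation I have in mind for addition (theory and lemma names are made up); the zero tests
are unavoidable because Fract a 0 collapses to 0:

theory Fraction_Field_Code_Sketch
  imports "~~/src/HOL/Library/Fraction_Field"
begin

(* Sketch only: the shape of a derived code equation for addition on 'a fract. *)
lemma fract_add_code [code]:
  "Fract a b + Fract c d =
     (if b = 0 then Fract c d
      else if d = 0 then Fract a b
      else Fract (a * d + c * b) (b * d))"
  sorry (* should follow from the existing addition lemmas; proof omitted in this sketch *)

end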
Best,
Andreas and Prathamesh
From: Manuel Eberl <eberlm@in.tum.de>
It's all there in ~~/src/HOL/Number_Theory/Euclidean_Algorithm. However,
if you want to do this for polynomials with integer coefficients, that
is not going to work: polynomials over the integers are /not/ a
Euclidean ring; they are not even a principal ideal domain.
The Euclidean algorithm can be adapted to work; the easiest way is
probably to use subresultants, because this way, all intermediate
results are also integer polynomials, whereas the classic Euclidean
algorithm would have fractional coefficients.
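To see the difference on a made-up example: dividing x^2 + 1 by 2*x + 1 over the rationals
gives quotient x/2 - 1/4 and remainder 5/4, so already the first division step leaves Z[x].
Pseudo-division multiplies by lc(2*x + 1)^2 = 4 first, giving
4*(x^2 + 1) = (2*x - 1)*(2*x + 1) + 5, and everything stays integral; subresultant
remainder sequences additionally keep the coefficients from blowing up along the way.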
Cheers,
Manuel
From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
Hi Andreas and Prathamesh,
We noticed that the code setup for the theory Fraction_Field in
HOL/Library is broken, at least since Isabelle2013. We would like to use
code generation in Prathamesh's formalisation of knot theory, where the
elements of the fraction fields are polynomials over the integers. I had
a brief look at the theory Fraction_Field and noticed that there is a smart
pseudo-constructor Fract, which is declared as code_datatype. So it
feels as if code generation was working at some time in the distant
past and got broken at some point.

Does anyone know about the status of Fraction_Field?
I dimly remember that initially Fraction_Field just took over the
then-used code setup from the rationals, and maybe it never worked as
intended (would need a closer investigation to find out actually).
When code generation had been equipped with invariants, I revisited
Fraction_Field to make a code setup close to the rationals but soon
realized that…
I believe that it would be fairly easy to "fix" the problem of the code
generator setup by deriving a few code equations from the existing lemmas,
but this will clearly result in suboptimal code, for two reasons. First,
we would need to test whether the denominator is 0 all over the place.
Second, the elements of the fraction fields will not be normalised automatically.
…I need a generalized gcd for that, and so I let everything stand as it is.
For the application on polynomials, this means that we would need some
Euclidean algorithm for cancelling common factors of polynomials. I
dimly remember that there is some support for the Euclidean algorithm in
the pipeline. What is the status there?
I think you can build on the corresponding theory in Number_Theory, but
that needs some further rounds of polishing before I would recommend
turning it into a »hard« dependency. Instead I suggest putting the code
setup into a separate theory »Executable_Fraction_Field«. This is what
we did 8 years ago when code generation was still highly experimental…
Hope this helps,
Florian
From: Andreas Lochbihler <andreas.lochbihler@inf.ethz.ch>
Dear Manuel and Florian,
Thank you for your quick replies. Let me summarise how I see the situation:
- Fraction fields are defined over integral domains (type class idom). With little
effort, we could get a horribly inefficient setup for code generation by just massaging
the existing equations a bit.
- For Euclidean rings, a working code setup with reasonable efficiency can be obtained
with moderate effort. If we take that route, code generation only works for Euclidean
rings, but polynomials over the integers are not such a ring (which is our application at
the moment).
- For fraction fields over polynomials over rings (rather than fields), one can use
subresultants, but I have not been able to find them formalised in Isabelle. Are they
hidden somewhere? If not, are there any plans to formalise them?
Even if we had subresultants, I do not yet see how they can be integrated with the
Isabelle type classes. Obviously, we should have a uniform code setup for fraction fields,
i.e., it should work for all instances the same way. AFAICS, this means that we need
another type class which provides some sort of normalisation function normalise :: 'a =>
'a => 'a * 'a. For euclidean rings, this could just be delegated to Euclid's algorithm.
For polynomials, the situation is not as clear. Either the instantiation always implements
some normalisation algorithm (e.g. using subresultants) or it tests whether the base type
is a field and if so, it uses Euclid's algorithm again. I do not know which of these is
preferable in terms of efficiency. Do you have any idea?
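To make this concrete, here is a rough, untested sketch of the kind of class I have in
mind (the class name, the constant name and the exact specification are just placeholders):

(* Sketch only: normalise merely has to return a pair in the same equivalence
   class and keep the denominator non-zero. *)
class fract_normalise = idom +
  fixes normalise :: "'a ⇒ 'a ⇒ 'a × 'a"
  assumes normalise_in_class:
    "b ≠ 0 ⟹ snd (normalise a b) ≠ 0 ∧ fst (normalise a b) * b = a * snd (normalise a b)"

Instances would then be free to cancel by a gcd, use subresultants, or do nothing at all.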
What's your take on this?
Best,
Andreas
From: Wenda Li <wl302@cam.ac.uk>
Dear Andreas,
Are subresultants really necessary for this? From my understanding,
subresultant remainder sequences are just an improved version of
pseudo-remainder sequences that controls coefficient growth. However,
if not implemented properly (i.e. merely calculating subresultants
through determinants), I don't think there is any advantage of
subresultants over pseudo-division, while pseudo-division on
polynomials is much easier to formalize. In fact, I did a formalization
of polynomial pseudo-division by imitating the Euclidean division in
Library/Polynomial.thy, and the relation looks like:
definition ppdivmod_rel :: "'a::idom poly ⇒ 'a poly ⇒ 'a poly ⇒ 'a poly ⇒ 'a ⇒ bool"
  where
    "ppdivmod_rel x y q r m ⟷
       smult m x = q * y + r
       ∧ (if y = 0 then q = 0 else r = 0 ∨ degree r < degree y)
       ∧ m = (if x = 0 ∨ y = 0 then 1 else (lead_coeff y) ^ (degree x + 1 - degree y))"
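To illustrate the relation on a small made-up integer example: for x = 3*X^2 + X + 1 and
y = 2*X - 1 we get m = lead_coeff y ^ (degree x + 1 - degree y) = 2^2 = 4, and
smult 4 x = (6*X + 5) * y + 9, so ppdivmod_rel x y (6*X + 5) 9 4 holds (the remainder 9
has degree 0 < degree y = 1).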
Hope this helps,
Wenda
From: Andreas Lochbihler <andreas.lochbihler@inf.ethz.ch>
Dear Wenda,
I am not an expert on computer algebra, so I am happy to use whatever seems reasonable.
Does your formalisation of pseudo-division also include an executable algorithm? Is your
formalisation available?
Best,
Andreas
From: Wenda Li <wl302@cam.ac.uk>
The formalization is executable. Please give me a day or two, I will
clean it a little bit and then upload it to my Sturm-Tarski Theorem
entry in AFP.
Wenda
From: Andreas Lochbihler <andreas.lochbihler@inf.ethz.ch>
Hi Florian and Manuel,
I have experimented a bit with a normalisation function to be used in the operations over
fraction fields and had a look at Euclidean_Algorithm in Number_Theory.
- I noticed that there are the type classes euclidean_ring and euclidean_ring_gcd. Most
of the theorems about gcd which I need are in euclidean_ring_gcd, but not in
euclidean_ring itself. Unfortunately, the instantiations for nat, int, etc. are only done
for euclidean_ring, not for euclidean_ring_gcd. What are the plans for these type classes?
Should I spend any effort on adding instances for euclidean_ring_gcd? Or will this become
obsolete with the polishing that is still due?
- I am no longer sure that an invariant-based approach is the optimal thing for the
fraction field. In the code equations for the arithmetic operations, I don't see any big
opportunities to exploit the invariant that numerator and denominator are normalised in
some unspecified form. Only the equality tests become simpler (as normal forms are unique
and we thus save two multiplications). Instead, it seems much simpler to treat
cancellation of common factors as an optimisation without logical significance. For
example, we could just add a call to a simplification function before calls to the Fract
constructor. The simplification function only has to return an element of the same
equivalence class, so for nat and int, we could use a normalisation function based on
Euclid's algorithm. For other types, the simplification function could also just be the
identity. What do you think?
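For int, a rough, untested sketch of such a simplification function (names made up) could be:

(* Cancel common factors; the result lies in the same equivalence class,
   so the function has no logical significance. *)
definition cancel_int :: "int ⇒ int ⇒ int × int" where
  "cancel_int a b =
     (let g = gcd a b
      in if g = 0 then (a, b) else (a div g, b div g))"

One would then prove an equation along the lines of
Fract a b = (case cancel_int a b of (a', b') ⇒ Fract a' b') and install it with
[code_unfold], so that every literal occurrence of Fract gets the cancellation for free;
for other types the corresponding function could simply be the identity.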
Best,
Andreas
From: Andreas Lochbihler <andreas.lochbihler@inf.ethz.ch>
Hi Manuel,
On 04/09/15 11:44, Manuel Eberl wrote:
- I noticed that there are the type classes euclidean_ring and
euclidean_ring_gcd. Most of the theorems about gcd which I need are in
euclidean_ring_gcd, but not in euclidean_ring itself. Unfortunately,
the instantiations for nat, int, etc. are only done for
euclidean_ring, not for euclidean_ring_gcd. What are the plans for
these type classes? Should I spend any effort on adding instances for
euclidean_ring_gcd? Or will this become obsolete with the polishing
that is still due?

euclidean_ring_gcd is nothing but euclidean_ring with the additional
assumption that gcd and Gcd (which are introduced in a syntactic type
class and therefore have no a priori properties associated with them)
are the same as the Euclidean GCD (and the same for LCM).

So, in a sense, there is usually no proof work to be done to make
anything an instance of euclidean_ring_gcd if you already have
euclidean_ring; you can just define "gcd = gcd_eucl", "Gcd = Gcd_eucl"
and so on and you get the instance for free.
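For a hypothetical type T that is already an instance of euclidean_ring and has no prior
gcd, the sketch is roughly this (type name and proof are placeholders):

instantiation T :: euclidean_ring_gcd
begin
definition "gcd = (gcd_eucl :: T ⇒ T ⇒ T)"
definition "lcm = (lcm_eucl :: T ⇒ T ⇒ T)"
definition "Gcd = (Gcd_eucl :: T set ⇒ T)"
definition "Lcm = (Lcm_eucl :: T set ⇒ T)"
instance
  sorry (* discharged by unfolding the four definitions above *)
end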
That's clear. However, nat and int already have their own implementation of the GCD...
If there is already an existing definition of gcd, you have to prove its
equivalence to the Euclidean GCD etc., which you can do by showing that
it fulfils the defining properties of the GCD and that it returns a
normalised GCD, i.e. "1" and not "-1" for integers.
Well, it is exactly this equivalence proof of which I am uncertain. If Florian
restructures the whole hierarchy such that gcd/lcm definitions for int and nat are done in
terms of gcd_eucl, then the equivalence proof becomes irrelevant.
In fact, from what I can see, the equivalences are not provable at all in Isabelle2015,
because euclidean_ring_gcd also talks about the set versions Gcd and Lcm. And in
Euclidean_Algorithm, Lcm_eucl and Gcd_eucl are defined such that they also work for
infinite sets whereas in GCD, Lcm for nat is defined to be 0 for infinite sets. So what is
the plan here?
Best,
Andreas
From: Jose Divasón <jose.divasonm@unirioja.es>
Dear all,
Some months ago I noticed the same thing. As Manuel says,
euclidean_ring_gcd is just euclidean_ring + gcd = gcd_eucl (and the same
for Gcd, lcm and Lcm).
Anyway, in my AFP entry called Echelon_Form there is another variant of
Manuel's file (see
http://afp.sourceforge.net/browser_info/current/AFP/Echelon_Form/Euclidean_Algorithm.html).
I just slightly modified his version to move most of the theorems stated in
the euclidean_ring_gcd class to the euclidean_ring one. In another file
(http://afp.sourceforge.net/browser_info/current/AFP/Echelon_Form/Euclidean_Algorithm_Extension.html),
the following instances were also proven:
instantiation nat :: euclidean_semiring_gcd
instantiation int :: euclidean_ring_gcd
instantiation poly :: (field) euclidean_ring
instantiation poly :: (field) euclidean_ring_gcd
Nevertheless, it's worth noting that my files are independent of
Manuel's one (which is the one in the standard library). In
addition, my changes were made with respect to an old version of
Manuel's file, so there have probably been some
improvements/restructurings of his file since then.
Best,
Jose
From: Manuel Eberl <eberlm@in.tum.de>
I just looked at all of this again and found the situation to be much
more painful than I had remembered. In the long run, we definitely want
to replace the definitions of the gcd for natural numbers and integers
with gcd_eucl, and then add the current definitions as alternative
definitions and code equations.
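Roughly (just a sketch; the exact names and statements would have to be worked out): if
gcd on nat were defined as gcd_eucl, the familiar recursion would be re-installed as a
code equation, e.g.

lemma gcd_nat_code [code]:
  "gcd m n = (if n = (0::nat) then m else gcd n (m mod n))"
  sorry (* the current characterisation, re-proved against gcd_eucl *)

and similarly for int.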
I just slightly modified his version to move most of the theorems stated in
the euclidean_ring_gcd class to the euclidean_ring one.
I also concluded that that is pretty much the only viable option (other
than doing the change I mentioned above, which is probably also what
Andreas meant initially).
I guess Florian had planned to do this anyway in the course of his
refactoring of algebraic type classes, but I don't know what his current
status is on that.
Cheers,
Manuel
From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
Hi all,
I am somewhat lost as to how the »algebraic stack« is supposed to look in
this example. Something like Fraction over Poly over Int?
Since Poly over Int forms a factorial ring but not a Euclidean ring,
the question is how to generalize normalization of quotients a / b
accordingly. It won't work naturally with type classes:
Poly over Rat --> normalization via gcd / eucl
Poly over Poly over Rat --> somehow different
Hence, some type class magic would be needed to detect whether to use
the rather efficient euclidean algorithm or something different – or
even refrain from normalization at all!?
Cheers,
Florian
From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
This discussion is leaving the scope of isabelle-users since we are
talking about ongoing developments in the Isabelle repository;
nevertheless, here is a short preview:
The next Isabelle release will (in Complex_Main) contain type classes
for generic gcd, lcm, Gcd, Lcm with the most fundamental instances
(hence no equivalence problems).
Before we can continue to move the Euclidean algorithm itself to
Complex_Main, the existing theories must be cleaned up by removing
duplicates, generalizing what can be generalized, etc., concerning
gcd/lcm/Gcd/Lcm.
After that, the Euclidean algorithm can be absorbed into Complex_Main.
I have this on my agenda, but not prioritized. Any help concerning the
GCD cleanup is appreciated.
Details could be discussed on isabelle-dev.
Cheers,
Florian