Stream: Mirror: Isabelle Users Mailing List

Topic: [isabelle] Isabelle standard mathematics library, HOL-Ana...


view this post on Zulip Email Gateway (Jul 22 2020 at 06:48):

From: Manuel Eberl <eberlm@in.tum.de>
I fear that there are no canonical references for any of these – at
least not that I know of.

You can cite Isabelle as a whole, or you can try to specifically cite
e.g. the generated HTML [1] or the proof document [2]. For HOL-Analysis,
there is also a manual [3] (which only contains the most important bits).

There was an idea in the past to publish some kind of overview of
HOL-Analysis, but that would require a lot more cleaning up first.

The filter paper that you gave as a reference for the standard library
only covers a very small part of the standard library, of course, namely
filters. I always give that reference whenever I talk about limits or
topological concepts (which inevitably involve filters in Isabelle), but
for anything else, it hardly makes sense to cite that.

The measure theory paper should be cited when talking about measure
theory, probability theory, and Lebesgue integration.

Sorry for the following rambly bit:

It's generally a bit of a problem that working on the standard library
is a somewhat thankless task. It's hard to get any publications out of
it (which are the bread and butter in academia) because all you're doing
is formalising routine stuff – often not even very difficult stuff, just
a lot of it. And there's not anything interesting to be said about it in
a paper. You can write tens of thousands of lines of library material
without the people deciding whether or not to give you a grant or tenure
or whatever taking notice of it. The best one can usually hope for is to
mention it in a paper on some more interesting application.

The situation is of course even worse for library maintenance and – as
Makarius reminds us occasionally – infrastructure tasks. There's no
glory in keeping the machine running – but if it stops, people show up
at your door and complain.

Well, I personally am doing all right I think; I've struck a good
balance between library work and publishable things, but I still think
it's a problem in general.

Cheers,

Manuel

[1]: https://isabelle.in.tum.de/library/HOL/HOL-Analysis/index.html
[2]: https://isabelle.in.tum.de/library/HOL/HOL-Analysis/document.pdf
[3]: https://isabelle.in.tum.de/library/HOL/HOL-Analysis/manual.pdf

view this post on Zulip Email Gateway (Jul 22 2020 at 22:43):

From: Mikhail Chekhov <mikhail.chekhov.w@gmail.com>
Dear Manuel Eberl/All,

Thank you for your reply.

On 22/07/2020 08:47, Manuel Eberl wrote:

> You can cite Isabelle as a whole, or you can try to specifically cite
> e.g. the generated HTML [1] or the proof document [2]. For HOL-Analysis,
> there is also a manual [3] (which only contains the most important bits).

Indeed, it looks like it makes sense to cite the generated HTML files. I
wonder, however, how one would order the list of authors for such
citations.

> The filter paper that you gave as a reference for the standard library
> only covers a very small part of the standard library, of course, namely
> filters.

The reason why I mentioned this paper is that it provides an overview of a
significant part of the standard library. However, I found it slightly
difficult to tell from the paper how significant the authors' contribution
to the part of the library that they mention was. From the
'summary' section: "starting with topological spaces, over metric spaces to
Euclidean spaces". It appears that this list also implicitly includes some
material on vector spaces. The type classes that they mention seem to
correspond to a significant part of the standard library that I need to
discuss in my own development. In any case, this is merely an explanation
for why I thought that it may be an appropriate reference for a significant
part of the standard library that I needed to cite. However, as you have
stated, it may be best to cite the generated HTML files.

> It's generally a bit of a problem that working on the standard library
> is a somewhat thankless task. It's hard to get any publications out of
> it (which are the bread and butter in academia) because all you're doing
> is formalising routine stuff – often not even very difficult stuff, just
> a lot of it. And there's not anything interesting to be said about it in
> a paper. You can write tens of thousands of lines of library material
> without the people deciding whether or not to give you a grant or tenure
> or whatever taking notice of it. The best one can usually hope for is to
> mention it in a paper on some more interesting application.

Thank you for mentioning this. It is useful to know the general attitude to
library development/maintenance in the community.

Luckily for me, however, your comments are hardly applicable to my own
situation. I need to cite the various libraries that are available in the
main distribution of Isabelle in my work on Types-to-Sets. In essence, as
part of this work, I demonstrate how one can synthesize something similar
to (a small part of) HOL-Algebra from the type classes in the standard
library in a semi-automated manner. Moreover, I use the elements of both
libraries in certain explanations/examples. Thus, this work hardly falls
into the category of library development/maintenance.

Kind Regards,
Mikhail Chekhov

view this post on Zulip Email Gateway (Jul 23 2020 at 06:04):

From: "Chun Tian (binghe)" <binghe.lisp@gmail.com>
Hi Manuel,

I thought the biggest contribution of formalizing math theorems was
supposed to be verifying the correctness of their informal proofs, where
mathematicians may sometimes overlook subtle details which eventually
turn the whole proof into a false statement.

Of course, mistakes in the proofs of classical results (which must be old
and fundamental enough to be covered) must be very rare. But if anything
of this kind is discovered by a proof engineer, the work should be
publishable (to mathematicians), shouldn't it? Or let's take one step
back: if the theorem statements are correct but the proofs in textbooks
are wrong, and the proof engineer gives a correct one, this should also
be something notable, right?

Regards,

Chun Tian

view this post on Zulip Email Gateway (Jul 23 2020 at 06:23):

From: Richard Waldinger <waldinger@AI.SRI.COM>
I’ve heard it said that Shankar found holes in Gödel’s proof of his incompleteness theorem. Shankar’s thesis involved a machine check (via one of the Boyer-Moore theorem provers) of that proof. Don’t worry, the theorem is still true.

view this post on Zulip Email Gateway (Jul 23 2020 at 06:58):

From: Manuel Eberl <eberlm@in.tum.de>

> Of course, mistakes in the proofs of classical results (which must be
> old and fundamental enough to be covered) must be very rare.

I cannot remember any problem like that ever arising with textbook
maths. These things are usually hundreds of years old and extremely well
understood. Sure, there might be a problem with the proof in one
particular textbook, but then there are a hundred other textbooks with
different proofs.

Even if you do find a mistake in something like that, I don't think many
people would find it very significant if there are other books that
don't have that mistake.

> But if anything of this kind is discovered by a proof engineer, the work
> should be publishable (to mathematicians), shouldn't it?

Yes, indeed such things have happened. I recall Sébastien Gouëzel
finding a mistake in a published maths proof that was nontrivial to fix,
and he then published a paper with the original author of that proof in
order to fix it. However, this was hardly "standard library material".

> Or let's take one step back: if the theorem statements are correct but
> the proofs in textbooks are wrong, and the proof engineer gives a
> correct one, this should also be something notable, right?

That depends on whom you ask. "Proof engineers" like us tend to be
extremely pedantic about what constitutes a correct proof.
Mathematicians are more flexible; usually, if the "proof" conveys the
right idea to the reader, it is considered a correct proof, even if
there are some (fixable) arithmetic mistakes in it, a few cases were
missed, or there are some gaps that you have to think about for a bit
as a reader.

The majority of the "mistakes" that one finds with formalisation are of
this nature because there are so many of them. I think I haven't done a
formalisation of any big-ish result that hasn't turned up at least one
such issue. "Genuine" mistakes are much rarer especially in "standard
library material", but I expect they are more prevalent in cutting-edge
mathematical research.

By the way, there is an overview thread on Mathoverflow about mistakes
found using ITPs:

https://mathoverflow.net/questions/291158/proofs-shown-to-be-wrong-after-formalization-with-proof-assistant

Cheers,

Manuel

view this post on Zulip Email Gateway (Jul 23 2020 at 07:03):

From: Manuel Eberl <eberlm@in.tum.de>
Well, I would hardly consider Gödel's incompleteness theorems
standard-library material, nor is their formalization a routine task.

"Standard library material", to me, is things like undergraduate-level
algebra, analysis, and number theory. Basic group and ring theory,
L'Hospital's theorem, greatest common divisor, that sort of stuff.
Nobody doubts that it can be done, many such things have been done
before, and there is nothing to be learnt from doing them.

30 years ago, formalising such results was perhaps a great achievement
worthy of publication, but these days not so much – especially if it was
already done in another system. And even though a Coq library doesn't
help Isabelle users much, someone still /has/ to do it in Isabelle.

(Just to be clear: I'm not saying such routine things /should/ be
publishable, at least not in the same venues as "novel" formalisations.
I'm just saying it's a pity because such unglamorous things also need to
be done and there is currently no mechanism to reward it.)

Manuel

view this post on Zulip Email Gateway (Jul 23 2020 at 07:15):

From: Manuel Eberl <eberlm@in.tum.de>
You can try looking at the author lists of the files in question (at the
very beginning of each file). These typically list the people who first
worked on these topics, although perhaps not everyone who contributed
material and maintained the libraries since. I at least only add my name
to the list if I make some very significant additions to a file.

Much of our analysis was also (at least initially) ported from HOL
Light, so the authors of HOL Light's analysis library (probably mostly
John Harrison?) also deserve a lot of credit for Isabelle's library.

By the way, as I see it, the main reason why this filter paper was
worthy of publication is that it (to my knowledge) introduced
filter-based topology into the ITP world, which makes so many things
much more convenient and is /not/ simply a routine formalization of some
well-known maths. Filters are known in mathematics, but mathematicians
do not use them in undergraduate textbooks on analysis. And there's the
description of the type class hierarchy, which is also an ITP issue that
goes a bit beyond routine formalisation of maths and has more of a
"software engineering" flavour.

It is always good if one can include such aspects in formalisations.
Clemens Ballarin also recently had a paper on formalising standard group
theory that focused heavily on what a convenient idiomatic
representation in "modern" Isabelle/HOL could look like.

Manuel

view this post on Zulip Email Gateway (Jul 23 2020 at 10:43):

From: Lawrence Paulson <lp15@cam.ac.uk>
This brings up the interesting point of the gauge integral. It has its fans, and John Harrison was clearly convinced. But I’ve since gotten the impression that for mainstream mathematicians, the Lebesgue integral is the only integral. (E.g. it’s the foundation of measure theory and everything built upon that.) So now we have two theories of integration, and unfortunately many developments use both sorts of integrals. Transitioning to a 100% Lebesgue integral library will require a huge amount of absolutely thankless work.

Larry

view this post on Zulip Email Gateway (Jul 23 2020 at 12:59):

From: Manuel Eberl <eberlm@in.tum.de>
It certainly has its advantages. The multivariate change of variables as
ported from HOL Light has a very nice, general, and simple form for the
Gauge integral:

theorem has_absolute_integral_change_of_variables:
  fixes f :: "real^'m::{finite,wellorder} ⇒ real^'n"
    and g :: "real^'m::_ ⇒ real^'m::_"
  assumes S: "S ∈ sets lebesgue"
    and der_g: "⋀x. x ∈ S ⟹ (g has_derivative g' x) (at x within S)"
    and inj: "inj_on g S"
  shows "(λx. ¦det (matrix (g' x))¦ *⇩R f(g x)) absolutely_integrable_on S ∧
           integral S (λx. ¦det (matrix (g' x))¦ *⇩R f(g x)) = b
     ⟷ f absolutely_integrable_on (g ` S) ∧ integral (g ` S) f = b"

It demands measurability of the integration domain and injectivity and
differentiability of the variable change. You don't even need g to be C1
or g^-1 to be differentiable.
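
In conventional notation, and glossing over the integrability bookkeeping,
the statement above is roughly the familiar substitution rule

  \int_{g(S)} f(y)\,dy \;=\; \int_{S} \lvert\det Dg(x)\rvert \, f(g(x))\,dx

with either side absolutely integrable if and only if the other one is.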

For the Lebesgue integral, the situation is a lot worse. There are
various different forms of change of variables, and all of them (except
the linear one) are univariate and have various additional side
conditions like monotonicity. I've done several proofs where the only
recourse seemed to be to convert Lebesgue to Gauge, apply COV there, and
convert back.

I'm not entirely sure whether that is inherently so or simply because no
one has sat down and proved a sufficiently general version of change of
variables for the Lebesgue integral. But I think when I last discussed
this with someone, they (perhaps Fabian Immler?) said that you
probably need at least some additional preconditions for Lebesgue…

By the way, to make matters worse, there are also things like the
integral of sin(x)/x from -∞ to ∞, which are Gauge-integrable but not
Lebesgue-integrable and which /do/ come up in mathematics.
Mathematicians will then simply call these "improper integrals" and
implicitly take the limit of the integral from -a to a for a → ∞. And
then there's the Cauchy Principal Value, which (as Wenda pointed out to
me) is not subsumed by the Gauge integral but apparently an entirely
different generalisation of Lebesgue integrals…
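
To make the sin(x)/x example concrete: the improper reading is

  \lim_{a\to\infty} \int_{-a}^{a} \frac{\sin x}{x}\,dx \;=\; \pi

and the Gauge (Henstock–Kurzweil) integral assigns exactly this value on
the whole real line, whereas \int_{-\infty}^{\infty} \lvert\sin x / x\rvert\,dx = \infty,
so the function is not Lebesgue-integrable.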

Manuel

view this post on Zulip Email Gateway (Jul 28 2020 at 15:30):

From: Manuel Eberl <eberlm@in.tum.de>
John Harrison kindly pointed out something that I should have spotted
myself: the version of change of variables that we have in Isabelle and
HOL Light is technically about the Gauge integral, but it demands
absolute integrability. Absolute Gauge-integrability is equivalent to
Lebesgue-integrability, so at the end of the day, it really is a theorem
about change of variables for Lebesgue integration. He also said that to
the best of his knowledge, the proof does not use any Gauge-specific things.
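
In symbols, for f defined on a measurable set S, the standard equivalence is

  f,\ \lvert f\rvert \text{ Gauge-integrable on } S
    \;\iff\; f \in L^1(S), \qquad
  (\mathrm{HK})\!\int_S f \;=\; (\mathrm{L})\!\int_S f,

so a theorem that assumes absolute Gauge-integrability is, in substance,
about the Lebesgue integral.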

His version is considerably sharper than the one found in most textbooks.
The only one that, to his knowledge, is essentially equivalent is the one
from Fremlin's "Measure Theory", volume 2, theorem 263D, p. 312.

By the way, another issue with COV that we should resolve eventually is
that our current version works on "real ^ 'n" like in HOL Light, whereas
the more idiomatic thing to do would be "'a :: euclidean_space". This
can either be done by generalising the existing proof or trying to lift
the result using types-to-sets.

Cheers,

Manuel

