Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] Issues of Isabelle


view this post on Zulip Email Gateway (Aug 22 2022 at 17:58):

From: Askar Safin via Cl-isabelle-users <cl-isabelle-users@lists.cam.ac.uk>
Hello. I want to share my personal experience with Isabelle. I will list
its issues and give some ideas which could be implemented in Isabelle and
other proof assistants. This letter is an attempt to help you; I don't want
to blame you.

The following issues are very annoying; they simply distract me from using
Isabelle, but unfortunately I don't see a better alternative at the moment.
All of them should be considered when developing new proof assistants.
Unfortunately it seems they cannot be fixed in Isabelle, because that would
require rewriting it completely. It would be very nice if some of these
problems were addressed in Larry Paulson's (CC'd) project Alexandria.

In 2015 I formalized in Isabelle part of our Moscow State University
lectures on mathematical logic (using the latest Isabelle at the time).
(Unfortunately the resulting proof script no longer fully works in Isabelle
2017.) These are the lectures: http://lpcs.math.msu.su/vml2010/lectures1-8.pdf
and this is my formalization in Isabelle: http://paste.debian.net/hidden/eb960c2e
(I will call it vml2010_fol.thy). I formalized section 3, which proves some
theorems about FOL. Even if you don't know Russian, I think you will easily
understand what is going on in that Isabelle source. I will use this source
as an example of Isabelle's problems.

Issues:

  1. Lack of true partial functions.

There are a lot of definitions in that section 3 which make sense only if
certain assumptions hold. For example, I have a datatype for FOL terms and a
function "Tm" which returns "True" if a term is indeed valid, and lots of
subsequent definitions assume that "Tm" is true. Unfortunately I don't have
a way to specify this assumption in the code, so I have to simply put it
into a comment. Here is an example:

(* assumes "Tm \<Sigma> t" *)
primrec
  Tm_vars :: "[Var set, raw_Tm] \<Rightarrow> bool" and
  Tm_list_vars :: "[Var set, raw_Tm list] \<Rightarrow> bool"
where
  "Tm_vars xs (Var x) \<longleftrightarrow> x \<in> xs"
| "Tm_vars xs (Fapp f args) \<longleftrightarrow> Tm_list_vars xs args"
| "Tm_list_vars xs [] \<longleftrightarrow> True"
| "Tm_list_vars xs (t # ts) \<longleftrightarrow> Tm_vars xs t \<and> Tm_list_vars xs ts"

This was definition of "variables of this term" which makes sense only if
the term is actually valid (technically the definition will work even if
term is "wrong", but this is not important).

There are even more complex cases. For example:

(*
  assumes "model \<Sigma> M"
  assumes "image_in g xs (domain M)"
  assumes "Tm \<Sigma> t"
  assumes "Tm_vars xs t"
*)
primrec
  Tm_value :: "[raw_model, raw_Tm] \<Rightarrow> ((Var \<Rightarrow> ZF) \<Rightarrow> ZF)" and
  Tm_list_value :: "[raw_model, raw_Tm list] \<Rightarrow> ((Var \<Rightarrow> ZF) \<Rightarrow> ZF list)"
where
  "Tm_value _ (Var x) g = g x"
| "Tm_value M (Fapp f args) g = Finterpret M f (Tm_list_value M args g)"
| "Tm_list_value _ [] _ = []"
| "Tm_list_value M (t # ts) g = Tm_value M t g # Tm_list_value M ts g"

This was definition of "value of term in this model". As you can see it
assumes that:

When I write a new definition I must be careful to write in comments all the
assumptions of the definitions I use. When I write a lemma I must state all
the assumptions used by the definitions. And it is possible that I will forget
one, and Isabelle will not tell me about it. You may say: "If Isabelle
accepted your lemma, then it is true as stated, even if you forgot some
assumption." Yes, but I want a lemma to convey the information I meant, and I
meant that proposition only under all the assumptions, i.e. only if the whole
formula makes sense.

You may say "just use option". Well, yes, it is possible that this will
make all assumptions explicit, and I will not be able to forget some
assumption. But the code will become even larger and it will be even harder
to read and write it.
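
For concreteness, a minimal sketch of this "option" approach to division might
look as follows (opt_div is a made-up name; this is not how HOL's built-in
division is defined):

definition opt_div :: "'a::field \<Rightarrow> 'a \<Rightarrow> 'a option" where
  "opt_div a b = (if b = 0 then None else Some (a / b))"

lemma "opt_div a 0 = None"
  by (simp add: opt_div_def)

lemma "b \<noteq> 0 \<Longrightarrow> opt_div a b = Some (a / b)"
  by (simp add: opt_div_def)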

And we still have 0 / 0 = 0. This proposition alone will put off any
"working mathematician". Even the computer algebra system Mathematica deals
with division better. (Note that Alexandria has announced computer algebra
integration.)
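
For reference, this really is a theorem of Isabelle/HOL; a one-line check
(stated for an arbitrary field) is:

lemma "(0::'a::field) / 0 = 0"
  by simp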

What I want is functions which take a proof object as an argument. That is,
division should take as arguments "a", "b" and a proof object for the
proposition "b ~= 0". You will say that the syntax will be impossible to read
and write. No: division will look the same, i.e. "a / b". The proof assistant
will automatically search the context for a proof object for "b ~= 0" and
pass it to the division function. That is, we will write "a = 0 ==> b ~= 0
==> a / b = 0" and the proof assistant will transform this into "a = 0 ==>
!!! p ::: (b ~= 0). (divide a b p = 0)", where "!!!" is the usual
type-theoretic Pi-binder. Also we will be able to write something like this:

have "b ~= 0" by ...
...
have "a / b = c" by ... (* here the proof assistant found proof object for
b ~= 0 in the context *)

Moreover, "a &&& b" in Isabelle is "!! C. (A ==> B ==> C) ==> C". So, proof
object for A is available in B. So we can write, say, "b ~= 0 &&& a / b =
1". But not "a / b = 1 &&& b ~= 0". You will say that we break commutation.
But this matches actual mathematics practice. Suppose you have read in some
mathematical article: "Let's assume that b is not 0 and a / b = 1". Now
imagine "Let's assume that a / b = 1 and b is not 0". You see? The first
variant is natural and the second is not. I think that proof-object-powered
partial functions really match actual mathematics. We actually assume them
when we talk about mathematics. Especially compared to 0 / 0 = 0.

True partial functions will be especially useful in ZF. Nearly all functions
will take proof objects; say, the sum of natural numbers will take a proof
that both arguments are actually natural numbers. Working in ZF may become as
easy as working in HOL. These proof objects will be something like an
alternative to HOL type-checking.

Of course, this is not a complete solution. I don't know how to deal with the
meta-implication/object-implication distinction, with non-classical logics,
etc.

  2. There is no proof method which works for all simple cases.

When I write proof scripts, it is nearly impossible to figure out which proof
method (auto, blast, etc.) I should use and why. Usually I first try "auto",
then "blast", then, say, "fastforce", then "metis", and so on. This is very
annoying. For example, see the example at
https://en.wikipedia.org/wiki/Isabelle_(proof_assistant) : you see auto,
fastforce, simp, rule and blast.

Often "auto" and "blast" doesn't work, but "rule" or "fact" or "." or ".."
surprisingly does.

For example:

lemma Fm_vars_minus:
  assumes "Fm_vars xs A"
  shows "Fm_vars (xs - {x}) (any x A)" (is ?a)
    and "Fm_vars (xs - {x}) (exists x A)" (is ?e)
proof -
  have "Fm_vars (xs \<union> {x}) A" using assms and Fm_vars_subset by blast
  thus ?a and ?e by auto
qed

The first step can be proved by "blast" but not by "auto"; the second by
"auto" but not by "blast", and not even by "blast+".

Yes, I know that auto uses simplification rules and blast does not, and thus
I cannot put "blast" instead of "auto" in that example. But why don't we just
have one method which combines the power of blast with auto's simplification
rules?

Also, consider this code:

notepad begin
have A and B sorry
hence A and B by auto
end

"auto" works, but blast and metis do not. Yes, I know that I must put "+"
after blast and metis. But why auto doesn't require such "+"? This is
inconsistency.

Consider this:

notepad begin
have "A &&& B" sorry
hence "A &&& B" by auto
end

Here I simply could not find a working method: "auto+", "blast+", "metis+",
"rule+", "fact+" all fail.

Consider this (from vml2010_fol.thy):

have "(THE x. x \<in> X \<and> f x = y) \<in> X \<and> f (THE x. x \<in>
X \<and> f x = y) = y" using x_def and uqAdvInq by (rule theI)

Why should I write (rule theI) here? Why doesn't plain "auto" work? Why
doesn't "auto" know anything about theI? Why doesn't even "using theI by
auto" work?
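
As an aside, here is a small self-contained instance of the same pattern,
using the library rule theI' (the \<exists>!-form of theI), which does close
the goal directly:

notepad begin
  fix P :: "nat \<Rightarrow> bool"
  assume "\<exists>!x. P x"
  then have "P (THE x. P x)" by (rule theI')
end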

Consider this:

have "partial_bij_betw f X Y \<Longrightarrow> f x \<in> Y
\<Longrightarrow> the_inv_into X f (f x) \<in> X" by (rule ff1yy)

Why I cannot write "using ff1yy by auto" here? "auto" is not powerful
enough to replace "y" with "f x"? Why "using ff1yy by blast" doesn't work?
"using ff1yy by metis" surprisingly works. But metis doesn't support
simplification, and so will not work in similar situation where
simplification will be also required.

Consider this:

also have "... \<longleftrightarrow> (\<forall> obj \<in> domain M'.
calc M' A (\<lambda> y. if y = x then obj else (\<phi> \<circ> g) y))"
using partial_bij_betw_any and bij .

Why do auto and blast not work, but "." does?! As far as I know "." is a
simple method, so why does it work when more powerful methods such as auto
and blast do not?
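
As a side note, "." is just "by this": it closes the goal when a chained fact
already is the goal, up to unification, as in this toy case:

notepad begin
  fix x y :: nat
  assume a: "x = y"
  from a have "x = y" .
end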

Isabelle is too similar to programming. The hypothetical "working
mathematician" will simply cry.


Recently I found out about Alexandria. I think you should take this project
as an opportunity to develop a better proof assistant and possibly rewrite
everything, instead of basing it all on top of Isabelle in its current form
(0 / 0 = 0). I don't mean that I want to participate in the project; I am
just offering some ideas. I hope you will use this opportunity to do
something good.

Feel free to publish this letter or any parts of it anywhere, say, in some
mathematical magazine.

==
Askar Safin
http://vk.com/safinaskar

view this post on Zulip Email Gateway (Aug 22 2022 at 17:58):

From: José Manuel Rodriguez Caballero <josephcmac@gmail.com>
The beautiful formula 0 / 0 = 0 makes sense to a mathematician, although
this was not the original motivation. I interpret it as the identification
between 0 and infinity. Indeed, let G be the multiplicative group of
non-zero real numbers. This is a Lie group with two connected components:
the positive real numbers and the negative ones. Let H be the union of G
with the zero. In H we take the topology from G and we define the set of
fundamental neighbourhoods of zero to be the set of real numbers either
larger than n or smaller than 1/n.

Consider the function f(x) = 1/x if x is non-zero and f(0) = k. This
function transforms H into itself. It is easy to prove that f(x) is
continuous in H if and only if k = 0. Notice that f(0) = 0 can be rewritten
as 0/0 = 0 using a slight abuse of notation.

Jose M.

view this post on Zulip Email Gateway (Aug 22 2022 at 17:58):

From: José Manuel Rodriguez Caballero <josephcmac@gmail.com>
typographical correction (I forgot to multiply by 0 at the right)

0*f(0) = 0 can be rewritten as 0/0 = 0 using a slight abuse of notation.

Jose M.


view this post on Zulip Email Gateway (Aug 22 2022 at 17:58):

From: José Manuel Rodriguez Caballero <josephcmac@gmail.com>
another typographical correction
in place of
the set of real numbers either larger than n or smaller than 1/n.

read

set of real numbers which in absolute value are either larger than n or
smaller than 1/n.


view this post on Zulip Email Gateway (Aug 22 2022 at 17:58):

From: Lawrence Paulson <lp15@cam.ac.uk>
Dear Askar, thanks for your thoughtful message. It's always valuable to have feedback from new users.

  1. The question of partial functions has been around for a long time, and the following 20 year old survey is still worth reading: Treating Partiality in a Logic of Total Functions <http://wwwbroy.in.tum.de/publ/papers/MS97.pdf> (Müller and Slind, 1996). I won't repeat the arguments here. I would merely mention that other approaches have been tried in other systems. PVS <http://pvs.csl.sri.com/> can support partial functions (by generating validity conditions), as did IMPS <https://link.springer.com/article/10.1007/BF00881906> (using a calculus of definedness). As I understand it, type theory-based systems such as Coq <https://coq.inria.fr/> also handle partial functions (using the implicit argument approach you outline, along with irrelevance of proofs). The penalty for using more complicated approaches is that proofs become longer and more difficult.

It's a good bet that anybody using Isabelle or HOL is willing to live with a calculus of total functions only. As that survey article describes, we can formalise what we want to express well enough, without being forced to repeatedly prove that subexpressions are defined. You mention your wish to make assumptions explicit (e.g., that an FOL term is well-defined), even when those assumptions aren't needed to prove the given property. But one could equally say that the formalism itself is telling us which assumptions are needed and which are not. Your theory only regards an FOL term as well-defined if each function application has the required number of arguments, and yet the set of variables in a raw term is perfectly meaningful regardless of that.

Division is another excellent example. It's easy to imagine that postulating x/0 = 0 introduces inconsistency, but instead it reveals which identities are sensitive to division by zero and which are not. We discover that x/x=1 still requires the assumption that x is nonzero, while (u+v)/x = u/x + v/x does not. Most of us are happy to have as few assumptions as possible, as this makes proofs easier.
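
(A concrete check, for reference: both facts are one-liners in Isabelle/HOL;
the lemma name add_divide_distrib may vary between releases.)

lemma "(x::'a::field) \<noteq> 0 \<Longrightarrow> x / x = 1"
  by simp

lemma "((u::'a::field) + v) / x = u / x + v / x"
  by (rule add_divide_distrib)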

  2. We'd certainly like to have a proof method that works for all simple cases, but it isn't easy. In theory, auto should do it, but as you note, it doesn't (and auto is unlikely to change very much). Work on better automation has been ongoing for decades now. Sledgehammer is one attempt at a magic bullet, but it's also defeated in some surprisingly easy cases. But a lot of new work is promising, e.g. Zhan's auto2 <https://arxiv.org/abs/1707.04757> and some recent work by Nagashima, e.g. PSL <https://arxiv.org/abs/1606.02941>.

Mathematicians who use Isabelle today are pioneers, which means that some things will be painful, but on the other hand, they will be the first to make new discoveries.

Larry Paulson

view this post on Zulip Email Gateway (Aug 22 2022 at 17:59):

From: "Dr A. Koutsoukou-Argyraki" <ak2110@cam.ac.uk>
Dear Askar,

just a small practical tip (in case you don't use this already): it saves
plenty of time to use the keyword "try0", which tries several proof methods
and tells you if one of them works, so you don't need to try them all one by
one yourself.

Also, the keyword "try" does the above and moreover runs sledgehammer and
looks for counterexamples with quickcheck and nitpick, again saving you from
trying the proof methods one by one.
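
For example (the method that try0 suggests will of course depend on the
goal):

lemma "rev (rev xs) = xs"
  try0  (* reports something like: Try this: by simp *)
  by simp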

Best wishes,
Angeliki

view this post on Zulip Email Gateway (Aug 22 2022 at 18:04):

From: Askar Safin via Cl-isabelle-users <cl-isabelle-users@lists.cam.ac.uk>
Lawrence, the current approach to partial functions lacks mathematical
accuracy.

You said that "x / x = 1" requires assumption "x ~= 0", but "(u+v)/x = u/x

Look at this:

(* Current behavior *)
lemma l1: "(u+v)/x = u/x + v/x" sorry
notepad begin
  assume "c ~= 0" (* usually we work under the assumption that the denominator is non-zero anyway *)
  ... (* some steps *)
  have "(a+b)/c = a/c + b/c" using l1 by auto
end

(* With implicit proof object arguments *)
lemma l1: assumes "x ~= 0" shows "(u+v)/x = u/x + v/x" sorry
notepad begin
  assume a1: "c ~= 0"
  ... (* some steps *)
  have "(a+b)/c = a/c + b/c" using l1 and a1 by auto
end

As you can see, the only thing my proposal adds is "and a1". So proofs
become just slightly bigger, but a lot more sound. Moreover, if we mandate
all assumptions, some proofs will become even shorter! For example, the
actual lemma inverse_inverse_eq, "inverse (inverse a) = a", uses a proof by
cases on "a = 0"; it would be simpler if we knew that "a ~= 0".

Finally, I am making another proposal: let all proof methods (auto, blast,
etc.) automatically extract from the environment facts about the
well-definedness of their premises and goals. Under this new proposal you
would not even need to write "and a1" as in the example above. Moreover, you
would not need to pass a similar assumption explicitly even if you used the
lemma "x/x = 1", so proofs would never become bigger than they are now. But
in all cases you would need to have all the assumptions available in the
context, of course.

Taking proof objects as arguments will allow us to make a true "THE"
construct and a true Hilbert choice operator. It will be possible to write
"THE x. P x" only if we know that "?! x. P x", with a similar condition for
Hilbert's choice. The current THE and SOME are absolutely strange to the
"working mathematician".

I often see the argument from some people that proof assistants should be as
close to "real mathematics" as possible, because:

  1. This is why we create proof assistants
  2. This is the only way to get mathematicians to work with proof assistants

Thus these people argue that proof assistants should be based on ZFC,
because "real math" uses ZFC, and that proof assistants should be classical,
because "real math" is classical. I partially agree with this position. And
in "real math" we have true partial functions (and 0/0=0 has nothing to do
with "real math"). So we need them [i.e. partial functions] here, in proof
assistants.

But my arguments are not purely theoretical. First of all, with true partial
functions you never have to handle strange cases like division by zero (as
in the current proof of inverse_inverse_eq). Second (and this is very
important!), you will catch errors early: you will not even be able to state
a lemma if you did not list all its assumptions. Thus you will catch the
error when you state your lemma, and not when you find yourself unable to
prove it after several hours.

My proposals can dramatically change proofs in ZF. Consider this:

(* Common part *)
lemma add_nat: "a : nat ==> b : nat ==> a + b : nat" sorry (* Here I use "+" for raw_add from standard library *)
lemma add_comm: "a : nat ==> b : nat ==> a + b = b + a" sorry
lemma add_assoc: "a : nat ==> b : nat ==> c : nat ==> (a + b) + c = a + (b + c)" sorry

(* Current behavior *)
notepad begin
assume a: "a : nat" and b: "b : nat" and c: "c : nat"
hence "b + a : nat" using add_nat by auto
have "(a + b) + c = (b + a) + c" using a and b and add_comm by auto
also have "... = c + (b + a)" using ba and c and add_comm by auto
also have "... = (c + b) + a" using a and b and c and add_assoc by auto
finally have "a + b + c = c + b + a" .
end

Now I am making a third proposal: not only should the prover extract
well-definedness facts for terms from the context, it should also try to
prove them.

(* If all 3 my proposals apply *)
notepad begin
assume "a : nat" and "b : nat" and "c : nat"
have "(a + b) + c = (b + a) + c" using add_comm by auto
also have "... = c + (b + a)" using add_comm by auto
also have "... = (c + b) + a" using add_assoc by auto
finally have "a + b + c = c + b + a" .
end

As you can see, the proof became considerably smaller and simpler. And now we
don't need tricks like "natify" from the standard library. That "natify"
approach in the standard library is nonsense (why add two non-numbers?) and
does not detect errors early.

So working with sets like "nat" will become as easy as working with types,
and thus ZFC will become as easy to work with as HOL!!!

Treating Partiality in a Logic of Total Functions (Müller and Slind, 1996)
None of the methods proposed in this article are satisfactory compared to
true partial functions, because they all allow one to write the term
"0 / 0", and it will type-check.

It's a good bet that anybody using Isabelle or HOL is willing to live with a calculus of total functions only
So you mean that everybody who wants partiality has already moved to other
systems? Unfortunately, I cannot move to systems supporting partiality,
because they lack some Isabelle features. I tried Coq: it allows
constructing true partial functions, but its standard library has total
division instead of a partial one, and Coq has a very strange type system
compared to Isabelle's natural HOL. Coq's proofs are unreadable. I like
Isabelle's foundations, logical framework, nice declarative proofs and IDE.
I want something like Isabelle, but with partiality, and as far as I know
there is currently no such system. It would be very good if you used the
Alexandria project to create such a system.

Freek Wiedijk [CC'd], in his slides "The next generation of proof
assistants: ten questions" (http://www.cs.ru.nl/~freek/talks/lsfa.pdf),
gives the following quote from Tobias Nipkow (CC'd): "Occasionally I do
[hate the totality of the HOL logic]. But mostly not. The next generation of
proof assistants will take it into account."

So, when this "next generation of proof assistants" will emerge?! How long
should I wait for them?! Why not to create them right now?!!! Why not use
Alexandria's money to create the future now?!!!

==
Askar Safin
http://vk.com/safinaskar

view this post on Zulip Email Gateway (Aug 22 2022 at 18:04):

From: Lars Hupel <hupel@in.tum.de>

You said that "x / x = 1" requires assumption "x ~= 0", but "(u+v)/x = u/x
+ v/x" does not. And you present this as feature. This is not a feature,
this is absolutely strange behavior. What will "working mathematician" say
about this? Let both theorems require "x ~= 0". This makes slightly harder
to use this theorems, but this will make whole system more sound.

As Larry has already said, this has literally nothing to do with
soundness. See also <https://www.hillelwayne.com/post/divide-by-zero/>
for a survey on this.

So, when this "next generation of proof assistants" will emerge?! How long
should I wait for them?! Why not to create them right now?!!!

Nobody prevents you from working on a next-generation proof assistant.
In fact, I think many people would be very excited about a proof of concept.

Cheers
Lars

view this post on Zulip Email Gateway (Aug 22 2022 at 18:04):

From: Askar Safin via Cl-isabelle-users <cl-isabelle-users@lists.cam.ac.uk>
I will repeat: the main drawback of Isabelle's totality is the need to
manually keep track of assumptions; see my first letter, where I had to
write all the assumptions in comments without any help from Isabelle. This
was the main reason I gave up on developing that proof script further.

And the second drawback is that what I write does not correspond to what I
mean. When I write "(x + y) / z = x / z + y / z" I mean it "only if z ~= 0",
but Isabelle does not require such an assumption. So I have to manually make
sure that this assumption is always present in the context, so that what I
write actually corresponds to what I mean.

So the current approach lacks some degree of automation.

And finally, I will say again that division is partial in actual
mathematics. Actual proofs assume partial division, and proof assistants
must do the same to be close to real math. I will repeat that the phrase
"Let's assume that b ~= 0 and a / b = 4" is fine in a mathematical text, but
"Let's assume that a / b = 4 and b ~= 0" is not, and the same should apply
to proof assistants. It should be possible to directly convert any
mathematical text into a proof script.

==
Askar Safin
http://vk.com/safinaskar

view this post on Zulip Email Gateway (Aug 22 2022 at 18:04):

From: Peter Lammich <lammich@in.tum.de>
On Do, 2018-08-23 at 18:35 +0300, Askar Safin via Cl-isabelle-users
wrote:

As Larry has already said, this has literally nothing to do with
soundness. See also <https://www.hillelwayne.com/post/divide-by-zero/>
for a survey on this.

I will repeat: the main drawback of Isabelle's totality is the need to
manually keep track of assumptions; see my first letter, where I had to
write all the assumptions in comments without any help from Isabelle. This
was the main reason I gave up on developing that proof script further.

And the second drawback is that what I write does not correspond to what I
mean. When I write "(x + y) / z = x / z + y / z" I mean it "only if z ~= 0",
but Isabelle does not require such an assumption. So I have to manually make
sure that this assumption is always present in the context, so that what I
write actually corresponds to what I mean.

Another option would be to encode partiality explicitly on top of HOL, e.g.
by lifting everything into an option monad. Then you could use Isabelle's
standard reasoning tools to reason about definedness. I doubt that
definedness is trivial enough to handle fully automatically; e.g., what
about "1/f(x)" with a very complicated f that has a very complicated proof
that !x. f(x) ~= 0?

However, current Isabelle lacks automation and syntax support for such an
embedding; still, this might be a more feasible short-term solution than
writing a new proof assistant from scratch.
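
A rough sketch of the kind of embedding meant here (div_opt is a made-up
name, and the missing syntax/automation support is exactly the point):

fun div_opt :: "'a::field option \<Rightarrow> 'a option \<Rightarrow> 'a option" where
  "div_opt (Some x) (Some y) = (if y = 0 then None else Some (x / y))"
| "div_opt _ _ = None"

lemma "div_opt (Some a) (Some 0) = None"
  by simp

lemma "b \<noteq> 0 \<Longrightarrow> div_opt (Some a) (Some b) = Some (a / b)"
  by simp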

So the current approach lacks some degree of automation.

And finally, I will say again that division is partial in actual
mathematics. Actual proofs assume partial division, and proof assistants
must do the same to be close to real math. I will repeat that the phrase
"Let's assume that b ~= 0 and a / b = 4" is fine in a mathematical text, but
"Let's assume that a / b = 4 and b ~= 0" is not, and the same should apply
to proof assistants.

This imposes an execution order! But don't you also want to have "a & b =
b & a", and substitution? Then both terms must have the same meaning.

Of course, in the option monad, you could explicitly define a 
short-circuit version of "&", which, however, would not be commutative.
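
Such a short-circuit conjunction might look like this (sc_conj is a made-up
name); the two checks show the asymmetry:

fun sc_conj :: "bool option \<Rightarrow> bool option \<Rightarrow> bool option" where
  "sc_conj (Some True) b = b"
| "sc_conj (Some False) _ = Some False"
| "sc_conj None _ = None"

lemma "sc_conj (Some False) None = Some False" and "sc_conj None (Some False) = None"
  by simp_all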

view this post on Zulip Email Gateway (Aug 22 2022 at 18:05):

From: Makarius <makarius@sketis.net>
On 23/08/18 16:44, Askar Safin via Cl-isabelle-users wrote:

lemma l1: assumes "x ~= 0" shows "(u+v)/x = u/x + v/x" sorry
notepad begin
assume a1: "c ~= 0"
... (* Some steps *)
have "(a+b)/c = a/c + b/c" using l1 and a1 by auto

Just a side remark on proper naming of facts in Isabelle/Isar: if you need
meaningless names, you can just use "*", "**", or decimal numerals
1, 2, 3, ...

There is no need to have a prefix like "l" for "lemma" or "a" for
"assumption" as above. In fact, it is bad style to decorate fact names
according to their provenance, especially in Isar proofs, where facts often
change their origin between 'assume', 'obtain', 'case', 'have', 'show',
'note', etc.

have "(a+b)/c = a/c + b/c" using l1 and a1 by auto

More stylistic notes: Isar goals can have facts both before and after them.
Before the goal you put the important things from the local proof context;
after it, the relevant global things from the library. According to this
rule of thumb, the above becomes:

from a1 have "(a+b)/c = a/c + b/c" using l1 by auto

This discipline works more smoothly, if you always expand the old
(obsolete) command abbreviations "hence" == "then have" and "thus" ==
"then show" -- on average they make proof texts longer, and harder to write.

Makarius

view this post on Zulip Email Gateway (Aug 22 2022 at 18:07):

From: Lawrence Paulson <lp15@cam.ac.uk>
Dear Askar,

The problem is that different mathematicians have quite different ideas of what a proof assistant should do. Some people want mathematics to be typed while other people firmly reject type systems. Some would like to see actual texts formalised, including syntactic elements such as parentheses. Some would like to be notified when the statement of a theorem includes assumptions not needed for the proof. You have the opposite wish: you'd like some definitions to be conditional on certain well-formedness assumptions, even when they aren't strictly necessary. It's not a common wish, but note that once you leave the world of definitions, you can include additional assumptions in theorems and they will be checked when you use the theorems.

An interesting observation about division is that when we put x/0 = 0, we do not change the meaning of the definition where it is already defined, but merely extend its domain. Contrast this with the way division is implemented in computer hardware and in most programming languages: in OCaml I can type

-4 mod 3;;

val it : int = -1

and this violates an elementary property of division: that the remainder lies between 0 and the divisor. In other words, division on a computer (any computer) regularly delivers the wrong answer. Somehow people cope with this. Similarly people are able to cope with the formalisms of today's proof assistants, none of which are a perfect match to mathematical reasoning.
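
For contrast, a quick check that HOL's integer operations follow the
flooring convention, so the remainder always lies between 0 and a positive
divisor:

lemma "(-4) mod (3::int) = 2"
  by simp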

It may be that in the future, somebody will come up with a natural and practical formalisation of partial functions. Probably people are getting research funding for this sort of work. But I have funding for a different research programme.

Larry

view this post on Zulip Email Gateway (Aug 22 2022 at 18:08):

From: "Chun Tian (binghe)" <binghe.lisp@gmail.com>
Hi,

I want to point out a similar problem in Isabelle's Extended_Real theory, in which the sum operator, as a total function, yields ∞ + (-∞) = (∞::ereal), a case which textbooks say should be avoided (i.e. left undefined):

function plus_ereal where
"ereal r + ereal p = ereal (r + p)"
| "∞ + a = (∞::ereal)"
| "a + ∞ = (∞::ereal)"
| "ereal r + -∞ = - ∞"
| "-∞ + ereal p = -(∞::ereal)"
| "-∞ + -∞ = -(∞::ereal)"

HOL4's extrealTheory [1] currently has the same problem, but its original authors have fixed the definition, and the fix [2] will soon be submitted to the official HOL4 repository.

What I observed is that, with a total function here, many related theorems have simpler proofs and fewer assumptions, because otherwise lemmas like "a + b = b + a" are no longer true unless we add the extra assumption that a and b do not mix ∞ and (-∞). I also found that a theorem prover does not forbid its user from writing down a term that has no definition; it is just that such a term cannot be manipulated with any theorem.
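
For instance, with the total definition, commutativity holds with no side
conditions at all (assuming "HOL-Library.Extended_Real" is imported):

lemma "a + b = b + (a::ereal)"
  by (rule add.commute)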

Is the above definition of "plus_ereal" wrong? I think so, but I don't think everyone agrees with me.

—Chun

[1] https://github.com/HOL-Theorem-Prover/HOL/blob/master/src/probability/extrealScript.sml
[2] https://github.com/binghe/HOL/blob/HOL-Probability/src/probability/extrealScript.sml


Last updated: Nov 21 2024 at 12:39 UTC