Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] Simpler theorem statements, and proofs for the...


view this post on Zulip Email Gateway (Aug 19 2022 at 09:05):

From: Christoph LANGE <c.lange@cs.bham.ac.uk>
2012-10-31 20:09 Lawrence Paulson:

On 31 Oct 2012, at 18:28, Christoph LANGE <c.lange@cs.bham.ac.uk> wrote:

It is almost never necessary or helpful to state a theorem in that format.

Thanks for your advice! However, simply changing my statements to …

I suggest

lemma "p x ==> q x"

for a straightforward proof, or

lemma assumes "p x" shows "q x"

for a more complicated structured proof.

… such a structure doesn't always work; I think the proofs will also
need some adaptation.

The following lemma (reduced to its structural outline) exhibits an
(anti-)pattern that is typical of my formalisation:

lemma skip_index_keeps_non_negativity :
  fixes n::nat and v::real_vector
  assumes non_empty: "n > 0"
    and non_negative: "non_negative_real_vector n v"
  shows "\<forall>i::nat . in_range n i \<longrightarrow>
    non_negative_real_vector (n-(1::nat)) (skip_index v i)"
proof
  fix i::nat
  show "in_range n i \<longrightarrow>
    non_negative_real_vector (n-(1::nat)) (skip_index v i)"
  proof
    assume "in_range n i"
    ...
    show "non_negative_real_vector (n-(1::nat)) (skip_index v i)" sorry
  qed
qed

How would I have to adapt the proof when rephrasing the statement as
shows "in_range n i \<Longrightarrow> ..." ?

(I'll be happy to accept "RTFM" as an answer, if you could give me a
pointer.)

Cheers, and thanks,

Christoph

view this post on Zulip Email Gateway (Aug 19 2022 at 09:08):

From: Lawrence Paulson <lp15@cam.ac.uk>
You should look at the documentation on the induct/induction proof methods. They achieve the effect of performing induction on a formula such as

"\<forall>i::nat . P i \<longrightarrow> Q i"

while completely hiding the tedious manipulations of these logical connectives that would normally be required.
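
For instance (a standard textbook lemma about lists, not from this thread), the method gives you named cases and ?case, with no manual quantifier or implication reasoning:

lemma "xs @ (ys @ zs) = (xs @ ys) @ zs"
proof (induct xs)
  case Nil
  show ?case by simp
next
  case (Cons x xs)
  then show ?case by simp
qed

Any premises of the goal are handled in the same way, appearing as the "prems" of each case.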

Larry Paulson

view this post on Zulip Email Gateway (Aug 19 2022 at 09:09):

From: Christoph LANGE <c.lange@cs.bham.ac.uk>
Dear Larry, dear all,

I'm now back at our auction formalisation and catching up with emails.

I have taken into account the advice I got from you and the others;
basically I managed to change my statements from

shows "!i . p x --> q x"

to

fixes ... and i
assumes "p x"
shows "q x"

This and similar changes helped to shorten the overall formalisation
from 1145 to 1045 lines (see
http://cs.bham.ac.uk/research/projects/formare/code/auction-theory/Vickrey.thy).
I'm sure I can do even better.

I have not yet made any further simplifications, such as doing a lot of
things at once in a style like

... by (induct i arbitrary: xs) (case_tac xs, simp_all)+

that Chris used in his example of a lemma about lists. At the moment I
don't even know how to get started using such proof methods.

2012-11-01 12:03 Lawrence Paulson:

You should look at the documentation on the induct/induction proof methods.

Where can I find that documentation? The Tutorial mainly uses
apply(induct_tac), which is probably something else. Is there a single,
up-to-date reference manual that documents all proof methods?

BTW, changing induction proofs to "assumes ... shows ..." did not always
make them shorter. Is there a way of not having to explicitly restate
the assumption for (Suc n) in the induction step?

E.g. I have one lemma of the following structure, which proves some
property of a function "fun maximum":

lemma maximum_sufficient :
  fixes n::nat and ...
  assumes assm1: "p n"
    and assm2: "q n"
    and assm3: "r n"
  shows "s n"
  using assms (* <-- now this became necessary,
                 otherwise even "case 0" would fail, but why? *)
proof (induct n)
  case 0
  then show ?case by simp
next
  case (Suc n)
  (* and now I have to explicitly restate all assumptions: *)
  assume assm1: "p (Suc n)"
  assume assm2: "q (Suc n)"
  assume assm3: "r (Suc n)"
  ...
  show "s (Suc n)" ...
qed

Cheers, and thanks for any help,

Christoph

view this post on Zulip Email Gateway (Aug 19 2022 at 09:09):

From: Tobias Nipkow <nipkow@in.tum.de>
On 22/11/2012 15:13, Christoph LANGE wrote:

Dear Larry, dear all,

I'm now back at our auction formalisation and catching up with emails.

I have taken into account the advice I got from you and the others; basically I
managed to change my statements from

shows "!i . p x --> q x"

to

fixes ... and i
assumes "p x"
shows "q x"

Unless you want to give the type of i, you don't need to fix it.

This and similar changes helped to shorten the overall formalisation from 1145
to 1045 lines (see
http://cs.bham.ac.uk/research/projects/formare/code/auction-theory/Vickrey.thy).
I'm sure I can do even better.

I have not yet made any further simplifications, such as doing a lot of things
at once in a style like

... by (induct i arbitrary: xs) (case_tac xs, simp_all)+

that Chris used in his example of a lemma about lists. At the moment I don't
even know how to get started using such proof methods.

2012-11-01 12:03 Lawrence Paulson:

You should look at the documentation on the induct/induction proof methods.

Where can I find that documentation? The Tutorial mainly uses
apply(induct_tac), which is probably something else.

This is what I suggested to you 3 weeks back:

"If you look at the documentation page
http://isabelle.in.tum.de/documentation.html you will find that the first
document is the relatively new Programming and Proving in Isabelle/HOL, which
does cover structured proofs (which is why I wrote it). The Tutorial is now only
second choice because of the focus on "apply"."

That is also where you find the induction method.

Is there a single, up-to-date reference manual that documents all proof
methods?

Yes, the reference manual (more or less).

BTW, changing induction proofs to "assumes ... shows ..." did not always make
them shorter. Is there a way of not having to explicitly restate the assumption
for (Suc n) in the induction step?

Indeed, for inductions, the most convenient way is stating them as implications
(with ==>). The manuals above will also tell you about the naming conventions of
assumptions in each case.
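
Schematically (p and s are placeholder predicates, and the sorry's stand for the actual reasoning), such an implication-style induction looks like this:

notepad
begin
  fix p s :: "nat => bool" and n :: nat
  have "p n ==> s n"
  proof (induct n)
    case 0  (* here 0.prems is "p 0" *)
    then show ?case sorry
  next
    case (Suc n)  (* Suc.hyps is "p n ==> s n", Suc.prems is "p (Suc n)" *)
    then show ?case sorry
  qed
end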

Tobias

view this post on Zulip Email Gateway (Aug 19 2022 at 09:09):

From: Lars Noschinski <noschinl@in.tum.de>
On 22.11.2012 15:13, Christoph LANGE wrote:

assumes assm1: "p n"
and assm2: "q n"
and assm3: "r n"
shows "s n"
using assms (* <-- now this became necessary,
otherwise even "case 0" would fail, but why? *)
proof (induct n)
[...]
case (Suc n)
(* and now I have to explicitly restate all assumptions: *)
assume assm1: "p (Suc n)"
assume assm2: "q (Suc n)"
assume assm3: "r (Suc n)"
...

You don't need to restate your assumptions here: They are all stored in
"Suc" (or Suc.prems, which omits the hypotheses added by induction).

-- Lars

view this post on Zulip Email Gateway (Aug 19 2022 at 09:09):

From: Makarius <makarius@sketis.net>
A few more hints on the version that happens to be behind this link
right now:

* 'definition' with Pure equality (==) is quite old-fashioned. Normally
  you just use HOL = or its abbreviation <-> for bool here, as you would
  for 'primrec', 'fun', 'function'.

  (In contrast HOL --> and ! are old-fashioned/cumbersome, where Pure
  ==> and !! would do the job better.)

* I recommend putting the whole 'head' of a definition on one line;
  this works best with jEdit folding:

    definition a :: A
      where "a = t"

    fun b :: B
      where "b x = t"

  Where the 'where' goes depends on the length of A and t; it is not
  informative for the head, so I prefer to have it second by default.

* 'done' should be indented like the 'apply' script. Don't ask why:
  Isar indentation is an arcane discipline, and still awaits tool
  support in Isabelle/jEdit.

BTW, the shortest structured proof that is not a script looks like
this:

lemma A unfolding a_def b_def c_def auto

instead of your apply unfold, apply auto, done

* About your funny comments (* by contradiction *): according to the
  Isar philosophy, you always strive to make things clear by formal
  means, and avoid comments. This can be done by putting a suitable
  "proof (rule ...)" standard step here.

According to my experience, informal or semi-formal people often have
problems to understand what not-introduction, not-elimination, and
classical reasoning means. The following examples explore this in the
formal playground:

notepad
begin

  have "¬ A"
  proof (rule notI)
    assume A
    then show False sorry
  qed

next

  have "¬ A" sorry
  have A sorry
  from `¬ A` and `A` have C by (rule notE)

  from `¬ A` and `A` have C by contradiction
  from `A` and `¬ A` have C by contradiction

next

  have C
  proof (rule ccontr)
    assume "¬ C"
    then show False sorry
  qed

next

  have C
  proof (rule classical)
    assume "¬ ?thesis"
    then show ?thesis sorry
  qed

next

  have C
  proof (rule classical)
    assume "¬ ?thesis"
    then have False sorry
    then show ?thesis by (rule FalseE)
  qed

end

Here the rule steps with notI, notE, FalseE have been made explicit for
illustration: these default rules are normally omitted, i.e. you would
just write 'proof' without anything, or '..' instead of 'by (rule ...)'.
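
For instance, the first notI proof above with the default step left implicit (the same proof, just without naming the rule):

notepad
begin
  have "¬ A"
  proof  (* the default step applies notI behind the scenes *)
    assume A
    then show False sorry
  qed
end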

The Isar method "contradiction" allows to present to two preconditions in
either order -- this often happens in practice. For notE the negated
formula has to come first, to eliminate it properly.

There is nothing special behind any of these intro/elim steps so far: just
plain intuitionistic natural deduction. Nonetheless, people sometimes
think that notI is "proof by contradiction", because they have to show
False in the end.

This might also stem from the ccontr rule above, which is often seen in
textbooks (without this Isabelle-specific name). I usually prefer the
one called "classical", because it lacks the built-in False step and
illustrates the brutality of classical reasoning in a pure way: you may
assume that the thesis does not hold, and then you have to show that it
holds. The latter proof often works "by contradiction" in the above
formal sense, which explains once more why you might get confused
informally.

Formally, everything should be clear at this pure Isar level, because
there is no magic built-in apart from higher-order unification and
proof-by-assumption to solve trivial "end-games" in natural deduction.

Makarius

view this post on Zulip Email Gateway (Aug 19 2022 at 09:09):

From: Christoph LANGE <c.lange@cs.bham.ac.uk>
2012-11-22 16:04 Lars Noschinski:
Thanks, that's very helpful!

For readability I'd sometimes like to make it explicit to which
assumption I'm referring when there is more than one.

When explicitly restating them with 'assume "... (Suc n) ..."' that's no
problem. With "Suc" I found, e.g., Suc.prems(1) to work. A symbolic
name, e.g. Suc.prems(foo), if the original statement says 'assumes foo:
"..."', would be even nicer – but that's not possible, I suppose.

Cheers,

Christoph

view this post on Zulip Email Gateway (Aug 19 2022 at 09:09):

From: Lars Noschinski <noschinl@in.tum.de>
On 22.11.2012 17:56, Makarius wrote:

(In contrast HOL --> and ! are old-fashioned/cumbersome, where Pure
==> and !! would do the job better.)

Is there actually a drawback when using == instead of = or is it a mere
matter of style? I like using Pure equality because it saves me a level
of parentheses when I have a binder on the right hand side.

BTW, the shortest structured proof that is not a script looks like
this:

lemma A unfolding a_def b_def c_def auto

To avoid confusion: There is a "by" missing:

lemma A unfolding a_def b_def c_def by auto

The Isar method "contradiction" allows to present to two preconditions
in either order -- this often happens in practice. For notE the negated
formula has to come first, to eliminate it properly.

Never saw this method before. I'll have a look.

-- Lars

view this post on Zulip Email Gateway (Aug 19 2022 at 09:09):

From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
It is simpler:

a) Uniformity

definition foo :: …
where "foo … = …"

vs.

primrec foo :: …
where "foo … = …"
| …
| "foo … = …"

vs.

fun(ction) foo :: …
where "foo … = …"
| …
| "foo … = …"

b) Fewer symbols

Why care about »foo« anyway?

Florian

view this post on Zulip Email Gateway (Aug 19 2022 at 09:10):

From: Lars Noschinski <noschinl@in.tum.de>
On 22.11.2012 18:41, Florian Haftmann wrote:

(In contrast HOL --> and ! are old-fashioned/cumbersome, where Pure
==> and !! would do the job better.)

Is there actually a drawback when using == instead of = or is it a mere
matter of style? I like using Pure equality because it saves me a level
of parentheses when I have a binder on the right hand side.

It is simpler:

a) Uniformity
b) Fewer symbols

Ok, these are stylistic reasons and both only hold when always using =,
and not <-> (<-> can be used for all of definition, primrec, fun(ction),
but only for boolean valued functions. And I don't like it for
definitions anyway ;)).

Why care about »foo« anyway?

This I don't understand. The situation I'm talking about is

definition foo :: …
where "foo … = (∀x. …)"

vs.

definition foo :: …
where "foo … ≡ ∀x. …"

(there are also other cases)

-- Lars

view this post on Zulip Email Gateway (Aug 19 2022 at 09:10):

From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>

Is there actually a drawback when using == instead of = or is it a mere
matter of style? I like using Pure equality because it saves me a level
of parentheses when I have a binder on the right hand side.

It is simpler:

a) Uniformity
b) Fewer symbols

Ok, these are stylistic reasons and both only hold when always using =,
and not <-> (<-> can be used for all of definition, primrec, fun(ction),
but only for boolean valued functions. And I don't like it for
definitions anyway ;)).

The matter <-> vs. = is orthogonal to == vs. =

Why care about »foo« anyway?

This should have read »Why care about »==« anyway?«. The statement is
that end-users need not even know about »==« in most situations (the
only exceptions known to me are rules like split_paired_All (all?) where
you rewrite framework-level connectives). So why burden them with it?

Florian

view this post on Zulip Email Gateway (Aug 19 2022 at 09:11):

From: Makarius <makarius@sketis.net>
Using == for definitions is one of those things that can be explained
only historically. Many, many years ago, you would have used consts/defs
or even consts/axioms or even consts/rules to spell out definitional
axioms directly using the == primitive of Pure. Larry explained it > 20
years ago as "Pure == is used to represent definitions".

In modern times, a "definitional package" is some mechanism that accepts
a specification (via terms or propositions) of what should be provided
by the system by means of some derived definitional concept. So in
'primrec' and 'fun' you state equations, and the system gives them to
you as theorems (and more things); in 'inductive' you write rules about
some relation, and you get them back as introduction theorems (and more
things for elimination and induction).

Long ago, the old 'defs' primitive was superseded by such a definitional
package called 'definition', which does the same for basic definitions
without pattern matching or recursion. So 'definition' is more like
'function' or 'theorem' than the primitive def that happens at the
bottom. That foundation of the definition does use Pure == internally,
but the user never has to see it in practice. In Isabelle2012 the
primitive defs behind 'definition' are particularly hard to retrieve,
and nobody has noticed so far :-)
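
For instance (a minimal sketch with a hypothetical constant foo):

definition foo :: nat
  where "foo = 0"

thm foo_def  (* yields "foo = 0" as a plain HOL equation;
                the underlying Pure == axiom stays out of sight *)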

Makarius

view this post on Zulip Email Gateway (Aug 19 2022 at 09:11):

From: Makarius <makarius@sketis.net>
Restating case/assume is not a big deal; it is done occasionally and is
not bad style, only a bit redundant and less formally checked than it
could be otherwise.

To make it tighter, you can do it via literal fact references or the
corresponding "fact" method like this:

notepad
begin

  fix n :: nat

  have "P n"
  proof (induct n)
    case 0
    then show ?case sorry
  next
    case (Suc n)
    then show ?case sorry
  qed

  have "P n"
  proof (induct n)
    case 0
    show "P 0" sorry
  next
    case (Suc n)
    from `P n` show "P (Suc n)" sorry
  qed

  have "P n"
  proof (induct n)
    case 0
    show "P 0" sorry
  next
    case (Suc n)
    have "P n" by fact
    then show "P (Suc n)" sorry
  qed

  have "P n"
  proof (induct n)
    case 0
    show "P 0" sorry
  next
    case (Suc n)
    assume "P n"
    then show "P (Suc n)" sorry
  qed

  have "P n"
  proof (induct n)
    show "P 0" sorry
  next
    fix n
    assume "P n"
    then show "P (Suc n)" sorry
  qed

end

BTW, according to the way the logical framework works, you can have
redundant 'assume' statements in your proof body, but not a redundant
'fix'. The "fix n" is already part of the case (Suc n). Another "fix n"
would postulate a different hypothetical entity in the context, one that
is not the same, with its assumptions unrelated to what you already have
after invoking the case.

The funny thing is that textbook logicians often omit the formal "fix n"
altogether. I've seen this even in some implementations of logic, where
the global absence of a declaration over the whole text is taken as an
implicit fix of a local variable.

Makarius

view this post on Zulip Email Gateway (Aug 19 2022 at 09:12):

From: Makarius <makarius@sketis.net>
Recall that 'definition', 'primrec', 'fun', and 'function' are all in
the same category of "derived definitional concept", although with
different strength and power invested in the mechanisms behind them.

Makarius

view this post on Zulip Email Gateway (Aug 19 2022 at 09:17):

From: Christian Sternagel <c-sterna@jaist.ac.jp>
Dear Christoph,

Consider the proofs:

lemma 1:
  "ALL i. i < length xs --> xs ! i = hd (drop i xs)"
proof
  fix i
  show "i < length xs --> xs ! i = hd (drop i xs)"
  proof
    assume "i < length xs"
    then show "xs ! i = hd (drop i xs)"
      by (induct i arbitrary: xs) (case_tac xs, simp_all)+
  qed
qed
thm 1
thm 1 [rule_format]

lemma 2:
  "!!i. i < length xs ==> xs ! i = hd (drop i xs)"
proof -
  fix i
  assume "i < length xs"
  then show "xs ! i = hd (drop i xs)"
    by (induct i arbitrary: xs) (case_tac xs, simp_all)+
qed
thm 2

lemma 3:
  assumes "i < length xs"
  shows "xs ! i = hd (drop i xs)"
  using assms
  by (induct i arbitrary: xs) (case_tac xs, simp_all)+
thm 3

I hope the example proofs still cover the structure that was important
to you (I changed them only to have some lemma I could actually prove).
Note the following points:

1) In lemma 3 we do not explicitly all-quantify i; thus it is handled as
"arbitrary but fixed" throughout the proof (and implicitly
all-quantified after the proof is finished; in fact "thm 2" and "thm 3"
give exactly the same result). This can always be done when we do not
need to vary/instantiate i inside the proof itself.

2) The attribute [rule_format] turns (some) HOL connectives into Pure
connectives (which are used to present natural deduction style rules in
Isabelle, hence the name).

3) For Isar proofs (i.e., structured proofs, as opposed to
proof scripts) the style of lemma 3 is the most idiomatic; if such a
style is not possible (since we need to generalize a variable like i),
then the style of lemma 2 is the next most idiomatic. As Larry already
pointed out, the style of lemma 1 is the least idiomatic nowadays.

hope that helps,

chris

