Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] Eliminating two quantifiers in structural proof


view this post on Zulip Email Gateway (Aug 19 2022 at 08:36):

From: Edward Schwartz <edmcman@cmu.edu>
Hi,

I am new to Isabelle, and am stuck on a fairly trivial part of my
proof. I have an assumption "\forall x y. P x y". I'd simply like to
show (P a b) for two terms I have (a and b). Surprisingly, I am
having trouble doing this in a structural way.

With one quantified variable, there is no problem:

lemma single_proof: "(\<forall>x. P x) \<Longrightarrow> P a"
proof (erule allE)
  fix c
  assume A: "P c"
  from A show "P c" by assumption
qed

My understanding of what is happening is that allE removes the
quantifier, and so we really just prove /\ c. P c ==> P c. It seems
like a better goal to prove would be /\ c. P ?c ==> P c. Is there any
way to do this instead?

lemma double: "(\<forall>x y. P x y) \<Longrightarrow> P a b"
  apply (erule allE, erule allE)
  apply assumption
  done

Here is a way to do the proof with tactics. Unfortunately, in my
larger proof, the erule commands do not find the correct assumption,
and so I must specify P manually, which is very ugly.

lemma double_proof: "(\<forall>x y. P x y) \<Longrightarrow> P a b"
proof (erule allE)
  fix c
  assume "\<forall>y. P c y"
  have "P c b"
  proof (erule allE) (* error: proof command failed *)

Finally, I try to do a structural proof, eliminating one quantifier at
a time, and fail. I've tried many variations, but can't seem to
understand how to do this. If someone could show how to finish this
proof, and explain why it works, I would greatly appreciate it.

Overall, Isabelle/HOL seems to make it very difficult to instantiate
the variables bound by foralls. In Coq, this is very easy, since
\forall x. P x is actually a function that accepts a term v and
returns a proof of (P v). What am I missing?

Thanks,

Ed

view this post on Zulip Email Gateway (Aug 19 2022 at 08:36):

From: Edward Schwartz <edmcman@cmu.edu>
Hi all,

I made a few breakthroughs in understanding Isar:

  1. "proof" is not the same thing as "proof -". In particular, proof
    looks for a classical logic introduction rule that matches the current
    goal. It will not work with goals like "A ==> B" as I had. "proof -"
    works on these. This is really confusing, and if any Isabelle devs
    read this, please consider changing it!

  2. Here are the modified proofs of my examples:

The first example can be rewritten where the assumption is more general:

lemma single_proof: "(\<forall>x. P x) \<Longrightarrow> P a"
proof -
  assume A: "\<forall>x. P x"
  fix c
  from A show "P c" ..
qed

And here is the two variable example:

lemma double_proof: "(\<forall>x y. P x y) \<Longrightarrow> P a b"
proof -
  assume H: "\<forall>x y. P x y"
  fix a b
  from H have "\<forall>y. P a y" ..
  thus "P a b" ..
qed
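
To illustrate point 1 above, here is a minimal sketch with a trivial
goal (the lemma is made up for illustration): plain "proof" applies a
default introduction rule to the outermost connective of the goal,
while "proof -" starts the proof body without applying any initial
rule:

lemma "A \<longrightarrow> A"
proof  (* applies the default introduction rule impI; the goal becomes "A \<Longrightarrow> A" *)
  assume "A"
  then show "A" .
qed

lemma "A \<longrightarrow> A"
proof -  (* "-" applies no initial rule; the goal is left unchanged *)
  show "A \<longrightarrow> A" by (rule impI)
qed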

This is a big improvement over my previous attempts, but still seems
overly verbose for just instantiating bound variables. Is there a
quicker way that works on multiple bound variables at the same time?

Thanks again,

Ed

view this post on Zulip Email Gateway (Aug 19 2022 at 08:36):

From: Tobias Nipkow <nipkow@in.tum.de>
Try to use the meta-quantifier !! instead of \<forall> whenever you can, and in
your examples you can. Now if you have an assumption A: "!!x y. P x y" you can
instantiate the quantifiers very easily: just write A[of "t1" "t2"]. Then it is
hardly more verbose than Coq. Note that the instantiation order is not the order
of quantification but the order in which x and y occur in P (from left to
right). In fact, if you inspect A in the proof (e.g. via the command thm) you see that
the !! have disappeared and x/y became ?x/?y.

Unfortunately the "of" operator only works for !!, and !! can only occur at the
"outermost" level, ie not inside HOL formulas. But in many situations that suffices.

Tobias

view this post on Zulip Email Gateway (Aug 19 2022 at 08:37):

From: Alfio Martini <alfio.martini@acm.org>
Hi Edward,

I am not sure if this is what you want, but in Isar it is fairly natural to do
such proofs. I use it to teach natural deduction to students.
Here are three proofs of this conjecture:

theorem th11a:
  assumes prem: "∀x. ∀y. P x y"
  shows "P a b"
proof -
  from prem have "∀y. P a y" by (rule spec)
  thus "P a b" by (rule spec)
qed

theorem th11b:
  assumes prem: "∀x. ∀y. P x y"
  shows "P a b"
proof -
  from prem have "∀y. P a y" by (rule allE)
  thus "P a b" by (rule allE)
qed

theorem th11c:
  assumes prem: "∀x. ∀y. P x y"
  shows "P a b"
proof -
  from prem have "∀y. P a y" by (rule spec[where x = a])
  from this show "P a b" by (rule spec[where x = b])
qed

I assume you hardly have to use the qualifiers (e(limination),
d(estruction)) when working in Isar. I use this style only when I am
playing with linear scripts.

All the Best!

view this post on Zulip Email Gateway (Aug 19 2022 at 08:37):

From: Alfio Martini <alfio.martini@acm.org>
Forgot this one:

theorem th11d:
  "∀x. ∀y. P x y ==> P a b"
proof -
  assume "∀x. ∀y. P x y"
  from this have "∀y. P a y" by (rule spec[where x = a])
  from this show "P a b" by (rule spec[where x = b])
qed

view this post on Zulip Email Gateway (Aug 19 2022 at 08:40):

From: Makarius <makarius@sketis.net>
This is an important observation. The "erule" and "drule" rule
application forms are never required in structured proofs. Instead you
indicate the forward deduction explicitly via the 'then' language element,
and then just use a normal "rule" step, which is often not spelled out but
left implicit.

Roughly speaking a tactic script with "apply (erule r)" corresponds to an
Isar proof text fragment "then show proof (rule r)", and "apply (drule r)"
corresponds to "then have proof (rule r)".

Note that the abbreviations "thus == then show" and "hence == then have"
are merely historical accidents. They save a few bytes in the sources, but
cost the user more editing and newcomers more explanations. The reason is
that the chaining or non-chaining of facts for elementary 'show' and
'have' elements is often changed during proof development. And there are
further combinators like 'also' and 'moreover' that can be combined with
'have' or 'show', and other goal elements like 'obtain' that can
participate in the chaining of facts in the same manner.

So there is a large combinatorial space of

(then | from | with | ... | also | finally | moreover | ultimately)
(have | show | obtain | interpret ...)

which is better spelled out as such explicitly, without the somewhat
pointless shortcuts 'hence' and 'thus' getting in the way.
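
A small sketch of a few points in that space (the facts A, B, C and the
rule are illustrative):

notepad
begin
  assume a: "A" and b: "B"
  assume conj_rule: "A ⟹ B ⟹ C"

  (* explicit chaining with 'from' *)
  from a b have "C" by (rule conj_rule)

  (* the same goal, collecting the facts with 'moreover'/'ultimately' *)
  note a
  moreover note b
  ultimately have "C" by (rule conj_rule)
end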

Makarius

view this post on Zulip Email Gateway (Aug 19 2022 at 08:41):

From: Makarius <makarius@sketis.net>
Yet more examples on this thread, using native natural deduction
statements in Isabelle/Pure, without any auxiliary HOL connectives getting
in the way:

lemma
  assumes "⋀x y. P x y"
  shows "P a b"
proof -
  from `P a b` show ?thesis .
qed

Using the Pure quantifier in the assumption, you get arbitrary instances
of the proposition for free, using unification behind the scenes. The
backquote notation is a "literal fact reference", stating an instance of
something that is already visible in the proof context.

Here is the same, without the initial goal statement getting between you
and the understanding of natural deduction reasoning in a local context:

notepad
begin

  assume "⋀x y. P x y"

  note `P a b`

  have "P a b" by fact

end

So you build up a context with some general fact, and then project
instances from it. The backquote and "by fact" forms are operationally
equivalent, but the latter works via a local result with a nested goal
state. Both forms have their uses in practical applications.

Makarius


Last updated: Apr 26 2024 at 20:16 UTC