From: Chris Capel <pdf23ds@gmail.com>
OK, I gave Isar another try and I'm making some progress with it. Anyway.
I have some statements like
fix ...
assume ... and ...
thus "EX x y. ..."
proof
The "proof" line fails, with "empty result sequence", unless I change
it to "proof -" or change "thus" to "show", hence taking the
assumptions out of "this". Why are my assumptions getting in the way
of what should be a simple application of exI? This makes no sense to
me, as exI only cares about the conclusion. Let me know if I need to
send an example.
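Concretely, a minimal instance of the pattern I mean (P, Q, a, b are just
placeholders) would be:

lemma "!!a b. P a ==> Q b ==> (EX x y. P x & Q y)"
proof -
  fix a b
  assume "P a" and "Q b"
  thus "EX x y. P x & Q y"
  proof  (* this is the step that fails with "empty result sequence";
            "proof -", or "show" instead of "thus", goes through *)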
On a completely different note, it would be cool if abbreviations
could be made that are local to proofs. Often I have expressions
repeated several times throughout a goal that make it quite large,
when it would be manageable with some abbreviations. Let statements
are nice, but they're expanded in the goal display. Such a command
might not semantically differ from the let command.
Chris Capel
From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
Hi Chris,
fix ...
assume ... and ...
thus "EX x y. ..."
proof
"proof" is short for "proof default", where the "default" method performs:
introduction using a suitable predeclared rule if no facts are
chained in.
elimination using a suitable predeclared rule if any facts are
chained in.
So in your case it fails because there are no suitable elimination rules
for your assumptions.
What you can try instead:
fix ...
assume P: ... and Q: ...
show "EX x y. ..."
proof
from P Q ...
or perhaps
fix ...
assume ... and ...
show "EX x y. ..."
proof (intro exI)
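Spelled out on a concrete (made-up) goal, with placeholder predicates P
and Q, the second variant would look something like:

lemma "!!a b. P a ==> Q b ==> (EX x y. P x & Q y)"
proof -
  fix a b
  assume p: "P a" and q: "Q b"
  show "EX x y. P x & Q y"
  proof (intro exI)
    from p q show "P a & Q b" ..
  qed
qed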
Hope this helps
Florian
From: Brian Huffman <brianh@cs.pdx.edu>
Hi Chris,
I have often used local definitions to help manage the size of large
subgoals. There are two ways to use them. First, you can use "defines"
in the theorem statement:
lemma foo:
fixes x and y ...
assumes ...
defines wibble_def: "wibble == ... x ... y ..."
shows "... wibble ..."
Note that if the rhs of the definition mentions any free variables,
they must be previously declared using "fixes". Once you finish the
proof, the local definitions are unfolded, and the locally-defined
constants do not appear in the resulting theorem.
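For instance (a small made-up example; the name wibble and the
natural-number setting are just for illustration):

lemma wibble_example:
  fixes x :: nat and y :: nat
  defines wibble_def: "wibble == x + y"
  shows "x <= wibble"
  by (simp add: wibble_def)

The theorem that actually gets stored is the unfolded "x <= x + y".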
The other method is to use the "def" command. Note the unusual
placement of the quote marks---only the rhs is quoted.
lemma foo: ...
proof -
fix x and y
def wibble == "... x .. y ..."
show ...
Local definitions are not abbreviations; you will still need to
fold/unfold the definitions manually within the proofs. But if your
subgoal terms are really big, this is an advantage: Not only do your
goals look smaller, they really are smaller---which makes
simplification and other proof tactics run faster.
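A concrete (made-up) instance of the pattern, using the "def" syntax
shown above (later Isabelle versions spell this "define ... where"
instead of "def"):

lemma fixes a b :: nat shows "a + b + (a + b) = 2 * (a + b)"
proof -
  def s == "a + b"
  have "s + s = 2 * s" by arith           -- work with the short name s
  then show ?thesis by (simp add: s_def)  -- unfold manually at the end
qed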
On the other hand, it would be nice to have both local definitions and
local abbreviations. I'm sure that it would be possible to implement
an "abbrev" command that works much like "def", but I'm just not sure
how to do it.
From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
Hi Chris,
fix ...
assume ... and ...
thus "EX x y. ..."
proof
"proof" is short for "proof default", where the "default" method performs:In the Isar documentation it says "Unless given explicitly by the
user, the default initial method is “rule”". Is this inaccurate?
Indeed, "default" is almost "rule", except if the proposition to prove
involves class or locale predicates; this we can ignore here.
But why does it also fail for "proof (rule exI)"? What's the
difference between rule and intro that allows the latter to succeed?
"rule" performs introduction if no facts are chained in, and elimination
if some facts are chained in. "intro" only performs introduction.
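For example (a minimal made-up sketch with placeholder predicates P and
Q), the same chained step fails with "rule exI" but goes through with
"intro exI":

lemma
  assumes p: "P a" and q: "Q b"
  shows "EX x y. P x & Q y"
  using p q
  (* proof (rule exI) -- fails: "rule" wants the chained facts to fit
     exI's single premise *)
proof (intro exI)
  show "P a & Q b" using p q ..
qed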
Hope this helps
Florian
From: Chris Capel <pdf23ds@gmail.com>
On Fri, May 22, 2009 at 02:23, Florian Haftmann
<florian.haftmann@informatik.tu-muenchen.de> wrote:
Hi Chris,
fix ...
assume ... and ...
thus "EX x y. ..."
proof"proof" is short for "proof default", where the "default" method performs:
In the Isar documentation it says "Unless given explicitly by the
user, the default initial method is “rule”". Is this inaccurate? I
suppose if the mode is not proof(chain) then default is the same as
rule? And otherwise it's the same as erule? In fact, the "default"
method isn't in the documentation's index at all.
introduction using a suitable predeclared rule if no facts are
chained in.
elimination using a suitable predeclared rule if any facts are
chained in.
Thanks, I totally didn't know that at all. It does explain things,
though, except...
So in your case it fails because there are no suitable elimination rules
for your assumptions.
But why does it also fail for "proof (rule exI)"? What's the
difference between rule and intro that allows the latter to succeed?
Chris Capel
From: Makarius <makarius@sketis.net>
On Fri, 22 May 2009, Chris Capel wrote:
"proof" is short for "proof default", where the "default" method performs:
In the Isar documentation it says "Unless given explicitly by the
user, the default initial method is “rule”". Is this inaccurate?
The manual is indeed less accurate than the true source text.
Conceptually, 'proof' without argument performs a "default" step, but the
method of that name is essentially dynamically rebound by some later
stages (2 times in Isabelle/HOL).
The main idea of a default step is represented by the Pure "rule" method;
the later additions for type classes and classical reasoning can be
ignored for now.
I suppose if the mode is not proof(chain) then default is the same as
rule? And otherwise it's the same as erule?
No, method "rule" is always just that. Any proper Isar method is
sensitive to the facts being chained. Thus chaining + rule acts like an
elimination, and no chaining + rule like a plain introduction.
The variants "erule" etc. are from a slightly different (tactical) layer,
which are never required in properly structured Isar proofs.
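For instance (a small made-up example with propositions A and B), the
same default step behaves differently depending on whether facts are
chained in:

lemma assumes a: "A" and b: "B" shows "A & B"
proof    -- no facts chained in: plain introduction with conjI
  from a show "A" .
  from b show "B" .
qed

lemma assumes a: "A" and b: "B" shows "A & B"
  using a b ..   -- facts chained in: they drive the step and must fit
                 -- conjI's premises in order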
But why does it also fail for "proof (rule exI)"? What's the difference
between rule and intro that allows the latter to succeed?
The "intro" method repeatedly unfolds introduction rules. As a weakly
structured tool, it merely inserts facts into the goal initially, without
taking them as seriously as "rule". This is analogous to "simp", "auto"
etc.
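A small made-up example of "intro" peeling off several introduction
rules in one step (impI twice, then conjI):

lemma "A --> B --> A & B"
proof (intro impI conjI)
  assume a: "A" and b: "B"
  from a show "A" .
  from b show "B" .
qed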
You should be able to ignore the "intro" and "elim" methods most of the
time -- they stem from a very early stage of Isar when structured proofs
were not fully understood yet.
Makarius
From: "Dr. Brendan Patrick Mahony" <brendan.mahony@dsto.defence.gov.au>
Is there any source I can access that describes this full
understanding of structured proofs?
Generally, I see the structured proof game as coaxing the proof engine
into returning the particular proof state that I feel best explains
the truth of my proposition to the human reader. To this end, I find
intro and elim necessary quite often, when the default reasoning set
returns a proof state that is incomprehensible to a human reader.
Since it is very hard to massage the default set into a manageable
shape, it is easiest to use a number of intro, elim and simp methods to
get the desired proof state. Obviously, it is suboptimal to have long
convoluted chains of methods that result in the "obvious" goal state,
since this will tend to make the reader uneasy about the "formal
nonsense" that is meant to be reassuring them about the quality of the
proof being offered. However, it is certainly better than having a
short chain of methods that results in an unintuitive or
incomprehensible or even unexpectedly non-existent proof state.
I still must say, that I can't follow the reasoning behind the way
default/rule is implemented. My confusion is well described by the
following examples, the first of which is a valid proof
lemma
assumes
a1: "A" and
a2: "B"
shows
"A & B"
using a1 a2 ..
lemma
assumes
a1: "A" and
a2: "B"
shows
"A & B"
using a2 a1 ..
and the second of which is not.
This can be seriously annoying when trying to chain facts
assume b1: "B"
have b2: "A"
-- proof
with b1 show "A & B" ..
From: Makarius <makarius@sketis.net>
On Mon, 25 May 2009, Dr. Brendan Patrick Mahony wrote:
On 23/05/2009, at 7:13 PM, Makarius wrote:
You should be able to ignore the "intro" and "elim" methods most of the
time -- they stem from a very early stage of Isar when structured
proofs were not fully understood yet.
Is there any source I can access that describes this full understanding
of structured proofs?
Let's say, the understanding has greatly improved since then, but is not
really a "full" one.
An up-to-date exposition of general Isar concepts and machinery is now in
section 2 of the isar-ref manual. This spot had been empty for ten years
and was filled only a few months ago.
Nonetheless, there are many things that are not written anywhere, notably
about the deeper reasons for various design details. I am already glad
enough to have occasional readers of the more basic stuff, and did not see
a market yet for the really arcane lore.
Generally, I see the structured proof game as coaxing the proof engine
into returning the particular proof state that I feel best explains the
truth of my proposition to the human reader.
This basic principle has been there in Isar from the very beginning, but
its focus has shifted more and more away from goals, towards more direct
composition of facts (with nice derived concepts like 'also' and 'obtain',
free-form blocks { ... } etc.).
To this end, I find intro and elim necessary quite often, when the
default reasoning set returns a proof state that is incomprehensible to
a human reader.
The "intro" and "elim" methods are probably best understood as logical
counterparts to "unfold". This category of methods can be used at the
start of an Isar proof, but the demand for this has greatly diminished
over the years.
For example, in 1999 (when doing the HahnBanach case-study) we found
ourselves using "elim" a lot to simulate what 'obtain' would become a bit
later. Moreover, free-form blocks with moreover/ultimately + auto/blast
are often more convenient for arbitrary composition of intermediate
stepping stones than fiddling on goals with "intro" etc.
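A small made-up sketch of the moreover/ultimately style, where the
final step is handed to an automated method so the order of the
collected facts does not matter:

lemma
  assumes a: "A" and b: "B" and c: "C"
  shows "C & (A & B)"
proof -
  from a b have "A & B" ..
  moreover from c have "C" .
  ultimately show "C & (A & B)" by blast
qed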
Nonetheless, intro/elim/unfold are in no way blacklisted as initial proof
step (in contrast to bad things like "proof auto" which is really just a
leftover from a very ancient attempt of mine at structured reasoning in
1997).
Obviously, it is suboptimal to have long convoluted chains of methods
that result in the "obvious" goal state, since this will tend to make
the reader uneasy about the "formal nonsense" that is meant to be
reassuring them about the quality of the proof being offered.
Maybe this paper http://www4.in.tum.de/~wenzelm/papers/isar-reasoning.pdf
helps to reduce formal nonsense a bit. (After the presentation at the
conference some Coq expert told me that they have essentially always done
the same, although with quite different means and ends.)
I still must say, that I can't follow the reasoning behind the way
default/rule is implemented. My confusion is well described by the
following examples, the first of which is a valid proof
lemma
assumes
a1: "A" and
a2: "B"
shows
"A & B"
using a1 a2 ..
lemma
assumes
a1: "A" and
a2: "B"
shows
"A & B"
using a2 a1 ..
and the second of which is not.
This can be seriously annoying when trying to chain facts
assume b1: "B"
have b2: "A"
-- proof
with b1 show "A & B" ..
Here the "rule" method (of the '..' step) asks you to present facts in
proper order, i.e. "from this and by".
In 1998, I did have a version of "rule" that was much more liberal, but
never worked out so well. The critical question is how much variance
(degrees of freedom) is acceptable in Isar proof checking, before either
the system or the reader can no longer work things out with reasonable
efforts.
If "rule" would admit arbitrary permutations of facts, for example, there
would be a much larger search space for seamingly trivial steps. Even
worse, it would stop working in most "open" situations via plain "proof
... qed', because the first choice needs to be the right one. It is
better to leave free rearrangement of facts and goals to automated tools,
and use something like "from a b c show D by blast" instead.
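For the conjunction example above, that means either reordering the
facts for "..", or leaving the rearrangement to an automated method (a
small sketch):

lemma assumes b1: "B" and b2: "A" shows "A & B"
  using b2 b1 ..

lemma assumes b1: "B" and b2: "A" shows "A & B"
  using b1 b2 by blast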
In contrast, there is no problem in allowing the user to write the required
sub-proofs (fix/assume/show) in any order, because explicit text is given,
and both the system and the user usually manage to fit things together
quickly. (Actually, there used to be a performance bottleneck here until
recently; it has been sorted out for Isabelle2009.)
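For instance (a small made-up example), the two branches of a
conjunction introduction can be shown in either order:

lemma assumes a: "A" and b: "B" shows "A & B"
proof
  from b show "B" .
  from a show "A" .
qed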
Makarius