From: Gottfried Barrow <gottfried.barrow@gmx.com>
Hi,
I have a function:
axiomatization--"Set membership predicate."
in\<^isub>\<sigma>::"\<sigma>\<^isub>t \<Rightarrow>
\<sigma>\<^isub>t \<Rightarrow> \<sigma>\<^isub>t\<^isub>b" ("_
\<in>\<^isub>\<sigma> _" 55)
I then state multiple axioms about it, one of them being:
axiomatization--"Axiom schema of comprehension: schema set." where
\<Sigma>\<^isub>s\<^isub>c\<^isub>A: "\<forall>u. \<forall>P.
\<exists>a. \<forall>x.(x \<in>\<^isub>\<sigma> a \<longleftrightarrow>
(x \<in>\<^isub>\<sigma> u \<and> P x))"
I want to look at how Isabelle decided to type variables in my
axiomatization, but I can't find any print or find commands that will do
that for me. I also want to see if explicit parentheses will be added as
I expect, and see if there are any operator priority surprises.
Is there such a command that will do this for me?
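For concreteness, a minimal sketch of the kind of inspection in question, assuming the axiom label \<Sigma>\<^isub>s\<^isub>c\<^isub>A from above:
declare [[show_types = true, show_brackets = true]] (* display inferred types and explicit parentheses in output *)
thm \<Sigma>\<^isub>s\<^isub>c\<^isub>A (* print the axiom back the way Isabelle parsed it *)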
The above is the basic question. Here's what led me to want to know this:
1) I used "\<in>" as notation for my set membership.
2) I got lots of warnings like: "Ambiguous input produces 4 parse
trees... Fortunately, only one parse tree is type correct..."
3) I changed my notation to " \<in>\<^isub>\<sigma>", and that got rid of
some of the warnings.
4) For the axiom above, I still got "ambiguous" warnings, with some of
the variables explicitly typed.
5) So I explicitly typed every variable in the formula, and I still got
"ambiguous" warnings.
6) So I removed all the typing, and it still seemed to work. Explicit or
non-explicit typing doesn't affect the warnings.
7) I then concluded that Isabelle gets all of the typing information in
the formula from my one uniquely named function, "in\<^isub>\<sigma>".
Consequently, I asked myself, "Is that too good to be true?"
Okay, if I knew how to prove anything, maybe I would know how to get the
information I want. However, even if I were an expert, I would want to be
able to get immediate feedback to check my typing, see if I'm correctly
using parentheses, and see if I correctly understand the priority of the
operators.
I attach a screen shot of my code. I don't think attaching images and
PDFs is the best way to give people code information. On the other hand,
the above low-level Isar shows that nice looking things in jEdit cease
to be good for reading in a text file. Also, "\<^isub>" doesn't result
in subscripted, unicode characters when copying and pasting.
jEdit Stuff: I edited the etc/symbols file and jEdit gave me an
exception error when loading. Afterwards, jEdit wouldn't translate
commands like "\<in>" in the one file. About an hour later, or two, I
deleted "jEdit/recent.xml" and it started working again.
Regards,
GB
wantToPrintAxiomInfo.png
From: Lawrence Paulson <lp15@cam.ac.uk>
This message indicates that your syntax is ambiguous. The way to eliminate such a message is by disambiguating your syntax, which in most cases means tightening up the precedences of operators. And indeed, your operator "in" gives no precedence information for its operands. I wonder why you don't use something akin to (ignoring fancy symbols)
(infixl "IN" 55)
where you have
("_ IN _" 55).
Larry Paulson
From: Gottfried Barrow <gottfried.barrow@gmx.com>
Larry,
Thanks for the fix. It makes sense now that a binary operator needs an
associativity rule. I was trying to finish making my table of HOL
operator priorities, so I would have lots of examples, and then try to
dig through section "7.2 Mixfix annotations" of isar-ref.pdf. But now, I
can put off looking at the mixfix documentation.
(For anyone interested, I've attached iS.thy. Load it into jEdit, have
SideKick open to see the section tree, and in section 1 there are all the
symbols from etc/symbols. In section 2 there's a table showing the
mixfix operator priorities from HOL.thy, and in section 3 there's the
source from src/Pure/pure_thy.ML, which contains the mixfix operator
priorities for the Pure logic.)
If I had print commands, like thm, find_theorems, find_consts, etc., but
for axiomatizations and defs, I might be able to figure out some of your
logic in HOL.thy. Makarius showed me how to print explicit use of
Trueprop and prop, so with explicit types, explicit parentheses,
explicit names, explicit Trueprop, and explicit prop, I might eventually
be able to figure out the reasoning behind your defs in HOL.thy, like
for these:
LINE: 178 of src/HOL/HOL.thy
defs
True_def: "True == ((%x::bool. x) = (%x. x))"
All_def: "All(P) == (P = (%x. True))"
Ex_def: "Ex(P) == !Q. (!x. P x --> Q) --> Q"
False_def: "False == (!P. P)"
not_def: "~ P == P-->False"
and_def: "P & Q == !R. (P-->Q-->R) --> R"
or_def: "P | Q == !R. (P-->R) --> (Q-->R) --> R"
Ex1_def: "Ex1(P) == ? x. P(x) & (! y. P(y) --> y=x)"
I'm thinking that parts of your "Interactive Proof with Cambridge LCF"
and your "The Foundation of a Generic Theorem Prover" will eventually
help me out, and that you directly implemented in HOL.thy some of the
logic definitions from your LCF book. It'll take time for me to find
that out. But the right print info for definitions and axiomatizations
would help me when I occasionally look at definitions such as the above.
Thanks,
GB
iS.thy
From: Lawrence Paulson <lp15@cam.ac.uk>
I imagine that the following paper (Church, 1940) has all the information you need:
http://www.classes.cs.uchicago.edu/archive/2007/spring/32001-1/papers/church-1940.pdf
However, Isabelle's formalism includes a much richer variety of types, including type variables and type constructors. As I recall, essentially the same system of axioms is used in HOL.
Larry Paulson
From: Gottfried Barrow <gottfried.barrow@gmx.com>
Larry,
Thanks for the link to the document. I added Church's journal article to my
library of HOL documents.
I kind of figured out that in the definitions below, I should start
doing some substitutions, like
P & Q == !R. ((P--> Q-->R) --> R)
== (R = (%x.True)). ((P--> Q-->R) --> R)
== (R = (%x.((%y::bool. y) = (%y. y))). ((P--> Q-->R) --> R)
Using trial and error, I finally decided I don't understand the notation
well enough to continue making substitutions.
If I were formally enrolled in a course, I would go and impose myself on
my professor, and take up his or her time. But I'm not, so it'll have to
be delayed gratification.
Regards,
GB
From: Lawrence Paulson <lp15@cam.ac.uk>
I honestly don't think doing such substitutions is of any benefit at all. See if you can make some sense of each definition individually.
Larry Paulson
From: Gottfried Barrow <gottfried.barrow@gmx.com>
On 7/9/2012 2:35 PM, Lawrence Paulson wrote:
I honestly don't think doing such substitutions is of any benefit at all.
See if you can make some sense of each definition individually.
I can make sense of the logic with these equivalencies:
(A --> B --> C) == ((A /\ B) --> C), and
(A --> B) == (~A \/ B).
So I reduce (!R. ((P --> Q --> R) --> R)) down to
!R. ((P /\ Q) \/ R)
THE_SCRATCH_THIS_FUNCTION
(This is where my truth-table-indoctrinated mentality starts to cause me
a lot of insecurity, because when R=F, P=T, and Q=T, the result is false).
THE_ALMOST_DID_SAY_SOMETHING_DUMB_FUNCTION__MAYBE_STILL_AM
(I almost sent this email off based on the above sentence. A faulty bias
was causing me to misinterpret my truth table, plus my final result
column was under an "/\" due to my writing.
So when R=false, the reduction above functions like conjunction. This
brings to my attention the definition
False == (!P. P).
I've gone on too long here, but I've already written what's below, so I
leave it. For me, truth table logic vs. constructive and natural
deduction logic causes me some confusion, on how to interpret a simple
statement like "!P. P", or maybe, as I say below, it's because I'm not
used to quantifying over propositions, but only over elements of a set,
such as in "!x P(x)". I'll get straightened out with the right books.
If you don't read what's below, it'll be no loss, but it represents
certain aspects of life, as experienced by me.
)
But, though I haven't gone through my natural deduction textbook much, I
assume that it's saying "for every R that is asserted to be true...".
If that's the case, then the engine is somehow implementing and_def as
conjunction, which, of course, I know to begin with.
It could be that I've said something dumb, so I write it with a little
typing,
!R::('a => bool). ((P /\ Q) \/ (R::('a => bool))),
to try and figure out if I have said something dumb. I know that R can
be viewed as a proposition, but I'm not used to quantifying over
propositions; I'm used to quantifying over elements of a set, like
!x P(x).
I've gone on like this to demonstrate (even more) that I have some
weaknesses in logic. I assume that your B.S. in Math at Caltech (not to
mention your PhD) gave you a lot better education than my B.S. in Math
at Podunk University, but I also know from experience and observation
that most undergrad and graduate degrees in math don't give people much
more than a basic education in logic.
Anyway, you're paying the price for giving people access to the details
of the engine, unlike Mizar.
Can I work without understanding all the logic details? I suppose I have
to. But if I get access to information, many times I think I'm supposed
to understand it. Actually, I do need to set aside some of this
low-level stuff. I'm progressing at a snail's pace.
Thanks for your time,
GB
From: Lars Noschinski <noschinl@in.tum.de>
On 10.07.2012 02:42, Gottfried Barrow wrote:
On 7/9/2012 2:35 PM, Lawrence Paulson wrote:
I honestly don't think doing such substitutions is of any benefit at all.
See if you can make some sense of each definition individually.
I can make sense of the logic with these equivalencies:
(A --> B --> C) == ((A /\ B) --> C), and
(A --> B) == (~A \/ B).
So I reduce (!R. ((P --> Q --> R) --> R)) down to
!R. ((P /\ Q) \/ R)
This would be "!R. (~(P /\ Q) \/ R)"
So when R=false, the reduction above functions like conjunction. This
brings to my attention the definition
False == (!P. P).
"!P. P" just says "all propositions hold". As long as our logic is
consistent, this is not going to be the case, so we choose this as our
value for False.
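One way to convince oneself of this concretely is a throwaway lemma in Isabelle/HOL, e.g.:
lemma "(\<forall>P. P) \<longleftrightarrow> False"
  by blast (* instantiating P with False gives one direction; the other is trivial *)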
-- Lars
From: Lawrence Paulson <lp15@cam.ac.uk>
No need even to do that. The reasoning is very simple. Let P and Q be two given formulas, and suppose that you have !R. ((P --> Q --> R) --> R). And suppose that you want to prove some formula, R say (just to keep things very simple). Then it is enough to prove P --> Q --> R. Which means, you may as well assume that P and Q are actually true while you are proving R. And that is what it means to know P&Q.
Disjunction can be defined similarly. If I'm not mistaken, this discovery is due to Frege.
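As a sanity check, that reading is provable from the encoding alone; a minimal sketch:
lemma
  assumes "\<forall>R. (P \<longrightarrow> Q \<longrightarrow> R) \<longrightarrow> R"
  shows "P \<and> Q"
  using assms by blast (* instantiate R with P \<and> Q; the premise P \<longrightarrow> Q \<longrightarrow> P \<and> Q is immediate *)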
Larry Paulson
From: Gottfried Barrow <gottfried.barrow@gmx.com>
On 7/10/2012 2:14 AM, Lars Noschinski wrote:
So I reduce (!R. ((P --> Q --> R) --> R)) down to
!R. ((P /\ Q) \/ R)
This would be "!R. (~(P /\ Q) \/ R)"
I don't know. This is what I get:
(P --> Q --> R) --> R
== ((P /\ Q) --> R) --> R
== (~(P /\ Q) \/ R) --> R
== ~(~(P /\ Q) \/ R) \/ R
== ((P /\ Q) /\ ~R) \/ R
== ((P /\ Q) \/ R) /\ (~R \/ R)
== (P /\ Q) \/ R
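For what it's worth, the whole chain can be checked mechanically with a throwaway lemma:
lemma "((P \<longrightarrow> Q \<longrightarrow> R) \<longrightarrow> R) \<longleftrightarrow> ((P \<and> Q) \<or> R)"
  by blast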
So when R=false, the reduction above functions like conjunction. This
brings to my attention the definition
False == (!P. P).
"!P. P" just says "all propositions hold". As long as our logic is
consistent, this is not going to be the case, so we choose this as our
value for False.
Okay, that's good to know. I'm still transitioning from truth tables,
where writing a formula doesn't assert anything, at least that's how
I've always viewed it for the connectives /\, \/, -->, and ~.
I did decide previously that I should interpret "!P. P" as asserting
that P is always true, which helps me with it being the definition of
False, but it doesn't help me understand
!R. ((P /\ Q) \/ R), or even
!R. (~(P /\ Q) \/ R).
The "P" and "Q" part of the "conj" (infixed as "&", and "/\") definition
makes sense to me because when I use the function
conj P Q,
in my mind, I go through the truth table and make assignments to "P" and
"Q" and decide what result I should get based on how the function was
being used in a statement.
With
!R. ((P /\ Q) \/ R),
I try to use the same trick and say, "it's asserting that for every R, R
is always true", but that doesn't work. If R is always true, then the
statement is always true. But if it's always false, then the statement
operates as a conjunction.
Well, okay, the above sentence shows I still have trouble interpreting
things.
What I know is that saying "every proposition R is true" is a false
statement, so if that's what makes R always false in the statement "((P
/\ Q) \/ R)", then the statement as a whole works.
Ah, maybe I have it. The statement "!R. ((P /\ Q) \/ R)" could be
asserting this:
"For every proposition R, (P /\ Q) is true or R is true."
A friendlier statement would be this, using your statement above:
"Every proposition is true or (P /\ Q) is true".
I guess that's it. Simple stuff, unless it's not.
Thanks for the help.
Regards,
GB
From: Lars Noschinski <noschinl@in.tum.de>
On 11.07.2012 14:54, Gottfried Barrow wrote:
On 7/10/2012 2:14 AM, Lars Noschinski wrote:
So I reduce (!R. ((P --> Q --> R) --> R)) down to
!R. ((P /\ Q) \/ R)
This would be "!R. (~(P /\ Q) \/ R)"
I don't know. This is what I get:
Yes, sorry.
Ah, maybe I have it. The statement "!R. ((P /\ Q) \/ R)" could be
asserting this: "For every proposition R, (P /\ Q) is true or R is true."
Exactly.
-- Lars
From: Gottfried Barrow <gottfried.barrow@gmx.com>
On 7/10/2012 2:57 AM, Lawrence Paulson wrote:
On 10 Jul 2012, at 01:42, Gottfried Barrow wrote:
So I reduce (!R. ((P --> Q --> R) --> R)) down to
!R. ((P /\ Q) \/ R)
No need even to do that. The reasoning is very simple. Let P and Q
be two given formulas, and suppose that you have !R. ((P --> Q --> R) --> R).
And suppose that you want to prove some formula, R say (just to keep
things very simple). Then it is enough to prove P --> Q --> R. Which
means, you may as well assume that P and Q are actually true while you
are proving R. And that is what it means to know P&Q.
Disjunction can be defined similarly. If I'm not mistaken, this
discovery is due to Frege.
I'll do the easy part and show HOL's definition of disjunction to
complete the discussion.
P | Q == !R.(P-->R) --> (Q-->R) --> R
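The analogous sanity check goes through for this encoding as well, e.g.:
lemma "(P \<or> Q) \<longleftrightarrow> (\<forall>R. (P \<longrightarrow> R) \<longrightarrow> (Q \<longrightarrow> R) \<longrightarrow> R)"
  by blast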
Larry, thanks for the explanation of the not-so-straightforward logic.
Regards,
GB
From: Gottfried Barrow <gottfried.barrow@gmx.com>
On 7/10/2012 2:31 AM, Ramana Kumar wrote:
I can make sense of the logic with these equivalencies:
(A --> B --> C) == ((A /\ B) --> C), and
(A --> B) == (~A \/ B).
Sure, but you should remember that from the perspective of defining the
connectives, the definitions are primary and we would derive these
equivalences as consequences. So it would be good to get a little
intuitive appreciation for the definitions directly.
I kind of understood that I was getting circular, which is why I wasn't
so ready to be satisfied with those kinds of substitutions, but Larry
told me to forget about low-level substitutions, so that relieved me of
a feeling of responsibility to not get circular.
Larry's explanation covered the logic while still only using "-->".
So I reduce (!R. ((P --> Q --> R) --> R)) down to
!R. ((P /\ Q) \/ R)
Can you see why this is a good definition for 'and'?
It's a good definition if you only have "!" and "-->" at your disposal.
It doesn't sync up well with simplistic lessons in logic as given in
"Discrete Mathematics" by Rosen, or many other mathematical logic
textbooks. But yeah, it looks good.
This is where my truth-table-indoctrinated mentality starts to cause
me a lot of insecurity, because when R=F, P=T, and Q=T, the result is
false).
Surely when R is false but P and Q are true the result is true...?
Truth tables still work for propositional logic embedded in HOL.
You're asking about what I said there? That was messed up. I left it in
to show how I was messed up, but leaving it in and not taking the time
to redo the email properly was probably messed up too.
The result with truth tables is the same, but there are subtle or
not-so-subtle differences in the logic notation and the way logic
formulas are used in a proof assistant. Or maybe there aren't. It could
just be information overload that throws me off, which results in me not
being able to process simple statements.
...causes me some confusion, on how to interpret a simple
statement like "!P. P", or maybe, as I say below, it's because I'm not
used to quantifying over propositions, but only over elements of a set,
such as in "!x P(x)". I'll get straightened out with the right books.
Don't get confused by its being a capital letter. Look at the type. P is
a plain old boolean proposition. It doesn't take any arguments. The
definition of false basically says both boolean values are true, that
is, there is an inconsistency.
Sometimes a formula can be so simple that a newbie doesn't know how to
get information out of it. It helps for people to tell me exactly how to
phrase the meaning of a formula in informal English, until I get good at
creating my own interpretive phrases.
!R::('a => bool). ((P /\ Q) \/ (R::('a => bool))),
Surely that did not typecheck.
Here are some pertinent lines from ~/HOL/HOL.thy:
074: consts
082: All::"('a => bool) => bool" (binder "ALL " 10)
153: notation (HOL)
154: All (binder "! " 10)
178: defs
180: All_def: "All(P) == (P = (%x. True))"
I put this in jEdit, and it shows R::bool:
lemma "!R. (P-->Q-->R) --> R"
So you're right, though I thought you had to be wrong, until the goal
for the proof step told me otherwise. This is where things like
declare [[show_types=true]] save us a bunch of time.
It appeared to me, by line 82, that "All" takes a "('a => bool)" and
returns a bool. It could be that "All(P)" is not the same thing as "All P".
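Presumably they are the same thing, with the parentheses in "All(P)" just grouping; a quick way to check would be:
term "All P"
term "All(P)" (* both should print the same term, with P :: 'a => bool *)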
Regards,
GB
From: Lars Noschinski <noschinl@in.tum.de>
If you resolve the binder syntax,
!R. (P --> Q --> R) --> R
is the same as
All (%R. (P --> Q --> R) --> R)
where the lambda abstraction has type bool => bool.
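This is easy to confirm with types displayed, e.g.:
declare [[show_types = true]]
term "\<forall>R. (P \<longrightarrow> Q \<longrightarrow> R) \<longrightarrow> R" (* the bound R is reported at type bool, so the abstraction has type bool \<Rightarrow> bool *)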
From: Gottfried Barrow <gottfried.barrow@gmx.com>
Lars, thanks for the tip. Knowing that has saved me some time, and will
save me some time.
Thanks again,
GB