(I suspect it is not possible, since I was taught recently that Isar is designed exactly for making proofs readable, whereas tactic applications generate goal states that are only visible in the interactive window. So if it is possible, please tell me directly.)
I find that it is not possible to write a "have" after the tactic application. Like this:
image.png
Without even knowing the definitions it should be clear what I am trying to do: the goal is phi in P, and the assumption says A x -> phi in x, so I want to reduce the goal to A P.
For HOL4 users this is first_x_assum irule (or first_assum irule if you want to keep your assumptions).
Such a thing does not obviously exist. So I naively tried to state A P and then apply MP with A P -> phi in P, which seems not to be allowed. How can I carry out such a step here?
Instead of show ?thesis using 7, use show ?thesis apply (rule 7)?
It is not accepted. What would rule 7 be expected to be? It is not an implication itself.
ah wait, I didn't read 7 carefully... you need to massage it into an implication
Any canonical way to do this?
try 7[unfolded NMLG_def, rule_format]?
add simplified?
So it does the unfolding, but maybe fails to transform it into an implication?
add simplified before rule_format, I mean
That works. And I see why it should be before rule_format.
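(For later reference, the step that works is presumably along these lines, with 7 and NMLG_def being names from my own development:)
show ?thesis
  apply (rule 7[unfolded NMLG_def, simplified, rule_format])
  (* this should leave a goal of the form A P, which was the point *)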
great, I'm not sure whether this would be the recommended way of doing things though...
If it is after rule_format then it gives me a HOL quantification instead of a schematic variable.
So what would be the recommendation?
I am advised to get used to Isar's style.
write the expanded argument in Isar, probably
Let me try if I can do it in an efficient way.
i.e., before show ?thesis, write have "is_NML P" ... have "G \subset P" ..., then use sledgehammer at your show ?thesis; it'll figure that out
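(A minimal sketch of that shape, with is_NML, G and P taken from the messages above; the sorry's mark the parts still to be filled in:)
proof -
  have "is_NML P"
    sorry  (* prove this separately first *)
  moreover have "G ⊂ P"
    sorry  (* prove this separately first *)
  ultimately show ?thesis
    sorry  (* this is the step where sledgehammer should find a proof *)
qed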
Let me try if sledgehammer can do it.
Wow it does. Thank you!
So I have trouble proving "is_NML P".
image.png
After unfolding the definitions I expect simp with assumption 1:
"(⋀form. ptaut form ⟹ form ∈ P)"
to kill the first conjunct of the goal:
proof (prove)
using this:
ptaut ?form ⟹ ?form ∈ P
BOX (?p → ?q) → BOX ?p → BOX ?q ∈ P
DIAM ?p → NOT (BOX (NOT ?p)) ∈ P
?A ∈ P ⟹ BOX ?A ∈ P
NOT (BOX (NOT ?p)) → DIAM ?p ∈ P
?A ∈ P ⟶ subst ?f ?A ∈ P
?A → ?B ∈ P ∧ ?A ∈ P ⟹ ?B ∈ P
goal (1 subgoal):
1. ∀A B p q f form.
ptaut form ⟶
form ∈ P ∧
BOX (p → q) → (BOX p → BOX q) ∈ P ∧
DIAM p → NOT (BOX (NOT p)) ∈ P ∧
NOT (BOX (NOT p)) → DIAM p ∈ P ∧
(A ∈ P ⟶ subst f A ∈ P) ∧ (A ∈ P ⟶ BOX A ∈ P) ∧ (A → B ∈ P ∧ A ∈ P ⟶ B ∈ P)
But it is always purple. Is this behaviour expected?
Oh, I may have missed a (). Let me try.
Does not work. I am still in trouble.
purple for a long time probably means it is looping... you have to be careful with what you're passing to simp
As mentioned in prog-prove:
Simplification can run forever, for example if both f x = g x and g x = f x are simplification rules. It is the user’s responsibility not to include simplification rules that can lead to nontermination, either on their own or in combination with other simplification rules. The right-hand side of a simplification rule should always be “simpler” than the left-hand side — in some sense. But since termination is undecidable, such a check cannot be automated completely and Isabelle makes little attempt to detect nontermination. When conditional simplification rules are applied, their preconditions are proved first. Hence all preconditions need to be simpler than the left-hand side of the conclusion.
Not all your rules fulfill that
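(A minimal self-contained illustration of that kind of loop; f, g, Q and c are placeholders, not names from the actual development:)
lemma
  assumes fg: "⋀x. f x = g x" and gf: "⋀x. g x = f x"
  shows "Q (f c)"
  (* apply (simp add: fg gf) would rewrite f c to g c to f c to ... and never terminate *)
  oops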
I do not see how there could be a loop here. I will spend some time trying to debug.
As for simp, I do not expect it to rewrite with an equality here. I give it an implication, and I expect the normalization to turn it into A <=> True. Here A will be the first conjunct in the goal, namely:
ptaut form ⟶ form ∈ P
I added "()" around the above and it still does not work.
the last rule of the assumptions: it is an exploding rule
So do you mean, when it sees A ==> B, instead of turning it into (A ==> B) <=> True, it tries to prove A, and wants to use it as A ==> (B <=> True) instead?
Yiming Xu said:
So do you mean, when it sees A ==> B, instead of turning it into (A ==> B) <=> True, it tries to prove A, and wants to use it as A ==> (B <=> True) instead?
Yes
…and that seems a lot more natural to me
...Oh my.
It would be different for A --> B though, which is why you should never have that as a simplification rule
Okay, what if I write A --> B instead of A ==> B?
I was just about to ask that.
I think you predicted my question...
So I expect that if I write A --> B at the HOL level, then it will simply turn it into (A --> B) \equiv True.
Yes, except that most tactics will transform a goal like "A --> B" into "A ==> B". So a simp lemma like "A --> B" is a very useless rule
That means the A --> B in A --> B \equiv True would tend to be converted into A ==> B \equiv True before it gets applied?
No, only in the goal you are trying to prove
I am not sure if I understand you. Do you mean the last assumption is a useless rule here in terms of automatic simplification?
5:"(⋀ A B. (A → B) ∈ P ∧ A ∈ P ⟹ B ∈ P)"
This one is useless as a rule during the proof.
ah no actually it does rewrite it before declaring it as simp
I had forgotten that behavior
lemma
  assumes "A ⟶ B"
  shows ‹A ⟶ B›
  supply [[simp_trace]]
  apply (simp add: assms)
(*
[0]Adding rewrite rule "local.assms":
A ⟹ B ≡ True
*)
Yiming Xu said:
5:"(⋀ A B. (A → B) ∈ P ∧ A ∈ P ⟹ B ∈ P)"
actually the simplifier can decide to guess a good A and try to instantiate it
The "->" is not "-->", this is an infix I declared for modal formula implication, I do not think it is making any attempt instantiating this.
Mathias Fleury said:
I had forgotten that behavior [the simp_trace example above]
Is meta implication/equivalence left- or right-associative?
the not/box <--> diamond is also looping (rules 3 and 7)
If you control-click on the ==> you will find an infixr on that line. So right-associative
That means it is used as A ==> (B \equiv True) when being applied. Funny.
I remember reading something relevant in prog-prove telling me to watch out for the antecedent of the implication. I was confused.
For me it makes a lot of sense: you use the condition to simplify the goal, and only have to prove the preconditions
the other direction would make the solver much much weaker
So Isabelle is trying to make sure that every theorem added to simp when calling simp add is actually used? Otherwise the implications will just fail?
So, for Isabelle, such an implication can be used only if we can prove the antecedent.
HOL4 will just ignore a theorem added to simp if it cannot be useful for the goal. The user's responsibility is heavier here...
no, Isabelle also ignores it, but it applies rewrites only if there are no new assumptions
Otherwise no rule like "finite A ==> ..." would be possible
But for an implication, once it is normalized, Isabelle will try to back-chain, so as to discard as few theorems as possible?
Mathias Fleury said:
Otherwise no rule like "finite A ==> ..." would be possible
I do not understand this example, could you please elaborate a bit on this?
I am not sure we are talking about the same thing:
- rewriting only happens if all the assumptions can be proved. simp will not pick one and create a new goal that you cannot prove (like my finite A ==> ... example: the simplifier will never ask you to prove finite A. Either it can prove it or it will not simplify)
- when trying to prove an assumption, the simplifier will call itself recursively, potentially looping
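(A self-contained illustration of the first point; g here is just a placeholder function, not something from the thread:)
lemma
  assumes c: "⋀A. finite A ⟹ g A = (0 :: nat)"
  shows "g {1 :: nat, 2} = 0"
  (* simp discharges the precondition finite {1, 2} by itself before rewriting;
     it never leaves a "finite ..." subgoal behind for the user *)
  by (simp add: c)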
I see. Your first point explains in which cases an implication theorem fed to simp will be ignored.
So for my !A B. (A->B) in P /\ A in P ==> B in P, since there is a forall, when it is trying to prove B in P, it tries to reduce it to (A->B) in P and A in P, and it tries to prove A->B in P, and B in P, also using !A B. (A->B) in P /\ A in P ==> B in P?
that is my guess
If that is the case it seems unnatural: the LHS has more variables!
Mathias Fleury said:
the not/box <--> diamond is also looping
although that one is also looping
Yiming Xu said:
If that is the case it seems unnatural: the LHS has more variables!
simp can try B repeatedly
Mathias Fleury said:
simp can try B repeatedly
In which sense?
Or do I misunderstand your B? B is a modal formula (a schematic variable for a modal formula).
To prove B in P you have to prove that (B->B) in P /\ B in P. B->B is true (I guess at least), hence it is sufficient to prove B in P
Oh you mean it picks an existing term instead of generating a random free variable.
lemma
  assumes ‹⋀A B. (A⟶B) ∈ P ∧ A ∈ P ⟹ B ∈ P› ‹⋀A. A ⟶ A ∈ P›
  shows ‹B ∈ P›
  supply [[simp_trace,simp_trace_depth_limit=2]]
  apply (simp add: assms)
(*
[0]Adding rewrite rule "local.assms_1":
(?A1 ⟶ ?B1) ∈ P ∧ ?A1 ∈ P ⟹ ?B1 ∈ P ≡ True
[0]Adding rewrite rule "local.assms_2":
?A4 ⟹ ?A4 ∈ P ≡ True
[0]Adding rewrite rule "local.assms_1":
(?A1 ⟶ ?B1) ∈ P ∧ ?A1 ∈ P ⟹ ?B1 ∈ P ≡ True
[0]Adding rewrite rule "local.assms_2":
?A4 ⟹ ?A4 ∈ P ≡ True
[1]SIMPLIFIER INVOKED ON THE FOLLOWING TERM:
B ∈ P
[1]Applying instance of rewrite rule "local.assms_1":
(?A1 ⟶ ?B1) ∈ P ∧ ?A1 ∈ P ⟹ ?B1 ∈ P ≡ True
[1]Trying to rewrite:
(?A1 ⟶ B) ∈ P ∧ ?A1 ∈ P ⟹ B ∈ P ≡ True
[2]SIMPLIFIER INVOKED ON THE FOLLOWING TERM:
(?A1 ⟶ B) ∈ P ∧ ?A1 ∈ P
[2]Applying congruence rule:
?A1 ≡ ?P'2 ⟹ (?P'2 ⟹ B ≡ ?Q'2) ⟹ ?A1 ⟶ B ≡ ?P'2 ⟶ ?Q'2
simp_trace_depth_limit exceeded!
simp_trace_depth_limit exceeded!
[2]SUCCEEDED
?A1 ⟶ B ≡ ?A1 ⟶ B
[2]Applying instance of rewrite rule "local.assms_1":
(?A1 ⟶ ?B1) ∈ P ∧ ?A1 ∈ P ⟹ ?B1 ∈ P ≡ True
[2]Trying to rewrite:
(?A3 ⟶ ?A1 ⟶ B) ∈ P ∧ ?A3 ∈ P ⟹ (?A1 ⟶ B) ∈ P ≡ True
*)
I am lost here
[2]Applying congruence rule:
?A1 ≡ ?P'2 ⟹ (?P'2 ⟹ B ≡ ?Q'2) ⟹ ?A1 ⟶ B ≡ ?P'2 ⟶ ?Q'2
So it is generating unification variables P'2 and Q'2. It is not obvious to me that it is doing the back-chaining on the two conjuncts respectively.
It seems your example traps sledgehammer.
No, it almost traps it.
image.png
It solves that, but takes very long.
Is there any canonical way to solve this goal then...?
That is not provable as far as I can see
it is made to make rule application loop
I do not know what your a_is_b is, but I am very suspicious that it is actually incorrect
Maybe it is indeed incorrect...
image.png
Well it indicates that cvc4 found a proof that verit could not find
And yes, a_is_b is some nonsense made up for another test:
lemma a_is_b: "3 \<equiv> 5" sorry
completely irrelevant.
Means sledgehammer is messed up.
Mathias Fleury said:
That is not provable as far as I can see
I really hope it is not, as it implies False:
lemma H:
  assumes ‹⋀A B. (A⟶B) ∈ P ⟹ A ∈ P ⟹ B ∈ P› ‹⋀A. A ⟶ A ∈ P›
  shows ‹B ∈ P›
  sorry

lemma False
  using H[of ‹{True}› False, simplified]
  by simp
Take the set P to be the set of tautologies. Then if ((T --> F) in P /\ T in P), it would imply F in P. Indeed P does not contain everything.
Oh yes, I misread it as !!A. A in P.
lemma
  assumes ‹⋀A B. (A⟶B) ∈ P ∧ A ∈ P ⟹ B ∈ P› ‹⋀A. A ∈ P›
  shows ‹B ∈ P›
  supply [[simp_trace,simp_trace_depth_limit=2]]
  by (simp add: assms(2))
[0]Adding rewrite rule "local.assms_2":
?A1 ∈ P ≡ True
[0]Adding rewrite rule "local.assms_2":
?A1 ∈ P ≡ True
[0]Adding rewrite rule "local.assms_2":
?A1 ∈ P ≡ True
theorem (⋀A B. (A ⟶ B) ∈ ?P ∧ A ∈ ?P ⟹ B ∈ ?P) ⟹ (⋀A. A ∈ ?P) ⟹ ?B ∈ ?P
[1]SIMPLIFIER INVOKED ON THE FOLLOWING TERM:
B ∈ P
[1]Applying instance of rewrite rule "local.assms_2":
?A1 ∈ P ≡ True
[1]Rewriting:
B ∈ P ≡ True
As expected, good.
Returning to my original aim. The lesson I just learnt is to not rely only on the simplifier...
definition is_NML1_def :
  "is_NML1 s ≡
     (∀A B p q f form.
        ptaut form ⟶
        form ∈ s ∧
        (IMP (BOX (IMP p q)) (IMP (BOX p) (BOX q))) ∈ s ∧
        (A ∈ s ⟶ (subst f A) ∈ s) ∧
        (A ∈ s ⟶ (BOX A) ∈ s) ∧
        ((A → B) ∈ s ∧ A ∈ s ⟶ B ∈ s))"
lemma foo1:
  assumes
    1: "(⋀form. ptaut form ⟹ form ∈ P)" and
    s: "(⋀A f. A ∈ P ⟶ (subst f A) ∈ P)" and
    2: "⋀p q. (IMP (BOX (IMP p q)) (IMP (BOX p) (BOX q))) ∈ P" and
    4: "(⋀A. A ∈ P ⟹ BOX A ∈ P)" and
    5: "(⋀A B. (IMP A B) ∈ P ∧ A ∈ P ⟶ B ∈ P)"
  shows "is_NML1 P"
proof -
  show ?thesis
    by (metis "1" "2" "4" "5" is_NML1_def s)
qed
metis solves this, as expected. What a peaceful world.
The above is very slow; is there anything I can do to help metis do it faster?
In HOL4 it is possible to add additional theorems to make metis work it out quicker.
try Sledgehammer, or writing some steps out yourself
Sledgehammer does not give a quick proof either. As for "writing some steps out", I do not think there is any possible intermediate step in this particular case.
The assumptions and the definition look identical (modulo meta/object language).
you can try to feed more theorems into metis, like you would in HOL4
In this case, may I ask what to feed into metis?
you can also split the conjuncts after unfolding is_NML1 and do each one separately
Yiming Xu said:
In HOL4 it is possible to add additional theorems to make metis work it out quicker.
^ I mean whatever you would feed it in HOL4
They are basically intermediate theorems about concrete notions, we rarely feed metis proof rules.
Let me try if I can split the conjuncts.
Do you mean like this?
lemma foo1:
  assumes
    1: "(⋀form. ptaut form ⟹ form ∈ P)" and
    s: "(⋀A f. A ∈ P ⟶ (subst f A) ∈ P)" and
    2: "⋀p q. (IMP (BOX (IMP p q)) (IMP (BOX p) (BOX q))) ∈ P" and
    4: "(⋀A. A ∈ P ⟹ BOX A ∈ P)" and
    5: "(⋀A B. (IMP A B) ∈ P ∧ A ∈ P ⟶ B ∈ P)"
  shows "is_NML1 P"
proof -
  have "ptaut form ⟶ form ∈ P"
  proof
    fix "A" "B" "p" "q" "f" "form"
    ...
  qed
probably do unfolding is_NML1_def proof standard instead and see what that does
I tried:
lemma foo1:
  assumes
    1: "(⋀form. ptaut form ⟹ form ∈ P)" and
    s: "(⋀A f. A ∈ P ⟶ (subst f A) ∈ P)" and
    2: "⋀p q. (IMP (BOX (IMP p q)) (IMP (BOX p) (BOX q))) ∈ P" and
    4: "(⋀A. A ∈ P ⟹ BOX A ∈ P)" and
    5: "(⋀A B. (IMP A B) ∈ P ∧ A ∈ P ⟶ B ∈ P)"
  shows "is_NML1 P"
  apply (simp add: is_NML1_def)
proof
  have "ptaut form ⟶ form ∈ P" sorry
Let me try yours.
lemma foo1:
  assumes
    1: "(⋀form. ptaut form ⟹ form ∈ P)" and
    s: "(⋀A f. A ∈ P ⟶ (subst f A) ∈ P)" and
    2: "⋀p q. (IMP (BOX (IMP p q)) (IMP (BOX p) (BOX q))) ∈ P" and
    4: "(⋀A. A ∈ P ⟹ BOX A ∈ P)" and
    5: "(⋀A B. (IMP A B) ∈ P ∧ A ∈ P ⟶ B ∈ P)"
  shows "is_NML1 P"
  unfolding is_NML1_def proof standard
  have "ptaut form ⟶ form ∈ P" sorry
have ptaut form ⟶ form ∈ P
proof (state)
this:
ptaut form ⟶ form ∈ P
goal (1 subgoal):
1. ⋀A. ∀B p q f form.
ptaut form ⟶
form ∈ P ∧
BOX (p → q) → (BOX p → BOX q) ∈ P ∧
(A ∈ P ⟶ subst f A ∈ P) ∧ (A ∈ P ⟶ BOX A ∈ P) ∧ (A → B ∈ P ∧ A ∈ P ⟶ B ∈ P)
same.
what about proof safe instead?
proof (state)
goal (5 subgoals):
1. ⋀A B p q f form. ptaut form ⟹ form ∈ P
2. ⋀A B p q f form. BOX (p → q) → (BOX p → BOX q) ∈ P
3. ⋀A B p q f form. A ∈ P ⟹ subst f A ∈ P
4. ⋀A B p q f form. A ∈ P ⟹ BOX A ∈ P
5. ⋀A B p q f form. A → B ∈ P ⟹ A ∈ P ⟹ B ∈ P
That's better!
there you go, prove those one-by-one
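(For the record, a sketch of what the one-by-one version might look like in place of the proof of foo1 above; the individual closing steps are guesses:)
  unfolding is_NML1_def
proof safe
  fix form assume "ptaut form"
  then show "form ∈ P" by (rule 1)
next
  fix p q
  show "BOX (p → q) → (BOX p → BOX q) ∈ P" by (rule 2)
next
  fix A f assume "A ∈ P"
  then show "subst f A ∈ P" using s by blast
next
  fix A assume "A ∈ P"
  then show "BOX A ∈ P" by (rule 4)
next
  fix A B assume "A → B ∈ P" "A ∈ P"
  then show "B ∈ P" using 5 by blast
qed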
So from here:
goal (1 subgoal):
1. ⋀A. ∀B p q f form.
ptaut form ⟶
form ∈ P ∧
BOX (p → q) → (BOX p → BOX q) ∈ P ∧
(A ∈ P ⟶ subst f A ∈ P) ∧ (A ∈ P ⟶ BOX A ∈ P) ∧ (A → B ∈ P ∧ A ∈ P ⟶ B ∈ P)
How can I strip all the quantifiers and split the conjunctions?
(I may have seen it somewhere; I am sorry, I forgot. Too many things to grasp.)
I suppose maybe some repeated rule applications by hand. But in HOL4, it is rw[], which strips away all quantifiers and splits conjunctions.
apply safe, I guess?
lemma foo1:
  assumes
    1: "(⋀form. ptaut form ⟹ form ∈ P)" and
    s: "(⋀A f. A ∈ P ⟶ (subst f A) ∈ P)" and
    2: "⋀p q. (IMP (BOX (IMP p q)) (IMP (BOX p) (BOX q))) ∈ P" and
    4: "(⋀A. A ∈ P ⟹ BOX A ∈ P)" and
    5: "(⋀A B. (IMP A B) ∈ P ∧ A ∈ P ⟶ B ∈ P)"
  shows "is_NML1 P"
  unfolding is_NML1_def proof standard
  have "ptaut form ⟶ form ∈ P" sorry
After the proof standard, can I still write "apply safe" in some manner?
If I put it after proof standard it will be a syntax error.
Yong Kiam said:
apply safe, I guess?
Also I am curious: the "guess" suggests you do not need it. To me, these seem like very standard operations on the goal, but when I asked around recently, people just told me "we never use Isar like this". So I wonder how people usually deal with such a situation.
(I understand that for experienced users who have already forgotten how they picked this stuff up, it is hard to tell; feel free not to answer if it is too hard!)
proof (intro allI impI conjI)
So "safe" means "apply all safe rules whenever possible"?
Isar-ref p92-93:
Also available is clarsimp, a method that interleaves clarify and simp. Also there is safe, which like clarify performs obvious steps but even applies those that split goals.
Leaving it here just for reference later. I do not think I will look at the code.
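(Out of curiosity, the intro variant suggested above, used in place of proof safe in the attempt above; I have not checked whether the subgoals come out exactly as with safe; presumably intro keeps the ptaut form assumption in every conjunct:)
  unfolding is_NML1_def
proof (intro allI impI conjI)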
Mathias Fleury said:
To prove B in P you have to prove that (B->B) in P /\ B in P. B->B is true (I guess at least), hence it is sufficient to prove B in P.
For reference later: http://concrete-semantics.org/slides-isabelle.pdf page 67 explains conditional rewriting and I find it pretty clear.