When I define my lemmas and have universally quantified variables, I can use either ⋀ or ∀. The pair measure_induct and measure_induct_rule kind of exemplifies my confusion: which variant is good for what?
thm measure_induct
(⋀x. ∀y. ?f y < ?f x ⟶ ?P y ⟹ ?P x) ⟹ ?P ?a
thm measure_induct_rule
(⋀x. (⋀y. ?f y < ?f x ⟹ ?P y) ⟹ ?P x) ⟹ ?P ?a
In my experience, the difference does not really matter
Both versions are annoying to work with when using OF:
lemma H: ‹(⋀x. ∀y. f y < f x ⟶ P y ⟹ P x) ⟹ P a›
sorry
context
assumes F: ‹(⋀x. ∀y. f y < f x ⟶ P y ⟹ P x)›
begin
thm H[OF F]
(*
unexpected for me
(⋀x. ∀y. ?f y < ?f x ⟶ ?P1 y (?x1 y) ⟹ ∀y. ?f1 x y < ?f1 x (?x1 x) ⟶ ?P1 x y) ⟹ ?P1 ?a (?x1 ?a)
*)
I've found, just in my experience, that it can sometimes affect automation/the success of sledgehammer calls, and some tactics (which is why some big libraries have both). Generally, keep in mind that ⋀ is part of Isabelle's metalogic, whereas ∀ is not. You can't mix the two inside a single logical formula, so this sometimes determines which one you have to use. This is why, for example, measure_induct switches to ⟶ inside the ∀: the body of a HOL quantifier must itself be a HOL formula. My general principle when defining lemmas is to (1) avoid binding variables if it's not logically necessary, (2) use Isabelle's meta logic for universal quantification (⋀) where possible, and (3) only use ∀ in lemma definitions if you need to, e.g. if you are writing a lemma to help prove a statement that already uses ∀ or other HOL logical symbols.
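For instance, a minimal sketch (with trivial, made-up facts) of the three options, in that order of preference:
lemma free_var: ‹x ∈ A ⟹ x ∈ A ∪ B›   (* (1) no binder: x stays a free variable *)
  by blast
lemma meta_all: ‹⋀x. x ∈ A ⟹ x ∈ A ∪ B›   (* (2) meta-level ⋀ when a binder is wanted *)
  by blast
lemma hol_all: ‹∀x. x ∈ A ⟶ x ∈ A ∪ B›   (* (3) HOL ∀/⟶ only when the whole statement must be a single HOL formula *)
  by blast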
That was also my experience. Is there a better way to do it? I can't really get measure_induct to work like induction t does.
If I have shows "P x" I can just use proof (induction x), but proof (induction rule:measure_induct) fails (Illegal schematic variable(s)).
I think this is an issue with how you're using induction. Try something like induct x rule: measure_induct[where f = ...] (you could also just use of instead). Basically, I think you'll need to specify what you want f to be in order to use the induct tactic with that rule; otherwise it doesn't know what to do with f (which is a schematic variable in the rule). In comparison, induction x as a tactic tries to be clever and automatically deduce the induction rule you want to apply, so in more complex situations it often picks the wrong induction rule, which results in goals that aren't particularly useful.
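As a minimal sketch (hypothetical lemma, picking length as the measure just for illustration), that could look like:
lemma ‹P (xs :: 'a list)›
proof (induction xs rule: measure_induct[where f = length])
  case (1 x)
  (* the IH is the HOL formula ‹∀y. length y < length x ⟶ P y› *)
  then show ?case sorry
qed

(* measure_induct_rule declares its case name as "less"; its IH is a meta-level rule *)
lemma ‹P (xs :: 'a list)›
proof (induction xs rule: measure_induct_rule[where f = length])
  case (less x)
  then show ?case sorry
qed
(For lists this particular instance already exists in the library as length_induct.)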
Mathias Fleury said:
Both versions are annoying to work with when using OF:
lemma H: ‹(⋀x. ∀y. f y < f x ⟶ P y ⟹ P x) ⟹ P a›
sorry
context
assumes F: ‹(⋀x. ∀y. f y < f x ⟶ P y ⟹ P x)›
begin
thm H[OF F]
(*
unexpected for me
(⋀x. ∀y. ?f y < ?f x ⟶ ?P1 y (?x1 y) ⟹ ∀y. ?f1 x y < ?f1 x (?x1 x) ⟶ ?P1 x y) ⟹ ?P1 ?a (?x1 ?a)
*)
You could use uOF from the AFP:
theory Scratch
imports
Main
ML_Unification.Unification_Attributes
ML_Unification.ML_Unification_HOL_Setup
begin
lemma H: ‹(⋀x. ∀y. f y < f x ⟶ P y ⟹ P x) ⟹ P a›
sorry
context
assumes F: ‹(⋀x. ∀y. f y < f x ⟶ P y ⟹ P x)›
begin
thm H[uOF F] (*what you want*)
thm H[uOF F where mode=resolve] (*same behaviour as OF*)
end
end
Follow-up and context for my problem: I tried something like
datatype dat = Data
fun func where
‹func Data = 0 + 1›
lemma ‹P (d::dat)›
proof (induction d rule:measure_induct[where f = func])
case (1 x)
then show ?case sorry
qed
On which case (1 x) gave me the error: exception THM 1 raised (line 1242 of "thm.ML"): assume: variables
I have now realized that the error occurs because func :: dat ⇒ 'a (since 1 is overloaded), so the induction instantiation has a type variable. I have really had a lot of issues with unconstrained type variables. I wish the errors regarding them were clearer.
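In hindsight, the leftover type variable is visible if you just inspect the instantiated rule (sketch):
thm measure_induct[where f = func]
(* ?f is instantiated, but the result type of func is still a schematic type
   variable, which is what makes the case command choke *)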
Typing functions is always better (also better for refactoring later)
Much less error-prone
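For the example above, a minimal sketch of the typed variant (assuming a nat result type is what was intended):
datatype dat = Data

fun func :: ‹dat ⇒ nat› where
  ‹func Data = 0 + 1›

lemma ‹P (d :: dat)›
proof (induction d rule: measure_induct[where f = func])
  case (1 x)
  then show ?case sorry
qed
With the signature fixed, measure_induct[where f = func] is monomorphic and case (1 x) goes through.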
Yeah, I keep relearning that. But sometimes I have functions with type variables just on the result (stupid example, but perhaps nat ⇒ 'a list), and then the same kind of problems occur.