Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] antiquotations


view this post on Zulip Email Gateway (Aug 18 2022 at 11:16):

From: Jeremy Dawson <jeremy@rsise.anu.edu.au>
I'm having some trouble using antiquotations.

I find, for example that

val ls_sp = @{simproc let_simp} ;

works fine, but

val ncn_sp = @{simproc natless_cancel_numerals} ;

doesn't work, message is
*** ERROR "Undefined simplification procedure:
\"natless_cancel_numerals\""

(and many more that I've tried also don't work).

How should I use these antiquotations?

Another problem I've found with antiquotations is that commenting them
out doesn't work properly, thus

val ls_sp = @{simproc let_simp} ;
(*
val ncn_sp = @{simproc natless_cancel_numerals} ;
*)

produces the same error message about natless_cancel_numerals

Jeremy

view this post on Zulip Email Gateway (Aug 18 2022 at 11:16):

From: Alexander Krauss <krauss@in.tum.de>
Dear Jeremy,

I find, for example that

val ls_sp = @{simproc let_simp} ;

works fine, but

val ncn_sp = @{simproc natless_cancel_numerals} ;

doesn't work, message is
*** ERROR "Undefined simplification procedure:
\"natless_cancel_numerals\""

Hmmm... It seems that the difference is that "let_simp" was declared
using the "simproc_setup" command (in HOL.thy), and the declaration of
"natless_cancel_numerals" still happens using the (imperative) ML
operations. This might be the reason for the anomaly, though I am not sure.
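For comparison, here is a minimal sketch of an "official" declaration (the name my_simproc and its trivial body are purely illustrative, and the exact function type is my reading of the simproc_setup interface of that era):

```sml
(* hedged sketch: a simproc declared via simproc_setup is registered
   in the context, so the @{simproc} antiquotation can find it *)
simproc_setup my_simproc ("Suc n") = {*
  fn phi => fn ss => fn ct => NONE  (* trivial body: never rewrites *)
*}

ML {* val sp = @{simproc my_simproc} *}  (* now resolves *)
```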

Another problem I've found with antiquotations is that commenting them
out doesn't work properly, thus

val ls_sp = @{simproc let_simp} ;
(*
val ncn_sp = @{simproc natless_cancel_numerals} ;
*)

produces the same error message about natless_cancel_numerals

Yes, this is probably worth improving: at the stage where the
antiquotations are expanded, comments are not taken into account.

I am sure, Makarius can comment on these issues when he is back from
vacation.

Alex

view this post on Zulip Email Gateway (Aug 18 2022 at 11:16):

From: Amine Chaieb <chaieb@in.tum.de>
Alexander Krauss wrote:
The @{simproc} antiquotation just asks the context for a simproc
declared under that name. If the simproc has not been "registered", it
cannot be found. The simproc_setup command performs this registration
as a matter of course.

The distribution is not yet cleaned up in this respect, i.e. not all
simprocs are declared "officially".

Amine.

view this post on Zulip Email Gateway (Aug 18 2022 at 11:18):

From: Makarius <makarius@sketis.net>
Having studied the recent threads on the "effect of use_thy",
"antiquotations" etc. here is a summary in my own words, together with
some additional hints on how to avoid problems with old ways of doing
things, and moving on towards more powerful concepts.

* ML code should be used within a proper theory only, with explicit
"uses" declarations in the theory header for any "use" command in the
body. Inline "ML" commands work without further ado.

For historical reasons, the theory loader is a bit too liberal in
glossing over ill-specified dependencies, which occasionally results
in confusing situations, where the loader and the user disagree on the
state of a theory being fully loaded. This will improve eventually,
as we remove more legacy features, especially the possibility of
loading ML files ``after a theory'', which has caused endless trouble
ever since it was introduced many years ago. (Long ago this feature
was really needed, because it was impossible to prove theorems within
a theory body, which is hard to believe today.)

* The raw interactive ML toplevel is mostly obsolete. After many years
of the two toplevels side-by-side, we have finally passed the turning
point where everything is Isar, and ML is smoothly integrated into
that. Proper context-dependent antiquotations are just one benefit
from this principle, and typing additional ML {* *} should be a small
price to pay in current Proof General (adding a key binding to produce
that wrapping should be trivial anyway).

Further benefits of turning everything into Isar transactions will
become more relevant in the future. The point is that this uniform
arrangement allows the Isar toplevel to ``manage'' isolated
executions. For example, it enables scheduling transactions
concurrently (which is still limited to whole theory files in
Isabelle2007). Or just think of interrupting individual Isar commands
instead of the whole Isabelle process with all its concurrent
transactions being worked on at some point.

If you really need the bare-metal feeling of the raw ML toplevel (e.g.
for special debugging situations), here is an easy way to get it with
proper Isar context, but without antiquotations:

theory A imports B begin
exit (leaving Isar for now)
ML> ... (raw ML text here)
ML> Isar.loop(); (back to normal)

This also works within a proof or locale context etc.

* ML functions that refer to the old-style (dynamic) context, such as
"the_context" and "thm" are very hard to understand exactly and should
be replaced more and more by proper antiquotations (which are
statically scoped at compile-time).

This means that pointing your finger at some bits of ML code, you
already know its context (i.e. where your finger is within the
Isabelle/Isar text), and any terms, thms etc. being referenced by
antiquotation are fixed values wrt. that context. Moreover there
is some degree of static checking of formal references. For example,
@{const_name wf} or @{thm wf_induct} will only compile if these
entities are well-formed according to the compile-time context.
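
As a small illustration of this static checking (both names come from HOL, as in the examples above):

```sml
ML {*
  (* both references are checked against the compile-time theory:
     a misspelled name is rejected by the compiler, not at run-time *)
  val wf_name = @{const_name wf};   (* the fully qualified constant name *)
  val wf_ind  = @{thm wf_induct};   (* a fixed thm value for this context *)
*}
```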

Slightly more delicate issues arise when ML implementations are moved
between contexts along with logical entities, by re-interpreting
general ``declarations'' wrt. a given morphism. Some recent papers
and talks by Chaieb/Wenzel explain these advanced concepts further.

* The expander for antiquotations is ignorant of ML syntax. In
particular, Isar entities referenced inside ML comments are still
statically checked, which might occasionally appear more rigid than
expected.

At some later stage, the expansion mechanism might learn bits of ML
syntax (which is quite delicate), so this could be liberalized
eventually. A more interesting application of ML syntax interacting
with Isar antiquotations is to produce different code in ML pattern
positions (of fun/case/let) vs. plain ML expressions.

* The @{theory} antiquotation requires special care, because it produces
a static reference to the current theory value, which is still
developed further in the present theory body. If the ML environment
somehow keeps this value persistently, it is likely to cause ``stale
theory'' errors when accessed later. As explained in the rudimentary
"Isabelle/Isar Implementation Manual", theory certificates work like a
linear type in ML, but a persistent @{theory} value violates the
linearity restriction.

In the worst case, one needs to keep a @{theory_ref} instead, which is
a sliding theory reference that never becomes stale, but may evolve
monotonically without further notice (just like the implicit theory
reference present within any @{context}, @{thm}, or @{cterm} value).
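
A rough sketch of this idiom, assuming the Theory.self_ref / Theory.deref operations of the interface of that time:

```sml
ML {*
  (* keep a sliding reference rather than a raw theory value *)
  val thy_ref = Theory.self_ref @{theory};    (* never becomes stale *)
  fun current_thy () = Theory.deref thy_ref;  (* may have moved on *)
*}
```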

Luckily, raw theory values are rarely required these days, since
Proof.context has taken over its role to represent ``the context''
most of the time. (In new-style local theory specifications, such as
definitions within a locale, the context is disguised as local_theory,
which is of the same ML type Proof.context, but holds additional
target information to tell how results are processed).

Makarius

view this post on Zulip Email Gateway (Aug 18 2022 at 11:20):

From: Makarius <makarius@sketis.net>
On Fri, 8 Feb 2008, Jeremy Dawson wrote:

Mostly my problems are something to do with theories. I don't know if
the problems I've faced today are because of theories or not, but I
think it is quite likely. But I waste an enormous amount of time trying
to cope with things that don't work as expected.

The general advice is to let Isar take over control as much as possible,
while avoiding raw ML accesses (in particular operations with unclear
theory/proof context).

In particular:

- Use ProofGeneral by default.
- Always work within a theory body, if a theory is required. Do not
rely on accidental theory states from previous use_thy etc.

- Use ML antiquotations as much as possible, because here the context is
checked statically at compile time.

The latest example is contained in the attached theory file. For some
reason the ML expression bool_axioms RL [fun_cong] produces an empty
list of results when it appears in the theory file being loaded.

After the theory file is modified in the way shown (ie not using
bool_axioms RL [fun_cong]) it loads OK. Then, after that, the
expression bool_axioms RL [fun_cong] returns exactly the results
expected. Why does the same ML expression behave differently at
different points ?

This is a good example why proper contexts matter. The bool_axioms list
is bound at the ML toplevel in a situation before certain type class
instantiations in your theory. For the resolution to work out, these type
arities need to be present in the theory. Your ML code fails to produce
any result here, because it references bool_axioms in the raw ML
environment, without going through the Isar layer, which would have
transferred the result as required.

Using a proper ML antiquotation makes things work as expected:

@{thms bool_axioms} RL [fun_cong]

It is important to keep in mind that the ML environment is not the real
thing, just a low-level view on the implementation platform.

Makarius

view this post on Zulip Email Gateway (Aug 18 2022 at 11:20):

From: Jeremy Dawson <jeremy@rsise.anu.edu.au>
Makarius wrote:

On Fri, 8 Feb 2008, Jeremy Dawson wrote:

Mostly my problems are something to do with theories. I don't know if
the problems I've faced today are because of theories or not, but I
think it is quite likely. But I waste an enormous amount of time trying
to cope with things that don't work as expected.

The general advice is to let Isar take over control as much as possible,
while avoiding raw ML accesses (in particular operations with unclear
theory/proof context).

In particular:

- Use ProofGeneral by default.
- Always work within a theory body, if a theory is required. Do not
rely on accidental theory states from previous use_thy etc.
- Use ML antiquotations as much as possible, because here the context is
checked statically at compile time.

The latest example is contained in the attached theory file. For some
reason the ML expression bool_axioms RL [fun_cong] produces an empty
list of results when it appears in the theory file being loaded.

After the theory file is modified in the way shown (ie not using
bool_axioms RL [fun_cong]) it loads OK. Then, after that, the
expression bool_axioms RL [fun_cong] returns exactly the results
expected. Why does the same ML expression behave differently at
different points ?

This is a good example why proper contexts matter. The bool_axioms list
is bound at the ML toplevel in a situation before certain type class
instantiations in your theory. For the resolution to work out, these type
arities need to be present in the theory.
Makarius,

Thanks for this - I think I understand this point.

Your ML code fails to produce
any result here, because it references bool_axioms in the raw ML
environment, without going through the Isar layer, which would have
transferred the result as required.

However I don't understand this - as I said in my earlier email, exactly
the same ML code _does_ produce the desired result later on. So why
does the _same_ ML expression behave _differently_ at different points
? Both the points where the ML expression bool_axioms RL [fun_cong] is
used occur _after_ the point where fun is shown to be in the
axclass boolean_class (I understand that this is what the problem was).
Using a proper ML antiquotation makes things work as expected:

@{thms bool_axioms} RL [fun_cong]

I gather your point is that

@{thms bool_axioms}

transfers bool_axioms to the current theory. But this is exactly what I
don't want!
My complaint is that the _same_ ML expression bool_axioms RL [fun_cong]
behaves differently between (a) where I put it in the theory file, where
it doesn't work, and (b) when I use it later, in trying to debug the
problem. I want the same expression to have the same meaning when I am
trying to debug it as the meaning it has when it fails to work!

Incidentally, some years ago I discovered that thms "bool_axioms"
retrieves the theorems
bool_axioms, after transferring them to the current theory. I asked how
to retrieve named theorem(s) without transferring them to the current
theory - no one then seemed to know. Does anyone know how to do this?

Further questions from my previous email, whose answers would, I hope,
help me to understand the complexities of theories in Isar:

theory_of_thm (hd bool_axioms); produces a different result inside the
Isar file
from the result it gives afterwards. Why is this?

If I understand correctly the_context () returns the current theory as
at the point where it is executed. Is that correct?

The difficulty in using Isar seems to be that the current theory keeps
on changing, much more than with typical .ML proof files. Is this correct?

So far as I understood the previous emails,
ML {* XXX the_context () *} and ML {* XXX @{theory} *} would only return
a different result if the code represented by XXX actually changed the
current theory. Is that correct?

A further question from a previous email (the answer given then was to
use a different function) : bind_thm(s) seems to sometimes work (in the
sense of storing a theorem in the database so that it can be effectively
retrieved later). Why is this? What is the difference between bind_thm
and the function I was told to use (ie, PureThy.add_thmss)?

Finally, is all this stuff documented anywhere?

It is important to keep in mind that the ML environment is not the real
thing, just a low-level view on the implementation platform.

I'm not sure what this means, but it is the language in which enormous
amounts of Isabelle proofs are and have been written - and it is still
the only language for doing lots of things (including some fairly
mundane things, as my theory file showed) in Isabelle.

Jeremy

view this post on Zulip Email Gateway (Aug 18 2022 at 11:20):

From: Makarius <makarius@sketis.net>
Here are some references that might help:

These things did not appear overnight out of the blue, but emerged from
principles that have been present in Isabelle for many years already.

The following talk provides an overview of the general state of affairs of
integrating everything into some kind of ``logical operating system'':

When using the system you do not have to understand all the details, but
merely sit back comfortably and let the framework do most of the job.
Complications mostly arise from trying to challenge things by interfering
with the low-level implementation layer.

Makarius

view this post on Zulip Email Gateway (Aug 18 2022 at 11:20):

From: Alexander Krauss <krauss@in.tum.de>
Dear Jeremy,

In addition to the references, let me try to comment more specifically
on some of your problems / complaints:

My complaint is that the _same_ ML expression bool_axioms RL
[fun_cong] behaves differently between (a) where I put it in the
theory file, where it doesn't work, and (b) when I use it later, in
trying to debug the problem.

If your radio doesn't work in the cellar, but does work in the living
room, then you cannot debug the problem in the living room. You have to
go to the cellar.

I want the same expression to have the same meaning when I am trying
to debug it as the meaning it has when it fails to work!

You should debug it in the same context: Open the offending theory in
PG, step to the point until it breaks, and debug it there. Then you
should be able to see what the problem is.

If I understand correctly the_context () returns the current theory
as at the point where it is executed. Is that correct?
[...]
So far as I understood the previous emails, ML {* XXX the_context ()
*} and ML {* XXX @{theory} *} would only return a different result if
the code represented by XXX actually changed the current theory. Is
that correct?

No. Strictly speaking, the behaviour of the_context () is completely
unspecified: Since theories can be loaded in parallel, you could get
almost anything, including a theory that is currently developing in a
different thread. The "current context" is not defined in terms of the
point in time when something is executed. It must be given statically,
by the location in the theory source. The @{theory} antiquotation does that.

Let me repeat: If you use "the_context ()" in production code, it will
break with obscure errors sooner or later.
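
The contrast can be sketched like this (inside a single ML block the two values may happen to coincide, but only the second has a specified meaning):

```sml
ML {*
  val thy1 = the_context ();  (* dynamic: unspecified under parallel loading *)
  val thy2 = @{theory};       (* static: fixed by this source position *)
*}
```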

A further question from a previous email (the answer given then was
to use a different function) : bind_thm(s) seems to sometimes work
(in the sense of storing a theorem in the database so that it can be
effectively retrieved later). Why is this? What is the difference
between bind_thm and the function I was told to use (ie,
PureThy.add_thmss)?

You can read the answer from its type:

bind_thms : string * Thm.thm list -> unit
PureThy.add_thmss : ((bstring * Thm.thm list) * Thm.attribute list) list
-> Context.theory -> Thm.thm list list * Context.theory

Since bind_thms is impure, it can only update some global reference
hanging around. This is incompatible with parallelism. At the moment it
may work sometimes, but it is guaranteed to break sooner or later.

These changes are necessary, since the ability to process theories in
parallel decides whether you can use the 15 extra cores in your next CPU :-)

You can usually tell these "dont-use" functions from the type: If you
feel they should depend on or change the theory or context, and this is
not explicit in the type, then this is calling for trouble.

Since my comments above are a little non-constructive ("don't do this"),
I had another look at the theory you sent recently. You just need to
change a few things to avoid all the trouble.

  1. Instead of the (evil, imperative) bind_thms, use a proper
    declaration. On the theory level, the "setup" command will do what you
    need. It takes an ML snippet of type "theory -> theory". In your example:

setup {*
  let
    val boolexp = @{thm "boolexp"};

    val bool_axioms = Seq.list_of (EVERY'
      [REPEAT o dresolve_tac [conjunct1, conjunct2], REPEAT o etac allE,
       atac] 1
      (boolexp RS revcut_rl));
  in
    snd o PureThy.add_thmss [(("bool_axioms", bool_axioms), [])]
  end
*}

will do the right thing. If you find the syntax of PureThy.add_thmss too
complicated, just define yourself a shortcut:

ML {*
  fun save_thms (name, thms) =
    snd o PureThy.add_thmss [((name, thms), [])]
*}

  2. Use antiquotations to refer to things from the context. Don't use the
    imperative "thm" or "thms". They have no context, so they are bad :-).

I recognise that it is hard to "unlearn" these things when they have
worked for you all the years. But it is really necessary to change them
(and thus break backwards compatibility) in order to bring Isabelle
forward. But in the end, it should not be so difficult to solve your
concrete problems.

I hope this helps... Please don't hesitate to ask if you run into more
problems, and include theory files whenever you can.

All the best,
Alex

view this post on Zulip Email Gateway (Aug 18 2022 at 11:20):

From: Stefan Berghofer <berghofe@in.tum.de>
Jeremy Dawson wrote:
Dear Jeremy,

the reason for this effect is that each theorem contains an ML reference
pointing to the theory it has been proved in (or been looked up in). The
theorems contained in bool_axioms refer to some intermediate theory that
does not yet contain the information that fun is in boolean_class. However,
as soon as your theory has been completely processed, all theorems that
have been proved in the preceding intermediate theories are promoted to
theorems that are also valid in the final theory. This is done by
(imperatively!) updating the theory references contained in the theorems,
which explains why the very same ML expression (denoting a data structure
containing a reference) suddenly behaves in a different way.

This usage of references might indeed seem a bit strange at first sight,
but it seems to be the only way to achieve that definitions and proofs
may be intermixed within the same theory, which was impossible with the
implementation of theories we had in Isabelle more than a decade ago.

An effect similar to the one that you have noticed can be produced by
executing the ML command

Type.of_sort (Sign.tsig_of (theory_of_thm (hd bool_axioms)))
(Type ("fun", [TFree ("'a", HOLogic.typeS), TFree ("'b", ["BI.boolean_class"])]),
["BI.boolean_class"])

for checking whether a function type is a member of boolean_class. This
expression returns "false" when executed inside your theory, but "true"
when executed outside your theory, because the theory returned by
"theory_of_thm ..." is changed when closing the theory.

Greetings,
Stefan

view this post on Zulip Email Gateway (Aug 18 2022 at 11:22):

From: Lucas Dixon <ldixon@inf.ed.ac.uk>

Alexander Krauss wrote:
I'm curious about this: is it really the case that references are
inherently incompatible with parallelism? Is there some reason why it
cannot update a thread-local reference?

I would expect that you can cut the Isar/ML cake both ways:
anti-quotations in Isar or quotations in ML. I feel there should be a
confluence proof there somewhere :)

So, is there some reason this is not possible? In particular, Isar is
executed in ML, so it seems rather strange to me that you cannot
replicate its effect in ML, for example by pasting the function calls
used by Isar into the ML.

cheers,
lucas

view this post on Zulip Email Gateway (Aug 18 2022 at 11:22):

From: Jeremy Dawson <jeremy@rsise.anu.edu.au>
Lucas Dixon wrote:
Lucas / Alexander,

Well, yes, one would think so.

Perusing the source code, I see a function called
eval_antiquotes (to be exact, I used !ML_Context.eval_antiquotes_fn)

which gives results like this:

val eval_antiquotes = !ML_Context.eval_antiquotes_fn ;
eval_antiquotes "val x = y ; val c = @{context} ; \
\ val a = b ; val cs = @{claset} ; val t = @{theory}" ;
val eval_antiquotes = fn : string -> string * string

val it =
  ("structure Isabelle =\nstruct\nval context = ML_Context.the_local_context ();\nval claset = Classical.local_claset_of (ML_Context.the_local_context ());\nval thy = ML_Context.the_context ();\nend;",
   "val x = y ; val c = Isabelle.context ; val a = b ; val cs = Isabelle.claset ; val t = Isabelle.thy")
  : string * string

Is this in fact used when the system encounters code such as

ML {* val x = y ; val c = @{context} ; \
\ val a = b ; val cs = @{claset} ; val t = @{theory} *} ?

Or to be exact, is this translated code executed when the system
encounters the
ML {* ... *} ? If not, what exactly does happen ?

If so, how does it work that in this case calls such as
ML_Context.the_context () are safe to use ? (in general when are they
safe and when not ?)

In terms of threads in general:
When you're sitting at the terminal typing input into Isar, it gets
executed in the context of a particular theory, doesn't it ? Obviously
it manages to avoid any ambiguity (i.e., it doesn't execute it in the
context of some other theory which is currently developing in a
different thread). How does it identify which is the relevant theory ?

Regards,

Jeremy

view this post on Zulip Email Gateway (Aug 18 2022 at 11:22):

From: Makarius <makarius@sketis.net>
On Wed, 13 Feb 2008, Lucas Dixon wrote:

Alexander Krauss wrote:

Since bind_thms is impure, it can only update some global reference
hanging around. This is incompatible with parallelism. At the moment it
may work sometimes, but it is guaranteed to break sooner or later.

I'm curious about this: is it really the case that references are
inherently incompatible with parallelism? Is there some reason why it
cannot update a thread-local reference?

In principle you can do this, but this requires extra bookkeeping of some
kind of ``thread data environment''. In post-Isabelle2007 we already have
this for the "print_mode", which is now thread-safe without giving up the
slightly odd "setmp" idiom. Such contraptions should be the exception, not
the rule.

Even Java programmers have started to envy powerful concepts of purely
functional programming, which partly explains the current excitement about
Scala.

I would expect that you can cut the Isar/ML cake both ways:
anti-quotations in Isar or quotations in ML. I feel there should be a
confluence proof there somewhere :)

In principle yes. This is a bit like the Holodeck in Star Trek TNG, which
is sufficiently powerful to simulate another Enterprise with another fully
functional Holo Deck inside -- potentially ad infinitum. In the end it is
a matter of practical concern which level to take as primary one, which as
secondary, and stop the nesting at some point.

So, is there some reason this is not possible? In particular, Isar is
executed in ML, so it seems rather strange to me that you cannot
replicate its effect in ML, for example by pasting the function calls
used by Isar into the ML.

Since the Isar toplevel is more powerful than the raw ML one, the choice
of preference is clear to me. Also you don't want your ``operating
system'' code (i.e. the Isar infrastructure) pasted into your application
code. Moreover, we have recently started to repair some defects in
versions of SML/ML, by passing through our management of ML sources within
Isar.

Makarius

view this post on Zulip Email Gateway (Aug 18 2022 at 11:22):

From: Makarius <makarius@sketis.net>
On Thu, 14 Feb 2008, Jeremy Dawson wrote:

Perusing the source code, I see a function called
eval_antiquotes (to be exact, I used !ML_Context.eval_antiquotes_fn)

which gives results like this:

val eval_antiquotes = !ML_Context.eval_antiquotes_fn ;
eval_antiquotes "val x = y ; val c = @{context} ; \
\ val a = b ; val cs = @{claset} ; val t = @{theory}" ;
val eval_antiquotes = fn : string -> string * string

val it =
  ("structure Isabelle =\nstruct\nval context = ML_Context.the_local_context ();\nval claset = Classical.local_claset_of (ML_Context.the_local_context ());\nval thy = ML_Context.the_context ();\nend;",
   "val x = y ; val c = Isabelle.context ; val a = b ; val cs = Isabelle.claset ; val t = Isabelle.thy")
  : string * string

Is this in fact used when the system encounters code such as

ML {* val x = y ; val c = @{context} ; \
\ val a = b ; val cs = @{claset} ; val t = @{theory} *} ?

Or to be exact, is this translated code executed when the system encounters
the
ML {* ... *} ?

Yes, here you see the compile time Isar context closure wrapped around ML
text issued to the underlying platform. Our previously mentioned paper
about SML antiquotations explains this in a more abstract fashion.

If so, how does it work that in this case calls such as
ML_Context.the_context () are safe to use ? (in general when are they
safe and when not ?)

Here it is safe, because it runs in a critical section -- just for compile
time. Later at run-time, where the actual work happens, everything is
purely functional and enables proper parallelism without further worries.

In terms of threads in general: When you're sitting at the terminal
typing input into Isar, it gets executed in the context of a particular
theory, doesn't it ? Obviously it manages to avoid any ambiguity (i.e.,
it doesn't execute it in the context of some other theory which is
currently developing in a different thread). How does it identify which
is the relevant theory ?

It is determined from the context :-)

Pointing your finger at a particular position in Isar source text, you
determine a certain context once and for all. Any embedded ML code at
that point will keep this as a first-class value (of type Proof.context)
for later use at run-time; provided proper antiquotations are used. So
this is just the well-known static closure principle applied to
Isabelle/Isar + embedded ML.

In contrast, low-level access to global references buys you dynamic
scoping in the best case, and erratic non-determinism in the worst case.
This is why thm"foo", thms"foo", the_context() are all bad.

Makarius

view this post on Zulip Email Gateway (Aug 18 2022 at 11:22):

From: Makarius <makarius@sketis.net>
^^^^^^

This should read SML/NJ (we've managed to replace 5 integer types by just
one true version).

Makarius

view this post on Zulip Email Gateway (Aug 18 2022 at 11:23):

From: Jeremy Dawson <jeremy@rsise.anu.edu.au>
Makarius wrote:

If so, how does it work that in this case calls such as
ML_Context.the_context () are safe to use ? (in general when are they
safe and when not ?)

Here it is safe, because it runs in a critical section -- just for compile
time. Later at run-time, where the actual work happens, everything is
purely functional and enables proper parallelism without further worries.

Makarius,

How do these critical sections work? Is it to do with the function
CRITICAL which appears in the source code?

In terms of threads in general: When you're sitting at the terminal
typing input into Isar, it gets executed in the context of a particular
theory, doesn't it ? Obviously it manages to avoid any ambiguity (i.e.,
it doesn't execute it in the context of some other theory which is
currently developing in a different thread). How does it identify which
is the relevant theory ?

It is determined from the context :-)

The question relates to what the system does. That is to say, when I am
sitting at the terminal and type in ML {* val a = @{context} *} what is
the ML code which the system evaluates ? I want to know what ML code I
would put there which would achieve exactly the same effect.
Pointing your finger at a particular position in Isar source text, you
determine a certain context once and for all. Any embedded ML code at
that point will keep this as a first-class value (of type Proof.context)
for later use at run-time; provided proper antiquotations are used. So
this is just the well-known static closure principle applied to
Isabelle/Isar + embedded ML.

In contrast, low-level access to global references buys you dynamic
scoping in the best case, and erratic non-determinism in the worst case.
This is why thm"foo", thms"foo", the_context() are all bad.

So what functions, _in ML_, are good, for these purposes?

Jeremy

Makarius

view this post on Zulip Email Gateway (Aug 18 2022 at 11:23):

From: Jeremy Dawson <jeremy@rsise.anu.edu.au>
Makarius wrote:

So, is there some reason this is not possible? In particular, Isar is
executed in ML, so it seems rather strange to me that you cannot
replicate its effect in ML, for example by pasting the function calls
used by Isar into the ML.

Since the Isar toplevel is more powerful than the raw ML one,
I don't get this. Is it in fact true that Isabelle (including Isar) is
(entirely) written in ML?
If so, how can you do things in Isar that can't be done in ML?

Jeremy

Makarius

view this post on Zulip Email Gateway (Aug 18 2022 at 11:23):

From: Makarius <makarius@sketis.net>
On Fri, 15 Feb 2008, Jeremy Dawson wrote:

How do these critical sections work? Is it to do with the function
CRITICAL which appears in the source code?

See appendix A.2 of the fragmentary Isabelle/Isar implementation manual.
It explains the programming model for multithreaded Isabelle; the short
and simple message is: do it purely functionally and in accord with the
official concepts of context (as a plain value).

In rare situations, user code may also have to synchronize things
explicitly, but well-behaved programs should only do this for tiny amounts
of time, i.e. refrain from any real computations / proof search inside a
critical section.

The question relates to what the system does. That is to say, when I am
sitting at the terminal and type in ML {* val a = @{context} *} what is
the ML code which the system evaluates ? I want to know what ML code I
would put there which would achieve exactly the same effect.

You can look at the Isar implementation to find out the details, just for
curiosity. It is of course out of question to attempt to interfere with
the existing infrastructure in user ML code.

In contrast, low-level access to global references buys you dynamic
scoping in the best case, and erratic non-determinism in the worst
case. This is why thm"foo", thms"foo", the_context() are all bad.

So what functions, _in ML_, are good, for these purposes?

Basically none. You can refer to the compile time context using
@{context}, or get a runtime context as explicit functional argument from
somewhere else. For example, a proof method is essentially a function
Proof.context -> args -> tactic, and your implementation will be something
like (fn ctxt => fn args => ...), so you get the context ``by induction
hypothesis'' over the structure of your program.
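For illustration, a minimal sketch of such a method (my_method is a made-up name, and the tactic body is arbitrary):

ML {*
  (* essentially Proof.context -> tactic, wrapped as a method;
     the context arrives as a functional argument, never from a global *)
  fun my_method ctxt =
    SIMPLE_METHOD (asm_full_simp_tac (local_simpset_of ctxt) 1);
*}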

Concerning theorem lookup, ProofContext.get_thm: Proof.context -> thmref
-> thm enables to retrieve named facts from a given context, but it is
rarely useful within abstract code, because the name spacing rules behind
this are quite delicate. Better refer to proper theorem values
directly, either via @{thm ...} at compile time, or by using well-typed
interfaces to derived context data, such as DatatypePackage.the_datatype
or similar operations of whatever tool you build yourself.
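For illustration, the static style looks like this (refl serves only as an example):

ML {*
  (* resolved once, at compile time, against the enclosing context *)
  val th = @{thm refl};
  (* dynamic lookup via ProofContext.get_thm ctxt ... remains possible,
     but is fragile because of the name-space rules mentioned above *)
*}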

The general idea of strongly-typed context data is explained in section
1.1 in the Isabelle/Isar implementation manual.

Makarius

view this post on Zulip Email Gateway (Aug 18 2022 at 11:24):

From: Jeremy Dawson <jeremy@rsise.anu.edu.au>
Makarius et al,

At present I have a function modify_ss (of type :
MetaSimplifier.simpset -> MetaSimplifier.simpset).

It is used in a further function

fun modified_ss () = modify_ss (simpset ()) ;

Now I observe that

val simpset = simpset_of o ML_Context.the_context;

and given the previous remarks about using the_context () I assume that
you wouldn't approve of using simpset (), either. Is that correct?

But if I understand the previous emails correctly, to write
fun modified_ss () = modify_ss (simpset_of @{theory}) ;
would have the effect that the simpset used was invariably one
particular simpset, namely that relevant when the text "@{theory}" was
interpreted.

But I want modified_ss () to take the simpset relevant at the time it is
called, and modify it.

How do I define modified_ss in the approved way - using antiquotations
as and where appropriate ?

Jeremy

view this post on Zulip Email Gateway (Aug 18 2022 at 11:24):

From: Makarius <makarius@sketis.net>
On Thu, 21 Feb 2008, Jeremy Dawson wrote:

At present I have a function modify_ss (of type : MetaSimplifier.simpset ->
MetaSimplifier.simpset).

It is used in a further function

fun modified_ss () = modify_ss (simpset ()) ;

Now I observe that

val simpset = simpset_of o ML_Context.the_context;

and given the previous remarks about using the_context () I assume that you
wouldn't approve of using simpset (), either. Is that correct?

Yes, the simpset() form is indeed a legacy feature. If it occurs in
one-shot scripts, it can usually be replaced by the static version
@{simpset}; the same for @{claset} or @{clasimpset}.

Your application sounds more like requiring a run-time context, though. So
assuming you have a proper value (ctxt: Proof.context) already, you may
get the simpset component like this:

local_simpset_of ctxt

and then add your stuff to the result.
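Concretely, modified_ss would then take the run-time context explicitly (modify_ss as in the original mail):

fun modified_ss ctxt = modify_ss (local_simpset_of ctxt);

and each caller passes in whatever Proof.context it was handed.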

How do I define modified_ss in the approved way - using antiquotations
as and where appropriate ?

Most likely, no antiquotations will get involved here, because you are
working only with a runtime context. You still need to get hold of that,
i.e. ``by induction over the structure of your code'' as explained
earlier.

Makarius

view this post on Zulip Email Gateway (Aug 18 2022 at 11:25):

From: Alexander Krauss <krauss@in.tum.de>
Dear Jeremy,

That depends on the application.

You use "setup", when you want to apply the function to the current
theory at that point in the source. This can be used e.g. to

- install new commands, methods and attributes

- do something else (definitions, proofs or anything) at a
specific point.

On the other hand, if you are adding something like a method (=Isar
version of a tactic), then you are usually given a theory (or a
Proof.context) by the framework.
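For illustration, the first usage might look like this in the Isar syntax of that era (My_Thy, my_file.ML and addsm are placeholder names):

theory My_Thy
imports Main
uses "my_file.ML"
begin

setup {* addsm *}   (* applies addsm : theory -> theory at this point *)

end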

Can you describe your application (i.e. what does "addsm" do and how
should it be used on a theory), then I can be more concrete.

Alex

view this post on Zulip Email Gateway (Aug 18 2022 at 11:25):

From: Jeremy Dawson <jeremy@rsise.anu.edu.au>
Alexander Krauss wrote:
Dear Alexander,

thanks - OK, now I've got a function of a type which hopefully doesn't
use any impure features:

val addsm = fn : Context.theory -> Context.theory

Now, what do I do with it?

(Your email showed an example using setup {* ... : theory -> theory *},
but I have this function, and want to use it, inside an ML file which
is to be "used" by a theory)

regards,

Jeremy Dawson

view this post on Zulip Email Gateway (Aug 18 2022 at 11:28):

From: Jeremy Dawson <jeremy@rsise.anu.edu.au>
Makarius wrote:

Having studied the recent threads on the "effect of use_thy",
"antiquotations" etc. here is a summary in my own words, together with
some additional hints on how to avoid problems with old ways of doing
things, and moving on towards more powerful concepts.

* The raw interactive ML toplevel is mostly obsolete. After many years
of the two toplevels side-by-side, we have finally passed the turning
point where everything is Isar, and ML is smoothly integrated into
that. Proper context-dependent antiquotations are just one benefit
from this principle, and typing additional ML {* *} should be a small
price to pay in current Proof General (adding a key binding to produce
that wrapping should be trivial anyway).

Unfortunately the integration isn't "smooth". In fact I'm working on a
project now where it is demanded that everything is in Isar, so I have
to try to work in this way. And it is incredibly frustrating, because
things seem to sometimes work, and sometimes not, with no apparent
reason, seemingly randomly.

Mostly my problems are something to do with theories. I don't know if
the problems I've faced today are because of theories or not, but I
think it is quite likely. But I waste an enormous amount of time trying
to cope with things that don't work as expected. What is a simple job
when I'm doing proofs using "the raw interactive ML toplevel" becomes
slow and frustrating when I try to use Isar as much as possible, while
using ML for the things not available in Isar.

The latest example is contained in the attached theory file.
For some reason the ML expression bool_axioms RL [fun_cong] produces an
empty list of results when it appears in the theory file being loaded.

After the theory file is modified in the way shown (ie not using
bool_axioms RL [fun_cong]) it loads OK. Then, after that, the
expression bool_axioms RL [fun_cong]
returns exactly the results expected. Why does the same ML expression
behave differently at different points ?

Why is this? I suspect it is to do with theories, and noted that
theory_of_thm (hd bool_axioms); produces a different result inside the
Isar file
from the result it gives afterwards. Why is this?

I waste an enormous amount of time trying to cope with things that don't
work as expected. When a proof works fine at the terminal, but doesn't
work when you put it into a theory file and load that theory file,
debugging the situation is very difficult. (This is quite apart from the
fact that debugging Isar is naturally more difficult than debugging ML,
because when you load an ML proof file that fails, the theorems proved
prior to the failure are there for you to use).

Something that is a simple job when I'm doing proofs solely in ML
becomes slow and frustrating when I try to use Isar as much as possible
(and ML for the things not available in Isar.) Often it seems to be
something to do with theories. And although I've been using Isabelle
for over 10 years now, it seems that these difficulties have only arisen
in the last few years, and mostly to do with trying to use Isar. In
short, using Isar is so much more difficult than using pure ML for
proofs. (Of course, learning two languages instead of one is no doubt
part of the reason also).

* ML functions that refer to the old-style (dynamic) context, such as
"the_context" and "thm" are very hard to understand exactly and should
be replaced more and more by proper antiquotations (which are
statically scoped at compile-time).

If I understand correctly the_context () returns the current theory as
at the point where it is executed. Is that correct? If so, it doesn't
seem so hard to understand. But - again I may be wrong here - the
difficulty in using Isar seems to be that the current theory keeps on
changing, much more than with typical .ML proof files.
So far as I understood the previous emails,
ML {* XXX (the_context ()) *} and ML {* XXX @{theory} *} would only return
a different result if the code represented by XXX actually changed the
current theory. Is that correct?

Anyway, attached is the theory file that has taken so much of my time.
If the experts could tell me why bffcs0 is different from bffcs1-3, I'd
be grateful.

Regards,

Jeremy Dawson

view this post on Zulip Email Gateway (Aug 18 2022 at 11:29):

From: Jeremy Dawson <jeremy@rsise.anu.edu.au>
Jeremy Dawson wrote:
After spending so long composing an email, I forgot to attach the file,
here it is.

Thanks,

Jeremy
BI.thy

view this post on Zulip Email Gateway (Aug 18 2022 at 11:31):

From: Jeremy Dawson <jeremy@rsise.anu.edu.au>
Makarius wrote:

In relation to advocating the use of antiquotations, you said:
(this was in relation to advocating the use of antiquotations)

But now I find that using @{theory} instead of the_context () produces
an error

Exception-
ERROR
"Stale theory encountered:\n{ProtoPure, Pur (etc)

When I change this back to the_context () the error goes away.

What is a stale theory, and why does @{theory} produce a stale theory?

Jeremy Dawson

view this post on Zulip Email Gateway (Aug 18 2022 at 11:31):

From: Makarius <makarius@sketis.net>
As explained in the Isar Implementation Manual, a theory acts like a linear
type. The "Stale theory" error indicates that it has been used in a
non-linear fashion.

Note that @{theory} is rarely used in production code, only in one-liner
tests within a toplevel ML command etc. If you really need to keep a live
reference to a (potentially evolving) theory, then use @{theory_ref}.

Normally one would just pass through a Proof.context, instead of
manipulating low-level theory certificates.
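For illustration, the @{theory_ref} idiom mentioned above looks roughly like this:

ML {*
  val thy_ref = @{theory_ref};                (* stable handle to an evolving theory *)
  fun current_thy () = Theory.deref thy_ref;  (* dereference at run-time *)
*}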

Makarius

view this post on Zulip Email Gateway (Aug 22 2022 at 17:22):

From: Makarius <makarius@sketis.net>
The canonical documentation for Isabelle/ML is the "implementation"
manual. You can look up Isar.goal in its index, or use Isabelle/jEdit to
run a hypersearch for it over all documentation sources in
$ISABELLE_HOME/src/Doc -- the latter gives a "live" view on that
document (including example snippets) in the Prover IDE.

Anyway, I suspect that Jeremy wants to recreate the look-and-feel of
a different system called "Isabelle98" from 20 years ago. For that, the
proof method called "tactic" might help: it is documented in the
"isar-ref" manual, section "7.3: Tactics -- improper proof methods".
This already indicates that we are looking at fringe topics of Isabelle2017.

Makarius

view this post on Zulip Email Gateway (Aug 22 2022 at 17:25):

From: Jeremy Dawson <Jeremy.Dawson@anu.edu.au>
Hi Makarius,

Thanks - what I really meant when I asked about documentation was, how
do I find out how to get the goal as an ML value when I don't already
know the answer to that question. I agree that if I know the answer
then it's easy to find it in the canonical documentation.

In fact the preceding section in the Implementation manual is about the
Proof structure, which also looks as though it contains useful stuff,
except that one needs to get hold of the current proof state. How do
you get the current proof state as an ML value?

Thanks for the pointer about apply tactic ..., yes it's possible to
write a tactic which does nothing but print out the goal, and can use
the goal as an ML value within the tactic, but when I tried to define a
reference value in which to save the goal value I found I couldn't. Has
the version of ML available to Isar users been jinxed, or something, so
that reference variables don't work?

For your information, I don't want to recreate the look-and-feel of
(what you now, I think correctly, admit is a) different system called
"Isabelle98". I just want to get stuff done. The sort of stuff (like
finding out how to get hold of the current goal) that takes no time at
all in HOL4 or Isabelle2005 (which is actually what I use when I'm
building on previous work) and takes hours in your Isabelle 2017.

Incidentally, on the subject of documentation - what about the Isabelle
cookbook - why isn't it included as part of the Isabelle documentation
(and kept up to date)?

Cheers,

Jeremy

view this post on Zulip Email Gateway (Aug 22 2022 at 17:26):

From: Makarius <makarius@sketis.net>
On 05/06/18 04:12, Jeremy Dawson wrote:

In fact the preceding section in the Implementation manual is about the
Proof structure, which also looks as though it contains useful stuff,
except that one needs to get hold of the current proof state.  How do
you get the current proof state as an ML value?

E.g. like this:

lemma "x = x"
ML_val ‹@{Isar.goal}›
ML_val ‹Toplevel.proof_of @{Isar.state}›

This has little practical relevance though. We are merely toying around
with details of the implementation here.
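For the record, the value delivered by @{Isar.goal} is a record that can be taken apart like this (field names as in the Proof.goal record of that Isabelle version; check the "implementation" manual for yours):

lemma "x = x"
  ML_val ‹
    val {context = ctxt, facts, goal} = @{Isar.goal};
    Thm.prop_of goal  (* the goal statement as a term *)
  ›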

Thanks for the pointer about apply tactic ..., yes it's possible to
write a tactic which does nothing but print out the goal, and can use
the goal as an ML value within the tactic, but when I tried to define a
reference value in which to save the goal value I found I couldn't.  Has
the version of ML available to Isar users been jinxed, or something, so
that reference variables don't work?

Isabelle/ML is a very clean parallel programming environment, as
explained in chapter 0 of the "implementation" manual. Section 0.7.9 is
about Unsynchronized.ref and Section 0.8 about "Thread-safe programming".
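References do exist, but they are spelled explicitly and remain discouraged in production code; a minimal sketch (the thread-safety caveats of Section 0.8 still apply):

ML ‹
  val counter = Unsynchronized.ref 0;         (* explicit, deliberately verbose *)
  fun bump () = counter := !counter + 1;
›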

For your information, I don't want to recreate the look-and-feel of
(what you now, I think correctly, admit is a) different system called
"Isabelle98".  I just want to get stuff done.  The sort of stuff (like
finding out how to get hold of the current goal) that takes no time at
all in HOL4 or Isabelle2005 (which is actually what I use when I'm
building on previous work) and takes hours in your Isabelle 2017

In Isabelle you have the goal right there in the source, and the Prover
IDE manages proof documents that apply to it. Anything beyond that is
for tool developers, and even that is quite easy if you think about the
system in the proper way, and look a bit through the documentation.

Note that "to get stuff done" (doing proofs) usually requires no
Isabelle/ML at all.

Makarius

view this post on Zulip Email Gateway (Aug 22 2022 at 17:29):

From: Jeremy Dawson <Jeremy.Dawson@anu.edu.au>
Hi Makarius,

It's not clear what you mean by this - sure, the goal has been entered in
textual form by the user, and obviously the system turns this into the
ML value - but I can't see how the ML value is "there in the source".

What do you mean by this?

Cheers,

Jeremy

view this post on Zulip Email Gateway (Aug 22 2022 at 17:30):

From: Makarius <makarius@sketis.net>
The true proof state is the partial proof text that you edit in the
Prover IDE: as Isar source.

You don't need the ML value to do the proof, it is part of the
implementation.

It might be fun to explore the implementation, but it can also take some
years to get to the bottom of it.

Makarius

view this post on Zulip Email Gateway (Aug 22 2022 at 17:33):

From: Jeremy Dawson <Jeremy.Dawson@anu.edu.au>
Hi

I don't seem to be able to use the goals or subgoals antiquotations
correctly

lemma example: "a = b --> b = a"
ML_prf {* @{thm refl} *}

ML_prf {* @{goals} *} (* fails *)
ML_prf {* @{subgoals} *} (* fails *)
ML_prf {* @{context} *} (* OK *)
ML_prf {* @{theory} *} (* OK *)

what am I doing wrong here?

Thanks

Jeremy

view this post on Zulip Email Gateway (Aug 22 2022 at 17:35):

From: Lars Hupel <hupel@in.tum.de>
Dear Jeremy,

I don't seem to be able to use the goals or subgoals antiquotations
correctly

as far as I understand, those are "document antiquotations", i.e. they
may only appear in document text, but not in ML:

text ‹@{goals}›

If you want to take a look at the proof state from within the IDE, you
have two choices:

1) open the "State" panel
2) open the "Output" panel and check the "Proof state" box

lemma example: "a = b --> b = a"
ML_prf {* @{thm refl} *}

Note that this can be expressed more briefly as "thm refl" directly. No
need to wrap it in ML.

Cheers
Lars

view this post on Zulip Email Gateway (Aug 22 2022 at 17:36):

From: Dominique Unruh <unruh@ut.ee>
Hi,

you can use

lemma example: "a = b --> b = a"
ML_val {* @{Isar.goal} *}

to get the current goal. Note though, that this does not work with ML_prf.
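For illustration (the binding name goal_info is arbitrary):

lemma example: "a = b --> b = a"
  ML_val ‹val goal_info = @{Isar.goal}›  (* works *)
  (* ML_prf ‹@{Isar.goal}› would fail here *)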

Best wishes,
Dominique.

view this post on Zulip Email Gateway (Aug 22 2022 at 17:38):

From: Jeremy Dawson <Jeremy.Dawson@anu.edu.au>
Hi Dominique,

Thanks - that's most helpful. Do you know if this is documented anywhere?

Cheers,

Jeremy

view this post on Zulip Email Gateway (Aug 22 2022 at 17:39):

From: Dominique Unruh <unruh@ut.ee>
I don't know if it's documented. I found it via autocompletion in
Isabelle/jEdit. But it took some experimenting to figure out that it works
with ML_val. (Because otherwise it just throws UNDEF.)

Best wishes,
Dominique.


Last updated: Nov 21 2024 at 12:39 UTC