From: Andreas Lochbihler <andreas.lochbihler@inf.ethz.ch>
Hi,
We are trying to use the dynamic evaluation facility of the code generator to evaluate
Isabelle terms in different target languages. The theory below (also attached) shows our
reduced setup, which follows Quickcheck's narrowing engine, and a minimal example.
Unfortunately, we keep getting exceptions during the serialisation. Apparently, we are not
using the functions correctly, but we have no clue what could be wrong.
Can anyone spot the mistake or provide some background on how Code_Target.evaluator is
supposed to be used?
theory Scratch imports "~~/src/HOL/Main" begin
datatype foo = Foo
ML {*
fun do_something _ = true;

fun test target =
  Code_Thingol.dynamic_value @{theory} (fn _ => I)
    (fn naming => fn program => fn ((_, vs_ty), t) => fn deps =>
      do_something
        (Code_Target.evaluator @{theory} target naming program deps (vs_ty, t)))
    @{term "Foo"}
*}
ML {* test "Scala" } --{ Match (line 176 of "~~/src/Tools/Code/code_scala.ML") *}
ML {* test "SML" } --{ Option (line 81 of "General/basics.ML") *}
ML {* test "Haskell" } --{ Match (line 137 of "~~/src/Tools/Code/code_symbol.ML") *}
end
Thanks in advance for any suggestions,
Andreas
From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
Hi Andreas,
> Can anyone spot the mistake or provide some background on how
> Code_Target.evaluator is supposed to be used?
this is indeed a very uncomfortable corner of the system: the
evaluation stack has to accomplish a couple of things at the same time.
When building a similar thing, it is best to follow the structure of
Code_Runtime. The Quickcheck Narrowing engine is an experimental
(though apparently operative) ad-hoc approach to code generation, and
definitely not something I recommend taking as a model in that respect.
> theory Scratch imports "~~/src/HOL/Main" begin
> datatype foo = Foo
> ML {*
> fun do_something _ = true;
> fun test target =
>   Code_Thingol.dynamic_value @{theory} (fn _ => I)
>     (fn naming => fn program => fn ((_, vs_ty), t) => fn deps =>
>       do_something
>         (Code_Target.evaluator @{theory} target naming program deps
>           (vs_ty, t)))
>     @{term "Foo"}
> *}
Formally, this does not look bad. A slightly refined version:
datatype foo = Foo
definition "bar x = x"

ML {*
val t = @{term "Foo"};
(* print the serialised code and succeed *)
fun do_something ([(_, s)], _) = (writeln s; true);
fun test thy target =
  Code_Thingol.dynamic_value thy (fn _ => I)
    (fn naming => fn program => fn ((_, vs_ty), t) => fn deps =>
      do_something
        (Code_Target.evaluator thy target naming program deps (vs_ty, t)))
    t
*}

ML {* test @{theory} "Scala" *}
ML {* test @{theory} "SML" *}
ML {* test @{theory} "Eval" *}
ML {* test @{theory} "Haskell" *}
Two observations:
a) Scala will always choke: as of Isabelle2013-2, evaluation is simply
not supported for the Scala target.
b) The problems with the other invocations vanish as soon as you replace
the term "Foo" by the logically identical "bar Foo" (see the sketch
below). Don't ask me exactly why, but it could be a misbehaviour with
respect to dependencies which for subtle reasons does not show up in the
existing applications.
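For concreteness, the workaround in b) would amount to something like
this (a sketch only; test' is just an ad-hoc name, reusing do_something
from the refined version above):

ML {*
(* same as test above, but with the term as an explicit parameter,
   so that "bar Foo" can be evaluated instead of "Foo" *)
fun test' thy target t =
  Code_Thingol.dynamic_value thy (fn _ => I)
    (fn naming => fn program => fn ((_, vs_ty), t) => fn deps =>
      do_something
        (Code_Target.evaluator thy target naming program deps (vs_ty, t)))
    t;
*}
ML {* test' @{theory} "SML" @{term "bar Foo"} *}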
Both problems cannot be reproduced on the ongoing development branch.
Since we are converging towards a release, you might consider basing
your work on a particular source code revision until the next release.
The whole evaluation stack has seen considerable reworking and
clarification during the last months and subtle misbehaviours are likely
to have been eliminated.
I'm not giving this advice lightly, but I trust your experience. If
need be, we should continue the discussion on isabelle-dev.
Hope this helps,
Florian
From: Andreas Lochbihler <andreas.lochbihler@inf.ethz.ch>
Hi Florian,
Thank you for the quick response.
> When building a similar thing, it is best to follow the structure of
> Code_Runtime. The Quickcheck Narrowing engine is an experimental
> (though apparently operative) ad-hoc approach to code generation, and
> definitely not something I recommend taking as a model in that respect.
In Code_Runtime, we have not found anything related to actually writing out the files,
compiling and executing them, and reading back the results. This is essentially what we
take from Quickcheck_Narrowing. We are aware that the whole setup with narrowing_engine.hs
is ad hoc, and we avoid that.
There is also something in both Code_Runtime and Quickcheck_Narrowing that we have not yet
understood: what is the purpose of the cookies that appear everywhere? So far, our
solution seems to work without them, but maybe our test cases are too simple.
> Both problems cannot be reproduced on the ongoing development branch.
> Since we are converging towards a release, you might consider basing
> your work on a particular source code revision until the next release.
> The whole evaluation stack has seen considerable reworking and
> clarification during the last months and subtle misbehaviours are likely
> to have been eliminated. I'm not giving this advice lightly, but I trust
> your experience. If need be, we should continue the discussion on
> isabelle-dev.
The problem is that it is not just me, but also a student working on this. We'll think
about this and let you know if we switch.
Best,
Andreas
From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
Hi again,
>> When building a similar thing, it is best to follow the structure of
>> Code_Runtime. The Quickcheck Narrowing engine is an experimental
>> (though apparently operative) ad-hoc approach to code generation, and
>> definitely not something I recommend taking as a model in that respect.
> In Code_Runtime, we have not found anything related to actually writing
> out the files, compiling and executing them, and reading back the results.
> This is essentially what we take from Quickcheck_Narrowing. We are aware
> that the whole setup with narrowing_engine.hs is ad hoc, and we avoid that.
The actual compiler invocation is in these lines in Code_Runtime:

fun exec verbose code =
  (if ! trace then tracing code else ();
    ML_Context.exec (fn () =>
      Secure.use_text ML_Env.local_context (0, "generated code") verbose code));
> There is also something in both Code_Runtime and Quickcheck_Narrowing
> that we have not yet understood: what is the purpose of the cookies that
> appear everywhere? So far, our solution seems to work without them, but
> maybe our test cases are too simple.
The cookies are a device to connect the more or less raw ML compiler to
the runtime system.
In an ideal world, the ML compiler could provide a function
evaluate : ml_program -> ml_expr -> tau
where ml_program is an abstract representation of an ML program, ml_expr
is an abstract representation of an ML expression, and tau is the
expected result type. The compiler would then compile the program,
evaluate the expression against it and return the internalized value
(raising an exception on type mismatch).
But we do not have this in ML as used by Isabelle (I do not know whether
particular compilers provide such extensions). What we have instead is
something like

compile : string -> unit

where the string is ML source code which is compiled, resulting in a side
effect on the underlying ML environment.
Here the cookie comes in. It has three fields: get, put, put_ml. The
plain string put_ml is a reference by name to a slot which can be filled
with an evaluation result. After compilation, the internalized
evaluation result can be obtained using get. The put is just a safety
device to make sure that the connection between put and put_ml has not
been broken by name space shadowing etc. Thus:

val _ = put error_evaluation
val _ = compile (»compiled program, containing a value which is
  stored using put_ml (by name)«)
val result = get ctxt;
I have spent a lot of thought on alternative patterns here, but have
not been able to find any.
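To make the pattern concrete, here is a minimal self-contained sketch in
plain SML; the slot, its bool type, and all names are purely illustrative
and not the actual Code_Runtime interface:

(* Hypothetical slot; in Code_Runtime the real slot lives in a structure,
   and put_ml is its fully qualified name as a string, to be spliced into
   the generated program. *)
val result_slot : bool option ref = ref NONE;

(* put: fills the slot; the compiled program calls this (by name). *)
fun put x = result_slot := SOME x;

(* get: reads the internalized result back after compilation. *)
fun get () =
  case !result_slot of
    SOME x => x
  | NONE => raise Fail "evaluation did not produce a result";

(* Usage, following the schema above: *)
val _ = put false;  (* pre-fill with an error/default value *)
(* here compile would run e.g. "val _ = Scratch.put true;" *)
val result = get ();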
Hope this helps,
Florian
signature.asc
From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
What has worked out well in the past in such borderline situations:
1. Start with a certain revision where you have the impression that its
   potential oddities do not hinder your project proper.
2. Stick to that revision
   a) until a new release (candidate) is approaching, or
   b) until you realise that the assumption from 1. is not valid.
But don't miss 2. a) in the end.
Florian
From: Andreas Lochbihler <andreas.lochbihler@inf.ethz.ch>
Hi Florian,
>> In Code_Runtime, we have not found anything related to actually writing
>> out the files, compiling and executing them, and reading back the results.
>> This is essentially what we take from Quickcheck_Narrowing. We are aware
>> that the whole setup with narrowing_engine.hs is ad hoc, and we avoid that.
> The actual compiler invocation is in these lines in Code_Runtime:
>
> fun exec verbose code =
>   (if ! trace then tracing code else ();
>     ML_Context.exec (fn () =>
>       Secure.use_text ML_Env.local_context (0, "generated code") verbose code));
This is a compiler invocation, but within Isabelle's PolyML process. That is not what I
meant above: we need to write out the files and compile and run them using external
programs such as GHC.
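Concretely, we do something along the following lines (a sketch only; the
file name, the ghc invocation, and the assumption that the generated
module provides a Haskell main are ours):

fun compile_and_run haskell_code =
  Isabelle_System.with_tmp_dir "eval" (fn dir =>
    let
      (* write the serialised code to a temporary file ... *)
      val file = Path.append dir (Path.explode "Generated_Code.hs");
      val _ = File.write file haskell_code;
      (* ... compile and run it with GHC, reading back stdout *)
      val (out, ret) = Isabelle_System.bash_output
        ("cd " ^ File.shell_path dir ^
         " && ghc Generated_Code.hs -o generated && ./generated");
    in if ret = 0 then out else error ("evaluation failed: " ^ out) end);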
>> There is also something in both Code_Runtime and Quickcheck_Narrowing
>> that we have not yet understood: what is the purpose of the cookies that
>> appear everywhere? So far, our solution seems to work without them, but
>> maybe our test cases are too simple.
> The cookies are a device to connect the more or less raw ML compiler to
> the runtime system.
Thanks for the explanation. That confirms my feeling that we probably do not need these
cookies, because our application does not invoke the ML compiler inside Isabelle's ML
process any more.
Cheers,
Andreas
From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
Then your life is easy: just invoke your external process and return
your result.
Florian