Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] Code generation for composite terms


view this post on Zulip Email Gateway (Aug 19 2022 at 14:10):

From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
Hi Andreas,

> Unfortunately, the test data is not an Isabelle constant, the most
> prominent example being literal numbers like "42 :: code_numeral".
> However, export_code and its ML equivalent only accept constants, not
> general terms like "42". Therefore, we'd like to know how we can
> generate code for such terms.

> We had a look at how "value [code]" manages to evaluate arbitrary
> terms. After studying the sources, it seems to us that
> Code_Thingol.ensure_value does the necessary wrapping by introducing a
> code dependency "dummy_pattern = <term to evaluate>", then builds the
> code graph and removes "dummy_pattern" and the dependency again.
>
> This looks a bit hacky (I don't see a way to generalise this to
> multiple terms to evaluate) and uses lots of functions that
> Code_Thingol does not export.

The first half of this sentence is the reason for the second half ;-).

> If we had to do it manually, we would define constants for the
> arguments and feed them to the export_code command. However, in an
> automated system, our asynchronous testing command would then become a
> theory transformation that pollutes the name space if many test cases
> are run. Is there a simpler solution? For example, making a bunch of
> definitions solely for the invocation of the code generator that are
> forgotten afterwards?

AFAIK, quickcheck achieves a similar effect by forking the theory,
adding definitions (or even axiomatizations), generating code and
throwing it away afterwards. If I understand right, your business is
also about testing, so this should also be applicable to your scenario.
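
(For illustration only, a minimal Isabelle/ML sketch of the fork-and-discard
pattern, not quickcheck's actual code: theories are immutable ML values, so an
extended theory can simply be dropped after use. The parameters add_defs and
run are hypothetical placeholders for whatever definitional and
code-generation steps a testing command would perform.)

    (* extend a theory, use the extension, then let it go out of scope;
       the original theory thy is left untouched *)
    fun with_temporary_theory (add_defs : theory -> theory)
          (run : theory -> 'a) (thy : theory) : 'a =
      run (add_defs thy);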

Hope this helps,
Florian

view this post on Zulip Email Gateway (Aug 19 2022 at 14:11):

From: Andreas Lochbihler <andreas.lochbihler@inf.ethz.ch>
Hi Florian,

We have been unable to find these temporary definitions in Quickcheck. The three
generators random, exhaustive and narrowing package everything that is needed for testing
in a single term. Then, they call some version of Code_Thingol.dynamic_value which uses
Code_Target.ensure_value to get the intermediate representation of the generated code.
Finally, they run the code either with the usual mechanism from Code_Runtime or with
Narrowing_Generators.value.

In principle, we could also stuff all our data into a large single term and then decompose
it in the generated test driver again. Do you recommend that we follow this approach? Or do
you think that separate definitions are superior?

Cheers,
Andreas

view this post on Zulip Email Gateway (Aug 19 2022 at 14:11):

From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
Hi Andreas,

>> AFAIK, quickcheck achieves a similar effect by forking the theory,
>> adding definitions (or even axiomatizations), generating code and
>> throwing it away afterwards. If I understand right, your business is
>> also about testing, so this should also be applicable to your scenario.
>
> We have been unable to find these temporary definitions in Quickcheck.
> The three generators random, exhaustive and narrowing package everything
> that is needed for testing in a single term. Then, they call some
> version of Code_Thingol.dynamic_value which uses
> Code_Target.ensure_value to get the intermediate representation of the
> generated code. Finally, they run the code either with the usual
> mechanism from Code_Runtime or with Narrowing_Generators.value.

That was my slip; I was referring to the predicate compiler quickcheck.

> In principle, we could also stuff all our data into a large single term
> and then decompose it in the generated test driver again. Do you
> recommend that we follow this approach? Or do you think that separate
> definitions are superior?

The approach with a big tuple seems suitable, indeed.
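
(For illustration, a minimal Isar-level sketch of the single-term idea, using
rev as a stand-in for the actual function under test; at the ML level the same
term would be handed to Code_Thingol.dynamic_value as described above.)

    value [code] "let xs = [0, 1, 42 :: code_numeral] in (xs, rev xs, length xs)"

The generated test driver then receives a single value and can project out the
individual components again.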

Hope this helps,
Florian

view this post on Zulip Email Gateway (Aug 19 2022 at 14:23):

From: Andreas Lochbihler <andreas.lochbihler@inf.ethz.ch>
Hi,

We are trying to implement an evaluation system for Isabelle2013-2 that can generate code
in different target languages, run generated functions on some input data, and process the
result. Our goal is to automate testing of adaptations of the code generator setup
(code_printing). To that end, we would like the code generator to generate both the
function under test and the test data.

Unfortunately, the test data is not an Isabelle constant, the most prominent example being
literal numbers like "42 :: code_numeral". However, export_code and its ML equivalent only
accept constants, not general terms like "42". Therefore, we'd like to know how we can
generate code for such terms.
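
(For illustration: export_code accepts any named constant, e.g., with
Isabelle2013-2-era syntax,

    export_code rev in SML file "rev.ML"

but there is no constant name one could pass to it for a literal such as
"42 :: code_numeral".)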

We had a look at how "value [code]" manages to evaluate arbitrary terms. After studying
the sources, it seems to us that Code_Thingol.ensure_value does the necessary wrapping by
introducing a code dependency "dummy_pattern = <term to evaluate>", then builds the code
graph and removes "dummy_pattern" and the dependency again. This looks a bit hacky (I
don't see a way to generalise this to multiple terms to evaluate) and uses lots of
functions that Code_Thingol does not export. So, I wonder whether that is one way to go.
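
(For example, value [code] happily handles a term that is not a constant,
going through exactly the wrapping described above:

    value [code] "42 :: code_numeral"

whereas export_code offers no way to name such a term.)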

If we had to do it manually, we would define constants for the arguments and feed them to
the export_code command. However, in an automated system, our asynchronous testing command
would then become a theory transformation that pollutes the name space if many test cases
are run. Is there a simpler solution? For example, making a bunch of definitions solely
for the invocation of the code generator that are forgotten afterwards?
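
(A minimal sketch of the manual route, with a hypothetical constant introduced
only so that export_code has something to name:

    definition test_arg :: code_numeral where "test_arg = 42"

    export_code test_arg in SML file "test_arg.ML"

Every test case would need such a definition, hence the name space concern.)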

Or are there simpler solutions that we have overlooked?

Thanks in advance for any suggestions and ideas,
Andreas

