Stream: Mirror: Isabelle Users Mailing List

Topic: [isabelle] Two questions about the availability of certai...


view this post on Zulip Email Gateway (Jan 15 2021 at 21:47):

From: Mikhail Chekhov <mikhail.chekhov.w@gmail.com>
Dear All,

I have two weakly related questions with regard to the availability of
certain functionality from the public interface associated with the
Isabelle/ML infrastructure of the standard distribution of Isabelle:

  1. I would like to understand if it is possible to tell whether a given
    attribute is a declaration attribute, a rule attribute, or a mixed
    attribute after its construction.

  2. I would like to understand whether it is possible to compare proof
    contexts in a manner similar to that in which theories can be compared
    using the functions Context.eq_thy, Context.proper_subthy and
    Context.subthy. Do there, by any chance, exist diagnostic theory/context
    difference functions, allowing one to list the exact changes introduced
    by a modification of a theory/context?

If the functionality that is described in the questions above is not
available, I would like to convert the questions into a feature request, if
possible.

Kind Regards,
Mikhail Chekhov

view this post on Zulip Email Gateway (Jan 16 2021 at 09:21):

From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
Hi Mikhail,

overall, I think it is best if you expose in a few sentences what you
want to achieve in the end – as you will see from my terse replies below
the mechanisms you request are conceptually not available in the system,
but typically this can be resolved by reshaping the genuine problem
after having a different perspective on it.

  1. I would like to understand if it is possible to tell whether a given
    attribute is a declaration attribute, a rule attribute, or a mixed
    attribute after its construction.

Traditionally, attributes are mainly observable by their effect, not by
their construction – although AFAIK more structure has emerged for the
benefit of the module systems (locales, bundles).
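
For reference, the three flavours correspond to distinct constructors in
structure Thm, but all of them yield a value of the same opaque function
type, which is why the flavour cannot be queried after construction. A
minimal Isabelle/ML sketch (constructor names and signatures as found in
recent Isabelle sources; treat the details as indicative rather than
authoritative):

```
(* The attribute type is a bare function (from structure Thm):
     type attribute = Context.generic * thm -> Context.generic option * thm option
   so the flavour chosen at construction time is not recoverable later. *)

(* Rule attribute: transforms the theorem, leaves the context alone. *)
val rule_attr = Thm.rule_attribute [] (fn _ => fn th => th);

(* Declaration attribute: transforms the context, leaves the theorem alone. *)
val decl_attr = Thm.declaration_attribute (fn _ => fn context => context);

(* Mixed attribute: may transform both components at once. *)
val mixed_attr = Thm.mixed_attribute (fn (context, th) => (context, th));
```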

  2. I would like to understand whether it is possible to compare proof
    contexts in a manner similar to that in which theories can be compared
    using the functions Context.eq_thy, Context.proper_subthy and
    Context.subthy. Do there, by any chance, exist diagnostic theory/context
    difference functions, allowing one to list the exact changes introduced
    by a modification of a theory/context?

Those operations are a mere technical device to manage the theory graph,
not a means to build applications on it. Since the management of proof
contexts is less involved, they get along without such operations.
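
For context, the comparison operations mentioned in the question act on
theory values only; a brief sketch of their intended use, assuming an
Isabelle/ML environment where the @{theory} antiquotation is available
(results annotated per the documented semantics, not verified here):

```
(* Theory comparison operations from structure Context (Pure). *)
val thy = @{theory};

val same = Context.eq_thy (thy, thy);           (* identity of theory values *)
val incl = Context.subthy (thy, thy);           (* reflexive inclusion *)
val strict = Context.proper_subthy (thy, thy);  (* proper (irreflexive) inclusion *)
```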

Regards,
Florian

view this post on Zulip Email Gateway (Jan 16 2021 at 14:15):

From: Mikhail Chekhov <mikhail.chekhov.w@gmail.com>
Dear Florian Haftmann/All,

Thank you for your reply.

On 16 Jan 2021, at 10:21, Florian Haftmann
<florian.haftmann@informatik.tu-muenchen.de> wrote:

overall, I think it is best if you expose in a few sentences what you
want to achieve in the end – as you will see from my terse replies below
the mechanisms you request are conceptually not available in the system,
but typically this can be resolved by reshaping the genuine problem
after having a different perspective on it.

I will act upon your request and provide more background for my questions.
However, please note that I accept the comments in your last email as a
reply, i.e. the functionality is not available and there are no immediate
plans to introduce it. This information is sufficient to allow me to make
the relevant design decisions with regard to the applications that I am
working on.

  1. I would like to understand if it is possible to tell whether a given
    attribute is a declaration attribute, a rule attribute, or a mixed
    attribute after its construction.

Traditionally, attributes are mainly observable by their effect, not by
their construction – although AFAIK more structure has emerged for the
benefit of the module systems (locales, bundles).

In certain applications, I allow the users to provide their own attributes
as inputs to a command. These attributes are applied to a theorem/context
within the scope of an application of a certain algorithm. I would like to
be able to restrict the attributes that the users can provide to rule
attributes. Being able to introduce this condition makes the design of the
system easier: unfortunately, it is substantially more difficult (if not
entirely impossible) to support arbitrary context modifications.
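
Since the flavour is not recorded anywhere, one fallback is to probe an
attribute by its observable effect on a dummy input, along the lines
Florian describes. A hypothetical sketch (probe_attribute is not part of
the distribution; it relies only on the documented shape of the attribute
result, and a single application can of course only witness one behaviour,
not classify the attribute in general):

```
(* Hypothetical helper: apply an attribute once and inspect which
   component of the result it actually produced. *)
fun probe_attribute (attr : attribute) (context, th) =
  (case attr (context, th) of
     (NONE, SOME _) => "rule attribute: theorem changed, context untouched"
   | (SOME _, NONE) => "declaration attribute: context changed, theorem untouched"
   | (SOME _, SOME _) => "mixed attribute: both components changed"
   | (NONE, NONE) => "no visible effect");
```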

Given what has been stated, it seems that my options are to either restrict
the attributes to a given pre-determined subset of standard rule attributes
or to allow arbitrary attributes, but state in the documentation that only
the rule attributes are supported fully.

  2. I would like to understand whether it is possible to compare proof
    contexts in a manner similar to that in which theories can be compared
    using the functions Context.eq_thy, Context.proper_subthy and
    Context.subthy. Do there, by any chance, exist diagnostic theory/context
    difference functions, allowing one to list the exact changes introduced
    by a modification of a theory/context?

Those operations are a mere technical device to manage the theory graph,
not a means to build applications on it. Since the management of proof
contexts is less involved, they get along without such operations.

The functionality that I seek could be useful for the purpose of designing
tests for my applications. For example, it would be useful to be able to
state that a given function/module does not modify the proof context under
certain inputs.
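
One cheap approximation of such a test, in the absence of a context diff,
is a physical-identity check. A hypothetical Isabelle/ML sketch
(assert_context_unchanged is not part of the distribution; pointer_eq is
part of the Isabelle/ML library and is conservative, so a function that
rebuilds an equal context would still be reported as a modification):

```
(* Hypothetical test idiom: check that a context-transforming function
   leaves the proof context physically untouched. *)
fun assert_context_unchanged (f : Proof.context -> Proof.context) ctxt =
  let val ctxt' = f ctxt
  in if pointer_eq (ctxt, ctxt') then ()
     else error "context was modified"
  end;
```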

In this context, I wonder what the usual software engineering/management
practices are for working on or extending the Isabelle/ML system
associated with the standard distribution. Do there exist unit test
frameworks designed specifically for Isabelle/ML? Does there exist any
infrastructure that would enable black-box testing of Isabelle's
commands/methods/attributes? (I am aware that proving is preferred to
testing in the world of functional programming, but, in my view, this
approach is impractical for UI design in Isabelle/ML, given the existing
state of the art.)

Kind Regards,
Mikhail Chekhov


Last updated: Sep 25 2021 at 08:21 UTC