From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
[This thread has become too voluminous to be followed in detail by a
casual bystander, so I try here to stick to »one issue – one mail«.]
I am personally very sympathetic towards a refined, stricter, more
accurate, better understood consistency check. The separation of
Isabelle/Pure and Isabelle/HOL however is an integral part of the system
design and needs to be respected and reflected in the design of components.
In the past, we always managed to leave typedef itself unchanged,
resisting obvious temptations like
- replace sets by predicates (which is conceptually possible but would
break down the bridges to Gordon HOL);
- move typedef to Pure (which seems just to be wrong, although I have no
example at hand for a valid Pure model that would be invalidated by that).
It is a capriole of Isabelle/Pure to provide types (»syntactic
categories«) and overloading but no means to introduce new types with
specific properties other than purely axiomatic statements. So I wonder
how a check considering constant definitions and »type definitions« at
the same time ought to be designed. Conceding that I don't know the
essence of previous discussions, two spontaneous ideas:
- A hook design for defs.ML, where more specific checks can be plugged
in later?
- Shadow constants declared by typedef to mimic the additional
dependencies imposed by a stricter check?
Just a few bits,
Florian
From: Larry Paulson <lp15@cam.ac.uk>
I think this could work, using a meta-level predicate of course. It could closely resemble the version in HOL4. I haven’t thought this through, but the other thing it needs is an equality symbol, which we have in the meta-logic.
Whether it would cause significant architectural upheavals is another question. There might be simpler solutions to the problem at hand. I don’t have a clear idea of how the circularity check works and whether typedef could feed into it.
Larry
From: Ondřej Kunčar <kuncar@in.tum.de>
There is a third solution, which I used in my proof-of-concept patch:
- If c = t is defined in Pure, you let c depend on all entities (in t)
that are from Pure's logic and that have a global name, i.e., constants
and type constructors. I argue that this is not an intrusion of
Isabelle/HOL into Isabelle/Pure, because the very notion of a type
constructor is defined in Pure, and therefore c must depend on the type
constructors in t regardless of whether we have a definitional mechanism
for types at this point.
- If there is an object logic that provides additional definitional
mechanisms (e.g., typedef in Isabelle/HOL), this object logic can add
additional "edges" to the graph of dependencies, as is already done
nowadays, for example, for the Abs and Rep morphisms. E.g., if I define
tau = S, I let tau depend on all entities from S.
As you can see, no hooks and no shadow constants are needed, and we can
still keep the separation between the meta-logic and the object logics.
From a technical/implementation point of view, the only conceptual
change to the code is that you have to separate the name spaces of types
and constants in defs.ML. Because the implementation of defs.ML is
already visionary, this is the only change that has to be made. You can
treat type instances of type constructors and constants in a uniform
way, i.e., there is no change to the algorithm itself. You just need a
richer type for the names of the nodes in the graph.
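The uniform treatment described here can be pictured with a toy model — a hypothetical Python sketch, not Isabelle's actual defs.ML: the nodes of the dependency graph are (kind, name) pairs, constant definitions and typedefs both just add edges, and the acyclicity check never inspects the kind tag.

```python
# Toy model of a defs.ML-style dependency graph (hypothetical sketch,
# not Isabelle's actual implementation).  Nodes are (kind, name) pairs,
# so constants and type constructors live in one graph; the cycle check
# is the same algorithm either way -- only the node type got richer.
from collections import defaultdict

class Defs:
    def __init__(self):
        # (kind, name) -> set of (kind, name) entities it depends on
        self.deps = defaultdict(set)

    def define(self, lhs, rhs_entities):
        """Record that lhs depends on every named entity occurring in
        its definiens; used identically for constants and types."""
        self.deps[lhs].update(rhs_entities)

    def acyclic(self):
        """Standard DFS cycle check; note that it never looks at the
        kind tag of a node."""
        WHITE, GREY, BLACK = 0, 1, 2
        color = defaultdict(int)

        def visit(node):
            color[node] = GREY
            for succ in self.deps[node]:
                if color[succ] == GREY:
                    return False          # back edge: circular dependency
                if color[succ] == WHITE and not visit(succ):
                    return False
            color[node] = BLACK
            return True

        for node in list(self.deps):
            if color[node] == WHITE and not visit(node):
                return False
        return True
```

For example (all names made up for illustration): a constant definition of c mentioning a constant d and a type constructor tau yields edges from ("const", "c"); a typedef tau = S yields edges from ("type", "tau"); a later overloaded definition of d mentioning tau would close a cycle and be rejected.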
Ondrej
From: Florian Haftmann <florian.haftmann@informatik.tu-muenchen.de>
Hi Ondřej,
[...]
thanks for explaining this so thoroughly.
So, the technical complication in the first instance is a name space issue.
To maintain the »visionary« property of defs.ML, it could also be
generalized to administer dependencies of »named symbols« in general,
e.g. by using some kind of name-space tagging instead of hard-coded
constructors. This could also serve potential object logics dealing with
kinds of specification entities other than type constructors and
constants, but this is highly speculative.
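One way to picture such name-space tagging — a purely speculative Python sketch, nothing from the Isabelle code base, and the kind "class" below is an invented illustration: symbol kinds become open-ended tags, and an object logic registers further kinds before recording dependencies.

```python
# Speculative sketch of »name space tagging«: symbol kinds are open-ended
# strings instead of hard-coded constructors, so a hypothetical object
# logic could register new kinds of specification entities later.
class SymbolDefs:
    def __init__(self):
        self.kinds = {"const", "type"}   # kinds known to Pure
        self.deps = {}                   # (kind, name) -> set of (kind, name)

    def register_kind(self, kind):
        """An object logic declares a new name space of symbols."""
        self.kinds.add(kind)

    def depends(self, sym, on):
        """Record dependencies, rejecting unregistered symbol kinds."""
        for kind, _name in [sym, *on]:
            if kind not in self.kinds:
                raise ValueError(f"unknown symbol kind: {kind}")
        self.deps.setdefault(sym, set()).update(on)

defs = SymbolDefs()
defs.register_kind("class")              # invented extra kind, for illustration
defs.depends(("class", "order"), [("const", "less_eq"), ("type", "bool")])
```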
Cheers,
Florian
From: Burkhart Wolff <Burkhart.Wolff@lri.fr>
> - move typedef to Pure (which seems just to be wrong, although I have no
> example at hand for a valid Pure model that would be invalidated by that).
Well, this seems highly problematic to me. Coq-like systems would consider
not a function space like ‘a => bool but ‘a => prop, and start to build all sorts of
“setoid constructions” as the basis of a type… meaning that complements of setoids
do not necessarily exist.
I am afraid that any attempt to build a typedef in the Pure framework
will be too special and ad hoc for the generality that Pure addresses…
In my view, the entire construction of typedefs only makes sense in the
context of a classical set notion (be it ZF-like or HOL-like).
bu
[ … ]
From: Makarius <makarius@sketis.net>
This "speculative" treatment of specification items that may be either
consts or types is already part of the Isabelle code base for the coming
release. I even added a 'print_definitions' command that prints a keyword
"type" with PIDE markup, to make it easier to spot these extremely rare
occurrences of overloaded type constructors. Moreover, the syntax
"typedef (overloaded)" makes clear to the user where traditional HOL
interpretation according to A. Pitts is transcended.
Formally all of this is just a certain abstract service of Isabelle/Pure,
which is used inside Pure itself for constant definitions. Object-logics
like Isabelle/HOL then add their own axiomatization schemes with extra
dependencies, and then call them "definitional". The explanation of how
this can be justified is in the ITP paper, despite some gaps compared to
the actual code.
Thus we are back to the standard invariant of Isabelle development over
three decades: the concepts behind the implementation and the concepts
behind published papers are slightly divergent.
Makarius
Last updated: Nov 21 2024 at 12:39 UTC