Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] Need a "What's the foundation of Theory.thy" u...


view this post on Zulip Email Gateway (Aug 19 2022 at 11:28):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
Hi,

I don't need this now, but in a year or two, I'd like to have an Isabelle
utility that I run on a theory which tells me what the "foundation" is,
and that also can take a Thy1.thy as the base "foundation", and given
Thy2.thy, the utility reports what additional "foundation" Thy2.thy adds
to Thy1.thy.

The foundation would be anything in a theory which would be axiomatic in
nature, and maybe some kind of checksum on things.

Different people are motivated by Isabelle for different reasons. A big
part of my interest in Isabelle is that it's a tool which, at the very
least, replaces the function of peer reviewed journals, to the extent
that it can. That means, at the very least, it's a huge preliminary
vetting of logic in Theory.thy that announces to anyone interested, "The
logic in Theory.thy has been vetted by Isabelle as being correct. It
most likely will not be a waste of anyone's time to wade through 20,000
lines of source in Theory.thy to figure out what it is, and to find out
if it really is correct."

On a personal level, by all appearances, Isabelle can take the place of
peer reviewed journals, to the extent that it can give you confidence
that you're not producing a bunch of logical nonsense. What Isabelle
doesn't do is solve the "person X is trustworthy and competent" problem
on a community level.

I could dig through the mailing list archives and find where Larry has
said something like, "You have to use Isabelle intelligently." And a
while back, on the HOL4 list, there was a discussion about the
possibilities of people getting proof assistants to do something wrong
for either devious reasons or non-devious reasons. And most recently, on
this very list, Falso was presented as a great logical breakthrough,
where, no doubt, still today, many verifiers, formalizers, and sourcers,
not to be confused with sorcerers, are now full of themselves, mistaking
their own logical prowess for the success that always comes with Falso.

https://lists.cam.ac.uk/mailman/htdig/cl-isabelle-users/2013-March/msg00165.html

Anyway, a "what's the foundation" utility would allow people to do a
10-second check on a theory as a first step, to make sure nothing obvious
is wrong, and to increase the trust factor, and even good people mess up
badly sometimes.

I assume that there are much more complex problems involved in proof
assistants being completely secure from being hacked, or messed up
unintentionally, but I think it would be good for a person to get a
basic rubber stamp from an official utility that generates a basic
report on what the foundation of a theory is, and that can compare the
foundation of two theories.
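The kind of report being asked for could begin life as a plain textual scan. Below is a minimal sketch in Python; the keyword list is an illustrative assumption, not an exhaustive account of Isar's axiomatic commands, and the theory snippets are made up for the example:

```python
import re

# Isar commands assumed here to extend the axiomatic basis.
# This list is illustrative, NOT exhaustive.
AXIOMATIC_KEYWORDS = {"axiomatization", "arities", "typedecl"}

def foundation(theory_text):
    """Collect the lines of a theory that start with an axiomatic command."""
    found = set()
    for line in theory_text.splitlines():
        stripped = line.strip()
        if any(re.match(rf"{kw}\b", stripped) for kw in AXIOMATIC_KEYWORDS):
            found.add(stripped)
    return found

def added_foundation(base_text, extension_text):
    """Report what the second theory adds beyond the first theory's foundation."""
    return foundation(extension_text) - foundation(base_text)

# Hypothetical theory sources, standing in for Thy1.thy and Thy2.thy.
thy1 = """
theory Thy1 imports Main begin
axiomatization undefinedish :: "'a"
end
"""

thy2 = """
theory Thy2 imports Thy1 begin
axiomatization undefinedish :: "'a"
axiomatization sneaky :: bool where sneaky_ax: "sneaky"
end
"""

print(added_foundation(thy1, thy2))
# reports only the extra axiomatization introduced in Thy2
```

A real tool would of course need the prover's own command parser rather than line matching, which is exactly why the request is for an official utility.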

No one needs to respond to this. I'm putting it out so maybe in a year
or two there's an Isabelle foundation report utility.

Regards
GB

view this post on Zulip Email Gateway (Aug 19 2022 at 11:28):

From: Ramana Kumar <rk436@cam.ac.uk>
You may be interested in these projects:

https://www.rocq.inria.fr/deducteam/Dedukti/index.html
http://www.gilith.com/research/opentheory/
http://proof-technologies.com/holzero/

The basic idea is to do your proof development in a heavyweight like
Isabelle, and when you're happy with it and want to give it an
extra-special tick, you export a proof and trust it with a tiny,
well-understood checker.

N.B. Isabelle cannot yet export to these, as far as I'm aware, but Brian
Huffman wrote some code for importing from OpenTheory.

view this post on Zulip Email Gateway (Aug 19 2022 at 11:28):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
On 7/9/2013 4:04 AM, Ramana Kumar wrote:

You may be interested in these projects:

https://www.rocq.inria.fr/deducteam/Dedukti/index.html
http://www.gilith.com/research/opentheory/
http://proof-technologies.com/holzero/

The basic idea is to do your proof development in a heavyweight like
Isabelle, and when you're happy with it and want to give it an
extra-special tick, you export a proof and trust it with a tiny,
well-understood checker.

Ramana,

So it seems that, in the future, third-party verification will be a big
part of verifying the verifiers.

There's one big problem, which occurred to me lately. I'm getting
totally locked into Isabelle due to type classes and Isabelle's nifty
type coercion commands. Using those features prevents the possibility of
doing an easy translation to any other member of the HOL family. I
probably would never have the time to port over what I'm doing, but if I
can do something, then I sometimes entertain the idea of doing that
something, so timewise, the limitation ends up being a good thing.

With Open Theory, the last time I was looking at it, I think I saw that
it doesn't allow new axioms. That's good for third-party verification,
but bad if I need new axioms, which I do. With Dedukti, I assume that the
problem of either or both of type classes and no new axioms rules it
out. For these, if the foundation is fixed, then it solves the problem
of what I'm talking about.

HOL Zero would fall under the "more complex" part of verifying the
verifiers. But even HOL Zero would need a utility to report back what
the foundation is. If it would be as simple as listing the axiom
commands, then it would do that. The idea is for some official utility
to authoritatively tell us exactly what axiomatic foundation is being
used (exploits aside), rather than us grepping on some files and
still saying, "I wonder if this is all there is to the axiomatic
foundation?" I'll say more about this in reply to Mark's email.

Thanks for the links.

Regards,
GB


view this post on Zulip Email Gateway (Aug 19 2022 at 11:28):

From: Ramana Kumar <rk436@cam.ac.uk>
OpenTheory does allow new axioms. (OpenTheory only checks that some
conclusions follow from some axioms; you're free in the choice of
conclusions and axioms, as long as you provide the proof.)

Dedukti should work both for new axioms and for type classes. It does an
encoding into a rich type theory that should support most logical
features... (I'm not so sure about the details).

view this post on Zulip Email Gateway (Aug 19 2022 at 11:28):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
On 7/9/2013 5:13 AM, "Mark" wrote:

Hi Gottfried,

So you are advocating a utility that would reveal the "foundation" in a
theory. Do you envisage that this utility would have a human aspect to its
usage, reviewing that the reported foundation is good, e.g. that the "+"
operator really is the intended "+" operator, etc? By saying a "10-second
check", it suggests not, although maybe I'm missing your point.

Mark,

No, it's only to list anything that's being used in the theory and
language that's either axiomatic, or might be axiomatic. Also, it would
do some easy checks to rule out the possibility of a person doing sneaky
things, like doing checksum comparisons on third-party files a person is
distributing with their theory, where the local third-party files would
be obtained from the original source.
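The checksum comparison part of such easy checks is straightforward to sketch. The file names and contents below are hypothetical, standing in for files on disk; a real utility would hash the actual distributed files against copies fetched from the original source:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def verify_third_party(distributed: dict, originals: dict) -> list:
    """Names of distributed files that don't match the originals
    obtained independently from the original source."""
    return [name for name, data in distributed.items()
            if name not in originals
            or sha256_of(data) != sha256_of(originals[name])]

# Hypothetical file contents stand in for files on disk.
originals = {"Theory1.thy": b"theory Theory1 imports Main begin end\n"}
tampered = {"Theory1.thy":
            b"theory Theory1 imports Main begin\n"
            b"axiomatization evil :: bool where e: evil\nend\n"}

print(verify_third_party(tampered, originals))   # ['Theory1.thy']
```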

If someone writes 100,000 lines of source and at the end claims to prove
the Riemann Hypothesis, the first thing you want to know is what's in
their THY that's axiomatic, or can't be determined to be axiomatic. The
utility is to give you that information fast.

If someone's not using anything axiomatic beyond what Complex_Main gives
them in Isabelle, then their doing stupid stuff like trying to fool
people with deceptive notation still wouldn't allow them to prove
something that's not true. It would just appear to be something
different on the surface. It's always going to take some effort to
figure out what a theorem stated after 50,000 lines of source actually
proves. The utility is to let you know what limitations there are to the
foundation.

Also, are
you suggesting that the utility would do some form of automated check on the
foundation of a theory, or would it just report what the foundation is?

No, it would just tell me what the axiomatic foundation is for a
particular THY, or what can't be determined in a THY to be axiomatic.

Some things can't be hacked easily. Most people use distribution sources
they've gotten from the official web sites, so you can't hack the
distribution sources and fool anyone but people using local sources. I
do want the utility to tell me what foundation Complex_Main gives me,
but I'd think it's mostly about the utility reporting what people have
done to extend Complex_Main, or done to modify third-party files that
aren't part of the official distribution sources.

The simple part would be listing the axioms used. You might think all I
need to do is grep on "axiomatization", but in Isar, there are more than
"axiomatization". The latest axiomatic command in Isar I've learned
about is "arities". It's not obvious at all it's an axiomatic command.

Right now I'm speculating that when ML is used in Isar, it can't be
determined that the ML does something axiomatic unless you inspect the
ML, but I wouldn't know, which is the purpose of the utility, to get an
official report on what the foundation is.

The purpose is to make it fast, easy, and official what axiomatic
foundation is being used in a THY. For the case where we're not
interested in any logic proved other than what can be proved by
importing Complex_Main, the first thing we would do is run the utility
on a THY to see if that's the case.

I was vague on the details because it's the Isabelle foundation people
who know what possibly can be done axiomatically, but I came up with a
scenario:

1) Person 3 releases Theory3.thy. It's distributed with Theory1.thy and
Theory2.thy, which have been released by Persons 1 and 2.

2) I download Person 3's theory, and I run the Isabelle foundation
utility on it. It does a checksum comparison on Theory1.thy and
Theory2.thy with the original files which I obtained. It tells me
everything that's in Theory3.thy that's axiomatic, or can't be
determined to be axiomatic. Additionally, it tells me what's axiomatic
beyond what's axiomatic in Theory1.thy and Theory2.thy.

Now, having obtained all that information in a couple of minutes, I can
make an informed decision about whether I want to look at Theory3.thy.
If I'm a professor, and Person 3 wasn't supposed to add anything
axiomatic, and they did, they get a bad grade. If I'm an employer, and
Person 3 wasn't supposed to use anything axiomatic, and they did, they
get fired.

Some of that I could do myself, but then it's not official, and I don't
know all the details about what can be axiomatic in Isabelle, or what
can't be determined to be axiomatic.

It's not just about checking others; I want the ability to say to
people, "Here's an official report on the foundation I'm using, so you
can know up front what logic my theory is based on."

Like I said to Ramana, even a third-party verifier like yours needs to
be able to give a basic report on what foundation is being used. If all
I'm interested in is standard HOL, then I want to know as fast as
possible whether someone's using standard HOL.

Regards,
GB


view this post on Zulip Email Gateway (Aug 19 2022 at 11:28):

From: Ramana Kumar <rk436@cam.ac.uk>
In HOL4, every theorem (a value of type thm in ML) can be inspected to see
the axioms used in its production.

Moreover, all the theorems about any constant (such as might appear in the
conclusion of a theorem) can be retrieved, to help gain confidence in how
the constant was defined. (For example, you could easily check that all the
constants in some theorem were defined in standard HOL4 libraries.)

I'd be surprised if analogous facilities didn't exist in Isabelle already.

view this post on Zulip Email Gateway (Aug 19 2022 at 11:29):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
On 7/9/2013 9:21 AM, Ramana Kumar wrote:

In HOL4, every theorem (a value of type thm in ML) can be inspected to see
the axioms used in its production.

Moreover, all the theorems about any constant (such as might appear in the
conclusion of a theorem) can be retrieved, to help gain confidence in how
the constant was defined. (For example, you could easily check that all the
constants in some theorem were defined in standard HOL4 libraries.)

I'd be surprised if analogous facilities didn't exist in Isabelle already.

If it's easy, and a single, automated process, then I definitely want to
know about it. But if I have ten theorems in a theory that give me 10
final results, rather than one final result, then it's not easy if I
have to manually find those 10 theorems, and type in a command, and look
at the results. I like the idea of automation, as we all do.

I keep using the phrase "or can't be determined to be axiomatic". I say
that because I don't assume that with ML used in a THY it's quick and
easy to automate listing any statement in the ML that's axiomatic, but
it could be easy. I try to assume less than more.

What is axiomatic in logic is so important, the first thing that the
software should provide for a THY is what axiomatic statements get used
in the THY, and anything axiomatic that the THY imports. I want to know
what others are getting for free, and I want others to know what I'm
getting for free, and I want it all to be listed by an official utility
so we all know it's legit and official.

I shouldn't have to have any expertise to answer the question, "What's
the axiomatic foundation of Theory.thy?" The software should immediately
be able to tell me, whether I understand what it tells me or not.

When you study a book on axiomatic set theory, you get the axioms up
front. You don't understand them at first, but getting them up front
categorizes the logic you're using. We're not really interested in
ZFC+Falso. It's different. We immediately skip past those theories in
the AFP that tell us Falso is used. When they don't tell us up front
they use Falso, we end up looking at something we're not interested in.

I'm making a request which now sounds like a demand, but I just use
what's available, and a lot is available. On a private level, I don't
need this at all.

What I'm asking for will eventually become a priority. If thousands of
people are using proof assistants, and they're not from some circle of
trusted professionals, then when you don't know a person and you don't
know what's in their THY, you want to know immediately and effortlessly
what they're using axiomatically, and you want the software to tell you
that, with automation.

On 7/9/2013 8:02 AM, Ramana Kumar wrote:

OpenTheory does allow new axioms. (OpenTheory only checks that some
conclusions follow from some axioms; you're free in the choice of
conclusions and axioms, as long as you provide the proof.)

Dedukti should work both for new axioms and for type classes. It does
an encoding into a rich type theory that should support most logical
features... (I'm not so sure about the details).

The lack of one feature can make a big difference, like automatic type
coercion, not to mention getting heavily tied into a particular library
of theorems, used with a particular proof assistant.

But it's good to know that OpenTheory allows new axioms, and that
Dedukti has type classes. Maybe in two years I'll be looking for some
extra verification, and by then there should be more documentation.

It's hard enough to learn the language of one proof assistant. A simple
utility to report the foundation of a THY would probably satisfy me.
Like I said, I say this now just to plant a seed. And maybe someone will
say, "Dude, just type in print_foundation."

Regards,
GB


view this post on Zulip Email Gateway (Aug 19 2022 at 11:29):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
Mark,

You asked me a specific question at the end of your email, so I put it here
first, so it doesn't get lost in my noise below.

Like I said to Ramana, even a third-party verifier like yours needs to
be able to give a basic report on what foundation is being used. If all
I'm interested in is standard HOL, then I want to know as fast as
possible whether someone's using standard HOL.

Can you spell out in what sense HOL Zero does not meet your aims on this?
I'm eager to hear about any ideas for improving it.

I think it's merely that you don't support Isabelle/HOL. I have no
loyalties when it comes to proof assistants. I'd abandon Isabelle
tomorrow if I found better software. But I'm not going to find anything
else. For natural deduction based logic, there's nothing out there that
comes close to providing the features that Isabelle provides, and it
gets better with every release.

If you tell me, "But I do support Isabelle/HOL. You can run HOL Zero on
any Isabelle/HOL theory that does anything that Isar allows you to do."

I say, "You do? Man, item number n of large integer M of things I've
wanted that someone has already implemented, but I didn't know about.
Here I've been asking for some basic utility when I can already run HOL
Zero on my THY to verify the verifier, to check the checker, to go the
extra mile to get an automated rubber stamp of what I'm doing in logic,
logic being a tricky deal, full of potential pitfalls, with people
needing to have some preliminary confidence in what I've done before
they're willing to figure out what I've done."

On 7/9/2013 4:33 PM, "Mark" wrote:

Hi Gottfried,

I think you're advocating the "proof auditing" idea that I've been trying to
push in recent years.

No, I'm just a user, and I'm not remotely qualified to be involved in
the debate over what it takes to "verify the verifiers". I want people
like you to be a verifier watchdog, but I'm no more qualified to know
what proof auditing should entail than I am to know what it takes to
find security holes in Java or Windows.

What I'm primarily talking about is what a language can declare; I'm not
concerned with whether the engine that implements the language has
integrity, or can be exploited through security holes. What I'm
requesting is simple. Given two languages, Isabelle/ML and
Isabelle/Isar, and two theories, Theory1.thy and Theory2.thy, report to
me what statements in ML and Isar are made in the theories that are
axiomatic, and anything else that someone decides to label "the
foundation", and tell me what is axiomatic in Theory2.thy that goes
beyond Theory1.thy.

Do you envisage that this utility would have a human aspect to
its usage, reviewing that the reported foundation is good ....

No, it's only to list anything that's being used in the theory and
language that's either axiomatic, or might be axiomatic.

By "axiomatic" are you referring to any HOL axioms, constant definitions,
type definitions, etc (i.e. what I call "assertions", which is what
mathematicians call "axioms") made in the theory, or are you referring to
just HOL axioms (i.e. the completely unrestricted form of HOL assertion)?

This is where I'm not the one who's supposed to be deciding on
everything in Isabelle/HOL, Isabelle/Isar, Isabelle/ML, and
Isabelle/Pure that deserves to be labeled "axiomatic".

I never made any real progress in HOL4, but it appears that what is
axiomatic in HOL4 is more straightforward and obvious than what's
axiomatic in Isabelle/ML, Isabelle/Pure, and Isabelle/Isar.

At the easiest level, it's keywords in Isar that are axiomatic, two of
those keywords being "arities" and "axiomatization".

I'm under the impression that Isar constants aren't axiomatic, that they
don't allow you to prove anything other than that they exist, or that they
always return a value because HOL functions are total. There is the
undefined constant in HOL.thy "axiomatization undefined :: 'a", line
195, which Lars pointed out, so it looks like constants alone don't give
us anything axiomatic.

As far as types, there is this statement in isar-ref.pdf:

If you introduce a new type axiomatically, i.e. via typedecl and
axiomatization, the minimum requirement is that it has a non-empty
model, to avoid immediate collapse of the HOL logic.

What does that mean in the context of what in Theory.thy is the
foundation? I don't know completely, but I do know that I used
"typedecl" and left the HOL function "eq" undefined for my new type, and
after I stated an axiom, my logic was inconsistent because "eq" wasn't
defined for my new type. So, apparently, typedecl qualifies as being
listed in "the foundation".

To summarize, the utility is supposed to tell me what's axiomatic in a
THY, so I can then tell you what's axiomatic in a THY.

Don't you mean "yes" as the answer to my question? Surely whatever
foundation is being printed out by the utility needs to be examined in
detail, with all its subtlety, by a human to make sure it's a good
foundation rather than nonsense. This is not a 10-second exercise, even
with the most user-friendly utility imaginable.

No, other than the utility reporting that the logic is inconsistent. If
someone puts in an axiom that will give them the Riemann Hypothesis, but
which isn't the Riemann Hypothesis, how is the software supposed to know
that it's nonsense, unless it makes the logic inconsistent?

You could have a blacklist of bogus axioms, but that's no great
feature. Most likely, if a crackpot published a THY with 10,000 lines of
source claiming to prove the Riemann Hypothesis, there would only be a
few axiomatic statements in it. What's important is that the utility
tells me that on line 4500 "axiomatization" is used, and what the axiom
is. Or maybe "arities" is used in a clever way to get the result, but it
tells me about it, and it's my job to judge whether it's valid.
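The kind of report described above can be sketched in a few lines. This is only a toy illustration, assuming a hard-coded list of axiom-introducing Isar keywords and a hypothetical theory fragment; a real utility would need Isabelle's own parsing to be trustworthy:

```python
import re

# Illustrative (incomplete) list of Isar keywords that introduce
# axiomatic content; a real audit would need the full list.
AXIOMATIC_KEYWORDS = ("axiomatization", "arities", "typedecl")

def report_axiomatic_lines(thy_source):
    """Return (line_number, line_text) pairs for lines that begin
    with one of the axiomatic keywords."""
    hits = []
    for number, line in enumerate(thy_source.splitlines(), start=1):
        stripped = line.lstrip()
        if any(re.match(rf"{kw}\b", stripped) for kw in AXIOMATIC_KEYWORDS):
            hits.append((number, stripped))
    return hits

# Hypothetical theory text: the axiom name "rh_axiom" is made up.
example = """theory Bogus imports Main begin
axiomatization rh_axiom :: bool where rh: "rh_axiom"
end"""
print(report_axiomatic_lines(example))
```

On the example this reports that line 2 uses "axiomatization", together with the stated axiom, which is exactly the information the human judge then has to evaluate.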

For most people using Isabelle/HOL, it's not about whether additional
axioms are nonsense or not, it's that Isabelle/HOL shouldn't be extended
axiomatically in any way. The foundation to check any THY against is
Complex_Main.thy, and mostly, people don't want anything axiomatic
beyond that.

The nature of logic is I'm free to start with whatever axioms I want.
The main thing is to clearly know what the foundation is, not to have
someone tell you that it's nonsense. What I'm asking for is a 1 day or a
1 week job. If I ask for more, no one has the time to do it.

Also, it would do some easy checks to rule out the possibility of a person
doing sneaky things, like doing checksum comparisons on third-party files
a person is distributing with their theory, where the local third-party files
would be obtained from the original source.
I don't get what you're talking about here with checksums. Is this used by
Isabelle or something?

The utility doing a checksum on a file using a known good checksum would
be a secondary feature. This falls under "easy integrity check" rather
than "axiomatic declaration in the language".

The idea is that Trusted Person has released Hol_Extension.thy and it
has become a standard extension of HOL, but it's not part of the
Isabelle2013 distribution, and people are distributing it with their own
theories. I give the utility an MD5 checksum that Hol_Extension.thy is
supposed to match when a checksum is run on the file. If it doesn't
match, then the person is not distributing the right file, and maybe
they edited it.
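The integrity check described here is an ordinary file digest comparison. A minimal sketch, where the file name and the known-good digest are hypothetical placeholders:

```python
import hashlib

def file_matches_checksum(path, known_good_md5):
    """Return True iff the MD5 hex digest of the file at `path`
    equals the known-good digest string."""
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return digest == known_good_md5

# Hypothetical usage: check a distributed copy of Hol_Extension.thy
# against the digest published by the trusted author.
# file_matches_checksum("Hol_Extension.thy",
#                       "d41d8cd98f00b204e9800998ecf8427e")
```

A mismatch only tells you the file differs from the published one, not why; judging whether the difference matters is still a human job.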

Okay, but if I have the known good Hol_Extension.thy, then can't I just
copy it over the one distributed? So maybe the purpose is just to try
and bust amateur deceivers who are trying to do easy hacks.

The checksum idea was just part of brainstorming on the concept of some
minimal thing that a utility could do to keep people from wasting our
time with bogus theories.

If someone writes 100,000 lines of source and at the end claims to prove
the Riemann Hypothesis, the first thing you want to know is what's in
their THY that's axiomatic, or can't be determined to be axiomatic. The
utility is to give you that information fast.
This really does sound like a tool for supporting proof auditing.

You would have to tell me what proof auditing is, but I don't classify
it as proof auditing. "Proof auditing" implies that proofs need to be
checked because there's the possibility that the underlying software is
being exploited or has bugs.

I use the phrase "or can't be determined to be axiomatic," but I'm only
speculating that a person might need to look at a block of ML to
determine whether it declares something axiomatic. The utility would
report on ML that's used which needs to be inspected by a human, and a
person who knows enough could decisively determine whether the ML is
axiomatic. That's what I was talking about: just knowing about
anything that's axiomatic.

If I state an axiom that gives me a proof of the Riemann Hypothesis,
there's nothing that needs to be audited. The proof is correct, given the
axiom. Logically, there's no problem with the proof.

If someone's not using anything axiomatic beyond what Complex_Main
gives them in Isabelle, then doing stupid stuff like trying to fool
people with deceptive notation still wouldn't allow them to prove
something that's not true. It would just appear to be something
different on the surface. Figuring out what a theorem stated after
50,000 lines of source actually proves is always going to take some
effort. The utility is to let you know what limitations there are to
the foundation.
That's right, the proof audit should only examine what's been used.
Intermediate stuff is irrelevant if it doesn't get used. Specifically you
need to be examining all axioms and definitions that get used in the proof
of t
[message truncated]

view this post on Zulip Email Gateway (Aug 19 2022 at 11:29):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
So maybe you can make it happen. People easily being able to cull out
the bogus theories, whether logically correct due to bogus axioms, or
illogical due to exploitations and bugs, increases the chances that
people will take their time to study the good theories, which will
always be necessary for anything that's complicated.

The potential features I've naysaid, I'm sure I would find useful.

I don't see that it could be a bad thing, unless you end up like John
McAfee:

http://arstechnica.com/tech-policy/2013/05/john-mcafee-admits-vice-revealed-his-location-when-fleeing-belize-last-year/

Regards,
GB


Last updated: Apr 16 2024 at 08:18 UTC