Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] General nitpick/sledge info & counterX found o...


view this post on Zulip Email Gateway (Aug 18 2022 at 20:33):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
First, I ask a straightforward question.

If I declare a type, such as

typedecl sT,

am I effectively creating a non-empty set, which has a power set
associated with it? I've read several times that a type can be
interpreted as a non-empty set.
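
A quick sanity check of that reading (just a throwaway sketch; "Scratch"
is a dummy theory name): after a bare typedecl, the new type is already
provably inhabited, without my adding any axioms.

theory Scratch
imports Main
begin

typedecl sT

(* provable out of the box: sT has at least one element,
   and the set of all its elements is non-empty *)
lemma "EX x::sT. x = x"
  by simp

lemma "(UNIV::sT set) ~= {}"
  by simp

end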

Second, I try to bump up the value of this email with some setup
information for Nitpick and Sledgehammer. I'm in the "learn how to set up
and use Nitpick and Sledgehammer" mode. I'm almost out of it, but I try
to explore it in depth so that when I get back to the drudgery of
working through tutorials and exercises, I know how to use these
automated tools for my own tangents. And, since certain logics can never
be used to prove their own consistency, I might as well run Nitpick on
everything, if only to check the more likely scenario that I've
introduced an inconsistency.

Third, if you want to read that far, I have a type, a constant, a
predicate, one axiom, and one theorem, and I get a counterexample from
Nitpick. The answers to my questions on that may be obvious, but I ask
them anyway. Not understanding subtleties can leave me with an overly
simple view of the universe, which might take me months to find out.

NITPICK STUFF

If you run "nitpick [verbose]" under a theorem, you'll get this message
(at least for Cygwin):

The following solvers are configured: "MiniSat_JNI",
"CryptoMiniSat_JNI", "Lingeling_JNI", "SAT4J", "SAT4J_Light".

Jasmin Blanchette rates these SAT solvers like this:

"SAT4J is very slow, but the others are roughly equivalent, with
Lingeling slightly better than CryptoMiniSat, and CryptoMiniSat slightly
better than vanilla MiniSat."

Though CryptoMiniSat_JNI and Lingeling_JNI show up, to use them you need
Kodkodi 1.5.2, from Blanchette's site:

http://www4.in.tum.de/~blanchet/#software

The install instructions for a Kodkodi upgrade are on page 3 of
nitpick.pdf.

If you want to be compulsive, after setting some options with something
like "nitpick_params [verbose,timeout=120,user_axioms=true]", which also
lists how all the options are set in the output window, you can run
all the SAT solvers under a theorem with something like:

nitpick[verbose,user_axioms,sat_solver=MiniSat_JNI]
nitpick[verbose,user_axioms,sat_solver=CryptoMiniSat_JNI]
nitpick[verbose,user_axioms,sat_solver=Lingeling_JNI]
nitpick[verbose,user_axioms,sat_solver=SAT4J]
nitpick[verbose,user_axioms,sat_solver=SAT4J_Light]

SLEDGEHAMMER STUFF

Jasmin's site also has the upgrades to the Sledgehammer solvers CVC3,
E, SPASS, and Z3. So far, I haven't been big on wanting to change the
default Isabelle2012 distribution, but I'll work on doing this, which
will keep me in the "learn the auto-tools" mode for another day.

In the folder Isabelle2012/contrib, you can check which versions of
Kodkodi, CVC3, E, SPASS, and Z3 you have.

SLEDGEHAMMER YICES

Working hard to do more than ask questions, I am the official liaison of
important, breaking news on [isabelle-dev], which is a joke, of course.

Put YICES_INSTALLED="yes" in your "~/Isabelle2012/etc/settings" file,
and yices will be one of the available solvers when you use the command
"sledgehammer supported_provers" under a theorem. The pertinent thread
is "sledgehammer / yices":

http://www.mail-archive.com/search?q=sledgehammer+%2F+yices&l=isabelle-dev%40mailbroy.informatik.tu-muenchen.de

NITPICK'S COUNTERX: PENCIL & PAPER LOGIC vs. ISABELLE'S LOGIC

The foundation of traditional "pencil & paper" logic can never be
exactly the same as Isabelle's logical foundation. Our brain maps formulas
to a value of either true or false; Isabelle uses typed lambda calculus
to map formulas to true or false. I understand that the two lowest-level
foundations of the two logics aren't exactly the same.

Suppose I have an "extreme minimalist ZF sets logic". I've been given
the same logical symbols, the same non-logical symbols, and the same
rules for using them as ZF set theory.

I write one formula as an axiom, "(1) there exists a set which contains
no elements", (? x. ! y. ~(y IN x)).

I write one formula as a theorem, "(2) the set described by (1) is
unique", (! u. ((! x. ~(x IN u)) --> u = emptyS)).

Setting aside that I'm taking liberties with (2) by using a constant
name, and that I can't prove (2) because I haven't defined "=" with an
axiom, I would be very surprised if someone told me that a
counterexample to (2) could be found using only formula (1), which is
all that's available to be used. Well, not quite all: the FOL
specification requires that "=" be given as a logical symbol, and I do
need some sets available for a counterexample.

Alright, but with Isabelle, I can never be so minimalist. My minimalist
logic on top of HOL needs a declared type, and "=" is already given as a
logical symbol, (partly or fully) axiomatized by four axioms starting at
line 196 of src/HOL/HOL.thy. I understand the first two axioms, but not
the last two.
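
For reference, that block of HOL.thy looks roughly like this (quoted
from memory of the Isabelle2012 sources, so names and the exact line
number may be slightly off):

axiomatization where
  refl: "t = (t::'a)" and
  subst: "s = t ==> P s ==> P t" and
  ext: "(!!x::'a. (f x::'b) = g x) ==> (%x. f x) = (%x. g x)" and
  the_eq_trivial: "(THE x. x = a) = (a::'a)"

The first two are reflexivity and substitution; the last two are an
eta-expanded form of extensionality and a characterization of the
definite description operator THE.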

My logic needs a declared type, "typedecl sT", and I assume I
effectively introduce a non-empty set by doing that, which means there
are sets available for a counterexample.

I show my minimalist world below and attach it as a file. The theorem
"emS_is_unique1" fails Nitpick. After "=" is defined for sets,
"emS_is_unique2" doesn't fail Nitpick, and is proved by metis.

Some responses, whether sent or thought, might be:

"There's not a problem, don't make a mountain out of a molehill. Define
equality like you have to do anyway."

Or, "If a consistency results because your setup is too simplistic,
you'll probably find out. If it takes you 6 months to find it out, well,
that's the cost of doing business in the world of proof assistants."

Regards,
GB

theory sTs_Empty_Exist_Nitpick
imports Main
begin
declare[[show_brackets=true]]
typedecl sT
consts emS::sT
consts inS::"sT => sT => bool" (infixl "IN" 55)

axiomatization where
eeA:"!x. ¬(x IN emS)"

theorem emS_is_unique1:
"!u. (!x. ~(x IN u)) --> u = emS"
nitpick[verbose,user_axioms] oops

axiomatization where
exA: "!u. !v. ((!x. x IN u <-> x IN v) --> u = v)"

theorem emS_is_unique2:
"!u. (!x. ~(x IN u)) --> u = emS"
by (metis eeA exA)
end
sTs_Empty_Exist_Nitpick.thy

view this post on Zulip Email Gateway (Aug 18 2022 at 20:33):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
I know that "=" as axiomatized in HOL.thy is a HOL function as
introduced on line 98: eq :: "['a, 'a] => bool".

I naively adopt the logical symbols /\, \/, -->, ~, =, \<forall>,
\<exists>, and <-> as my FOL logical symbols.

Regards,
GB

view this post on Zulip Email Gateway (Aug 18 2022 at 20:33):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
Here is my conclusion/lesson, which is merely stating the obvious: If I
don't axiomatize things right, I get inconsistencies.

axiomatization where
eeA:"!x. ¬(x IN emS)"

theorem emS_is_unique1:
"!u. (!x. ~(x IN u)) --> u = emS"
nitpick[verbose,user_axioms] oops

theorem emS_is_unique1_negation:
"~(!u. (!x. ~(x IN u)) --> u = emS)"
nitpick[verbose,user_axioms] oops

So Nitpick finds counterexamples for both.

Unfortunately, I don't understand types, lambda calculus, and the "="
function well enough to know why.

So, I add the standard axioms, without completely understanding the
underlying engine, and hope for the best. There's always the possibility
that it's something embarrassingly simple.

Still, metis is pretty impressive, even though it's being applied to a
simple problem. I negated emS_is_unique1 so that it started with an
\<exists>, and then negated the whole thing with "~", and after defining
set equality, metis proved it using the non-negated version of the
theorem right above it. These things don't happen by accident, and
Sledgehammer is there to help make things happen.

Well, I go ahead and try to think this out. If emS_is_unique1 is false,
then:

? u. (! x. ~(x IN u)) & (u ~= emS).

If types allow an infinite number of "u::sT" to be lurking out there,
then there's a "u" distinct from emS for which I've never said that "u =
emS", therefore "u ~= emS", and which I've never used on the RHS of an
"IN", therefore (!x. ~(x IN u)).

What blows my analysis apart is that Nitpick finds a counterexample for
emS_is_unique1_negation. I want to say that Nitpick shows that the
negation of the theorem is true, but that would require me to ask an
embarrassing question, like, "Uh, can I ask a simple question? If
Nitpick finds a counterexample to a theorem, does that mean that the
negation of the theorem is true?"

Things being both true and false doesn't make a lot of sense.
Regards,
GB

view this post on Zulip Email Gateway (Aug 18 2022 at 20:33):

From: Christian Sternagel <c-sterna@jaist.ac.jp>
Just to clarify: YICES_INSTALLED="yes" alone won't work if the yices
binary is not installed and set up correctly (as described in "isabelle
doc sledgehammer"), since the standard Isabelle bundle does not include
this binary due to licensing issues.

cheers

chris

PS: Z3_NON_COMMERCIAL="yes" (in your etc/settings) is also a way to add
an additional prover and works out of the box.
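
In settings-file form, the two variables mentioned in this thread look
like this (the yices binary itself still has to be installed and set up
as described above):

# additions to etc/settings, e.g. ~/Isabelle2012/etc/settings
YICES_INSTALLED="yes"       # requires the yices binary to be installed separately
Z3_NON_COMMERCIAL="yes"     # works out of the box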

view this post on Zulip Email Gateway (Aug 18 2022 at 20:33):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
That's good info because I would have missed that. The auto-provers
return lots of errors, and it wasn't apparent to me that yices wasn't
installed. I searched the web based on the provers that showed up for
"sledgehammer supported_provers", but I did my web search before setting
YICES_INSTALLED="yes".

Here are the downloads that I found on the web:

Yices: http://yices.csl.sri.com/download.shtml

Vampire: http://www.vprover.org/

alt_ergo: http://alt-ergo.lri.fr/, http://why3.lri.fr/

leo2: http://www.ags.uni-sb.de/~leo/

satallax: http://www.ps.uni-saarland.de/~cebrown/satallax/

Yices has a Cygwin binary, and it works if I correctly use Cygwin's path
naming like /cygdrive/e/.../yices-1.0.35/bin/yices. I wouldn't be able
to use it without the binary, since there's no remote yices.

I couldn't get Vampire working on Cygwin last year, and I don't expect
better results this year, plus I can't get the file off of their web
site. But there's remote_vampire.

The alt_ergo installer got deleted by Norton, plus Why3 has to be
installed to use it, so I forget about that one for now.

Leo2 and Satallax don't come with a Cygwin binary, so I forget about
them, since I don't try to build anything with Cygwin. But there are
remote versions for both of these.

I have them all in now. I count twenty-one of them. I wonder about that
dummy_thf. It doesn't ever solve anything. That "dummy" sounds a little
suspicious.

I wonder if I really need all three of spass, spass_new, and spass_old,
but I leave them all in.

Regards,
GB

view this post on Zulip Email Gateway (Aug 18 2022 at 20:34):

From: Makarius <makarius@sketis.net>
On Thu, 19 Jul 2012, Gottfried Barrow wrote:

Here is my conclusion/lesson, which is merely stating the obvious: If I
don't axiomatize things right, I get inconsistencies.

Axiomatizations are very difficult to get right. Experts usually avoid
them altogether, and use definitions exclusively. Definitions can be as
diverse as 'definition', 'inductive', 'function', 'locale', ...

Unfortunately, I don't understand types, lambda calculus, and the "="
function well enough to know why.

Lambda calculus is just notation for mathematical functions; '=' is plain
classical equality on arbitrary mathematical entities, including
functions.

HOL is simpler than FOL, because it does not impose any restrictions
on quantification and equality.

Types in HOL are just separate syntactic domains, which are always
non-empty by construction of the logic. This is like the implicit
(single) domain of discourse in FOL, but HOL allows many of them, also
constructors to operate on such domains: nat, nat => bool, nat set,
nat list, ..., 'a * 'b, ...

Makarius

view this post on Zulip Email Gateway (Aug 18 2022 at 20:34):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
On 7/20/2012 6:23 AM, Makarius wrote:

Axiomatizations are very difficult to get right. Experts usually avoid
them altogether, and use definitions exclusively. Definitions can be as
diverse as 'definition', 'inductive', 'function', 'locale', ...

I briefly thought about how to use definitions, just to try and use good
discipline. Suppose I have 7 standard axioms that I have to use. Then I
guess I could define an object, function, or property with a big
conjunction using the 7 axioms.

If I see the right example, it'll become clear. Using axioms lets me
start working on a real application right now as a diversion, instead of
waiting until I've become thoroughly educated, which could be a long
process. Trying to apply what I know at any point in time helps drive it
into long term memory, and usually requires me to learn something in
addition to the knowledge I'm trying to apply.
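
As a sketch of what that could look like for the little theory from my
first mail, the two axioms can become assumptions of a locale, so that
nothing is asserted globally (the theory and locale names below are just
placeholders):

theory sTs_Empty_Exist_Locale
imports Main
begin

typedecl sT

(* the former axioms eeA and exA are now locale assumptions;
   HOL's "=" on bool plays the role of "<->" *)
locale min_sets =
  fixes emS :: "sT"
    and inS :: "sT => sT => bool"  (infixl "IN" 55)
  assumes eeA: "!x. ~(x IN emS)"
    and exA: "!u. !v. ((!x. (x IN u) = (x IN v)) --> u = v)"
begin

theorem emS_is_unique: "!u. (!x. ~(x IN u)) --> u = emS"
  by (metis eeA exA)

end

end

An interpretation of the locale for a concrete model would then have to
discharge eeA and exA as proof obligations, instead of taking them on
faith.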

Lambda calculus is just notation for mathematical functions; '=' is
plain classical equality on arbitrary mathematical entities, including
functions.

HOL is simpler than FOL, because it does not impose any restrictions
on quantification and equality.

Types in HOL are just separate syntactic domains, which are always
non-empty by construction of the logic. This is like the implicit
(single) domain of discourse in FOL, but HOL allows many of them, as well as
constructors to operate on such domains: nat, nat => bool, nat set,
nat list, ..., 'a * 'b, ...

Thanks for the summary. Not understanding subtleties can kill me, but I
also need to understand the big picture.

Quantifying over functions has me a little worried. I see the need to
quantify over a type like "sT => bool", rather than over just a
primitive type variable. If I'm doing that in what would normally be a
FOL system of logic, then I've entered the realm of higher-order logic.
I don't care about that. I only care whether I get busted for doing it.
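
Just to make sure I have the syntax right, a statement whose quantifiers
range over "sT => bool" does at least parse and prove (a throwaway
lemma; it needs only "typedecl sT" and imports Main):

lemma "!(f::sT => bool). !g. (!x. f x = g x) --> f = g"
  by (simp add: fun_eq_iff)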

I head down the road semi-blind, and find out where it leads.

Regards,
GB

view this post on Zulip Email Gateway (Aug 18 2022 at 20:35):

From: Makarius <makarius@sketis.net>
Why? Semantically a function is just a set, and in mathematics you
quantify over collections of sets all the time.

You can take HOL as a simplified version of FOL + set theory, but with an
explicit type system. Paradoxically, HOL is weaker than FOL + ZF set
theory, despite arbitrary abstraction and quantification in HOL.

Anyway, in practice you don't need to worry about the foundations and
their relative logical strength. You just do your applications with
high-level specification mechanisms in Isabelle (datatype, function,
inductive, locale etc.), and preferably not axiomatization.

Makarius

view this post on Zulip Email Gateway (Aug 19 2022 at 07:47):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
On 7/20/2012 9:34 AM, Makarius wrote:

On Fri, 20 Jul 2012, Gottfried Barrow wrote:

Quantifying over functions has me a little worried.

Why? Semantically a function is just a set, and in mathematics you
quantify over collections of sets all the time.

Because I read things that I don't completely understand that make me
worry, such as that certain things can be proved about FOL that can't be
proved about HOL. Was that "completeness" that I read about? I don't
even look it up.

Until recently, I made my money as a technician working in telecom. I've
seen it over and over again. The best educated and hardest working
engineers don't typically make fundamental mistakes, because they
understand the fundamentals. Those with an inferior education, even if
they work hard, make fundamental mistakes that doom their designs.

You understand foundations all the way to the bottom. I try to
understand what I don't understand, so as not to be the guy who makes
the most fundamental of mistakes. A healthy amount of worry keeps me on
my toes.

You can take HOL as a simplified version of FOL + set theory, but with
an explicit type system. Paradoxically, HOL is weaker than FOL + ZF set
theory, despite arbitrary abstraction and quantification in HOL.

Here, I don't even know what "weaker" means because I've found no
standard definition anywhere for "logically weaker".

However, I've found enough information to assume that it means that
everything you can prove with HOL can be proved with FOL+ZF, but not
vice versa.

Immediately, I ask the question, "How can what each logic proves be
compared? They're two different logics, with two different sets of
primitive symbols, with two different rules for combining the symbols".

I answer my question and say, "Ah, those logicians, they must have
formalized the study of translating statements from one logic into another."

Anyway, in practice you don't need to worry about the foundations and
their relative logical strength.

Well, I think you're projecting your understanding of foundations onto
me. You see clearly to the bottom. I see through a glass darkly.

You just do your applications with
high-level specification mechanisms in Isabelle (datatype, function,
inductive, locale etc.), and preferably not axiomatization.

And so I'll make the switch as soon as possible, and either redo
everything based on some sort of definition, or I'll maintain two
different versions. I'm jumping to conclusions here about me being able
to achieve overall success with my current idea.

Thanks,
GB

view this post on Zulip Email Gateway (Aug 19 2022 at 07:48):

From: Makarius <makarius@sketis.net>
On Fri, 20 Jul 2012, Gottfried Barrow wrote:

On 7/20/2012 9:34 AM, Makarius wrote:

On Fri, 20 Jul 2012, Gottfried Barrow wrote:

Quantifying over functions has me a little worried.

Why? Semantically a function is just a set, and in mathematics you
quantify over collections of sets all the time.

Because I read things that I don't completely understand that make me
worry, such as that certain things can be proved about FOL that can't be
proved about HOL. Was that "completeness" that I read about? I don't
even look it up.

It depends on how completeness is formalized. The book by Peter B. Andrews
"An Introduction to Mathematical Logic and Type Theory: To Truth Through
Proof" contains the standard canon of correctness and completeness 3
times: (1) propositional logic, (2) first-order logic, (3) higher-order
logic. The latter goes back to Henkin 1950, who was discussing the Church
1940 version of HOL, which was not called HOL back then.

For using Isabelle/HOL, all of this is of little practical relevance.
You just do mathematical modelling with the definitional mechanisms that
the system gives you, and trust that the internal foundations
(implementation wrt. the core logic) work out. It is like getting on a
high-speed train: you merely need to buy a ticket to participate, not
study all the technical manuals first.

You can take HOL as a simplified version of FOL + set theory, but with
an explicit type system. Paradoxically, HOL is weaker than FOL + ZF set
theory, despite arbitrary abstraction and quantification in HOL.

Here, I don't even know what "weaker" means because I've found no standard
definition anywhere for "logically weaker".

However, I've found enough information to assume that it means that
everything you can prove with HOL can be proved with FOL+ZF, but not
vice versa.

Yes. HOL can live semantically in a small subspace of FOL+ZF. So you can
do most of classical mathematics, with some restrictions. It will take a
long time until you get to the foundational limits, though.

Makarius


Last updated: Apr 18 2024 at 16:19 UTC