Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] value "set [0,0::nat]" = "{0, 0}"::"nat set" –...


view this post on Zulip Email Gateway (Aug 19 2022 at 11:45):

From: Christoph LANGE <math.semantic.web@gmail.com>
Dear Isabelle developers,

in Isabelle2013 the expression

value "set [0,0::nat]"

evaluates to

"{0, 0}"
:: "nat set"

Looks like a bug to me. Even more so because

value "{0, 0}::(nat set)"

yields

"{0}"
:: "nat set"

as expected.

Cheers, and have a nice weekend,

Christoph

view this post on Zulip Email Gateway (Aug 19 2022 at 11:45):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
Christoph,

Duplicate elements are allowed in set notation: {0} and {0, 0} denote the same set.

theorem "set [0, 0::nat] = {0} &
         {0::nat} = {0, 0, 0, 0, 0}"
  by simp

From List.thy:

primrec set :: "'a list => 'a set" where
"set [] = {}" |
"set (x # xs) = insert x (set xs)"
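Unfolding these equations on the original example shows why "value" keeps the duplicate: evaluation only applies the primrec equations, never an absorption step (a sketch; the proof assumes the standard simpset):

```isabelle
(* set [0, 0] unfolds to nested inserts; the brace syntax
   then prints insert 0 (insert 0 {}) back as "{0, 0}" *)
lemma "set [0, 0::nat] = insert 0 (insert 0 {})"
  by simp
```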

Regards,
GB

view this post on Zulip Email Gateway (Aug 19 2022 at 11:46):

From: Tobias Nipkow <nipkow@in.tum.de>
On 03/09/2013 20:26, Makarius wrote:

On Fri, 30 Aug 2013, Christoph LANGE wrote:

in Isabelle2013 the expression

value "set [0,0::nat]"

evaluates to

"{0, 0}"
:: "nat set"

Looks like a bug to me. Even more so because

value "{0, 0}::(nat set)"

yields

"{0}"
:: "nat set"

as expected.

Sigh. Let me clarify this a little. If value "set [0,0::nat]" had returned, for
example, "{1,1}", I would agree with Christoph that it's a bug, and a highly
embarrassing one: value is meant to return a term that is equal to the start
term. Since there are three different evaluation engines for the value command
and there is also the simp proof method, you can obtain different results in
different situations. But in all cases you should get an equality. Christoph
also expected some kind of normal form. You should get this for ground terms of
types that have normal forms (like lists), but sets do not have normal forms.
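The three engines can also be selected explicitly; a sketch, assuming the Isabelle2013-era syntax where "value" takes an evaluator name in square brackets:

```isabelle
value [code] "set [0, 0::nat]"  (* compiled ML via the code generator *)
value [nbe]  "set [0, 0::nat]"  (* normalization by evaluation *)
value [simp] "set [0, 0::nat]"  (* evaluation by the Simplifier *)
```

Each result is equal to the input term, but the printed form may differ between engines, depending on their respective setups.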

Tobias

Someone who understands the details of the Isabelle/HOL code generator setup
needs to say if this is right or wrong, desirable or undesirable etc. I can
only make some general remarks.

To the constant amusement of the two Isabelle mailing lists, the word "bug" lost any meaning for me many years ago; I don't even remember when it was still in my vocabulary. You don't walk the streets of a city like London, Paris, or Rome, see old and beautiful buildings, maybe with some parts in need of cleaning or repair, and say "bug" or "needs to be fixed". Beyond a certain complexity of a system it simply does not make sense to pretend that there are individual small bits to be exterminated in order to make it better.

Whenever a user is surprised by unexpected behaviour, one has to ask the canonical questions: (1) does the system need to change to meet the user's expectations, (2) does the user need to change their expectations, or (3) does the system need to change the way it presents its results so that they look as expected to the user?

More to the point of this particular example: I am sensing here an expectation of a universal normal form for expressions in Isabelle/HOL. This concept exists in some schools of Type Theory, but not in HOL, nor in mathematical logic, nor in mathematics in general. (HOL is also not a programming language that you could run deterministically to get "the" result.) Instead there are just logical terms to start with (with some mathematical meaning, potentially expressible in more than one way), and tools operating symbolically on that representation.

For example, the proof method "simp" uses the Isabelle Simplifier to produce normal forms with respect to a given term rewriting system in the context, according to certain strategies and add-on tools (simprocs).

In contrast, the 'value' command uses the Isabelle/HOL code generator + Normalization by Evaluation to produce certain normal forms of ML datatype values, which are then printed again as terms. Thus you get the illusion that there is some "executable fragment" of HOL that evaluates efficiently (with the help of the underlying Isabelle/ML system).

There is no necessity for such tools to agree on the result, both conceptually and accidentally due to different tool setups. This non-uniformity of tools is sometimes bad, but I think it is generally a good thing not to insist on one grand unified logical environment that determines everything from the start. Thus new ideas and techniques may get implemented over time (here we need to think in terms of 20-30 years for the current generation of proof assistants).

Makarius

view this post on Zulip Email Gateway (Aug 19 2022 at 11:55):

From: Christoph LANGE <math.semantic.web@gmail.com>
2013-08-30 17:35 Gottfried Barrow:

Duplication of elements is allowed in sets.

Thanks, Gottfried, for explaining. Still seems odd to me, but I'm sure
there are good reasons, e.g. in terms of efficiency.

Cheers,

Christoph

view this post on Zulip Email Gateway (Aug 19 2022 at 11:56):

From: Gottfried Barrow <gottfried.barrow@gmx.com>
I assume it's just that they don't go the extra step of simplifying the
answer. I don't do a lot of computation, but I've seen that what "value"
returns and what "simp" returns are frequently not identical. I guess it
all depends on how "value" uses the simp rules, or its own custom rules.

Examples:

(1)
value "set [0::nat, 1, 0]"
(* Output: "{0, 1, 0}" *)

(2)
value "{0::nat, 1, 0}"
(* Output: "{1, 0}" *)

(3)
declare[[simp_trace=true]]
theorem "{0::nat, 1, 0} = z"
apply(simp)
(* Output: {0, Suc 0, 0} = z *)
oops

(4)
theorem "{0::nat, 0, 1} = z"
apply(simp)
(* Output: {0, Suc 0} = z
Partial Trace:
[1]Applying instance of rewrite rule "Set.insert_absorb2":
insert ?x1 (insert ?x1 ?A1) ≡ insert ?x1 ?A1 *)
oops

(5)
theorem "{0::nat, 1, 0} = {0::nat, 0, 1}"
apply(simp)
(* Output: {0, Suc 0, 0} = {0, Suc 0} *)
apply(auto)
(* Partial Trace:
[1]SIMPLIFIER INVOKED ON THE FOLLOWING TERM:
{0, Suc 0, 0} = {0, Suc 0}
[1]SIMPLIFIER INVOKED ON THE FOLLOWING TERM:
⋀x. x = 0 ⟹ x ∉ {} ⟹ 0 < x ⟹ x = Suc 0 *)
done

"Bug" is a risky word to use for the logic. I usually just describe the
behaviour, imply that it could be a bug, and then let someone tell me
it's a feature.

https://lists.cam.ac.uk/mailman/htdig/cl-isabelle-users/2013-February/msg00169.html

The lesson about permutative rewrite rules could apply here also. In
(3), they don't order the set with a simp rule, so rule
Set.insert_absorb2 can't be used as it is in (4).
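A sketch of the adjacency point, assuming the standard Set.thy rule insert_absorb2 ("insert x (insert x A) = insert x A"), which only fires when the duplicated element occurs in two directly nested inserts:

```isabelle
(* adjacent duplicates: insert_absorb2 applies *)
lemma "insert (0::nat) (insert 0 {Suc 0}) = insert 0 {Suc 0}"
  by (simp only: insert_absorb2)

(* {0, 1, 0} abbreviates insert 0 (insert 1 (insert 0 {})):
   the two 0s are not adjacent, so insert_absorb2 alone
   cannot remove the duplicate without a commutation step. *)
```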

In (2), "value" is eliminating the duplicate without ordering the set.

In (5), "simp" alone can't simplify it. Something gets expanded when I
use "auto". I can't find where the finite-set brace notation is defined,
so I can conveniently give up for now.

It's six of one, half a dozen of the other.

Regards,
GB


Last updated: Apr 16 2024 at 08:18 UTC