Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] Re: Getting the immediate consequences from ...


view this post on Zulip Email Gateway (Aug 22 2022 at 20:32):

From: Alex Meyer <alex153@outlook.lv>
I am aware of the Chapter "6.2 Variations of backward vs. forward reasoning" (with samples of forward proof style) of https://isabelle.in.tum.de/library/HOL/HOL-Isar_Examples/document.pdf but it is no answer for my question, because such reasoning requires the user to provide both premises and the consequences and Isabelle then just checks the suggested proof step. Of course, one should not ask to the proof assistant to generated something. But the following arguments are in favour for proof assistant to be creative/generative as well:

* Maybe internally the check of a forward inference step is done by generating the consequence and then comparing the generated consequence with the provided one, so maybe one can make the generated consequence explicit;
* https://www.isa-afp.org/entries/FOL_Seq_Calc1.html is an example of how a sequent calculus can be formalized in Isabelle, and any sequent calculus provides direct means for forward reasoning, so maybe it is possible to embed any sequent calculus in Isabelle and then do some forward reasoning with it (why embed it otherwise?).
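On the first point: in Isabelle/HOL the generated consequence of a forward step can indeed be made explicit with rule attributes such as OF and THEN, which instantiate a rule's premises and let the thm command display the resulting fact. A minimal sketch (the theory name is invented for illustration):

```isabelle
theory Forward_Step
  imports Main
begin

(* OF instantiates the premises of a rule with given facts;
   Isabelle itself generates the conclusion, which thm displays. *)
thm conjI[OF TrueI TrueI]   (* displays: True ∧ True *)

(* THEN feeds a fact into the first premise of a rule. *)
thm TrueI[THEN disjI1]      (* displays: True ∨ ?Q *)

end
```

So the checking-by-generation intuition is essentially how resolution-style forward composition already works at the level of theorem attributes.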

Alex



From: Alex Meyer
Sent: Tuesday, 10 September 2019 20:49
To: cl-isabelle-users@lists.cam.ac.uk <cl-isabelle-users@lists.cam.ac.uk>
Subject: Getting the immediate consequences from applying some theorem or tactic, Isabelle/HOL for knowledge base completion and forward chaining?

Hello!

Isabelle, of course, is a proof assistant: one declares a theorem and tries to find a proof. That is fine. An enormous amount of formal knowledge (formalizations of notions, concepts, rules and axioms, formally proved theorems and so on) has been gathered this way in the Isabelle language and format (e.g. the AFP). But is it possible to use all this wealth for forward reasoning, for calculation?

Simple questions: is it possible to use Isabelle as a calculator, e.g. to declare that x=4+5 and then have it find x? Is it possible to calculate the _immediate_ consequences of some set of axioms and proved theorems under some valid rules or tactics? Of course, the complete set of consequences is infinite and contains a lot of garbage, but my question is about the immediate consequences only.
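For the calculator part of the question, Isabelle/HOL already offers something close: the value command evaluates closed terms, and a schematic_goal lets simp solve for an unknown. A minimal sketch (theory name invented for illustration):

```isabelle
theory Calc_Demo
  imports Main
begin

(* Evaluate a closed term; the result appears in the output panel. *)
value "(4::nat) + 5"   (* output: "9" :: nat *)

(* Solve for a schematic variable: simp normalises the left-hand
   side and instantiates ?x by reflexivity. *)
schematic_goal "(4::nat) + 5 = ?x"
  by simp

end
```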

The method described in the article "Guiding Inference in Relational Reinforcement Learning" http://cll.stanford.edu/~langley/papers/icarus.rrl.ilp05.pdf is the reason why I am asking for such a forward reasoning feature in Isabelle: external software (or even Isabelle tactics) can decide on a set of interesting premises (a subset of already proved theorems and assumed facts) and on promising theorems/rules/tactics, pass all this information to Isabelle, and Isabelle can then derive the immediate consequences, i.e. perform one step of forward reasoning. It is then up to the external system to estimate the usefulness of the acquired/deduced knowledge and to go forward (possibly using a reinforcement learning approach, neural or symbolic/relational).
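A single forward step of this kind can already be expressed in Isar: given selected premises and a rule, the OF attribute makes Isabelle compute the consequence itself. A minimal sketch in a notepad block (names and setting illustrative only):

```isabelle
theory One_Step
  imports Main
begin

notepad
begin
  fix P Q :: bool
  assume p: "P" and pq: "P ⟶ Q"
  (* One forward-chaining step: modus ponens applied to the
     selected premises; Isabelle generates the consequence. *)
  note step = mp[OF pq p]
  thm step   (* displays: Q *)
end

end
```

An external system would still have to choose which facts and which rule to combine, as the paper suggests, but the deduction of the immediate consequence itself is native.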

That is why a forward reasoning feature would be important for Isabelle; can it be enabled somehow? It would be very sad to have to translate all the results of Isabelle formalization into other systems for doing forward reasoning. Besides, I don't know of any such system that could do it, although https://content.sciendo.com/view/journals/ausi/8/1/article-p41.xml "Contralog: a Prolog conform forward-chaining environment and its application for dynamic programming and natural language parsing" is an example of how forward inference can be enabled in Prolog, which is also originally a backward reasoning system; there are also some older papers about a Datalog syntax for the Rete algorithm and forward reasoning.

Well, maybe the overall vision is to try to use Isabelle for encoding an expert system and knowledge base. Isabelle is good at automating the logics one can embed into it, and that is why some mixing of logics is automatically enabled for such a knowledge base. Of course, there is always the question of selecting premises and selecting applicable rules, but, as I said, that is up to the external system, possibly a neural-symbolic/neural agent that is learning/improving itself using reinforcement learning.

Essentially, Isabelle is originally about providing a proof of some statement. But what about the use case when one is eager to deduce more or less all possible theorems about some concept? Forward reasoning into this interesting region of possible lemmas/theorems seems to be the right approach; that is exactly what is described in the "Guiding inference..." paper and exactly what I am trying to do with Isabelle.

Thanks, Alex

view this post on Zulip Email Gateway (Aug 22 2022 at 20:32):

From: Walther Neuper <walther.neuper@jku.at>
Hi Alex,

Isabelle/Isar has supported calculational proofs from the beginning, see [2].
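For readers who have not seen one, a calculational Isar proof chains equations with also/finally; a minimal sketch, reusing Alex's x = 4 + 5 example (theory name invented for illustration):

```isabelle
theory Calc_Chain
  imports Main
begin

notepad
begin
  fix x :: nat
  assume "x = 4 + 5"
  also have "... = 9" by simp   (* "..." abbreviates the previous rhs *)
  finally have "x = 9" .        (* facts combined by transitivity *)
end

end
```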

The earliest support by theorem provers for calculations in a format
close to traditional paper-and-pencil work seems to go back to
Ralph-Johan Back. He calls this format "structured derivations" [1] and
used PVS in the background.

A similar prototyping project uses Isabelle in the background [3].

The ThEdu [4] workshop series pursues the kind of ideas you seem to be looking for.

Hope this helps,

Walther

[1]
http://users.abo.fi/Ralph-Johan.Back/index.php?page=searchform.php&menu=3

[2] https://link.springer.com/chapter/10.1007%2F3-540-44755-5_7

[3] http://eptcs.web.cse.unsw.edu.au/paper.cgi?ThEdu17.6

[4] https://www.uc.pt/en/congressos/thedu


Last updated: Nov 21 2024 at 12:39 UTC