Stream: Archive Mirror: Isabelle Users Mailing List

Topic: [isabelle] Getting the immediate consequences from applyi...


view this post on Zulip Email Gateway (Aug 22 2022 at 20:32):

From: Alex Meyer <alex153@outlook.lv>
Hello!

Isabelle, of course, is a proof assistant: one declares a theorem and tries to find a proof. That is fine. An enormous amount of formal knowledge (formalizations of notions, concepts, rules and axioms, formally proved theorems, and so on) has been gathered this way in the Isabelle language and format (e.g. the AFP). But is it possible to use all this wealth for forward reasoning, for calculation?

Simple questions: is it possible to use Isabelle as a calculator, e.g. to declare that x=4+5 and then ask it to find x? Is it possible to compute the _immediate_ consequences of some set of axioms and proved theorems under some valid rules or tactics? Of course, the complete set of consequences is infinite and contains a lot of garbage, but my question is about the immediate consequences only.
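(For the calculator question, a minimal sketch of what I mean, using two standard Isabelle commands — the `value` command evaluates closed terms, and `simp` can compute within a proof:)

```isabelle
(* Evaluate a closed term directly; Isabelle prints the result *)
value "4 + 5 :: nat"    (* output: "9" :: "nat" *)

(* Or declare x = 4 + 5 as a premise and let the simplifier compute x *)
lemma "x = 4 + 5 ⟹ x = (9 :: nat)"
  by simp
```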

The method described in the article "Guiding Inference in Relational Reinforcement Learning" http://cll.stanford.edu/~langley/papers/icarus.rrl.ilp05.pdf is the reason why I am asking for such a forward reasoning feature in Isabelle: external software (or even Isabelle tactics) can decide on a set of interesting premises (a subset of already proved theorems and assumed facts) and on promising theorems/rules/tactics, pass all this information to Isabelle, and Isabelle can then derive the immediate consequences — one step of forward reasoning. It is then up to the external system to estimate the usefulness of the deduced knowledge and to go forward (possibly using a reinforcement learning approach, neural or symbolic/relational).
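(A single forward step of this kind can already be expressed in Isabelle with the `OF` attribute, which composes existing theorems by resolution. A minimal sketch, where `fact1` is a hypothetical fact name standing in for something already proved:)

```isabelle
(* hypothetical fact, stated here with sorry just for illustration *)
lemma fact1: "A ∧ B" sorry

(* compose existing theorems forward: instantiate the premise of
   conjunct1 ("?P ∧ ?Q ⟹ ?P") with fact1 to obtain its consequence *)
thm conjunct1 [OF fact1]   (* yields: A *)
thm conjunct2 [OF fact1]   (* yields: B *)
```

An external system could enumerate such `rule [OF premises]` combinations over a chosen set of facts to generate exactly the immediate consequences.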

That is why a forward reasoning feature would be of importance for Isabelle, and it should be enabled somehow. Is it possible? It would be very sad to have to translate all the results of Isabelle formalization into other systems for forward reasoning (besides, I don't know of any such system that could do it, although https://content.sciendo.com/view/journals/ausi/8/1/article-p41.xml "Contralog: a Prolog conform forward-chaining environment and its application for dynamic programming and natural language parsing" is an example of how forward inference can be enabled in Prolog, which is originally a backward reasoning system as well; there are also some older papers about Datalog syntax for the Rete algorithm and forward reasoning).

Well — maybe the overall vision is to try to use Isabelle for encoding an expert system and knowledge base. Isabelle is good at automating the logics one can embed into it, and that is why some mixing of logics comes for free in such a knowledge base. Of course, there is always the question of selecting premises and selecting applicable rules, but, as I said, that is up to the external system, possibly a neural-symbolic/neural agent that learns and improves itself using reinforcement learning.

Essentially, Isabelle is originally about providing a proof of some given statement. But what about the use case where one is eager to deduce more or less all possible theorems about some concept? Forward reasoning into this interesting region of possible lemmas/theorems seems to be the right approach; exactly that is described in the "Guiding Inference..." paper, and exactly that is what I am trying to do with Isabelle.

Thanks, Alex

view this post on Zulip Email Gateway (Aug 22 2022 at 20:32):

From: Alexandre Rademaker <arademaker@gmail.com>
Alex,

On forward chaining rules in Prolog, there is the mature CHR approach:

https://en.m.wikipedia.org/wiki/Constraint_Handling_Rules

The connection between ITP and KR is very interesting indeed. I have been looking for projects about that too. I know about some work in Coq:

https://link.springer.com/chapter/10.1007/978-3-642-35786-2_11

and I am trying some experiments in Lean.

Alexandre
Sent from my iPhone


Last updated: Nov 21 2024 at 12:39 UTC