Stream: Machine Learning for Isabelle

Topic: Cambridge reading group


Welcome Bot (Jul 05 2019 at 10:00):

Welcome to #Machine learning and theorem proving reading group.

Yiannos (Jul 05 2019 at 10:02):

For the next reading group (Monday 12:00, FC22) we'll be discussing:

"Learning Heuristics for Automated Reasoning through Reinforcement Learning"

https://arxiv.org/pdf/1807.08058.pdf

Kind regards,

Anthony Bordg (Jul 07 2019 at 18:29):

Reinforcement learning algorithms have been shown to often find and exploit subtle implementation errors in the environment instead of solving the intended problem. While testing and manual inspection of the results is a feasible approach for board games and Atari games, it is neither possible nor sufficient in large-scale formal reasoning: a solver run is simply too large to inspect manually, and even a tiny mistake can invalidate the result. In order to ensure correctness, we need an environment with the ability to produce formal proofs, and to check the proofs with an independent tool.

Interesting!
@Yiannos @Angeliki Koutsoukou Argyraki
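To illustrate the point in the quoted passage, here is a minimal sketch (not from the paper; the solver, checker and all other names below are hypothetical) of an RL environment that only grants reward once an independent checker has validated the proof the solver produced:

```python
# Sketch of a "verified reward" RL environment: the agent's reward is gated
# by an independent proof checker, so exploiting solver bugs yields nothing.
from dataclasses import dataclass


@dataclass
class StepResult:
    state: object
    reward: float
    done: bool


class VerifiedProofEnv:
    """RL environment whose terminal reward depends on independent proof checking."""

    def __init__(self, solver, independent_checker):
        self.solver = solver                # produces proof objects; may be buggy
        self.checker = independent_checker  # small, trusted checker

    def step(self, action):
        state, proof, done = self.solver.apply(action)
        if done:
            # Reward 1 only if the independently checked proof is valid.
            reward = 1.0 if self.checker(proof) else 0.0
        else:
            reward = 0.0
        return StepResult(state, reward, done)
```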

Yiannos (Jul 08 2019 at 09:10):

Yeah, though that's a claim without a source.

Yiannos (Jul 08 2019 at 09:10):

Please feel free to make paper suggestions for the reading group in this stream.

Anthony Bordg (Jul 08 2019 at 12:43):

Fair enough.

Angeliki Koutsoukou-Argyraki (Jul 08 2019 at 14:10):

For next time: "Understanding deep learning requires rethinking generalization" by Zhang et al.
https://arxiv.org/abs/1611.03530

Anthony Bordg (Jul 14 2019 at 21:11):

@Lawrence Paulson @Angeliki Koutsoukou Argyraki @Yiannos
It's not at all clear what can be learnt from this preprint with respect to interactive theorem proving.

Yiannos (Jul 15 2019 at 13:19):

@Anthony Bordg The paper might lean towards learning theory, but I agree with Angeliki that it's useful for introducing the "internals" of deep learning. Do you have any papers you'd like the group to discuss?

Lawrence Paulson (Jul 16 2019 at 11:22):

The point of our weekly meetings is to discuss what we have done and where we are headed. Even where we have discussed machine learning papers, it's been in the context of our own ideas and plans as opposed to simply the paper itself. It's important to know what others have done when making our own plans.

The project tasks have three distinct threads of work: one connected with mathematics, one connected with machine learning, and one connected with computer algebra and related algorithms. For a while our focus was almost entirely on mathematics. It may be that we have gone too far the other way, allowing machine learning to push aside the other goals. That's also wrong. Possibly the right approach is to set aside two meetings a month (the first and third, say) for machine learning and the other two or three meetings for mathematics. And nobody should feel obliged to attend a meeting if they aren't interested in the topic being discussed at the time.

