Nina Gierasimczuk (Technical University of Denmark)

From grammar inference to learning action models. A case for Explainable AI

Abstract: The endeavour of finding patterns in data spans many different fields, from computational linguistics to self-driving cars. Machine learning is undeniably a great source of progress across the board. In my talk I will resurrect an older way of thinking about learning, along the lines of symbolic artificial intelligence. I will report on recent work on inferring action models from descriptions of action executions. Such a framework constitutes an informed, reliable, and verifiable way of learning, in which the learner can not only classify objects correctly, but can also report the symbolic representation on which she bases her conjectures. The ability of AI to “explain itself” in this way is nowadays a growing demand in the community. The action models I use are those of Dynamic Epistemic Logic, and the methodology is automata-theoretic in spirit. Needless to say, Prof. Johan van Benthem’s contributions in both fields are tremendous.

The results on DEL action model learning are based on: Thomas Bolander and Nina Gierasimczuk, “Learning to Act: Qualitative Learning of Deterministic Action Models”, Journal of Logic and Computation, Volume 28, Issue 2, March 2018, pp. 337–365.