Explainable Belief Merging
Duration: 01.11.2022 - 30.10.2025
Partner: Prof. Dr. Matthias Thimm, Artificial Intelligence Group, Faculty of Mathematics and Computer Science, FernUniversität in Hagen, Germany
Funding reference: GZL SO 956/2-1
Funded by: DFG
Project funds (€): 319,800
Abstract
Dealing adequately with noisy and inconsistent information is one of the core challenges in knowledge-driven AI applications. In scenarios where experts share their knowledge in order to build a joint knowledge base, or where sensor information has to be integrated, inconsistencies easily occur. In the field of Knowledge Representation and Reasoning (KRR), the formal framework for addressing such problems is belief merging (together with its related areas such as belief change and information fusion), which provides computational approaches for automatically resolving such inconsistencies in a sensible way. Belief merging is closely related to judgement and preference aggregation and features its own version of Arrow's impossibility result, in the sense that no merging approach can be "rational" in every desirable respect. This calls for semi-automatic methods that take human background knowledge into account when knowledge has to be merged, so that important pieces of information are not removed. However, classical belief merging approaches usually work in a way that is hard for users to interpret, choosing the pieces of information to be removed based on, e.g., notions of distance between interpretations.
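To illustrate why such distance-based results can be hard to interpret, consider a minimal sketch in the standard propositional setting. The knowledge bases and the sum/Hamming-distance construction below are not taken from the project description; they are a textbook-style example chosen purely for illustration:
\[
\begin{array}{l}
K_1 = \{p \land q\}, \quad K_2 = \{\neg p \land q\}, \quad K_3 = \{p \land \neg q\},\\[4pt]
d(\omega, K_i) = \min_{\omega' \models K_i} d_H(\omega, \omega'), \qquad
\Delta^{\Sigma}(\omega) = \textstyle\sum_{i=1}^{3} d(\omega, K_i),\\[4pt]
\Delta^{\Sigma}(pq) = 0+1+1 = 2, \quad
\Delta^{\Sigma}(p\bar{q}) = 1+2+0 = 3, \quad
\Delta^{\Sigma}(\bar{p}q) = 1+0+2 = 3, \quad
\Delta^{\Sigma}(\bar{p}\bar{q}) = 2+1+1 = 4.
\end{array}
\]
The merged result keeps only the interpretation with minimal aggregated distance, i.e., it is equivalent to $p \land q$; the only justification such an operator can offer a user is this table of numeric distances.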
In this project, we address the above challenge of "explainable belief merging" by developing new belief merging operators that are able to explain their results and allow for the semi-automatic repair of knowledge-driven systems. Our method for this endeavour will be based on the computation and analysis of Craig interpolants. Informally, given two knowledge bases, an interpolant is a formula that can be derived from one of the knowledge bases and whose insertion into the other leads to an inconsistency. An interpolant therefore provides a concise explanation of why a particular conflict between two knowledge bases occurs (see the small example below). We believe that the information obtained from analysing interpolants will allow us to extend existing belief merging approaches with better explanation capabilities, and even to develop new formal approaches to belief merging. In fact, preliminary work by the grant applicants has already shown that interpolants can be used to measure the conflicts between multiple knowledge bases in a sensible fashion. A further aspect we wish to explore in this project is the application of the belief merging framework to more expressive logics. Knowledge-driven scenarios usually require functionality such as arithmetic and first-order reasoning, but the bulk of the work on belief merging is concerned with propositional logic. We will therefore also lay the foundations for using belief merging in more expressive logics and investigate the use of interpolants therein.
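As a minimal propositional sketch of this reading of an interpolant, consider two hypothetical knowledge bases chosen purely for illustration:
\[
K_1 = \{\, p,\; p \rightarrow q \,\}, \qquad K_2 = \{\, \neg q,\; r \,\}, \qquad I = q,
\]
\[
K_1 \models I, \qquad K_2 \cup \{I\} \models \bot, \qquad \mathrm{var}(I) \subseteq \mathrm{var}(K_1) \cap \mathrm{var}(K_2) = \{q\}.
\]
The interpolant $I = q$ is stated entirely in the shared vocabulary and pinpoints the source of the conflict: $K_1$ entails $q$ while $K_2$ rejects it, whereas the symbols $p$ and $r$ play no role in the disagreement.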