
Argumentation-based justification and explanation in AI ethics

08.11.2019 | 14:00 c.t.

Colloquium talk by Beishui Liao, Zhejiang University, China

Ethical and explainable artificial intelligence is an interdisciplinary research area involving computer science, philosophy, logic, the social sciences, and other fields. For an ethical autonomous system, the ability to justify and explain its decision-making is a crucial aspect of transparency and trustworthiness. In this talk, I will first discuss some basic notions of explainable AI and ethical AI, and then introduce an argumentation-based approach to represent, justify, and explain the decision-making of a value-driven agent (VDA). Using a novel formalism, some of the implicit knowledge of a VDA is made explicit. Based on this formalism, we formulate an approach to justify and explain the decision-making process of a VDA in terms of a typical argumentation formalism, Assumption-based Argumentation (ABA). As a result, a VDA in a given situation is mapped onto an argumentation framework in which arguments are defined by the notion of deduction. Actions that are justified with respect to argumentation semantics correspond to solutions of the VDA, and the acceptance (or rejection) of arguments and their premises in the framework explains why an action was selected (or not). Furthermore, we go beyond the existing version of VDA by considering not only practical reasoning but also epistemic reasoning, so that inconsistencies in the knowledge of a VDA can be identified, handled, and explained.
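To make the acceptance idea concrete, the following is a minimal Python sketch, not code from the talk: it computes the grounded extension of a small abstract argumentation framework, the kind of framework that an ABA instantiation ultimately yields. The arguments a1 through a3 and the reading of the attacks are hypothetical, standing in for deductions about candidate actions of a VDA.

    # A minimal sketch of acceptance under grounded semantics for an
    # abstract argumentation framework (AF). Purely illustrative; the
    # talk's approach builds such frameworks via Assumption-based
    # Argumentation (ABA) from the knowledge of a value-driven agent.

    def grounded_extension(arguments, attacks):
        """Compute the grounded extension of (arguments, attacks) by
        iterating the characteristic function from the empty set."""
        extension = set()
        while True:
            # An argument is acceptable w.r.t. `extension` if each of
            # its attackers is attacked by some argument in `extension`.
            acceptable = {
                a for a in arguments
                if all(any((d, b) in attacks for d in extension)
                       for (b, c) in attacks if c == a)
            }
            if acceptable == extension:
                return extension
            extension = acceptable

    # Hypothetical reading: a1 supports an action, a2 objects to a1,
    # a3 defeats the objection (e.g., a higher-ranked value).
    arguments = {"a1", "a2", "a3"}
    attacks = {("a2", "a1"), ("a3", "a2")}

    print(grounded_extension(arguments, attacks))  # {'a1', 'a3'}

In this toy framework, a1 is accepted because its only attacker a2 is itself defeated by a3; on the talk's reading, the corresponding action would be selected, and the chain of attacks serves as its explanation.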


Prof. Beishui Liao is visiting FU Berlin from November 1-11. 

Contact Person: Christoph Benzmüller

Website: Beishui Liao

Time & Location

08.11.2019 | 14:00 c.t.

Takustr. 9, SR 005