
Bachelor Thesis Defense: "Konzept und Implementierung einer visuellen Methode zur Verbesserung der Interpretierbarkeit der automatisierten Qualitätsbewertung mit ORES in Wikidata" by Sajeera Gnanasegaram (Oct 08, 2020)

News from Oct 01, 2020

The thesis defense "Konzept und Implementierung einer visuellen Methode zur Verbesserung der Interpretierbarkeit der automatisierten Qualitätsbewertung mit ORES in Wikidata" by Sajeera Gnanasegaram will take place on October 8th at 2 pm at the HCC Lab.

Abstract

The context of this work is Wikimedia's ORES, an ML-based service that provides rating information, such as a quality assessment, for contributions to Wikimedia projects. The service is needed because in the constantly growing community anyone can create and edit an item, and it is not possible for editors to manually check and correct the quality of all items in a short time. A gadget, a visual interface, is used to communicate the quality of the selected item.

The problem is that the gadget is not meaningful enough: so far it is very difficult to understand what its output means and why a particular decision was made. One reason this interpretability problem remains unsolved is that interpretability is a very subjective concept and therefore difficult to formalize. Depending on the context, different types of explanations can be useful.

An explanation is not only a product but also a process that includes a cognitive and a social dimension. The improved version of the gadget with post-hoc explanations should help every user (experts and non-experts) to understand the output and the reasons behind the quality assessment decision, as well as which factors influenced that decision.

With the help of user-centered explanations, a design for the gadget is developed. The system queries the ORES API once at startup, saves the results, and presents them in the form of an explanation interface (the gadget). As a design approach, explanation interfaces from the area of recommender systems are combined with human-friendly explanations for better interpretability. It is important not only to understand what information we get from a system, but also why a particular decision was made.
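To illustrate this startup query, the following is a minimal Python sketch of a single call to the public ORES scoring API for Wikidata's item-quality model. The endpoint and the "itemquality" model name follow ORES's v3 interface; the revision ID, the function name, and the exact response navigation are assumptions for illustration, not the thesis's actual gadget code (which runs client-side in the wiki).

```python
import requests

# Public ORES v3 scoring endpoint for the Wikidata context.
ORES_URL = "https://ores.wikimedia.org/v3/scores/wikidatawiki/"


def fetch_item_quality(rev_id: int) -> dict:
    """Query ORES once for the 'itemquality' model and return the raw score."""
    params = {"models": "itemquality", "revids": rev_id}
    response = requests.get(ORES_URL, params=params, timeout=10)
    response.raise_for_status()
    data = response.json()
    # The score is nested under wiki -> scores -> revision -> model (assumed layout).
    return data["wikidatawiki"]["scores"][str(rev_id)]["itemquality"]["score"]


if __name__ == "__main__":
    score = fetch_item_quality(1234567890)  # placeholder revision ID
    print(score["prediction"])   # predicted quality class (Wikidata items are graded A-E)
    print(score["probability"])  # per-class probabilities, usable as explanation input
```

The per-class probabilities returned alongside the prediction are what an explanation interface can surface to communicate how confident the model is and why one quality class was preferred over another.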

Based on the concept I developed, I first created several low-fidelity prototypes and then built a high-fidelity prototype. With three members of the HCC research group at Freie Universität Berlin, I conducted a usability test via Cisco WebEx to evaluate the high-fidelity prototype.

I came to the conclusion that the test participants understood the explanation interface I developed: they were able to interpret ORES's output retrospectively and could also tell me why ORES had chosen the respective quality class.

The gadget still has minor usability weaknesses, but these can easily be corrected.

All in all, the design approach from the area of recommender systems combined with human-friendly explanations proved a good solution for better interpretability, because the usability test showed that it communicates the "why" behind the system's output. Thus ORES, the ML-based service, appears transparent and fair, and users develop trust in the system.

The defense will be held in German.

First assessor: Prof. Dr. Claudia Müller-Birn
Second assessor: Prof. Dr. Marian Margraf

Location: WebEx
