Tanita Daniel:

Design and implementation of an explanation interface for the ORES articlequality model to make ORES more interpretable for editors

Requirements

  • Full-stack web-development experience
  • Python
  • Basic understanding of the human-centered design process
Discipline
Software Engineering, Human-Centered Design Process, Web Development, Data Visualization, Explainable AI
Degree
Bachelor of Science (B.Sc.)

Context

Many Wikipedia projects use ORES, a web service maintained by the Wikimedia Machine Learning team that provides machine learning as a service. ORES is generally used to help people who work on or with the Wikipedia sites maintain articles and develop tools. ORES provides different models, such as the articlequality model, which assesses existing articles and predicts the quality of an article on a scale defined by Wikipedia (see [2] for content assessment on the English Wikipedia). [1]
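
For concreteness, a single articlequality prediction can be fetched from the public ORES v3 scoring API [1]. The following Python sketch assumes the documented v3 response layout; the revision ID is a placeholder.

    import requests

    # Minimal sketch: fetch an articlequality score from the public
    # ORES v3 scoring API [1].
    ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki/{rev_id}/articlequality"

    def fetch_articlequality(rev_id):
        """Return the articlequality score object for one revision."""
        response = requests.get(ORES_URL.format(rev_id=rev_id), timeout=10)
        response.raise_for_status()
        data = response.json()
        # Scores are nested by wiki, revision ID, and model name.
        return data["enwiki"]["scores"][str(rev_id)]["articlequality"]["score"]

    score = fetch_articlequality(1234567)  # placeholder revision ID
    print(score["prediction"])   # one of the quality classes, e.g. "Start"
    print(score["probability"])  # probability assigned to each class

The response assigns a probability to each quality class (Stub, Start, C, B, GA, FA on the English Wikipedia, see [2]); these scores are the raw material that the explanation interface has to make interpretable.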

Problem

Decisions made by ORES, and by machine learning models in general, are often hard for humans to comprehend, especially for non-technical users.

When humans work with machine learning models, mere scores or suggestions are often not enough; explanations are needed to make an AI trustworthy and understandable. [3]

Explainable AI methods are used to make such models more explainable; which method is appropriate depends on factors such as the data type and the purpose of the explanation. Methods that explain pre-trained black-box models such as ORES are known as post-hoc interpretability methods. [4]

Objectives

The goal is to design and implement a visualization tool targeted at editors that explains predictions made by the articlequality model of ORES.

LIME will be used to explain and visualize individual predictions made by ORES (see [5]).
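
As a rough sketch of how LIME could be wired up for this, consider the following. The predict_proba wrapper is hypothetical: the public ORES API scores revisions rather than arbitrary text, so scoring the text perturbations LIME generates would require, for example, a locally loaded articlequality model.

    from lime.lime_text import LimeTextExplainer

    # Quality classes of the English Wikipedia articlequality model [2].
    CLASS_NAMES = ["Stub", "Start", "C", "B", "GA", "FA"]

    def predict_proba(texts):
        """Hypothetical wrapper: return a 2D array with one row of class
        probabilities per input text. In practice this would score the
        texts with a locally loaded articlequality model, since LIME
        needs predictions for perturbed versions of the article text."""
        raise NotImplementedError

    article_wikitext = "'''Example''' article wikitext to be explained."

    explainer = LimeTextExplainer(class_names=CLASS_NAMES)

    # LIME perturbs the text, re-scores the perturbations through
    # predict_proba, and fits a local linear surrogate whose weights
    # indicate which words pushed the prediction toward a class.
    explanation = explainer.explain_instance(
        article_wikitext,
        predict_proba,
        num_features=10,                    # top contributing words
        labels=(CLASS_NAMES.index("GA"),),  # explain the "GA" class
    )
    print(explanation.as_list(label=CLASS_NAMES.index("GA")))

The resulting explanation object also offers as_html(), which is a natural fit for embedding LIME output in the planned web application.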

Procedure

  • Get familiar with the articlequality model of ORES
  • Get familiar with the values and needs of Wikipedia Editors
  • Design and create prototypes for the web application that will be evaluated by editors
  • Implement the web application, using the feedback from the prototype evaluation
  • Evaluate the web application

References

[1] https://www.mediawiki.org/wiki/ORES, last accessed: 2021-06-22, 13:43

[2] https://en.wikipedia.org/wiki/Wikipedia:Content_assessment, last accessed: 2021-06-22, 13:47

[3] Q. Vera Liao, Daniel Gruen, and Sarah Miller. 2020. Questioning the AI: Informing Design Practices for Explainable AI User Experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing Machinery, New York, NY, USA, 1–15.

[4] P. Linardatos, V. Papastefanopoulos, and S. Kotsiantis. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy, 23:18, December 2020.

[5] https://github.com/marcotcr/lime, last accessed: 2021-06-22, 14:10