
Gobie Nanthakumar:

A tool-driven approach to Mechanical Turk user experiments

Requirements

  • Process Modeling
  • Web Technologies
  • Proficiency in German & English
Discipline
Crowd-Sourcing, Collaborative Ideation, Process Modeling
Degree
Bachelor of Science (B.Sc.)


Context

At the HCC research group, we conduct user studies on the crowd-sourcing platform Amazon Mechanical Turk (MTurk): workers use an interface we designed and then answer survey questions about it. In addition, we collect tracking data from the conducted tasks. This is currently done ad hoc, without an a priori identification of metrics, methodology, technologies, and evaluation frameworks. To speed up the process of conducting user studies and to obtain comparable results across studies, we need a tool-driven methodology for conducting MTurk studies.
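To make the current ad-hoc workflow concrete, the following is a minimal sketch of how such a study is typically published: a HIT on MTurk points to an externally hosted study interface, and completed assignments are collected afterwards. It uses the boto3 MTurk client; the sandbox endpoint, study URL, reward, and durations are illustrative placeholders, not part of our actual setup.

```python
import boto3

# Sandbox endpoint for testing; the production endpoint differs.
# All concrete values below (URL, reward, durations) are illustrative placeholders.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion embeds our own study interface (hypothetical URL) in the HIT.
external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/study-interface</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Evaluate a web interface and answer a short survey",
    Description="Use the interface, then answer survey questions about it.",
    Keywords="usability, survey, user study",
    Reward="1.00",
    MaxAssignments=30,
    LifetimeInSeconds=3 * 24 * 3600,
    AssignmentDurationInSeconds=45 * 60,
    Question=external_question,
)
print("HIT created:", hit["HIT"]["HITId"])
```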

Problem

Currently, there is no well-defined approach to conducting and evaluating Mechanical Turk studies in the research group. This covers both the steps needed to obtain a publishable artifact (for MTurk) for the study and the collection of results after the study has been conducted. Many steps on the way from the initial idea to the study and its evaluation are carried out decentrally and manually.

Objectives

Based on the literature (1, 2) and interviews with members of the research group about previous work (5, 6), define a model for Mechanical Turk usability studies that, starting from the traditional approach to study design, specifies a methodology for crowd-sourced user experiments. This includes the human tasks that need to be performed, the software artifacts that need to be prepared and integrated, and the steps for handling the data generated during the study. Use this methodology to inform a software tool that guides users through this process.
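As a rough illustration only, the sketch below encodes what such a process model might capture: study phases, each with human tasks, software artifacts, and data-handling steps, which a supporting tool could flatten into a checklist. All class and field names are assumptions made for this example; the actual model is the outcome of the thesis.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical building blocks of the process model; names are illustrative only.

@dataclass
class HumanTask:
    description: str            # e.g. "pilot the survey with two colleagues"
    responsible: str            # e.g. "researcher", "worker"

@dataclass
class SoftwareArtifact:
    name: str                   # e.g. "study interface", "tracking backend"
    integration_note: str       # how it plugs into the MTurk HIT

@dataclass
class DataHandlingStep:
    description: str            # e.g. "export assignments and join with tracking logs"
    output: str                 # e.g. "anonymized CSV for analysis"

@dataclass
class StudyPhase:
    name: str                   # e.g. "design", "publication", "evaluation"
    human_tasks: List[HumanTask] = field(default_factory=list)
    artifacts: List[SoftwareArtifact] = field(default_factory=list)
    data_steps: List[DataHandlingStep] = field(default_factory=list)

@dataclass
class StudyProcessModel:
    phases: List[StudyPhase]

    def checklist(self) -> List[str]:
        """Flatten the model into an ordered to-do list a tool could present."""
        items = []
        for phase in self.phases:
            items += [f"[{phase.name}] {t.description}" for t in phase.human_tasks]
            items += [f"[{phase.name}] prepare {a.name}" for a in phase.artifacts]
            items += [f"[{phase.name}] {d.description}" for d in phase.data_steps]
        return items
```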

Procedure

  1. Integrate approaches from the literature (i.e., requirements analysis) and build a process model in cooperation with researchers from the group
  2. Identify software tool support for different steps in the model
  3. Evaluate the process model by using it as a framework for one of the studies in the research group
  4. Recommend refinements of the process model based on the findings of the case study

References

  1. Olson, Judith S., and Wendy A. Kellogg, eds. Ways of Knowing in HCI. Vol. 2. New York, NY, USA: Springer, 2014. Especially the chapter "Crowdsourcing in HCI Research" by Serge Egelman et al.
  2. Rubin, Jeffrey, and Dana Chisnell. Handbook of usability testing: how to plan, design, and conduct effective tests. John Wiley & Sons, 2008.
  3. Shneiderman, Ben. Designing the user interface: strategies for effective human-computer interaction. Pearson Education India, 2010.
  4. Komarov, Steven, Katharina Reinecke, and Krzysztof Z. Gajos. "Crowdsourcing performance evaluations of user interfaces." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2013.
  5. Balaraman, V., Razniewski, S., & Nutt, W. 2018. Recoin: Relative Completeness in Wikidata. http://wikiworkshop.org/2018/papers/wikiworkshop2018_paper_2.pdf
  6. Maximilian Mackeprang, Abderrahmane Khiat, and Claudia Müller-Birn. 2018. Concept Validation During Collaborative Ideation and Its Effect on Ideation Outcome. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA ’18), LBW033:1–LBW033:6. https://doi.org/10.1145/3170427.3188485