
A Human-Centered Dashboard for Course Evaluations to Enable Sensemaking


  • Prerequisites: successful participation in the course "Data Visualization" or "Human-Computer Interaction"
  • Topics: data analysis, data visualization, web technologies
  • Degree: B.Sc. / M.Sc.


Ensuring a high quality of teaching is a central concern of the Department of Mathematics and Computer Science. Course evaluations are therefore conducted every semester to assess how students perceive the quality of each course. During these evaluations, questionnaires are distributed and later analyzed to provide an overview of the results. However, the analysis of the exported data (CSV format) is currently done manually in Microsoft Excel. This process neither ensures the reproducibility of the results nor includes any plausibility checks, and the results are not presented in a clear and appealing way. The process therefore needs to be revised, applying methods from learning analytics and educational data mining.

To this end, the data must first be stored in a database (e.g., SQLite). On top of this, an analysis pipeline must be built (in a Jupyter notebook or GNU R) that cleans, prepares, and analyzes the data. (Initially, the existing evaluation process should be reproduced, and then extended during the dashboard evaluation.) The results of the pipeline should be presented in a dashboard: a Jupyter-based dashboard can be realized with the Jupyter Dashboards Layout Extension, while the shiny package is a valuable option when using GNU R.
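The first pipeline steps (loading the CSV exports into SQLite and running a plausibility check) could look roughly like the following sketch. The table schema, column names, and the 1–5 answer scale are assumptions for illustration; the actual questionnaire structure would have to be taken from the real evaluation data.

```python
import sqlite3

# Toy responses standing in for rows parsed from the CSV exports.
# Schema (course, discipline, year, item, value) is a hypothetical example.
rows = [
    ("CS101", "computer science", 2024, "overall_rating", 4),
    ("CS101", "computer science", 2024, "overall_rating", 5),
    ("MA201", "mathematics", 2024, "overall_rating", 6),  # out of range on an assumed 1-5 scale
]

con = sqlite3.connect(":memory:")  # in practice, a file such as "evaluations.db"
con.execute("""
    CREATE TABLE responses (
        course     TEXT,
        discipline TEXT,
        year       INTEGER,
        item       TEXT,
        value      INTEGER
    )
""")
con.executemany("INSERT INTO responses VALUES (?, ?, ?, ?, ?)", rows)

# Plausibility check: flag answers outside the questionnaire's scale.
invalid = con.execute(
    "SELECT course, item, value FROM responses WHERE value NOT BETWEEN 1 AND 5"
).fetchall()
print(invalid)  # -> [('MA201', 'overall_rating', 6)]
```

Because the check is an SQL query over a database rather than a manual Excel step, it can be re-run on every new export, which addresses the reproducibility concern raised above.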

This thesis thus aims to create a dashboard that presents the evaluation results of different subjects and disciplines (computer science, mathematics, bioinformatics) in a clearly structured way. Dashboards enable people to make sense of data, provided the data is presented in a meaningful way.
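A per-discipline summary of the kind such a dashboard would display can be sketched with plain Python; the toy values and the use of a simple mean are illustrative assumptions, and the real pipeline would read from the database described above.

```python
from collections import defaultdict
from statistics import mean

# Toy (discipline, rating) pairs; real data would come from the SQLite database.
responses = [
    ("computer science", 4), ("computer science", 5),
    ("mathematics", 3),
    ("bioinformatics", 5), ("bioinformatics", 4),
]

by_discipline = defaultdict(list)
for discipline, value in responses:
    by_discipline[discipline].append(value)

# Aggregate each discipline to a mean rating, the kind of figure a
# dashboard tile or bar chart would show.
summary = {d: round(mean(vs), 2) for d, vs in by_discipline.items()}
print(summary)  # -> {'computer science': 4.5, 'mathematics': 3, 'bioinformatics': 4.5}
```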

The requirements for the dashboard have to be defined by reviewing course evaluations, interviewing different stakeholders (teachers, students, program coordinators, evaluation teams) about their expectations for such a dashboard, and studying existing dashboard solutions in this field (see Schwendimann et al. 2017). The dashboard design must then be evaluated through usability testing, with particular attention to sensemaking. Based on rigorous usability testing, the analysis pipeline and the dashboard design should be improved in at least two iterations.
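The announcement does not prescribe a usability instrument; one widely used option for comparing design iterations quantitatively is the System Usability Scale (SUS). Assuming SUS were chosen, scoring a participant's ten 1–5 answers works like this:

```python
def sus_score(answers):
    """Compute a System Usability Scale score from ten 1-5 answers.

    Odd-numbered items are positively worded and contribute (answer - 1);
    even-numbered items are negatively worded and contribute (5 - answer).
    The sum is scaled by 2.5 to a 0-100 range.
    """
    assert len(answers) == 10, "SUS has exactly ten items"
    total = sum(
        (a - 1) if i % 2 == 0 else (5 - a)  # i is 0-based, so even i = odd-numbered item
        for i, a in enumerate(answers)
    )
    return total * 2.5

# One participant's (made-up) answers after testing a dashboard iteration.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```

Comparing mean SUS scores across the two (or more) design iterations would give a simple quantitative complement to the qualitative sensemaking observations.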

Please note: This thesis can be extended to a Master's thesis by focusing on sensemaking. In that case, an appropriate research question needs to be developed from the following literature:
  • S. Lee, S.-H. Kim, Y.-H. Hung, H. Lam, Y.-A. Kang and J. S. Yi, "How do People Make Sense of Unfamiliar Visualizations?: A Grounded Model of Novice's Information Visualization Sensemaking," in IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, pp. 499-508, Jan. 2016, doi: 10.1109/TVCG.2015.2467195.
  • Dervin, B. (1998), "Sense‐making theory and practice: an overview of user interests in knowledge seeking and use", Journal of Knowledge Management, Vol. 2 No. 2, pp. 36-46. https://doi.org/10.1108/13673279810249369
  • Cohen, M. S., Freeman, J. T., & Wolf, S. (1996). Metarecognition in Time-Stressed Decision Making: Recognizing, Critiquing, and Correcting. Human Factors, 38(2), 206–219. https://doi.org/10.1177/001872089606380203
  • Klein, G., Moon, B.M., & Hoffman, R.R. (2006). Making Sense of Sensemaking 1: Alternative Perspectives. IEEE Intelligent Systems, 21, 70-73.
  • Klein, G., Moon, B.M., & Hoffman, R.R. (2006). Making Sense of Sensemaking 2: A Macrocognitive Model. IEEE Intelligent Systems, 21, 88-92.


B. A. Schwendimann et al., "Perceiving Learning at a Glance: A Systematic Literature Review of Learning Dashboard Research," in IEEE Transactions on Learning Technologies, vol. 10, no. 1, pp. 30-41, 1 Jan.-March 2017, doi: 10.1109/TLT.2016.2599522.

Katrien Verbert, Xavier Ochoa, Robin De Croon, Raphael A. Dourado, and Tinne De Laet. 2020. Learning analytics dashboards: the past, the present and the future. In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (LAK '20). Association for Computing Machinery, New York, NY, USA, 35–40. https://doi.org/10.1145/3375462.3375504

Abudalfa, S., & Salem, M. (2022). An Analysis of Course Evaluation Questionnaire on UCAS Students’ Academic Performance by Using Data Clustering. In Explore Business, Technology Opportunities and Challenges‎ After the Covid-19 Pandemic (pp. 231-240). Cham: Springer International Publishing.

Universal Design for Learning https://udlguidelines.cast.org/

Jupyter Dashboards Layout Extension https://jupyter-dashboards-layout.readthedocs.io/en/latest/

Shiny https://shiny.rstudio.com/