Survey of Ontology Evaluation Methods

Academic Advisor: Jonas Oppenländer
Discipline: Semantic Web, Ontology Engineering, Data Modelling
Degree: Master of Science (M.Sc.)

Requirements:

  • Very good knowledge of English

  • Interest in writing and publishing scientific papers

  • Familiarity with conducting systematic literature reviews would be beneficial

Context

Ontologies are the basis for realizing the vision of the Semantic Web [1]. An ontology describes a domain (or the world in general) with a set of concepts and their relations [2]. “Ontology engineering” denotes the practice of developing an ontology [3, 4, 17].

The Problem

The ontology, as the result of a systematic ontology engineering effort, must be summatively evaluated. Most ontologies are complex, which makes evaluating their quality [9] a challenge. The summative evaluation is driven by questions such as:

  • How well does the ontology fit the users’ needs?

  • What is the quality of the ontology?

One pragmatic approach is to check the conformance of the ontology with competency questions [5]. This approach requires constructing a set of questions and translating them into formal, executable queries (in SPARQL or a logic dialect). It also requires sample data against which the queries can be executed. But what if no (or not enough) sample data is available?
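
To illustrate, the following minimal sketch (Python with the rdflib library) shows how a competency question could be operationalized as a SPARQL query over sample data. The Turtle snippet, the question, and all names such as ex:worksOn are hypothetical stand-ins:

  from rdflib import Graph

  # Hypothetical sample data for the competency question
  # "Which projects does a given person work on?"
  SAMPLE_DATA = """
  @prefix ex: <http://example.org/> .
  ex:alice    a ex:Person ;
              ex:worksOn ex:projectX .
  ex:projectX a ex:Project .
  """

  # The competency question, translated into an executable SPARQL query
  QUERY = """
  PREFIX ex: <http://example.org/>
  SELECT ?project WHERE {
      ex:alice ex:worksOn ?project .
      ?project a ex:Project .
  }
  """

  g = Graph()
  g.parse(data=SAMPLE_DATA, format="turtle")
  for row in g.query(QUERY):
      print(row.project)  # -> http://example.org/projectX

Without sample data such as the Turtle snippet above, the query has nothing to execute against, which is exactly the limitation raised here.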

Objectives of the Thesis

A number of ontology evaluation methods tackle the problem of summatively evaluating the quality of ontologies, such as AEON [10], OntoClean [12], OntoQA [14], and ODEval [15], as well as evaluation against a gold standard [11] and via crowdsourcing [13], along a number of different metrics (e.g. consistency, completeness, conciseness, expandability, and sensitiveness) [6].
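
As a concrete illustration of the metric-based family, the following sketch (again Python with rdflib; not the official OntoQA implementation, and the file name is a placeholder) computes a relationship-richness score in the spirit of OntoQA [14], i.e. the share of non-inheritance relations among all relations defined in the schema:

  from rdflib import Graph, RDF, RDFS, OWL

  def relationship_richness(g: Graph) -> float:
      """Share of object properties among all schema relations,
      i.e. |P| / (|P| + |SC|); returns 0.0 for an empty schema."""
      properties = set(g.subjects(RDF.type, OWL.ObjectProperty))
      subclass_axioms = list(g.triples((None, RDFS.subClassOf, None)))
      total = len(properties) + len(subclass_axioms)
      return len(properties) / total if total else 0.0

  g = Graph()
  g.parse("ontology.ttl", format="turtle")  # placeholder file name
  print(f"Relationship richness: {relationship_richness(g):.2f}")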

This thesis should start with a systematic literature review with the aim of creating a comprehensive survey of ontology evaluation methodologies. Past surveys on ontology evaluation techniques [7–9] should be the starting point for the literature review.

A further step should identify commonalities among the approaches, following a grounded theory approach [16]. What are the features and underlying theories of the methodologies? Who is involved in the evaluation (domain experts, knowledge engineers, end users, etc.)? Which tools and resources do the methodologies require? In what form do they deliver their outcomes?

From this information, a conceptual framework should be derived that classifies the existing ontology evaluation approaches. Such a framework would be a useful decision support tool for ontology engineers, and it could also help identify gaps in the methodologies to guide future research.

The outcome of the thesis should ideally be a scientific paper published at an international conference or in a scientific journal.

Further Information

We value close supervision of the thesis and will support you along the way.

Please contact Jonas Oppenländer {firstname.lastname@fu-berlin.de}, Königin-Luise-Str. 24-26, Room 115, for further information.


References

[1] Berners-Lee, T., Hendler, J., Lassila, O. (2001): The Semantic Web. Scientific American, May 2001

[2] Gruber, T.R. (1993): A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition, 5(2):199–220

[3] Gruber, T.R. (1995): Toward Principles for the Design of Ontologies Used for Knowledge Sharing. Int. Journal of Human-Computer Studies, 43(5–6):907–928

[4] Allemang, D., Hendler, J. (2008): Semantic Web for the Working Ontologist. Effective Modeling in RDFS and OWL. Elsevier, Amsterdam and Boston

[5] Gruninger, M., Fox, M.S. (1994): The Role of Competency Questions in Enterprise Engineering. IFIP WG5.7 Workshop on Benchmarking - Theory and Practice, Trondheim, Norway

[6] Gómez-Pérez, A. (2004): Ontology Evaluation. In: Staab, S., Studer, R. (eds.): Handbook on Ontologies. Springer, pp. 251–274

[7] Brank, J., Grobelnik, M., Mladenić, D. (2005): A Survey of Ontology Evaluation Techniques. Proc. 8th Int. Multi-Conf. Information Society, 166–169

[8] Vrandečić, D. (2010): Ontology Evaluation. Dissertation, Karlsruhe Institute of Technology

[9] Zaveri, A., Rula, A., Maurino, A., Pietrobon, R., Lehmann, J., Auer, S. (2015): Quality Assessment for Linked Data: A Survey. A Systematic Literature Review and Conceptual Framework. Semantic Web Journal

[10] Völker, J., Vrandečić, D., Sure, Y., Hotho, A. (2008): AEON - An Approach to the Automatic Evaluation of Ontologies. Applied Ontology, 3(1–2), 41–62

[11] Dellschaft, K., Staab, S. (2006): On How to Perform a Gold Standard Based Evaluation of Ontology Learning. Proc. ISWC 2006, 228–241

[12] Guarino, N., Welty, C.A. (2004): An Overview of OntoClean. In: Staab S., Studer R. (eds.) Handbook on Ontologies. International Handbooks on Information Systems. Springer, Berlin, Heidelberg

[13] Mortensen, J.M., Alexander, P.R., Musen, M.A., Noy, N.F. (2013): Crowdsourcing Ontology Verification. Proc. Int. Conf. Biomedical Ontology

[14] Tartir, S., Arpinar, I.B., Moore, M., Aleman-Meza, B. (2005): OntoQA: Metric-Based Ontology Quality Analysis. Proc. IEEE ICDM 2005 Workshop on Knowledge Acquisition from Distributed, Autonomous, Semantically Heterogeneous Data and Knowledge Sources, Houston, Texas

[15] Corcho, O., Gómez-Pérez, A., González-Cabero, R., Suárez-Figueroa, M.C. (2004): ODEval: A Tool for Evaluating RDF(S), DAML+OIL, and OWL Concept Taxonomies. In: Bramer, M., Devedžić, V. (eds.): Artificial Intelligence Applications and Innovations. AIAI 2004. IFIP International Federation for Information Processing, 154, Springer, Boston, MA

[16] Glaser, B.G., Strauss, A.L. (1967): The Discovery of Grounded Theory. Strategies for Qualitative Research. Transaction Publishers, New Brunswick, U.S.A. and London, UK

[17] Corcho, O., Fernández-López, M., Gómez-Pérez, A. (2007): Ontological Engineering: What Are Ontologies and How Can We Build Them? In: Cardoso, J. (ed.): Semantic Web Services: Theory, Tools and Applications. IGI Global, Hershey, PA, U.S.A. and London, UK, 44–70