Bachelor's and Master's Theses

Bachelor's and Master's theses can be written in German or, by arrangement, in English.

In general, we offer topics from the fields of databases, information retrieval and semantic information systems. More precisely, our topics mainly belong to one or more of the following areas: search on semistructured data, integration of heterogeneous information sources, efficiency of large-scale search engines, conversational information retrieval, natural language processing, human-computer interaction, data integration, query processing, the Semantic Web, computational argumentation (ranking, clustering, validating and extracting arguments from natural language texts), scholarly recommendation systems, domain-specific query languages, and scientometrics.

The topic of a thesis determines who supervises it. The thematic focus of each advisor can be found on their personal page under Team.

If you are interested in a topic suggested by the chair, or if you have your own topic suggestion for a Bachelor's or Master's thesis, please contact Prof. Dr. Ralf Schenkel. If you have already spoken with a staff member of the chair about a possible topic, please mention this in your email as well.

Please include a list of your successfully completed modules with your thesis request. This overview helps us assess which topics might fit your skills.

For a Bachelor's thesis, we expect that you have successfully completed the following modules (where they are compulsory in your module plan) before you apply for a topic with us, as their content is very helpful for the successful completion of a Bachelor's thesis in our areas: Database Systems (Datenbanksysteme), Non-Relational Information Systems (Nichtrelationale Informationssysteme), CS Project (Informatik-Projekt or Großes Studienprojekt), and Advanced Programming (Fortgeschrittene Programmierung or Programmierung II).

Completed Bachelor's theses

[BT] Development of an interface for the realisation of explainable alignments for the FiLiPo system

Abstract: Data integration of RDF knowledge bases is an important task that plays an ever greater role. By using many different data sources, it is possible to expand the data stock of a knowledge base or, if necessary, to correct erroneous information in it. For this purpose, alignment systems are increasingly used, which relate the schema of one data source to that of another in such a way that data can then be transferred between them. One such system is FiLiPo (Finding Linkage Points). It automatically finds mappings between the schema of a local RDF knowledge base and the schema of a web API. One of the current challenges with such systems is to integrate users more closely into the process, especially when it comes to explaining how and why the system made certain decisions. This Bachelor's thesis therefore presents a user interface for the FiLiPo alignment system that graphically presents FiLiPo's data to users. The user interface enables users to understand, analyse and, if necessary, change or remove the alignments generated by FiLiPo.

[BT] Interactive exploration of RDF records using FCA

Abstract: Within the framework of the Semantic Web, information (knowledge) can be recorded in so-called knowledge graphs. However, these can quickly grow to an unmanageable size, so that both the content and the structure of the graph are difficult for people to comprehend. Therefore, it is necessary to find ways to create a basic understanding of the properties of knowledge graphs.

The aim of this work is to determine "knowledge about knowledge graphs" automatically by means of the mathematical model of Formal Concept Analysis (FCA) and to present it to the user. To this end, an interactive tool was developed with which a user can perform and control the exploration of knowledge graphs.

To confirm the effectiveness of the tool, it was tested and evaluated by a number of people. The test persons assessed the user experience and usability of the tool as predominantly positive. The aspects rated less positively provide starting points for future improvements and optimisations to make the tool even more attractive to use.
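As a rough illustration of the model underlying this thesis, the following sketch enumerates all formal concepts (closed pairs of objects and their shared attributes) of a tiny object-attribute context. The context and all names in it are invented toy examples, not data from the thesis:

```python
# Toy sketch of Formal Concept Analysis (FCA): enumerate all formal
# concepts (extent, intent) of a small, hypothetical object-attribute context.
from itertools import chain, combinations

context = {
    "TimBL":    {"Person"},
    "dblp":     {"Database", "OpenData"},
    "Wikidata": {"Database", "OpenData", "Graph"},
}

objects = set(context)
attributes = set().union(*context.values())

def intent(objs):
    """Attributes shared by all given objects."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def extent(attrs):
    """Objects that have all given attributes."""
    return {o for o in objects if attrs <= context[o]}

def concepts():
    """All formal concepts: pairs (extent, intent) closed under derivation."""
    found = set()
    # Close every attribute subset; frozensets make the pairs hashable.
    subsets = chain.from_iterable(
        combinations(attributes, r) for r in range(len(attributes) + 1)
    )
    for attrs in subsets:
        e = extent(set(attrs))
        found.add((frozenset(e), frozenset(intent(e))))
    return found

for e, i in sorted(concepts(), key=lambda c: len(c[0])):
    print(sorted(e), sorted(i))
```

The concept lattice built from such pairs is what an interactive exploration tool can let the user navigate.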

[BT] Development of a User Interface for Relation Alignment of RDF Knowledge Bases and Web APIs using the FiLiPo Framework

Abstract: In this thesis, a user interface for the FiLiPo system is presented. The development of such a user interface requires a study of the problems and risks involved, the drafting of a concept, and its implementation. One of the main goals was to develop an intuitive user interface that makes all the functionalities of the FiLiPo system accessible. The thesis provides a short introduction to schema alignment of RDF-based knowledge bases and Web APIs, and briefly describes the Angular framework that was used for the implementation. After describing the main requirements to be taken into consideration and discussing how to implement an intuitive user interface, the main concept is presented. It is based on known solutions and examples, but still requires some creativity for the visualization of the alignment results. The implementation is then documented. Using Angular allows a quick integration of different components and their easy manipulation. Finally, the results of the user evaluation are presented, which show whether the concept and implementation were successful, and possible further improvements are discussed.

[BT] Predicting Paper Impact based on Citation Networks

 - no abstract available -

[BT] Connecting Linked Data and Web APIs using SPARQL

Abstract: Databases are used to store information, and ideally they are complete. In reality, however, databases have gaps, and methods must therefore be used to supplement the missing information. Existing Linked Data systems use dedicated interfaces (SPARQL endpoints) for this purpose, but not all data providers offer them. A common solution in practice is to provide a web API instead. To supplement missing information via web APIs, this thesis implements a programme that connects Linked Data systems and web APIs. The programme, ExtendedSPARQL, can completely answer a query to the local knowledge base by filling in missing information on-the-fly with the help of external web APIs. In doing so, it decides which external web APIs are relevant for the missing information, how to request them, how to extract the sought information from their responses, and how to add it to the results of the query. Furthermore, ExtendedSPARQL executes as few web API requests as possible, so that missing information is added with the least effort and redundant information is avoided. It is also easy to use, so that even users with only basic SPARQL knowledge can successfully run ExtendedSPARQL queries, and it provides a graphical user interface that makes it even easier to use. In a subsequent evaluation, the programme showed that missing information can be successfully added using external web APIs and that redundant results rarely occur.
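The on-the-fly completion idea can be illustrated with a minimal sketch: answer a query from a local knowledge base and fall back to an external source for values that are missing locally. The "API" below is a hypothetical stub; ExtendedSPARQL itself operates on real SPARQL data and HTTP web APIs.

```python
# Conceptual sketch: answer a query from the local knowledge base and
# fill gaps on-the-fly from an external source. All data is invented.

local_kb = {
    ("paper42", "title"): "Linked Data Basics",
    # ("paper42", "year") is missing locally
}

def fake_api_lookup(subject, predicate):
    """Stand-in for an external web-API request."""
    api_data = {("paper42", "year"): "2020"}
    return api_data.get((subject, predicate))

def answer(subject, predicates):
    """Answer a simple query, filling gaps from the API and caching them."""
    result = {}
    for p in predicates:
        value = local_kb.get((subject, p))
        if value is None:                       # gap in the local KB
            value = fake_api_lookup(subject, p)
            if value is not None:
                local_kb[(subject, p)] = value  # added on-the-fly
        result[p] = value
    return result

print(answer("paper42", ["title", "year"]))
```

A real system additionally has to decide which APIs are relevant and how to map their response schema onto the local one, which is the hard part the thesis addresses.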

[BT] Appropriate Journal Search for Publications

Abstract: Researchers are normally not familiar with the thematic orientation of all journals and conferences in their field of research. As soon as researchers want to publish their work, they face the problem of finding a suitable journal or conference to which to submit the paper. The aim of this thesis is the development of a recommender system that can find suitable venues for a given publication title. The system is based on data from dblp and Semantic Scholar, which contain titles of publications as well as their abstracts and keywords. Different methods for determining the similarity and relevance of papers were investigated, including tf-idf, BM25 and cosine similarity in conjunction with Doc2Vec. Various techniques were analysed to find and rank the journals and conferences associated with the corresponding papers. In addition, methods were developed to improve the results of the recommender system, such as taking into account the number of citations of journals and conferences. The methods were evaluated both automatically and manually. It turned out that cosine similarity with Doc2Vec did not achieve good results, in contrast to the other two methods. To improve the usability of the recommender system, a visualisation in the form of a web service was implemented.
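As a rough illustration of the tf-idf and cosine-similarity approach mentioned in the abstract, the following sketch scores candidate venues by the similarity between a query title and the titles of papers published there. The venue names, titles and the best-paper aggregation are invented for the example; the thesis itself works on dblp and Semantic Scholar data.

```python
# Minimal tf-idf + cosine-similarity venue ranking on hypothetical data.
import math
from collections import Counter

papers = [
    ("VLDB",  "efficient query processing in relational databases"),
    ("SIGIR", "neural ranking models for information retrieval"),
    ("SIGIR", "query expansion for web search"),
]

def tfidf_vectors(docs):
    """Turn tokenised documents into tf-idf weighted term vectors."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    return [
        {t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
        for doc in docs
    ]

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(query):
    docs = [title.split() for _, title in papers] + [query.split()]
    *paper_vecs, query_vec = tfidf_vectors(docs)
    # Aggregate per venue by taking the best-matching paper's score.
    scores = {}
    for (venue, _), vec in zip(papers, paper_vecs):
        scores[venue] = max(scores.get(venue, 0.0), cosine(query_vec, vec))
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("learning to rank for information retrieval"))
```

BM25 or Doc2Vec embeddings would replace the tf-idf weighting here, and citation counts could be folded into the final venue score.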

[BT] A visual query language for SPARQL

Since the development of the Semantic Web by Tim Berners-Lee, more and more information is being published on the internet as Linked Open Data. This data is specifically designed to be analysed by machines. All elements are given unique identifiers and can be linked to each other via relations, forming ever larger networks. The result is a "Giant Global Graph" in which all things of interest can be referenced.

But while the amount of data in the Semantic Web is constantly growing, only a few people can use it. Searching for information is difficult because users need some prior knowledge. On the one hand, they need to know how the data in the graph are connected and how they are labelled. On the other hand, they need knowledge of the query language SPARQL, which is used to query data sources in the Semantic Web. The visual query language developed in this work makes it easier for users to get started and thus enables even non-experts to search the Semantic Web for information. Instead of writing a query, the user graphically constructs it from prefabricated elements. For this purpose, the programme Visual Query Builder was developed in this work, which implements such a visual query language. By specifying a schema for the respective data endpoint, users are given the elements they can use; thus, they can see which elements exist at all and which attributes they have. The programme and the underlying visual query language were then evaluated by a group of test persons. The evaluation paid particular attention to the usability of the application and showed that Visual Query Builder enables both beginners and advanced users to successfully search a data source in the Semantic Web for the desired information, achieving good results in both test procedures used.

[BT] Hybrid SPARQL queries via Linked Data and Web APIs

Digital libraries, such as dblp or the German National Library (DNB), aim to bring knowledge together and make it available via the internet. Unfortunately, incomplete data sets are part of the everyday life of a digital library. Missing information, such as titles or author names, could be added using external web APIs. The main problem here is the integration of the external data into the local database, since a common schema, which serves to describe the structure of the data, must first be found. This is the main task of schema integration, which is a subfield of information integration and data migration. The ActiveSPARQL programme designed in this thesis exploits schema integration to use data from web APIs to answer queries on-the-fly. When a user makes a query to the application, both the data from the local database and the external web APIs are used to answer it satisfactorily; using both sources is called a hybrid query. The design is based on the existing framework ANGIE. In contrast to ANGIE, no wrapper is generated to answer the query, but an extended SPARQL query. In addition, ANGIE requires the access methods of the web APIs to be declared manually; this step can be automated by the AID4SPARQL programme, which is able to find linkage points between the local and external data and thus ensure that external information is compatible with the local data. The results from AID4SPARQL are prepared in such a way that they can be used as a configuration for communication with web APIs. In addition to ActiveSPARQL, a web interface was designed to enable non-experts to create and execute hybrid queries without prior knowledge. Finally, a concept for evaluating the framework is presented, which can be used to compare ANGIE and ActiveSPARQL.

[BT] Comparison of contextualised embedding methods for similarity calculation of statements

 - no abstract available -

[BT] Feature Evaluation of Citation Distance Networks: Exploring new ways of measuring Scientific Impact

Abstract: This thesis introduces improvements to current approaches of classifying scientific work by observing the semantic similarity of publications in the same citation neighborhood. Available patterns in the neighborhood structures are used to generate an initial set of features. Different text representations, similarity measures and feature modes are implemented and studied to explore new approaches of generating meaningful features that improve classification procedures. Features are evaluated in terms of their predictive power when learning a model that distinguishes between seminal and survey publications. Learning patterns from features to better distinguish between the publications serves as a proxy for the effectiveness of these features in evaluating research impact. The state-of-the-art research in this area achieved a result of 68.97% prediction accuracy, whereas the approaches presented in this thesis achieved a prediction accuracy of up to 86.98% and therefore beat the previous results by a large margin. Thorough evaluation of the feature sets reveals which relationships in a neighborhood structure provide information that can help improve current research evaluation metrics by identifying high-impact scientific work.

Keywords: Semantometrics - Feature Engineering - Natural Language Processing

[BT] Learning the Interface of Web APIs

All kinds of information can be retrieved from web APIs, for example metadata of publications. However, it is not always obvious what kind of data must be sent to the web API in order to receive a meaningful response. For this problem, a programme was developed that learns the appropriate transfer parameters of web APIs with the help of a source database. For this purpose, each type of data from the source database is sent to the web API and it is checked whether the response of the API is related to the sent data. Various parameters can be used to configure how closely the responses of the web API must match the data of the source database in order to be considered meaningful. For this purpose, several metrics for calculating string similarities were used to find the matches of both data sets. Through a suitable evaluation, it could be shown that with good configuration parameters all matches are found. In the presented system, a user also has the possibility to choose different metrics to compare the similarity of two values. For example, it is possible to specify that there must be an exact match between some data, such as ISBNs or other IDs. With the right configuration parameters, as well as knowing and specifying which metric is best for which type of data, almost any data can be recognised as a match that a human would also consider a match.
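The configurable matching step described above can be sketched as follows: a per-field similarity metric and threshold decide whether an API response is related to the record that was sent. The fields, sample records and configuration are hypothetical stand-ins, and difflib's ratio is used here in place of the string-similarity metrics studied in the thesis.

```python
# Sketch: decide whether a (hypothetical) web-API response matches the
# source-database record it was queried with, using per-field metrics.
import difflib

def exact(a, b):
    return 1.0 if a == b else 0.0

def fuzzy(a, b):
    # Ratio in [0, 1]; a stand-in for Levenshtein-style string metrics.
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Per-field metric and acceptance threshold, as chosen by the user.
CONFIG = {
    "isbn":  (exact, 1.0),   # IDs must match exactly
    "title": (fuzzy, 0.8),   # titles may differ slightly
}

def is_match(record, response):
    """True if every configured field of the response is similar enough."""
    for field, (metric, threshold) in CONFIG.items():
        if metric(record[field], response.get(field, "")) < threshold:
            return False
    return True

record = {"isbn": "978-3-16-148410-0", "title": "Semantic Web Primer"}
good   = {"isbn": "978-3-16-148410-0", "title": "A Semantic Web Primer"}
bad    = {"isbn": "978-0-00-000000-0", "title": "Semantic Web Primer"}

print(is_match(record, good), is_match(record, bad))
```

In the full system, this check runs over many probe requests per API parameter, and a parameter is accepted when enough responses match.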

[BT] Resolver for conference venues

 - no abstract available -

[BT] Development of a Web-Based Time Clock System with Additional Project Tracking Functionality

 - no abstract available -

[BT] Meter reading app

 - no abstract available -

[BT] Development of a procedure for the automatic matching of bibliographic databases

 - no abstract available -

[BT] Development of a crawler for comments in online media

 - no abstract available -

[BT] Similarity measures for the literature database DBLP

 - no abstract available -

[BT] Development of a procedure for the automatic matching and merging of institute records

 - no abstract available -

[BT] Design and implementation of a search engine for a web archive

 - no abstract available -

[BT] Development of a browser extension to build a personal web archive

 - no abstract available -

Completed Master's theses

[MT] Natural Language Processing in Accounting

Abstract: This thesis offers an approach to detect booking duplicates by calculating sentence similarity as an application of Natural Language Processing. These bookings are exports from an accounting software. Among lots of other information, each booking has a booking note, a short text written by the person who created the booking in the accounting software. The presented approach is part of a larger project in which all booking information is analyzed, but in this thesis solely the textual information of the notes is used for determining the similarity of two bookings. Several models are used for calculating the similarity of booking pairs and their results are compared. One important research objective is the comparison of tf-idf, as an application of the vector space model, with language models such as BERT and SentenceBERT, which use word and sentence embedding vectors. The best models achieve an F1-score of 0.6004 and an AUC-score of 0.555. Thorough analysis of True Positives, False Positives and False Negatives shows that embedding vectors not only offer advantages but also introduce new challenges when short texts are analyzed.

Keywords: Natural Language Processing - Duplicate Detection - Accounting - Short Texts
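The evaluation setting described in the abstract can be sketched in a few lines: a similarity model scores booking-note pairs, a threshold turns the scores into duplicate predictions, and the F1-score summarises the outcome. The scores and labels below are invented toy values, not data from the thesis.

```python
# Toy sketch of thresholded duplicate detection plus F1 evaluation.

def f1_score(labels, preds):
    """F1 from gold labels and boolean predictions."""
    tp = sum(l and p for l, p in zip(labels, preds))
    fp = sum((not l) and p for l, p in zip(labels, preds))
    fn = sum(l and (not p) for l, p in zip(labels, preds))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Similarity scores for note pairs (e.g. from tf-idf or sentence embeddings)
scores = [0.92, 0.63, 0.78, 0.40]
labels = [True, False, True, False]   # gold: is the pair a duplicate?

preds = [s >= 0.5 for s in scores]    # threshold turns scores into predictions
print(f1_score(labels, preds))
```

Sweeping the threshold and measuring the area under the resulting curve is what the reported AUC-score summarises.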

[MT] Automatic Fake News Detection on Tweets

 - no abstract available -

[MT] Validation of expert testimony and quantitative arguments

 - no abstract available -

[MT] Emotion Analysis of COVID-19 related Tweets

 - no abstract available -

[MT] Leyk: A Paper Recommendation System on bibliographic meta data

 - no abstract available -

[MT] Bilingual Argumentative Discourse Unit Detection for Argument Mining on French and German Proceedings of the European Parliament

Abstract: Argumentation mining aims at automatically extracting structured arguments from unstructured textual documents. This work addresses a cross-lingual argumentation mining task: the detection of argumentative discourse units (ADUs). Our contribution is two-fold: firstly, we extract a German and French ADU-annotated parallel corpus for further research; secondly, we compare five state-of-the-art language models (LMs) on it. Following the CRISP-DM framework for data mining, we prepare the data from the popular Europarl corpus by conducting topic modeling to semantically trim the corpus size. The French and German subcorpora are annotated, distinguishing between the labels “non-argumentative”, “claim” and “premise”. Given the human baseline, in the modeling phase the five LMs German BERT, German DistilBERT, CamemBERT, mBERT and mDistilBERT are compared on the sentence classification task. The LMs perform the task with moderate success. There is a performance difference between the German and French models, leading to the insight that considering the input language as a feature, and not only a parameter, is crucial. Beyond that, the beneficial influence of multilingual pretraining is discussed, motivating further research.

[MT] Generation of recommendations for reviewers of scientific publications

Abstract: Due to the increasing flood of publications, quality assurance of scientific work is playing an increasingly important role. One of the most important methods for quality assurance of scientific work is the peer review process. In this context, the process of selecting a suitable reviewer for a submitted manuscript is of great importance. However, this process is time-consuming and, if implemented incorrectly, leads to poor reviews. The aim of this work is therefore to make the assignment process more efficient and at the same time more objective by automating it. For this purpose, a reviewer recommendation system was developed on the one hand, and a classification system was provided on the other. The reviewer recommendation system receives a request in the form of a publication as input and suggests a certain number of suitable reviewers. The classification system, in contrast, receives a reviewer and a manuscript as input and predicts whether the given reviewer is relevant to the manuscript in question. In creating these systems, the effects of different combinations of document representations, similarity measures, levers and voting techniques were also analysed. The results of this work show that both systems can support the assignment process in peer review within their use cases. Furthermore, the evaluation of the reviewer recommendation system shows that the tf-idf method in combination with the cosine measure provides the best results, and CombSUM TOP 5, CombSUM TOP 10 and Reciprocal Rank were identified as the best-performing voting techniques. The evaluation of the classifiers led to the result that the SciBERT classifier achieves a classification accuracy of 80.2 % and thus performs best.
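The CombSUM TOP-k voting mentioned above can be sketched as follows: each reviewer is represented by their publications, and the manuscript's similarity scores to the reviewer's k best-matching papers are summed into a single reviewer score. The similarity values below are hypothetical stand-ins for the tf-idf/cosine scores used in the thesis.

```python
# Sketch of CombSUM TOP-k voting over per-paper similarity scores.

def combsum_topk(scores_per_reviewer, k):
    """Score each reviewer by the sum of their k highest paper similarities."""
    return {
        reviewer: sum(sorted(sims, reverse=True)[:k])
        for reviewer, sims in scores_per_reviewer.items()
    }

# Manuscript-to-paper similarity scores, grouped by the papers' authors.
scores = {
    "reviewer_a": [0.9, 0.1, 0.05],
    "reviewer_b": [0.4, 0.4, 0.4],
}

ranked = combsum_topk(scores, k=2)
print(max(ranked, key=ranked.get))
```

Restricting the sum to the top k papers keeps one highly relevant paper from being drowned out by many unrelated ones, which is the design rationale behind TOP-5 and TOP-10 variants.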

[MT] Methods for resolving references in argument structures in the German language

Abstract: This thesis investigates systems that are intended to recognise Named Entities (NEs) and references in the German language. The identification of NEs is important in several respects. On the one hand, they can be used to embed additional information from an external source into a text, for example the office of a politician. On the other hand, they play a role in recognising references, such as the resolution of personal pronouns. The resolution of references is helpful when a system ultimately has access to only a section of a text: to increase its performance, it is advantageous if all references in this section have been resolved correctly. An example of this is the ReCAP project, which processes queries about an assertion and returns isolated nodes containing theses for or against this assertion.

Therefore, in this thesis, first a corpus of twelve German texts with educational policy content is annotated with regard to the NEs and references they contain. Subsequently, three NE systems as well as two coreference resolution systems are evaluated on these twelve texts. The evaluation of these systems is a time-consuming process that can only be automated to a certain extent. This is mainly because the gold standard has been annotated in such a way that an entity carries the maximum information content; systems, however, often only recognise a partial string, and in such cases manual evaluation is unavoidable.

Accordingly, the final comparison between the systems is also not trivial. In the recognition of NEs, a distinction was made between exact hits and partial hits between a candidate system and the gold standard. For exact hits, the Stanford Named Entity Recognizer (NER) comes out ahead with an F1 score of 57.67 % or 54.44 %, depending on how the results of the different texts are averaged. When partial hits are taken into account, FLAIR comes first with an F1 score of 72.63 % or 67.44 %, respectively. However, it would be too simplistic to limit the results to the F1 score alone; the systems have different strengths and weaknesses, such as the recognition of persons, a category in which the Stanford NER in fact performs worst.

In contrast to Named Entity Recognition, the results of coreference resolution are weak. CorZu achieves a maximum F1 score of 27.4 % and IMS HotCoref DE a value of 29.1 %. The systems produce many coreference links that add no value, for example {the students - the students}. When these are ignored, the precision increases from 22.86 % to 41.86 % in the best case.

A final examination of isolated text passages in the ReCAP project, in which references were resolved manually in the course of the project, shows that these values are insufficient for use in practice.

[MT] Automatic selection of thematically suitable publications for indexing in a subject-specific bibliographic database

 - no abstract available -

[MT] A Web-Interface for Exploration and Visualization of Bibliographic Metadata

Abstract: There are many systems for the exploration of bibliographic metadata. However, retrieving and filtering information that is actually relevant often requires complicated search interfaces and long search paths, especially for complex information needs. In this work, a web interface for the exploration and visualization of bibliographic metadata is proposed. The core idea is based on a Domain-Specific Query Language (DSQL) called SchenQL, which aims to be easy to learn and intuitive for domain experts as well as casual users for efficiently retrieving information on bibliographic metadata. This is achieved by using natural-sounding keywords and functions specially designed for this particular domain. In addition, the web interface implements useful visualizations of citations and references as well as co-author relationships. The interface also offers keyword suggestions and an auto-completion feature that allows for easily creating SchenQL queries without having to learn all the keywords of the language beforehand. A three-part user study with 10 students and employees from the field of computer science was conducted, in which the effectiveness and usability of the SchenQL web interface were evaluated.

[MT] Implementation of an Auto-Test Framework based on Web Technology for Desktop Applications

 - no abstract available -