On March 17, we were invited by the National Library of the Netherlands to present the results of our study on retrievability bias in the Dutch historic newspaper archive.
Summary of the talk:
Search engines are not “objective” pieces of technology, and bias in Delpher’s search engine may harm user access to certain types of documents in the collection. In the worst case, systematic favoritism toward certain document types can render other parts of the collection invisible to users. This potential bias can be evaluated by measuring the “retrievability” of every document in a collection. We explain the ideas underlying the retrievability metric, and how we measured it on the KB Newspaper collection. We describe and quantify the retrievability bias imposed on the newspaper collection by three commonly used Information Retrieval models. For this, we investigated how document features such as length, type, or date of publication influence retrievability.
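The core idea of the retrievability metric can be sketched in a few lines: a document’s retrievability counts how often it surfaces within the top results across a large set of queries. The sketch below is a minimal illustration, not the paper’s actual pipeline; the `rankings` input format, the `cutoff` value, and the toy data are assumptions.

```python
from collections import defaultdict

def retrievability(rankings, cutoff=100):
    """Cumulative retrievability r(d): how often document d appears
    within the top-`cutoff` results across all queries.
    `rankings` maps each query to its ranked list of document ids
    (an assumed input format for this sketch)."""
    r = defaultdict(int)
    for query, ranked_docs in rankings.items():
        for doc_id in ranked_docs[:cutoff]:
            r[doc_id] += 1
    return dict(r)

# Toy example: three queries over a four-document collection.
rankings = {
    "q1": ["d1", "d2", "d3"],
    "q2": ["d1", "d3", "d4"],
    "q3": ["d1", "d2", "d4"],
}
scores = retrievability(rankings, cutoff=2)
# d1 ranks in the top 2 for every query; d4 never does,
# so it gets no retrievability mass at this cutoff.
```

In practice, the bias of a retrieval model is then summarized by how unequally these scores are distributed over the collection (e.g. with a Lorenz curve or Gini coefficient): the more skewed the distribution, the larger the share of the collection that is effectively invisible.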
We also investigate the effectiveness of the retrievability measure, featuring two characteristics that set our experiments apart from previous studies: (1) the newspaper collection contains noise originating from OCR processing, and historical spelling and use of language; and (2) rather than the simulated queries used in other studies, we use real user query logs including click data. We show how simulated queries differ from real user queries regarding term frequency and prevalence of named entities, and how this affects the results of a retrieval task.
We presented our paper “Impact Analysis of OCR Quality on Research Tasks in Digital Archives” at this year’s International Conference on Theory and Practice of Digital Libraries (TPDL 2015).
We describe how humanities scholars currently use digital archives and the challenges they face in adapting their research methods compared to using a physical archive. This shift in research methods comes at the cost of working with digitally processed, and therefore imperfect, representations of historical documents. A major concern for scholars is therefore how much trust they can place in analyses based on noisy representations of source texts.
Based on interviews with humanities scholars and a literature study, we classify scholarly research tasks according to their susceptibility to errors originating from OCR-induced biases. Search results for “Amsterdam”, for example, are likely to be influenced by the confusion of the letters “s” and “f”, especially for material that was created before 1800, when the “long s” was still used.
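To make the “Amsterdam” example concrete: one way to anticipate such OCR confusion at query time is to generate spelling variants in which a non-final “s” is read as “f”. The function below is an illustrative sketch of that idea, not the method used in the paper; the rule that the long s never occurs word-finally is a simplification.

```python
def long_s_variants(term):
    """Generate spelling variants for the OCR confusion of the 'long s'
    (U+017F), which is often misrecognised as 'f'. Illustrative sketch:
    every non-final 's' may additionally appear as 'f'."""
    variants = {term}
    for i, ch in enumerate(term):
        if ch == "s" and i < len(term) - 1:  # long s was not used word-finally
            variants |= {v[:i] + "f" + v[i + 1:] for v in variants}
    return variants

# "amsterdam" in pre-1800 material may be OCR'ed as "amfterdam"
```

Searching for all variants of a term (rather than only its modern spelling) gives a rough upper bound on how many matches the confusion hides from a naive query.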
In order to reduce the impact of such errors, we investigated what kind of data would be required and whether it is available in the archive.
We describe our study of example research tasks performed on the digital newspaper archive of the National Library of the Netherlands. In this study, we tried to reduce the uncertainty of the results as much as possible with the data publicly available in the archive.
We conclude that the current level of knowledge, on the side of the scholars as well as on that of the tool makers and data providers, is insufficient and needs to be improved.
Together with the eHumanities group of KNAW and the Amsterdam Data Science Center, we organize a workshop on Tool Criticism in the Digital Humanities.
The workshop will take place in Amsterdam on May 22, 2015.
For more information please have a look at the website.
[Update:] A report that summarizes the discussions and results from the workshop is now available.
On Tuesday, March 24, 2015, the National Library of the Netherlands organized a symposium on the use of digitized newspapers in the Digital Humanities. The goal of the symposium was to engage information specialists and end users in a discussion with the KB on future possibilities of using the (data in the) digital newspaper archive.
We presented our research ideas on estimating the impact of OCR errors on research tasks.
For more information about the event, please see the report on the KB website.
EKAW 2014 short paper presentation – Using Linked Data to diversify search results: a case study in cultural heritage. The paper can be found here.
Slides of the panel presentation of the Accurator nichesourcing framework at MCN 2014. You can find a trip report of MCN 2014 here.
The cultural heritage domain has opened up to contributions from users on the web. These contributions are mainly tags that describe a certain aspect of a cultural heritage object. With such a wide range of users on the web, it becomes important to determine the quality of user-contributed content before it is published online. However, manually evaluating the quality of these user-generated contributions is too resource-intensive for cultural heritage institutions. In this talk, I will describe methods that can semi-automatically predict the quality of tags. These methods address three research questions: How can we trust an online contributor? How can we assess the quality of the annotation process? How can we trust the contributed data? The slides for the presentation can be found here.
Large datasets such as cultural heritage collections require detailed annotations when digitised and made available online. Annotating different aspects of such collections requires a variety of knowledge and expertise that the collection curators do not always possess. Artwork annotation is an example of a knowledge-intensive image annotation task, i.e. a task that demands annotators have domain-specific knowledge in order to complete it successfully. Today, Lora Aroyo will present at the WebSci2014 conference the results of a study aimed at investigating the applicability of crowdsourcing techniques to knowledge-intensive image annotation tasks. We observed a clear relationship between the annotation difficulty of an image, in terms of the number of items to identify and annotate, and the performance of the recruited workers. Here you can see the poster and the slides of the presentation.
We are going to present the new version of Accurator at Naturalis! Slideshare preview:
On April 14, we presented our full paper on crowdsourcing an expert task at the European Conference on Information Retrieval 2014. Here are the slides of the presentation.