Linguistic Appropriateness and Pedagogic Usefulness of Reading Comprehension Questions

Automatic generation of reading comprehension questions is a topic receiving growing interest in the NLP community, but there is currently no consensus on evaluation metrics, and many approaches assess only linguistic quality while ignoring the pedagogic value and appropriateness of questions. This paper addresses these weaknesses with a new evaluation scheme in which the questions of the questionnaire are structured hierarchically, so that human annotators are never confronted with evaluation measures that do not make sense for a given question. We show through an annotation study that our scheme can be applied, but that annotators with some level of expertise are needed. We also created and evaluated two new evaluation data sets from the biology domain for Basque and German, composed of questions written by people with an educational background, which will be publicly released. Results show that manually generated questions are in general of higher linguistic as well as pedagogic quality, and that among the human-generated questions, teacher-generated ones tend to be the most useful.
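The scheme's central device, presenting an evaluation measure to an annotator only when earlier answers make it applicable, can be pictured as a gated tree walk. The Python sketch below is purely illustrative: the EvalItem structure, the example measures, and the gating conditions are assumptions made for exposition, not the paper's actual annotation scheme.

    from dataclasses import dataclass, field
    from typing import Callable, List, Optional

    @dataclass
    class EvalItem:
        prompt: str
        # Gate: given the parent's answer, decide whether this item applies.
        # (Hypothetical structure, not taken from the paper.)
        applies: Callable[[Optional[str]], bool] = lambda parent: True
        children: List["EvalItem"] = field(default_factory=list)

    def annotate(item: EvalItem, ask: Callable[[str], str],
                 parent_answer: Optional[str] = None) -> dict:
        """Walk the hierarchy depth-first, skipping inapplicable sub-items."""
        if not item.applies(parent_answer):
            return {}  # the annotator never sees this measure
        answer = ask(item.prompt)
        results = {item.prompt: answer}
        for child in item.children:
            results.update(annotate(child, ask, answer))
        return results

    # Illustrative hierarchy: pedagogic usefulness is only rated for
    # questions first judged grammatical and answerable from the text.
    scheme = EvalItem(
        prompt="Is the question grammatically well-formed? (yes/no)",
        children=[EvalItem(
            prompt="Is it answerable from the text? (yes/no)",
            applies=lambda a: a == "yes",
            children=[EvalItem(
                prompt="Rate its pedagogic usefulness (1-5):",
                applies=lambda a: a == "yes",
            )],
        )],
    )

    if __name__ == "__main__":
        canned = {
            "Is the question grammatically well-formed? (yes/no)": "yes",
            "Is it answerable from the text? (yes/no)": "no",
            "Rate its pedagogic usefulness (1-5):": "3",
        }
        # The usefulness rating is skipped: the question is unanswerable.
        print(annotate(scheme, canned.get))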
Authors:
Andrea Horbach, Itziar Aldabe, Marie Bexte, Oier Lopez de Lacalle and Montse Maritxalar
Year:
2020
Article reference:
Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020), Marseille, France.