Evaluating Automatic Generation of English Assessments

  • Publication Date: 2021-02-09
Application Dept.: Dept. of Foreign Languages and Literatures
Principal Investigator: Asst. Prof. Ching-Yu Yang
Project Title: Evaluating Automatic Generation of English Assessments
Co-Principal Investigators: 1. Asst. Prof. Yao-Chung Fan, Dept. of Computer Science and Engineering 2. Prof. Yuh-fang Chang, Dept. of Foreign Languages and Literatures 3. Asst. Prof. I-ming Shi, Dept. of Foreign Languages and Literatures
Abstract: Testing and assessment play an important role in language education. Controlling the difficulty and validity of a test can be challenging, however, because standards differ from teacher to teacher. Constructing test items is also time-consuming and claims a large share of teachers' working hours. Using technology to assist teachers in creating test questions can reduce their workload, standardize test difficulty, and is likely to become a trend in language assessment. One purpose of this project is to build question datasets from a reliable source. These datasets will serve as training data to fine-tune the BERT Highlight Sequential Question Generation model for generating reading comprehension questions, and new models will be developed to generate questions of other types. Additionally, English tests produced by the automatic test-generation system will be administered by the NCHU Language Center in Freshman English courses to measure how well the generated questions discriminate between language learners of different proficiency levels. Large-scale evaluation of the generated questions, together with language teaching experts' judgments of question quality, will be carried out to fine-tune the machine learning models. The results are expected to support the development of an innovative automatic test-generation system that meets the assessment needs of English language learners and teachers.
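
For illustration, the sketch below shows how answer-aware ("highlight") question generation is typically run at inference time with a pretrained encoder-decoder model from Hugging Face Transformers. This is a minimal sketch under stated assumptions, not the project's actual system: the checkpoint name is hypothetical, and the `<hl>` answer markers and `generate question:` prefix are conventions borrowed from community highlight-QG models rather than details confirmed by the project.

```python
# Minimal sketch of answer-aware ("highlight") question generation.
# CHECKPOINT is a hypothetical fine-tuned model; the project's own model
# would be trained on the curated datasets described in the abstract.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

CHECKPOINT = "your-org/qg-highlight-model"  # placeholder, not a real checkpoint

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSeq2SeqLM.from_pretrained(CHECKPOINT)

def generate_question(passage: str, answer: str) -> str:
    """Mark the answer span with <hl> tokens and decode one question."""
    highlighted = passage.replace(answer, f"<hl> {answer} <hl>", 1)
    inputs = tokenizer(f"generate question: {highlighted}",
                       return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_length=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

passage = ("The Language Center administers placement tests to all "
           "first-year students at the start of each semester.")
print(generate_question(passage, "placement tests"))
```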
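
The goal of measuring whether generated questions "discriminate between learners of different proficiency levels" is usually quantified in classical test theory with an item discrimination index. Below is a minimal sketch assuming the conventional upper-lower 27% method; the function name and toy data are illustrative, and the abstract does not specify which psychometric measure the project will use.

```python
import numpy as np

def discrimination_index(item_correct: np.ndarray, total_scores: np.ndarray,
                         tail: float = 0.27) -> float:
    """Upper-lower group discrimination index D for a single test item.

    item_correct: 0/1 array, whether each examinee answered this item correctly.
    total_scores: each examinee's total test score.
    D = p(correct | top 27%) - p(correct | bottom 27%); values of roughly
    0.3 and above are conventionally read as good discrimination.
    """
    n = len(total_scores)
    k = max(1, int(round(n * tail)))          # size of each tail group
    order = np.argsort(total_scores)          # examinees sorted by total score
    low, high = order[:k], order[-k:]
    return item_correct[high].mean() - item_correct[low].mean()

# Toy example: 10 examinees, one item.
scores = np.array([12, 35, 48, 22, 40, 15, 30, 44, 18, 38])
correct = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 1])
print(f"D = {discrimination_index(correct, scores):.2f}")  # D = 1.00 here
```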