Evaluating the Effectiveness of AI-Based Automatic Generation of English Tests

  • Date posted: 2021-02-09
Applying Department (Unit): Department of Foreign Languages and Literatures
Principal Investigator: Assistant Professor 楊謦瑜
Project Title (Chinese): 人工智慧自動生成英文測驗之成效評估 (Evaluating the Effectiveness of AI-Based Automatic Generation of English Tests)
Project Title (English): Evaluating Automatic Generation of English Assessments
Co-Principal Investigators: 1. Associate Professor 范耀中 / Department of Computer Science 2. Professor 張玉芳 / Department of Foreign Languages and Literatures 3. Associate Professor 施以明 / Department of Foreign Languages and Literatures
Co-Investigator(s):
Chinese Abstract (translated): Test construction has always been a crucial part of language education. The difficulty and validity of manually written test items inevitably vary with individual teachers' standards, and item writing consumes teachers' teaching and research capacity, so using technology to assist teachers in test construction is a clear trend. This project plans to collect test items from reliable sources to build datasets, which will be used to optimize an AI question-generation model and to develop new models that support item writing for different question types. In addition, leveraging the Language Center's course offerings, the automatically generated items will be used in Freshman English assessments to evaluate whether the developed models produce items with discriminating power. Based on quantitative evaluation of the tests and teachers' qualitative evaluation of item quality, the models will be fine-tuned. The project is expected to deliver an automatic item-generation model and system that fits real usage scenarios and offers innovative value.
English Abstract: Testing and assessment play an important role in language education. Controlling the difficulty level and the validity of a test can, however, be a challenge because of differences in teachers' standards. In addition, as a time-consuming task, constructing test items takes up too much of teachers' time. Using technology to assist teachers in creating test questions can reduce teachers' workload and standardize testing difficulty, and is therefore a future trend. One purpose of this project is to create datasets with questions from reliable sources. The datasets will be used as training data to optimize the BERT Highlight Sequential Question Generation model for reading-comprehension question generation. New models will also be developed to generate questions of other types. Additionally, English tests generated by the automatic test-generation system will be administered by the NCHU Language Center in Freshman English courses to measure whether these questions are effective at discriminating language learners' proficiency levels. Large-scale evaluation of the generated questions, as well as language-teaching experts' evaluation of question quality, will be carried out to fine-tune the machine learning models. It is expected that the results will help develop an innovative automatic test-generation system that meets the assessment needs of English language learners and teachers.
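The abstract proposes measuring whether generated items discriminate between stronger and weaker learners. The project does not specify its metric; a minimal sketch of one common choice from classical test theory, the upper-lower discrimination index (proportion correct in the top-scoring group minus the bottom-scoring group), might look like this. The function name and the sample data below are illustrative assumptions, not the project's actual method or results.

```python
def discrimination_index(item_scores, total_scores, group_fraction=0.27):
    """Classical upper-lower discrimination index for one test item.

    item_scores: 1/0 correctness per student on this item.
    total_scores: each student's total test score (same order).
    group_fraction: share of students in each extreme group (0.27 is customary).
    Returns a value in [-1, 1]; higher means the item better separates
    high- and low-scoring students.
    """
    n = len(item_scores)
    k = max(1, int(n * group_fraction))
    # Rank student indices by total score, best first.
    ranked = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    upper, lower = ranked[:k], ranked[-k:]
    p_upper = sum(item_scores[i] for i in upper) / k
    p_lower = sum(item_scores[i] for i in lower) / k
    return p_upper - p_lower

# Hypothetical example: 10 students; the item is answered correctly
# mostly by those with high total scores, so it discriminates well.
item = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
totals = [95, 90, 88, 60, 85, 55, 50, 80, 45, 40]
print(discrimination_index(item, totals))  # → 1.0
```

An index near 0 (or negative) would flag a generated item for review; the quantitative large-scale evaluation the abstract describes could combine such per-item statistics with the teachers' qualitative judgments.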