- Author: 中谷 安男
- Publisher: 法政大学経済学部学会 (The Economic Society of Hosei University)
- Journal: 経済志林 = The Hosei University Economic Review (ISSN: 0022-9741)
- Volume/Issue/Pages/Date: vol. 87, no. 1–2, pp. 21–50, 2019-09-20
This paper explores the relationship between the results of an automated scoring system based on the CEFR-J and human raters’ assessments. As a pilot study for a larger investigation with more subjects, it examines CEFR-J writing test tasks at three different levels completed by 37 participants. First, two independent raters evaluated a total of 111 test samples using the CEFR-J assessment guidelines for each level. These results were compared with those of a CEFR-J automated level-judging system that leverages error types and text quality measures. The results show that although the correlation indices are low, the consistency between the two evaluation methods tends to be better at the higher level, B1.2.1. A qualitative analysis of the test samples with large discrepancies indicates that it is effective to use both human raters and the automated level-judging system when deciding candidates’ final scores and giving feedback on results.