TY - GEN
T1 - A WoZ Study for an Incremental Proficiency Scoring Interview Agent Eliciting Ratable Samples
AU - Saeki, Mao
AU - Demkow, Weronika
AU - Kobayashi, Tetsunori
AU - Matsuyama, Yoichi
N1 - Funding Information:
This paper is based on results obtained from a project, JPNP20006 (“Online Language Learning AI Assistant that Grows with People”), subsidized by the New Energy and Industrial Technology Development Organization (NEDO).
Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
PY - 2022
Y1 - 2022
N2 - To assess the conversational proficiency of language learners, it is essential to elicit samples that are representative of the learner’s full linguistic ability. This is realized through the adjustment of oral interview questions to the learner’s perceived proficiency level. An automatic system eliciting ratable samples must incrementally predict the approximate proficiency from a few turns of dialog and employ an adaptable question generation strategy according to this prediction. This study investigates the feasibility of such incremental adjustment of oral interview question difficulty during the interaction between a virtual agent and a learner. First, we create an interview scenario with questions designed for different levels of proficiency and collect interview data using a Wizard-of-Oz virtual agent. Next, we build an incremental scoring model and analyze its accuracy. Finally, we discuss future directions for automated adaptive interview system design.
AB - To assess the conversational proficiency of language learners, it is essential to elicit samples that are representative of the learner’s full linguistic ability. This is realized through the adjustment of oral interview questions to the learner’s perceived proficiency level. An automatic system eliciting ratable samples must incrementally predict the approximate proficiency from a few turns of dialog and employ an adaptable question generation strategy according to this prediction. This study investigates the feasibility of such incremental adjustment of oral interview question difficulty during the interaction between a virtual agent and a learner. First, we create an interview scenario with questions designed for different levels of proficiency and collect interview data using a Wizard-of-Oz virtual agent. Next, we build an incremental scoring model and analyze its accuracy. Finally, we discuss future directions for automated adaptive interview system design.
UR - http://www.scopus.com/inward/record.url?scp=85142708560&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85142708560&partnerID=8YFLogxK
U2 - 10.1007/978-981-19-5538-9_13
DO - 10.1007/978-981-19-5538-9_13
M3 - Conference contribution
AN - SCOPUS:85142708560
SN - 9789811955372
T3 - Lecture Notes in Electrical Engineering
SP - 193
EP - 201
BT - Conversational AI for Natural Human-Centric Interaction - 12th International Workshop on Spoken Dialogue System Technology, IWSDS 2021
A2 - Stoyanchev, Svetlana
A2 - Ultes, Stefan
A2 - Li, Haizhou
PB - Springer Science and Business Media Deutschland GmbH
T2 - 12th International Workshop on Spoken Dialogue System Technology, IWSDS 2021
Y2 - 15 November 2021 through 17 November 2021
ER -