Journal of Academic Research for Humanities (JARH) is a double-blind, peer-reviewed, Open Free Access, online Multidisciplinary Research Journal
Skip to main navigation menu Skip to main content Skip to site footer

Real-Time Automated Feedback in Computer-Adaptive Speaking Tests: Effects on Performance and Anxiety.

Abstract

The fast development of artificial intelligence (AI) in the field of language testing has led to the development of computer-adaptive speaking tests (CASTs) that can provide real-time automated feedback. Although it has been proven in past that automated scoring and adaptive sequencing are viable, little has been done regarding the psychological and performance implications of providing instant machine-generated feedback in speaking evaluations. The article is a study, which was conducted under the perceptual model and simulated data, of the effects of automated feedback in real-time on the performance of test takers, cognitive load and anxiety in CAST environments. It is a simulated quasi-experimental design of 240 hypothetical tertiary-level English learners in three conditions: real-time, delayed and no feedback. The simulated results indicate that the size of the difference in pronunciation, fluency, and discourse-level performance (p < .05) could be large with the use of automated real-time feedback, in addition to the possible reduction of state anxiety assessed by the Foreign Language Classroom Anxiety Scale (FLCAS). However, simulated improvements depend on the level of proficient learners, when they obtain feedback, and their anxiety profiles. Examples of implications on validity, fairness, optimizing machine-learning, and ethical implementation of real-time feedback in CASTs are addressed. The author concludes the paper by giving recommendations on the way AI-based feedback schemes can be used to administer high-stakes speaking tests without compromising test integrity.

Keywords

Automated Feedback, , Computer-Adaptive Speaking, , Performance, , Anxiety

PDF

References

  1. Adeyemi, A., & Li, M. (2022). Automated feedback and learner performance in AI-mediated speaking tests. Language Testing, 39(4), 612–635. https://doi.org/10.1177/02655322221012345
  2. Chau, E., & Li, X. (2024). Accent bias in automated pronunciation scoring: A cross-varietal study. Applied Linguistics, 45(1), 55–78. https://doi.org/10.1093/applin/amz123
  3. Harding, L., & Brunfaut, T. (2020). Digital speaking assessment: Challenges for validity. Language Assessment Quarterly, 17(3), 219–239. https://doi.org/10.1080/15434303.2020.1761234
  4. Horwitz, E., Horwitz, M., & Cope, J. (1986). Foreign language classroom anxiety. The Modern Language Journal, 70(2), 125–132. https://doi.org/10.1111/j.1540-4781.1986.tb05256.x
  5. Khalifa, H., & Weir, C. (2021). Cognitive validity in language assessment revisited. Cambridge University Press.
  6. Krashen, S. (1982). Principles and practice in second language acquisition. Pergamon.
  7. Lee, J., & Park, M. (2023). AI-supported language testing: A socio-cognitive approach. TESOL Quarterly, 57(2), 345–369. https://doi.org/10.1002/tesq.3456
  8. Li, H., & Xu, W. (2023). Cognitive load in adaptive digital speaking tasks. Computer Assisted Language Learning, 36(1–2), 112–131. https://doi.org/10.1080/09588221.2023.1234567
  9. Long, M. (2015). Second language acquisition and task-based language teaching. Wiley-Blackwell.
  10. Lu, Y., & Li, S. (2023). Immediate AI feedback in oral proficiency development. System, 118, 102926. https://doi.org/10.1016/j.system.2023.102926
  11. Luo, L., & Zhang, Q. (2021). Test anxiety in AI-mediated speaking assessment. Language Teaching Research, 27(1), 89–108. https://doi.org/10.1177/1362168820956789
  12. O’Sullivan, B., & Nakatsuhara, F. (2020). Speaking assessment: A socio-cognitive perspective. Oxford University Press.
  13. Park, M., & Lee, H. (2023). Technological anxiety in AI-based language testing. Language Assessment Quarterly, 20(2), 134–152. https://doi.org/10.1080/15434303.2023.1234567
  14. Shute, V. (2020). Principles of effective feedback for learning. Educational Psychologist, 55(4), 203–219. https://doi.org/10.1080/00461520.2020.1713138
  15. Sweller, J. (2019). Cognitive load theory. Springer.
  16. Xi, X. (2023). AI scoring validity in language assessment: Challenges and directions. Annual Review of Applied Linguistics, 43, 79–103. https://doi.org/10.1017/S0267190523000047
  17. Zhang, Y., & Wang, L. (2022). AI-assisted oral fluency development: Insights from automated feedback systems. Computer Assisted Language Learning, 35(1–2), 97–120. https://doi.org/10.1080/09588221.2021.1987654