Optimizing English pronunciation teaching through motion analysis and intelligent speech feedback systems

Jieru Wang, Department of Economic Management, Weihai Ocean Vocational College, Weihai 264200, China
Keywords: motion analysis; intelligent speech feedback; articulatory precision; pronunciation error reduction; English pronunciation; non-native speakers
Article ID: 652

Abstract

This study investigates the effectiveness of integrating Motion Analysis (MA) and Intelligent Speech Feedback Systems (ISFS) to enhance English Pronunciation (EP) accuracy among Chinese learners. Leveraging the OptiTrack Prime 13 Motion Capture System (MCS) and the SpeechAce Pronunciation API, the study addresses challenges that non-native English speakers face, particularly in producing accurate articulatory movements and reducing pronunciation errors. Forty-three participants were divided into an Experimental Group (EG) and a Control Group (CG), with the EG receiving real-time feedback on articulation and phoneme accuracy. Key metrics, including Pronunciation Accuracy Score (PAS), Articulatory Movement Score (AMS), and Pronunciation Error Rate (PER), were measured alongside engagement indicators such as session duration and self-corrections. The results show that the EG achieved a significant improvement in pronunciation accuracy, with a 31.2% increase in PAS and a 57.1% reduction in PER. Higher AMS values also indicated refined articulatory precision across articulatory points, including lip rounding and tongue positioning. Engagement metrics showed greater consistency and higher task completion rates in the EG, suggesting that the real-time feedback increased motivation and sustained participation. These findings indicate that combining MA with ISFS can provide targeted, adaptive support, enabling learners to make precise corrections and accelerate progress toward native-like pronunciation. The study contributes insights into the potential of advanced feedback-driven approaches in language learning and pronunciation training.
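The percentage changes reported above are simple relative differences between pre- and post-intervention group means. The sketch below illustrates how such figures are computed; the PAS and PER values are hypothetical placeholders chosen only to reproduce the reported magnitudes, and the snippet does not draw on the study's raw data or the SpeechAce API.

```python
# Minimal sketch of computing relative improvement/reduction from group means.
# All numeric values below are hypothetical illustrations, not study data.

def percent_change(before: float, after: float) -> float:
    """Signed percentage change relative to the baseline value."""
    return (after - before) / before * 100.0

# Hypothetical group means chosen to reproduce the reported magnitudes:
# ~31.2% increase in Pronunciation Accuracy Score (PAS) and
# ~57.1% reduction in Pronunciation Error Rate (PER).
pas_before, pas_after = 62.5, 82.0   # PAS on an assumed 0-100 scale
per_before, per_after = 0.28, 0.12   # PER as an assumed proportion of mispronounced phonemes

pas_gain = percent_change(pas_before, pas_after)   # ~ +31.2%
per_drop = -percent_change(per_before, per_after)  # ~ 57.1% reduction

print(f"PAS improvement: {pas_gain:.1f}%")
print(f"PER reduction:   {per_drop:.1f}%")
```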

Published
2024-12-20
How to Cite
Wang, J. (2024). Optimizing English pronunciation teaching through motion analysis and intelligent speech feedback systems. Molecular & Cellular Biomechanics, 21(4), 652. https://doi.org/10.62617/mcb652
Section
Article