Towards Stable AI systems for Evaluating Arabic Pronunciations

Authors

Hadi Zaatiti¹, Hatem Hajri², Osama Abdullah¹ and Nader Masmoudi¹
¹ New York University Abu Dhabi, UAE; ² Institut de recherche technologique SystemX, France

Abstract

Modern Arabic ASR systems such as wav2vec 2.0 excel at word- and sentence-level transcription, yet struggle to classify isolated letters. In this study, we show that this phoneme-level task, crucial for language learning, speech therapy, and phonetic research, is challenging because isolated letters lack co-articulatory cues, provide no lexical context, and last only a few hundred milliseconds. Recognisers must therefore rely solely on variable acoustic cues, a difficulty heightened by Arabic's emphatic (pharyngealised) consonants and other sounds with no close analogues in many languages. We introduce a diverse, diacritised corpus of isolated Arabic letters and demonstrate that state-of-the-art wav2vec 2.0 models achieve only 35% accuracy on it. Training a lightweight neural network on wav2vec embeddings raises performance to 65%. However, adding a small amplitude perturbation (ε = 0.05) cuts accuracy to 32%. To restore robustness, we apply adversarial training, limiting the drop on noisy speech to 9% while preserving clean-speech accuracy. We detail the corpus, training pipeline, and evaluation protocol, and release data and code on demand for reproducibility. Finally, we outline future work extending these methods to word- and sentence-level frameworks, where precise letter pronunciation remains critical.
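
The embedding-plus-classifier pipeline described above can be sketched as follows: a frozen pretrained wav2vec 2.0 encoder produces frame-level features, which are mean-pooled and passed to a small trainable head. This is a minimal illustration under stated assumptions, not the authors' released code: the checkpoint name, head dimensions, pooling strategy, and the 28-letter label space are all illustrative choices.

import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class LetterClassifier(nn.Module):
    def __init__(self, num_letters: int = 28, hidden: int = 256):
        super().__init__()
        # Frozen pretrained encoder; only the lightweight head is trained.
        # Checkpoint name is an assumption for the sketch.
        self.encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.head = nn.Sequential(
            nn.Linear(self.encoder.config.hidden_size, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_letters),
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) at 16 kHz; isolated letters last
        # only a few hundred milliseconds, so clips are short.
        hidden_states = self.encoder(waveform).last_hidden_state
        # Mean-pool over time to get one utterance-level embedding per clip.
        pooled = hidden_states.mean(dim=1)
        return self.head(pooled)

Likewise, the adversarial-training step against a small amplitude perturbation (ε = 0.05) can be sketched with an FGSM-style attack on the raw waveform. The paper reports only the perturbation budget and the resulting robustness, so the attack type and the clean/adversarial loss mixing below are assumptions:

import torch
import torch.nn.functional as F

def fgsm_perturb(model, waveform, labels, epsilon: float = 0.05):
    # Worst-case amplitude perturbation within an L-infinity ball of
    # radius epsilon, found by one signed-gradient step (FGSM).
    waveform = waveform.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(waveform), labels)
    loss.backward()
    return (waveform + epsilon * waveform.grad.sign()).detach()

def adversarial_training_step(model, optimizer, waveform, labels):
    model.train()
    adv = fgsm_perturb(model, waveform, labels)
    optimizer.zero_grad()
    # Training on clean and adversarial clips together is one common way
    # to gain robustness while preserving clean-speech accuracy.
    loss = F.cross_entropy(model(waveform), labels) \
         + F.cross_entropy(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()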
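Mixing clean and perturbed examples in each step, as in the sketch above, is one standard way to trade off robustness against clean accuracy; the exact recipe used in the paper may differ.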

Keywords

Arabic letter pronunciation, adversarial training, classification

Volume 15, Number 16