Automatic Speech Recognition (ASR) can be of great benefit to speakers with dysarthria, a neurological disorder that impairs control of the motor speech articulators. Although several attempts have been made to apply ASR technology to dysarthric speakers, previous studies show that such systems have not reached an adequate level of performance. In this study, a Dysarthric Multi-Networks Speech Recogniser (DM-NSR) model is presented that uses a realisation of the Multi-Views Multi-Learners approach, called Multi-Nets Artificial Neural Networks, to tolerate the variability of dysarthric speech. In particular, the DM-NSR model employs several ANNs (as learners) to approximate the likelihoods of the ASR vocabulary words and to cope with the complexity of dysarthric speech. The proposed DM-NSR approach is evaluated in both speaker-dependent (SD) and speaker-independent (SI) paradigms. To highlight the performance of the proposed model over legacy models, Multi-Views Single-Learner counterparts of the DM-NSRs are also provided and their efficiencies are compared in detail. Moreover, a comparison between the proposed method and prominent dysarthric ASR methods is provided. The results show that the DM-NSR improved the recognition rate by up to 24.67% and reduced the error rate by up to 8.63% over the reference model.
Keywords: Multi-views multi-learners; Multi-nets artificial neural networks; Dysarthric speech recognition; Dysarthria
Published by: IEEE Transactions on Neural Systems and Rehabilitation Engineering, Impact Factor: 3.24, Indexed by Web of Science (ISI)
Full Title: On the use of Multi-Nets Artificial Neural Networks towards Dysarthric Speech Recognition: A Multi-Views Multi-Learners approach
Full paper link: IEEE Xplore
Date: Wednesday, August 20, 2014
Language: English
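To make the multi-nets idea in the abstract concrete, the sketch below shows one possible reading of it: a small feed-forward network per vocabulary word (one learner per word), with the recogniser returning the word whose network yields the highest likelihood score. This is only an illustrative assumption; the WordNet and MultiNetRecogniser names, the network sizes, the feature vector, and the absence of training are all hypothetical and are not taken from the paper.

```python
# Minimal sketch of a multi-nets word recogniser, assuming one small
# feed-forward ANN per vocabulary word acts as a learner; topology,
# features, and (omitted) training are illustrative, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class WordNet:
    """One learner: scores how likely a feature vector is a given word."""
    def __init__(self, n_features, n_hidden=16):
        self.W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=n_hidden)
        self.b2 = 0.0

    def likelihood(self, x):
        h = np.tanh(x @ self.W1 + self.b1)   # hidden layer
        return sigmoid(h @ self.W2 + self.b2)  # word likelihood in (0, 1)

class MultiNetRecogniser:
    """Combines the per-word learners: the word whose network returns
    the highest likelihood wins."""
    def __init__(self, vocabulary, n_features):
        self.nets = {w: WordNet(n_features) for w in vocabulary}

    def recognise(self, features):
        scores = {w: net.likelihood(features) for w, net in self.nets.items()}
        return max(scores, key=scores.get)

# Usage with a made-up acoustic feature vector (e.g. a pooled MFCC frame).
recogniser = MultiNetRecogniser(["yes", "no", "stop", "go"], n_features=13)
print(recogniser.recognise(rng.normal(size=13)))
```

In this reading, distributing the vocabulary across several small networks (rather than one large classifier) is what lets each learner specialise and tolerate speaker-specific variability; the paper's actual training procedure and view construction are not reproduced here.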