Patrick Haffner, Alex Waibel, Hidefumi Sawai, Kiyohiro Shikano
Acceleration of the Back-Propagation Algorithm
for Speech Neural Networks
Abstract: Several improvements to the Back-Propagation learning algorithm are proposed to achieve fast optimization of speech tasks in Time Delay Neural Networks. A steep error surface is used, weights are updated more frequently, and both the step size and the momentum are scaled to the largest values that do not result in overshooting. Training for the speaker-dependent recognition of the phonemes /b/, /d/ and /g/ takes less than 1 minute on an Alliant parallel computer. The same algorithm needs one hour and 5000 training tokens to recognize all the Japanese consonants with 96.7% correct on test data. Moreover, these fast methods make it possible to study generalization performance on large Neural Network tasks.
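
As a rough illustration of the update rule summarized above, the following is a minimal Python sketch of one back-propagation step with momentum, where the step size is grown as long as an update does not overshoot and cut back when it does. The names (train_step, grad_fn, loss_fn) and the grow/shrink factors are illustrative assumptions, not the paper's actual procedure.

    import numpy as np

    def train_step(w, velocity, lr, grad_fn, loss_fn, momentum=0.9,
                   grow=1.1, shrink=0.5):
        # Classic back-propagation update with momentum:
        #   delta_w(t) = -lr * dE/dw + momentum * delta_w(t-1)
        loss_before = loss_fn(w)
        velocity = -lr * grad_fn(w) + momentum * velocity
        w_new = w + velocity
        if loss_fn(w_new) <= loss_before:
            # No overshoot: accept the step and push the step size higher.
            return w_new, velocity, lr * grow
        # Overshoot: reject the step, reset the momentum term,
        # and shrink the step size.
        return w, np.zeros_like(velocity), lr * shrink

    # Toy usage: minimize E(w) = ||w||^2 from a random start.
    w = np.random.randn(5)
    v = np.zeros_like(w)
    lr = 0.01
    for _ in range(100):
        w, v, lr = train_step(w, v, lr,
                              grad_fn=lambda w: 2 * w,
                              loss_fn=lambda w: np.sum(w ** 2))

The loss check before and after each step is one simple way to detect overshooting; growing lr multiplicatively keeps the step size near the largest value the error surface tolerates, in the spirit of the scaling described in the abstract.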