TR-H-0003 : 1992.7.20 (Internal Use)

Yasuhiro Wada, Mitsuo Kawato

A New Information Criterion Combined with Cross-Validation Method to Estimate Generalization Capability

Abstract: Neural network learning processes use only a limited number of examples of a given problem. Thus, there is generally no theoretical guarantee that the trained network gives correct answers for unknown examples. A new method of selecting the optimal neural network structure with maximum generalization capability is proposed. In statistical mathematics, several information criteria, such as AIC (Akaike's information criterion), BIC (Bayesian information criterion), and MDL (minimum description length), are widely used to select a suitable model. Applications of these criteria have been quite successful, especially for linear models. These criteria assume that the model parameters are estimated correctly by the maximum likelihood method. However, this assumption does not hold for conventional iterative learning processes such as backpropagation in multilayer perceptrons or Boltzmann machine learning. Thus, AIC should not be applied directly to the selection of the optimal neural network structure. In this paper, by extending AIC, a new information criterion is proposed that can estimate generalization capability without the maximum likelihood estimator of the synaptic weights. The cross-validation method is used to calculate the new information criterion. Computer simulations show that the proposed information criterion accurately predicts the generalization capability of multilayer perceptrons, so that the optimal number of hidden units can be determined.
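To make the selection procedure concrete, the following is a minimal Python sketch of choosing the number of hidden units by K-fold cross-validation. It is an illustration only, not the method of the paper: the toy data, network settings, and scikit-learn estimators (MLPRegressor, KFold) are assumptions for the example, and plain cross-validated squared error is used as a stand-in for the new information criterion derived in the paper.

```python
# Illustrative sketch: select the number of hidden units of a one-hidden-layer
# network by K-fold cross-validation. Data, hyperparameters, and the use of
# mean squared validation error (rather than the paper's criterion) are
# assumptions made for this example.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))                      # toy inputs
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)   # noisy target

def cv_error(n_hidden, n_splits=5):
    """Mean squared validation error of an MLP with n_hidden units,
    averaged over K folds (a proxy for expected generalization error)."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    errs = []
    for train_idx, val_idx in kf.split(X):
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,),
                           max_iter=5000, random_state=0)
        net.fit(X[train_idx], y[train_idx])
        pred = net.predict(X[val_idx])
        errs.append(np.mean((pred - y[val_idx]) ** 2))
    return float(np.mean(errs))

# Evaluate several candidate structures and keep the one with the
# smallest estimated generalization error.
scores = {h: cv_error(h) for h in (1, 2, 4, 8, 16)}
best = min(scores, key=scores.get)
print(scores)
print("selected number of hidden units:", best)
```

Note that, unlike an information criterion evaluated on the training data alone, this procedure requires retraining the network once per fold and per candidate structure; the paper's criterion is likewise computed from cross-validation rather than from a maximum likelihood fit.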