The regularization network (RN) is extended to approximate multi-valued functions, so that a one-to-h mapping, where h denotes the multiplicity of the mapping, can be represented and learned from a finite number of input-output examples without hard clustering operations on the training data set. Multi-valued function approximation is useful for learning ambiguous input-output relations from examples. This extension, which we call the Multi-Valued Regularization Network (MVRN), is derived from Multi-Valued Standard Regularization Theory (MVSRT), an extension of standard regularization theory to multi-valued functions. MVSRT is based on a direct algebraic representation of multi-valued functions using the tensor (Kronecker) product. By a simple transformation of the unknown functions we obtain linear Euler-Lagrange equations, so the learning algorithm for MVRN reduces to solving a linear system. Remarkably, the dimension of this linear system is invariant with respect to the multiplicity h. The proposed theory can be specialized and extended to Radial Basis Function (RBF) methods, Generalized RBF (GRBF), spline approximation, and HyperBF networks for multi-valued functions. We also describe how vector-valued function approximation can be extended to combined multi- and vector-valued function approximation.
Keywords: Regularization network, Function approximation, Regularization theory, Multi-valued mapping, Computational learning theory, Multilayer network, Neural computation, Inverse problem.
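
To make the construction concrete, here is a minimal sketch, in Python, of one way to read the abstract's direct algebraic representation for multiplicity h = 2: the two branches f1 and f2 are encoded by the single-valued coefficient functions s1 = f1 + f2 and s2 = f1*f2 of the polynomial constraint y^2 - s1(x)*y + s2(x) = 0; each training pair then constrains the RBF weights of s1 and s2 linearly, and learning is a single ridge-regularized linear solve. The toy relation y^2 = x, the Gaussian basis, the centers, the width, and the ridge parameter lam are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Ambiguous training data: each x carries ONE of its two y-values,
# with the branch chosen at random and the label discarded.
x = rng.uniform(0.05, 1.0, size=200)
y = np.sqrt(x) * rng.choice([-1.0, 1.0], size=x.size)

# Gaussian RBF features shared by both unknown coefficient functions
# (centers, width, and the choice of basis are illustrative).
centers = np.linspace(0.0, 1.0, 15)
width = 0.1

def phi(t):
    # Design matrix: rows = samples, columns = centers.
    return np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))

# Direct algebraic representation for h = 2:
#   y^2 - s1(x)*y + s2(x) = 0, with s1 = f1 + f2 and s2 = f1*f2.
# Each sample (x_i, y_i) is LINEAR in the RBF weights of s1 and s2:
#   s1(x_i)*y_i - s2(x_i) = y_i^2
P = phi(x)
A = np.hstack([P * y[:, None], -P])  # unknowns stacked as [w_s1, w_s2]
b = y ** 2

# Ridge-regularized normal equations: learning is one linear solve,
# whose size depends on the number of basis functions, not on which
# branch each sample came from.
lam = 1e-6
w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
w_s1, w_s2 = np.split(w, 2)

# Recover both branches at query points as roots of the quadratic.
xq = np.linspace(0.05, 1.0, 5)
s1, s2 = phi(xq) @ w_s1, phi(xq) @ w_s2
disc = np.sqrt(np.maximum(s1 ** 2 - 4.0 * s2, 0.0))
print(np.round((s1 + disc) / 2.0, 3))  # approximates +sqrt(x)
print(np.round((s1 - disc) / 2.0, 3))  # approximates -sqrt(x)

Note that the branch label of each training pair is never used: the polynomial constraint absorbs the ambiguity, which is exactly what removes the need for hard clustering of the training set.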