TR-H-0045 : 1993.12.22

Masahiko SHIZAWA

Multi-Valued Standard Regularization Theory (2): Regularization Networks and Learning Algorithms for Approximating Multi-Valued Functions

Abstract: The Regularization Network (RN) is extended to approximate multi-valued functions, so that a one-to-h mapping, where h denotes the multiplicity of the mapping, can be represented and learned from a finite number of input-output samples without any clustering operation on the sample data set. Multi-valued function approximation is useful for learning ambiguous input-output relations from examples. This extension, which we call the Multi-Valued Regularization Network (MVRN), is derived from the Multi-Valued Standard Regularization Theory (MVSRT), an extension of standard regularization theory to multi-valued functions. MVSRT is based on a direct algebraic representation of multi-valued functions. By a simple transformation of the unknown functions, the Euler-Lagrange equations become linear, so the learning algorithm for the MVRN reduces to solving a linear system. Remarkably, the dimension of this linear system is independent of the multiplicity h. The proposed theory can be specialized and extended to Radial Basis Function (RBF), Generalized RBF (GRBF), and HyperBF networks for multi-valued functions.

Keywords: Multi-Valued Function, One-to-Many Mapping, Computational Learning, Ambiguous Relations, Function Approximation, Regularization Network, Radial Basis Function Network, GRBF Network, Spline Approximation Network, Feedforward Neural Network.
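
The following is a minimal numerical sketch of the idea described in the abstract, for multiplicity h = 2: the two-valued relation is represented implicitly by the polynomial F(x, y) = y^2 - u1(x) y + u2(x) = 0, whose single-valued coefficient functions u1 and u2 enter each training constraint linearly, so fitting them amounts to solving a linear system. The Gaussian RBF expansion, the ridge-type regularizer, and all names and parameters below (gaussian_design, fit_two_valued, sigma, lam) are illustrative assumptions made for this sketch, not the exact MVSRT/MVRN formulation of the report.

# Sketch (assumed form, h = 2): learn the single-valued coefficient functions
# u1, u2 of the implicit polynomial  y**2 - u1(x)*y + u2(x) = 0,
# with u1, u2 expanded over Gaussian RBFs centred at the training inputs.
import numpy as np

def gaussian_design(x, centers, sigma):
    """N x M matrix of Gaussian RBF activations."""
    d = x[:, None] - centers[None, :]
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

def fit_two_valued(x, y, sigma=0.2, lam=1e-3):
    """Each sample (x_i, y_i) gives one linear constraint on the RBF weights:
       -u1(x_i)*y_i + u2(x_i) = -y_i**2."""
    G = gaussian_design(x, x, sigma)            # N x N
    A = np.hstack([-y[:, None] * G, G])         # N x 2N, unknowns = [w1; w2]
    b = -y ** 2
    # Ridge-regularised least squares (a stand-in for the regularisation term).
    w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
    return x.copy(), w[: len(x)], w[len(x):], sigma

def predict_branches(model, xq):
    """Recover both branches at xq as roots of y**2 - u1*y + u2 = 0."""
    centers, w1, w2, sigma = model
    Gq = gaussian_design(xq, centers, sigma)
    u1, u2 = Gq @ w1, Gq @ w2
    disc = np.sqrt(np.maximum(u1 ** 2 - 4.0 * u2, 0.0))
    return (u1 - disc) / 2.0, (u1 + disc) / 2.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 200)
    # Ambiguous data: both branches of the unit circle, unlabelled and mixed.
    y = np.where(rng.random(200) < 0.5, 1.0, -1.0) * np.sqrt(1.0 - x ** 2)
    model = fit_two_valued(x, y)
    lo, hi = predict_branches(model, np.array([0.0, 0.5]))
    print(lo, hi)   # roughly [-1, -0.87] and [1, 0.87]

Because the unknowns are the coefficient functions rather than the branches themselves, the samples need no branch labels or clustering, and the number of unknowns is set by the RBF expansion rather than by the multiplicity; the branches are recovered at query time as roots of the fitted quadratic.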