$L^p$ approximation problems in system identification with RBF neural networks are investigated. It is proved that by superpositions of certain functions of one variable in $L^p_{loc}(R)$, one can approximate continuous functionals defined on a compact subset of $L^p(K)$ and continuous operators from a compact subset of $L^{p_1}(K_1)$ to a compact subset of $L^{p_2}(K_2)$. These results show that if the activation function is in $L^p_{loc}(R)$ and is not an even polynomial, then RBF neural networks can approximate the above systems to any accuracy.
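For concreteness, the superpositions in question are finite RBF expansions of the standard form below; this is a schematic rendering, with centers $y_i$, scales $\lambda_i$, and coefficients $c_i$ as generic symbols not fixed by the abstract:

$$N(x) = \sum_{i=1}^{n} c_i\, g\big(\lambda_i \|x - y_i\|_{R^d}\big), \qquad c_i, \lambda_i \in R,\; y_i \in R^d.$$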
The $L^p$ approximation capability of radial basis function (RBF) neural networks is investigated. If $g: R^1_+ \to R^1$ and $g(\|x\|_{R^n}) \in L^p_{loc}(R^n)$ with $1 \le p < \infty$, then the RBF neural networks with $g$ as the activation function can approximate any given function in $L^p(K)$ to any accuracy for any compact set $K$ in $R^n$, if and only if $g(x)$ is not an even polynomial.
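A minimal numerical sketch of this capability, assuming a Gaussian activation (which lies in $L^p_{loc}(R)$ and is not an even polynomial); the target function, grid of centers, and scale are illustrative choices, not taken from the paper:

```python
import numpy as np

# Approximate f(x) = sin(pi * x) on K = [-1, 1] with an RBF expansion
# sum_i c_i * g(lam * |x - y_i|), where g is a Gaussian. Since the
# Gaussian is not an even polynomial, the cited result says such
# expansions are dense in L^p(K).

def g(r):
    return np.exp(-r ** 2)              # activation applied to the radius

centers = np.linspace(-1.0, 1.0, 20)    # y_i: illustrative grid of centers
lam = 5.0                               # lambda: illustrative scale

x = np.linspace(-1.0, 1.0, 200)
target = np.sin(np.pi * x)

# Design matrix Phi[j, i] = g(lam * |x_j - y_i|); fit c by least squares.
Phi = g(lam * np.abs(x[:, None] - centers[None, :]))
c, *_ = np.linalg.lstsq(Phi, target, rcond=None)

approx = Phi @ c
# Discrete L^2 error as a stand-in for the L^p(K) norm.
print("L2 error:", np.sqrt(np.mean((approx - target) ** 2)))
```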
In this work, a gradient method with momentum for backpropagation (BP) neural networks is considered. The momentum coefficient is chosen adaptively to accelerate and stabilize the learning of the network weights, and corresponding convergence results are proved.
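A sketch of one such adaptive rule on a toy loss; the specific choice below (scaling the momentum coefficient down when the previous step is large relative to the current gradient) is an illustrative assumption, not necessarily the rule analyzed in the paper:

```python
import numpy as np

def train(loss_grad, w0, eta=0.1, mu=0.9, steps=100):
    """Gradient descent with an adaptively chosen momentum coefficient."""
    w = w0.copy()
    prev_step = np.zeros_like(w)
    for _ in range(steps):
        grad = loss_grad(w)
        # Adaptive momentum coefficient (assumed rule): shrink mu when the
        # previous step is large relative to the current gradient, so the
        # momentum term cannot dominate and destabilize learning.
        denom = np.linalg.norm(prev_step)
        alpha = mu * min(1.0, np.linalg.norm(grad) / denom) if denom > 0 else 0.0
        step = -eta * grad + alpha * prev_step
        w += step
        prev_step = step
    return w

# Usage on the toy quadratic loss 0.5 * ||w||^2, whose gradient is w.
w_final = train(lambda w: w, np.array([2.0, -1.5]))
print(w_final)
```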
In this paper we prove finite convergence of online BP algorithms for nonlinear feedforward neural networks when the training patterns are linearly separable.
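A toy illustration of the setting, using a single sigmoid neuron trained pattern-by-pattern; the data, learning rate, and stopping criterion (an epoch with no updates) are illustrative assumptions standing in for the paper's precise notion of finite convergence:

```python
import numpy as np

# Online BP on linearly separable patterns: update after each pattern,
# and stop once every pattern sits on the correct side of 0.5.

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X @ np.array([1.0, -2.0]) > 0).astype(float)  # separable labels in {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
eta = 0.5
for epoch in range(1000):
    updated = False
    for xi, yi in zip(X, y):
        out = sigmoid(w @ xi)
        if abs(out - yi) > 0.5 - 1e-9:   # pattern still misclassified
            w += eta * (yi - out) * out * (1 - out) * xi  # online BP step
            updated = True
    if not updated:                       # no update needed: training halts
        print("converged after epoch", epoch)
        break
```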