Meade's On-line Preprints 
    Regularization of a Programmed Recurrent Artificial Neural Network 
    Andrew J. Meade, Jr.
    Submitted to Journal of Guidance, Control, and Dynamics, 2000.

    Keywords: regularization, recurrent artificial neural networks, neural computation, differential equations, chaos, network training.

    Abstract: A method is developed for manually constructing recurrent artificial neural networks (RANNs) to model the fusion of experimental data and mathematical models of physical systems. The construction requires the use of Generalized Tikhonov Regularization (GTR) and the imposition of certain constraints on the values of the input, bias, and output weights. Attributing specific roles to each of these parameters allows a polynomial approximation to be mapped onto an artificial neural network architecture. GTR provides a rational means of combining theoretical models, computational data, and experimental measurements into a global representation of a domain. Attention is focused on a second-order nonlinear ordinary differential equation governing the classic Duffing oscillator. The nonlinear ordinary differential equation is modeled by the RANN architecture in conjunction with the popular hyperbolic tangent transfer function. GTR is then used to smoothly merge the response of the RANN with experimental data. Moreover, this approach is shown to be capable of incorporating other smooth neuron transfer functions, as long as they can be described by a Taylor series expansion. Numerical examples are presented illustrating the accuracy and utility of the method. 
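    The data-fusion step described above rests on generalized Tikhonov regularization: a fit to measurements d is balanced against a penalty that keeps the solution close to a prior model through a regularization operator L. The sketch below is a generic closed-form GTR solve, not the paper's RANN-specific construction; the operators A (observation), L (second-difference smoother), the prior u0, and the weight lam are all illustrative assumptions.

    ```python
    import numpy as np

    def generalized_tikhonov(A, d, L, u0, lam):
        """Solve min_u ||A u - d||^2 + lam * ||L (u - u0)||^2 in closed form.

        Normal equations: (A^T A + lam L^T L) u = A^T d + lam L^T L u0.
        """
        lhs = A.T @ A + lam * (L.T @ L)
        rhs = A.T @ d + lam * (L.T @ L) @ u0
        return np.linalg.solve(lhs, rhs)

    # Toy example: fuse noisy direct measurements with a smoothness prior.
    rng = np.random.default_rng(0)
    n = 50
    t = np.linspace(0.0, 1.0, n)
    u_true = np.sin(2 * np.pi * t)
    A = np.eye(n)                        # identity observation operator (assumed)
    d = u_true + 0.1 * rng.standard_normal(n)
    L = np.diff(np.eye(n), 2, axis=0)    # second-difference (curvature) operator
    u0 = np.zeros(n)                     # trivial prior model (assumed)
    u = generalized_tikhonov(A, d, L, u0, lam=10.0)
    ```

    With these choices the penalty damps high-frequency noise while leaving the slowly varying signal nearly untouched, so the regularized estimate u lies closer to u_true than the raw data d.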

    This work was supported under Office of Naval Research grant N00014-95-1-0741.


23 pages.