Meade's On-line Preprints

    Application of Multilayer Feedforward Networks in the Solution of Compressible Euler Equations
    Andrew J. Meade, Jr.
    NASA Report, March 1998.

    Keywords: Artificial neural networks, incremental approximation, basis functions, meshless solution adaptivity, optimization, direct search methods, parallel processing, computational fluid dynamics.

    Abstract: Recent work has proven that it is possible to actually "program" multilayer feedforward artificial neural networks (ANNs). This new paradigm not only makes it possible to logically and predictably extend the capabilities of ANNs to "hard" computing, such as Computational Fluid Dynamics (CFD), but also reveals a different, quantitative way of looking at neural networks that could help advance our understanding of these systems. Accurate modeling of linear ordinary differential equations (ODEs) and linear partial differential equations (PDEs) has already been completed using this paradigm. It is proposed to extend this capability by using ANNs to model the solution of the two-dimensional compressible Euler equations. The resulting code should not only have the same accuracy as a more conventional computer code, but should also retain the ability of an ANN to modify itself when exposed to experimental data, yielding software that could be specialized with experimental results. To accomplish these objectives, the method of optimal incremental function approximation has been developed for the adaptive solution of differential equations using ANN architecture. Two major attractive features of this approach are that: 1) the method is flexible enough to use any of the popular transfer functions and 2) it requires minimal user interaction. The latter is especially advantageous when dealing with complicated physical or computational domains. The method of optimal incremental function approximation is formulated by combining concepts from computational mechanics and artificial neural networks (e.g., function approximation and error minimization, variational principles and weighted residuals, and adaptive grid optimization).
The basis functions and associated coefficients of a series expansion representing the solution are optimally selected by a parallel direct search technique at each step of the algorithm, according to appropriate criteria, and the solution is built sequentially. The complicated data structures and expensive remeshing algorithms and system solvers common in conventional computational mechanics are avoided. Computational efficiency is increased by using augmented radial bases and concurrent computing. Variational principles are used to define the objective function to be extremized in the associated optimization problems, ensuring that the problem is well-posed. Numerical results and convergence rates are reported for steady-state problems, including the nonlinear compressible Euler equations.
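The incremental scheme described above (basis functions added one at a time, each chosen by a direct search to best reduce the current residual) can be sketched in a toy one-dimensional setting. This is a minimal illustration with assumed names; the target function, Gaussian radial basis, and grid-based direct search are stand-ins for the paper's variational objective and parallel search, not the author's actual implementation.

```python
import numpy as np

def target(x):
    # Stand-in for an unknown solution field (not from the paper).
    return np.tanh(5.0 * (x - 0.5))

def rbf(x, c, w):
    # Gaussian radial basis function with center c and width w.
    return np.exp(-((x - c) / w) ** 2)

x = np.linspace(0.0, 1.0, 201)
residual = target(x).copy()   # residual of the current approximation
terms = []                    # selected (coefficient, center, width) triples

for step in range(8):
    best = None
    # Crude direct search: sweep a grid of candidate centers and widths.
    # In the paper's method this search is parallel and driven by a
    # variationally derived objective; here it is a simple residual norm.
    for c in np.linspace(0.0, 1.0, 41):
        for w in (0.05, 0.1, 0.2, 0.4):
            phi = rbf(x, c, w)
            a = (phi @ residual) / (phi @ phi)   # least-squares coefficient
            err = np.linalg.norm(residual - a * phi)
            if best is None or err < best[0]:
                best = (err, a, c, w)
    _, a, c, w = best
    terms.append((a, c, w))
    residual -= a * rbf(x, c, w)   # the solution is built sequentially

print(f"{len(terms)} basis functions, final residual norm "
      f"{np.linalg.norm(residual):.3e}")
```

Because each coefficient is chosen by least squares against the current residual, the residual norm is nonincreasing from step to step, mirroring the sequential, meshless build-up of the solution that the abstract describes.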

    This work was supported under NASA Graduate Fellowship number NGT-70353.

139 pages.