Optimal Incremental Approximation for the Solution of Differential Equations
Andrew J. Meade, Jr., Michael Kokkolaras, and Boris A. Zeldin
Submitted to International Journal for Numerical Methods in Engineering, 1998.
Keywords: Incremental approximation, basis functions, variational principles, adaptivity, optimization, parallel direct search.
Abstract: A method for optimal incremental function approximation is proposed for the solution of differential equations. The basis functions and associated coefficients of a series expansion representing the solution are optimally selected at each step of the algorithm according to appropriate error minimization criteria; the solution is built sequentially. In this manner, the computational technique is adaptive in nature, although no grid is built or adapted in the traditional sense using a posteriori error estimates. Variational principles are utilized to define the objective function to be extremized in the associated optimization problems, ensuring that these problems are well-posed. Complicated data structures, expensive remeshing algorithms, and system solvers are avoided. Computational efficiency is increased by using low-order basis functions and the parallel direct search optimization technique. Numerical results and convergence rates are reported for linear non-self-adjoint and nonlinear problems with general boundary conditions. Generalization aspects of the method are discussed.
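To make the idea concrete, the following is a minimal sketch (not the paper's implementation) of incremental variational approximation for a model problem of my own choosing: -u'' = f on (0, 1) with u(0) = u(1) = 0 and f = pi^2 sin(pi x), whose exact solution is sin(pi x). At each increment one new basis function (here a hypothetical Gaussian bump shaped to satisfy the boundary conditions) is added; its coefficient is obtained in closed form from the quadratic Ritz energy, and its shape parameters are tuned by a crude serial pattern search standing in for parallel direct search.

```python
import numpy as np

# Assumed model problem (not from the paper): -u'' = f on (0,1),
# u(0) = u(1) = 0, f = pi^2 sin(pi x), exact solution u = sin(pi x).
x = np.linspace(0.0, 1.0, 401)
dx = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)

def integ(y):
    # composite trapezoid rule on the uniform grid
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

def energy(u, du):
    # Ritz energy functional J(u) = \int (1/2 u'^2 - f u) dx;
    # its minimizer is the weak solution of the model problem
    return integ(0.5 * du**2 - f * u)

def basis(c, w):
    # hypothetical low-order basis: Gaussian bump times x(1-x),
    # so each basis function satisfies the boundary conditions exactly
    phi = x * (1.0 - x) * np.exp(-((x - c) / w) ** 2)
    return phi, np.gradient(phi, x)

def best_coeff(du, phi, dphi):
    # J is quadratic in the new coefficient a, so the optimal a is
    # closed-form: a = (\int f phi dx - \int u' phi' dx) / \int phi'^2 dx
    return (integ(f * phi) - integ(du * dphi)) / integ(dphi**2)

def trial(u, du, c, w):
    phi, dphi = basis(c, w)
    a = best_coeff(du, phi, dphi)
    return energy(u + a * phi, du + a * dphi), a, phi, dphi

u = np.zeros_like(x)   # sequential approximation starts from zero
du = np.zeros_like(x)
history = []
for step in range(6):
    # direct (pattern) search over the basis parameters (c, w);
    # the paper uses parallel direct search, this serial loop is a stand-in
    c, w, h = 0.5, 0.3, 0.2
    J, a, phi, dphi = trial(u, du, c, w)
    while h > 1e-3:
        moved = False
        for dc, dw in [(h, 0), (-h, 0), (0, h), (0, -h)]:
            cc, ww = c + dc, max(w + dw, 1e-2)
            Jt, at, pt, dpt = trial(u, du, cc, ww)
            if Jt < J:
                J, a, phi, dphi, c, w = Jt, at, pt, dpt, cc, ww
                moved = True
        if not moved:
            h *= 0.5          # shrink the search stencil
    u, du = u + a * phi, du + a * dphi   # commit the new term
    history.append(J)

err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Because each coefficient is chosen as the exact minimizer of the quadratic energy (with a = 0 always available), the energy history is nonincreasing, mirroring the sequential, adaptive character the abstract describes: no mesh is ever built, yet the basis concentrates wherever the residual energy can be reduced most.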
This work was supported under NASA grant number CRA2-35504 and ONR grant N00014-95-1-0741.