Improving Neural Network Efficiency With Multifidelity and Dimensionality Reduction Techniques

Vignesh Sella · January 10, 2022

Design problems in aerospace engineering often require numerous evaluations of expensive, nonlinear, high-fidelity models, resulting in prohibitive computational costs. One way to address this cost is to build surrogates, such as deep neural networks (DNNs). DNNs can be effective when enough evaluations of the high-fidelity model are required that the up-front training cost is amortized, or in situations that demand real-time responses (such as interactive visualizations). Nevertheless, the data requirements for adequately training DNNs are often impractical for engineering applications. To alleviate this issue, the proposed work combines output dimensionality reduction with information from multiple models of varying fidelity and cost to develop accurate projection-enabled multifidelity neural networks (MF-NNs) within a limited computational budget. The dimensionality reduction leads to a more parsimonious network, and the multifidelity aspect adds training data from lower-cost, lower-fidelity models. Three approaches for MF-NNs that leverage projections based on the proper orthogonal decomposition (POD) are introduced: (i) pre-training, (ii) additive, and (iii) a multi-step method. The MF-NN is applied to approximate the optimal design of 2D aerodynamic airfoils given the performance and design requirements.
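To make the output dimensionality reduction concrete, the sketch below illustrates the generic POD idea: a snapshot matrix of high-dimensional model outputs is factored with a thin SVD, and the leading left singular vectors form a basis onto which outputs are projected. A surrogate would then only need to predict the few POD coefficients rather than the full output vector. This is a minimal, self-contained illustration with synthetic data; all names, sizes, and the basis rank are assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix Y (n_outputs x n_samples): outputs that lie
# near a low-dimensional subspace, standing in for high-fidelity model data.
n, m, r_true = 200, 50, 5
modes_true = np.linalg.qr(rng.standard_normal((n, r_true)))[0]
latent = rng.standard_normal((r_true, m))
Y = modes_true @ latent + 1e-6 * rng.standard_normal((n, m))

# POD basis from the thin SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(Y, full_matrices=False)
r = 5                      # retained rank (an assumed truncation level)
V = U[:, :r]               # n x r orthonormal POD basis

# Project outputs to r coefficients (the quantities a projection-enabled
# surrogate would learn to predict), then lift back to the full space.
coeffs = V.T @ Y           # r x m reduced representation
Y_rec = V @ coeffs         # rank-r reconstruction

rel_err = np.linalg.norm(Y - Y_rec) / np.linalg.norm(Y)
print(f"relative reconstruction error: {rel_err:.2e}")
```

Because the network's output layer shrinks from `n` units to `r`, the resulting surrogate is far more parsimonious, which is one motivation stated above for combining POD with the multifidelity training data.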

Authors: Vignesh Sella, Anirban Chaudhuri, Thomas O’Leary-Roseberry, Xiaosong Du, Mengwu Guo, Joaquim Martins, Omar Ghattas, Karen Willcox
