From efficient high-order methods to scientific machine learning.

Abstract

We aim to address some limitations of current finite element method (FEM) packages. We focus on the efficient implementation of implicit time discretization schemes, on computational savings obtained by exploiting the structure of serendipity elements, and on a wrapper that integrates machine learning with FEM packages.

First, we consider implicit Runge-Kutta methods, which possess high-order accuracy and important stability properties. Implementation difficulties and the high expense of solving the coupled algebraic system at each time step are frequently cited as impediments. We present Irksome, a high-level library for manipulating UFL (Unified Form Language) expressions of semidiscrete variational forms to obtain UFL expressions for the coupled Runge-Kutta stage equations at each time step. Irksome works with the Firedrake package to enable the efficient solution of the resulting coupled algebraic systems.

Second, we develop additive Schwarz methods (ASM) based on solving local patch problems with serendipity elements, which attain the same order of accuracy as rectangular tensor-product elements with far fewer degrees of freedom (DoFs). Adapting arguments from Pavarino for the tensor-product case, we prove that patch smoothers give conditioning estimates independent of the polynomial degree for a model problem. Combining the patch smoother with a low-order global operator yields an optimal two-grid method, with conditioning estimates independent of both the mesh size and the polynomial degree. The theory holds for two- and three-dimensional serendipity elements and can be extended to multigrid algorithms.

Lastly, we introduce Torchfire, a differentiable programming interface that combines PyTorch and Firedrake to perform model-constrained deep learning of solutions of parameterized PDEs and PDE-constrained inverse problems. It leverages PyTorch's high-level interface for training neural networks, its automatic differentiation module for computing derivatives, and Firedrake's capabilities for computing finite element residuals. Torchfire provides differentiable wrappers of Firedrake-based computations for use in PyTorch, enabling users to write Firedrake code linked to a network without breaking the computational graph used for backpropagation.

Numerical experiments using Firedrake, PETSc, and PyTorch confirm our theory and demonstrate the efficacy of the software and our solver techniques in each case.
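
To make the software contributions concrete, a few illustrative sketches follow. The first shows the style of driver Irksome enables for the heat equation: the semidiscrete form is written with the Dt operator, a Butcher tableau is chosen, and TimeStepper forms and solves the coupled stage equations. This is a minimal sketch in the spirit of Irksome's documented demos, not code from the thesis; interface details may vary between versions.

```python
from firedrake import (UnitSquareMesh, FunctionSpace, Function, TestFunction,
                       SpatialCoordinate, Constant, inner, grad, dx)
from irksome import Dt, GaussLegendre, TimeStepper

msh = UnitSquareMesh(32, 32)
V = FunctionSpace(msh, "CG", 2)
x, y = SpatialCoordinate(msh)

t = Constant(0.0)
dt = Constant(1.0 / 32)

u = Function(V)
u.interpolate(x * (1 - x) * y * (1 - y))   # initial condition
v = TestFunction(V)

# Semidiscrete variational form of the heat equation; Dt marks du/dt.
F = inner(Dt(u), v) * dx + inner(grad(u), grad(v)) * dx

# Two-stage Gauss-Legendre: A-stable and fourth-order accurate.
stepper = TimeStepper(F, GaussLegendre(2), t, dt, u)

while float(t) < 1.0:
    stepper.advance()               # solves the coupled stage system, updates u
    t.assign(float(t) + float(dt))
```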
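
The second sketch indicates how a two-grid solver of the kind analyzed here might be configured from Firedrake, using its serendipity ("S") element family, a vertex-star additive Schwarz smoother (ASMStarPC), and p-coarsening to a low-order global problem (P1PC). The option names follow PETSc/Firedrake conventions, but this is an illustrative configuration, not the thesis's exact setup.

```python
from firedrake import *

mesh = UnitSquareMesh(16, 16, quadrilateral=True)
V = FunctionSpace(mesh, "S", 4)      # degree-4 serendipity elements
u, v = TrialFunction(V), TestFunction(V)
a = inner(grad(u), grad(v)) * dx
L = Constant(1.0) * v * dx
bcs = DirichletBC(V, 0, "on_boundary")

# Two-grid solver: vertex-star additive Schwarz patches smooth the
# high-order problem; P1PC supplies the low-order global correction.
params = {
    "ksp_type": "cg",
    "pc_type": "python",
    "pc_python_type": "firedrake.P1PC",
    "pmg_mg_levels_ksp_type": "chebyshev",
    "pmg_mg_levels_pc_type": "python",
    "pmg_mg_levels_pc_python_type": "firedrake.ASMStarPC",
    "pmg_mg_coarse_ksp_type": "preonly",
    "pmg_mg_coarse_pc_type": "lu",
}

uh = Function(V)
solve(a == L, uh, bcs=bcs, solver_parameters=params)
```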
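
Finally, the Torchfire idea of exposing a Firedrake computation as a differentiable PyTorch operation can be illustrated with a hand-rolled torch.autograd.Function. The names here (energy, FiredrakeEnergy) are hypothetical, and this is not Torchfire's actual API: it is a serial sketch of the pattern in which the forward pass assembles a scalar Firedrake functional of the degrees of freedom and the backward pass assembles its Firedrake-computed derivative, so gradients flow back to the network without disconnecting the graph.

```python
import torch
import firedrake as fd

mesh = fd.UnitSquareMesh(16, 16)
V = fd.FunctionSpace(mesh, "CG", 1)

def energy(u):
    # Illustrative scalar functional: Dirichlet energy minus a source term.
    return (0.5 * fd.inner(fd.grad(u), fd.grad(u)) - u) * fd.dx

class FiredrakeEnergy(torch.autograd.Function):
    """Hypothetical Torchfire-style wrapper: a Firedrake functional of the
    DoF vector exposed as a differentiable torch operation (serial only)."""

    @staticmethod
    def forward(ctx, theta):
        u = fd.Function(V)
        u.dat.data[:] = theta.detach().cpu().numpy()  # torch DoFs -> Firedrake
        ctx.u = u
        return theta.new_tensor(fd.assemble(energy(u)))

    @staticmethod
    def backward(ctx, grad_out):
        # dJ/du assembled by Firedrake, chained with the incoming gradient.
        dJ = fd.assemble(fd.derivative(energy(ctx.u), ctx.u))
        return grad_out * torch.from_numpy(dJ.dat.data_ro.copy())

theta = torch.zeros(V.dim(), dtype=torch.float64, requires_grad=True)
loss = FiredrakeEnergy.apply(theta)
loss.backward()   # gradients flow back through the Firedrake computation
```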
