Alexander Upjohn Beatson will present his FPO "Learned surrogates and stochastic gradients for accelerating numerical modeling, simulation, and design" on Monday, June 21, 2021 at 12:00 PM via Zoom.


Zoom link: https://princeton.zoom.us/j/99943194862


The members of Alexander’s committee are as follows: Ryan Adams (adviser);

Readers: Szymon Rusinkiewicz, Sigrid Adriaenssens;

Examiners: Ryan Adams, Elad Hazan, Mark Girolami (Cambridge University)


A copy of Alex’s thesis is available upon request. Please email gradinfo@cs.princeton.edu if you would like a copy.


Everyone is invited to attend his talk.


Abstract follows below:


Numerical methods, such as discretization-based methods for solving ODEs and PDEs, allow us to model and design complex devices, structures, and systems. However, modeling and design with these methods is often very costly, both in computation and in the time of the expert who must specify the physical model, the discretization, the solver, and so on. In this talk I will present a line of work that uses deep learning and stochastic gradient estimation to speed up the solution and optimization of systems characterized by such numerical methods.


First, I will present composable energy surrogates, in which neural surrogates are trained to model the potential energy of sub-components or sub-domains of a PDE and are then composed, or stitched together, to solve a larger system by minimizing the sum of potentials across components. This allows surrogate modeling without costly ground-truth simulation of the full system, as training data are generated by performing finite element analysis on individual components. We show that these surrogates can accelerate the simulation of parametric meta-materials and produce accurate macroscopic behavior when composed.
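
To make the composition step concrete, here is a minimal sketch in JAX (the surrogate architecture, the names surrogate_energy, composed_energy, and descend, and the plain gradient-descent solve are illustrative assumptions, not the thesis implementation): a per-component network predicts stored energy from a cell's interface displacements and design parameters, and the assembled structure is solved by minimizing the sum of these energies over the shared interface degrees of freedom.

```python
# Minimal sketch (illustrative assumptions, not the thesis code) of composing
# per-component energy surrogates and solving the assembled system by energy
# minimization over shared interface displacements.
import jax
import jax.numpy as jnp

def surrogate_energy(net_params, boundary_disp, cell_design):
    # Hypothetical MLP surrogate: (cell's interface displacements, cell design) -> stored energy.
    x = jnp.concatenate([boundary_disp, cell_design])
    for W, b in net_params[:-1]:
        x = jnp.tanh(W @ x + b)
    W, b = net_params[-1]
    return jnp.squeeze(W @ x + b)

def composed_energy(interface_disp, net_params, cell_dof_ids, cell_designs):
    # Total potential energy of the structure: sum of per-cell surrogate energies,
    # where each cell gathers the interface degrees of freedom it touches.
    per_cell = jax.vmap(
        lambda dof_ids, design: surrogate_energy(net_params, interface_disp[dof_ids], design)
    )
    return jnp.sum(per_cell(cell_dof_ids, cell_designs))

@jax.jit
def descend(interface_disp, net_params, cell_dof_ids, cell_designs, lr=1e-2):
    # One gradient step on the composed energy; iterating this relaxes the
    # assembled structure toward equilibrium.
    g = jax.grad(composed_energy)(interface_disp, net_params, cell_dof_ids, cell_designs)
    return interface_disp - lr * g
```

In practice the per-cell surrogate would be trained offline on finite element solves of individual components, and the energy minimization would typically use a stronger optimizer (e.g., L-BFGS) rather than fixed-step gradient descent.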


Next, I will discuss randomized telescoping gradient estimators, which provide unbiased gradient estimates for objectives that are the limit of a sequence of increasingly accurate, increasingly costly approximations. These estimators represent the limit as a telescoping sum and sample linear combinations of its terms to provide cheap unbiased estimates. We discuss conditions which permit finite variance and finite expected computation, optimality of certain estimators within this class, and applications to problems in numerical modeling and machine learning.
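
As a rough illustration (a sketch under stated assumptions, not the thesis implementation; loss_at_level and the geometric sampling distribution are invented for the example), the code below truncates the telescope at a maximum level and applies a "Russian roulette" reweighting, so that in expectation the estimate matches the gradient of the most accurate level while most draws only evaluate the cheap coarse levels:

```python
# Minimal sketch (illustrative assumptions, not the thesis code) of a randomized
# telescoping ("Russian roulette") gradient estimator, truncated at N_MAX levels.
import numpy as np
import jax
import jax.numpy as jnp

N_MAX = 8                                     # truncation level for this illustration
q = np.array([0.5 ** n for n in range(1, N_MAX + 1)])
q[-1] += 1.0 - q.sum()                        # sampling distribution over levels; sums to 1
Q = np.cumsum(q[::-1])[::-1]                  # Q[n-1] = P(N >= n), the reweighting factors

def loss_at_level(theta, n):
    # Hypothetical stand-in for L_n: an increasingly accurate (and, in a real
    # application, increasingly costly) approximation of a limiting objective.
    ks = jnp.arange(1, n + 1)
    return jnp.sum(jnp.cos(theta * ks) / ks ** 2)

def rt_gradient(theta, rng):
    # Sample a level N ~ q, then sum the first N telescope differences
    # (grad L_n - grad L_{n-1}), each divided by the probability P(N >= n)
    # with which it is included, giving an unbiased estimate of grad L_{N_MAX}.
    N = int(rng.choice(N_MAX, p=q)) + 1
    grads = [jax.grad(loss_at_level)(theta, n) for n in range(N + 1)]  # level 0 has zero gradient
    return sum((grads[n] - grads[n - 1]) / Q[n - 1] for n in range(1, N + 1))

# Example: average many mostly-cheap draws instead of always paying for level N_MAX.
rng = np.random.default_rng(0)
estimate = np.mean([rt_gradient(jnp.array(0.3), rng) for _ in range(100)])
```

Unbiasedness (for the truncated objective) follows because each difference grad L_n - grad L_{n-1} is divided by P(N >= n), exactly the probability with which that term appears in the sum; the thesis treats the untruncated limit and the conditions under which variance and expected computation remain finite.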


Finally, I will discuss meta-learned implicit PDE solvers (Meta-PDE), which offer a new API for surrogate modeling. These models condition on a functional representation of a PDE and its domain by taking as direct input the PDE constraint and a method which returns samples in the domain and on the boundary. By avoiding a fixed parametric basis, this allows fitting surrogate models for classes of PDEs with arbitrarily varying geometry and governing equations. It also does not require supervision from expensive ground-truth or FEA solutions. We apply Meta-PDE to a nonlinear Poisson problem and show that it learns to solve PDEs accurately and quickly across different boundary conditions, governing equations, and problem geometries. The resulting meta-model solves these PDEs much faster than FEA methods that achieve similar accuracy.
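
The sketch below shows what such an API could look like (an assumption-laden illustration in JAX, not the thesis code; the task dictionary keys and the names pinn_loss, adapt, and meta_loss are invented for the example): the loss is built only from a callable PDE residual, a callable boundary condition, and point samplers, and MAML-style meta-learning of the initial weights makes a few inner gradient steps sufficient for a new problem.

```python
# Minimal sketch (illustrative assumptions, not the thesis code) of a meta-learned,
# mesh-free implicit PDE solver conditioned on a PDE constraint and point samplers.
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Small MLP u_theta(x) serving as the implicit representation of the PDE solution.
    for W, b in params[:-1]:
        x = jnp.tanh(W @ x + b)
    W, b = params[-1]
    return jnp.squeeze(W @ x + b)

def pinn_loss(params, task):
    # Mesh-free loss built only from the PDE constraint and samplers:
    # task["residual"](u_fn, x) evaluates the governing equation at an interior point,
    # task["bc"](u_fn, x) evaluates the boundary condition at a boundary point.
    u_fn = lambda x: mlp(params, x)
    interior = task["sample_interior"]()
    boundary = task["sample_boundary"]()
    r = jax.vmap(lambda x: task["residual"](u_fn, x))(interior)
    b = jax.vmap(lambda x: task["bc"](u_fn, x))(boundary)
    return jnp.mean(r ** 2) + jnp.mean(b ** 2)

def adapt(params, task, inner_lr=1e-2, inner_steps=5):
    # Inner loop: a few gradient steps specialize the meta-initialization to one PDE task.
    for _ in range(inner_steps):
        grads = jax.grad(pinn_loss)(params, task)
        params = jax.tree_util.tree_map(lambda p, g: p - inner_lr * g, params, grads)
    return params

def meta_loss(meta_params, tasks):
    # Outer (MAML-style) objective: post-adaptation loss averaged over sampled PDE tasks.
    return jnp.mean(jnp.stack([pinn_loss(adapt(meta_params, t), t) for t in tasks]))
```

For a nonlinear Poisson problem, task["residual"] could evaluate the governing equation at a point using derivatives of u_fn obtained with jax.jacfwd or jax.hessian, and the meta-gradient jax.grad(meta_loss) differentiates through the unrolled inner loop.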