
Data-Driven Model Reduction, Scientific Frontiers, and Applications

Rami Younis, Harold Vance Department of Petroleum Engineering
Amortizing the Costs of Scientific Machine Learning at Scale: Timely Challenges and Opportunities

Abstract

A basic premise of scientific machine learning is that an approximator can be regressed to data while locally honoring physical (often theoretical) constraints. The trained approximator then provides inferences at some estimation accuracy, amortizing its training cost over many uses. A timely debate within the broader community concerns the costs of this paradigm as these methods proliferate at massive scale. Extrapolating from current compute-demand trends, some observers outside the technology sector project that by 2027 the training cost of the largest AI model could exceed the total U.S. GDP. Frontier topics for research and innovation are maturing from studies of initial adoption (e.g., whether off-the-shelf or lightly adapted methods apply to specific scenarios) toward improved models, hardware logistics, and deployment strategies.
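As a concrete illustration of this premise, the sketch below regresses a small network to data while softly penalizing a local physics residual, here the toy ODE u'(x) = -u(x) with exact solution exp(-x). This is a minimal, hypothetical JAX example, not the speaker's method; the model problem, network, and all names are illustrative. Once trained, each inference is a cheap forward evaluation, which is how the training cost is amortized.

```python
import jax
import jax.numpy as jnp

def init_params(key, width=32):
    k1, k2 = jax.random.split(key)
    return {
        "W1": jax.random.normal(k1, (width,)),
        "b1": jnp.zeros(width),
        "W2": jax.random.normal(k2, (width,)) / jnp.sqrt(width),
        "b2": jnp.zeros(()),
    }

def u(params, x):
    # Small MLP approximator u_theta(x) for scalar x.
    h = jnp.tanh(x * params["W1"] + params["b1"])
    return jnp.dot(h, params["W2"]) + params["b2"]

def loss(params, x_data, u_data, x_phys):
    # Data misfit: fit scattered observations.
    pred = jax.vmap(u, in_axes=(None, 0))(params, x_data)
    data_term = jnp.mean((pred - u_data) ** 2)
    # Physics residual: softly enforce u'(x) + u(x) = 0 at collocation points.
    du = jax.vmap(jax.grad(u, argnums=1), in_axes=(None, 0))(params, x_phys)
    uu = jax.vmap(u, in_axes=(None, 0))(params, x_phys)
    return data_term + jnp.mean((du + uu) ** 2)

key = jax.random.PRNGKey(0)
params = init_params(key)
x_data = jnp.linspace(0.0, 2.0, 8)
u_data = jnp.exp(-x_data)            # synthetic observations of exp(-x)
x_phys = jnp.linspace(0.0, 2.0, 64)  # collocation points for the residual

step = jax.jit(jax.grad(loss))
for _ in range(2000):                # plain gradient descent, for brevity
    g = step(params, x_data, u_data, x_phys)
    params = jax.tree_util.tree_map(lambda p, gp: p - 0.05 * gp, params, g)

# After training, inference amortizes the cost above into one forward pass.
print(float(u(params, 1.0)), float(jnp.exp(-1.0)))
```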

In this talk, I will present two lines of ongoing research that fuse ML inference with online mathematical computation to realize accurate and broadly applicable simulation methods. The first topic is an ML Finite Volume Method for complex nonlinear PDEs, and the second is a fusion of local linearization (Newton's method) and operator learning.
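The abstract names these topics without implementation detail. As one hedged reading of the second fusion, not the speaker's actual formulation, the sketch below lets a surrogate supply the initial iterate for Newton's method on a discretized nonlinear PDE, with exact Newton corrections restoring solver-grade accuracy. `surrogate_guess` is a hypothetical stand-in for a trained operator network, and the model problem -u'' + u^3 = f is illustrative.

```python
import jax
import jax.numpy as jnp

n, h = 64, 1.0 / 65

def F(u, f):
    # Residual of -u'' + u^3 = f, 1D finite differences, zero Dirichlet BCs.
    upad = jnp.pad(u, 1)
    lap = (upad[2:] - 2.0 * upad[1:-1] + upad[:-2]) / h**2
    return -lap + u**3 - f

def surrogate_guess(f):
    # Placeholder for a trained operator network f -> u; here a crude
    # linear solve (dropping the cubic term) so the sketch runs standalone.
    A = (jnp.diag(2.0 / h**2 * jnp.ones(n))
         - jnp.diag(jnp.ones(n - 1) / h**2, 1)
         - jnp.diag(jnp.ones(n - 1) / h**2, -1))
    return jnp.linalg.solve(A, f)

def newton_solve(f, iters=10):
    u = surrogate_guess(f)              # ML (or stub) initial iterate
    for _ in range(iters):
        J = jax.jacfwd(F)(u, f)         # exact Jacobian via autodiff
        u = u - jnp.linalg.solve(J, F(u, f))
    return u

x = jnp.linspace(h, 1.0 - h, n)
f = jnp.sin(jnp.pi * x)
u = newton_solve(f)
print(float(jnp.linalg.norm(F(u, f))))  # residual near machine precision
```

The design point of such a hybrid is that the learned component only needs to land inside Newton's basin of attraction; the online computation then supplies the precision that a pure inference pass cannot guarantee.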