Our preprint is freely available at http://arxiv.org/abs/2405.14558.
The joint prediction of continuous fields and statistical estimation of the underlying discrete parameters is a common problem for many physical systems governed by PDEs. Until now, these two problems have often been tackled separately, even when the underlying parameterization is known. In this work, we show that by incorporating the discrete parameters into the prediction of continuous fields, the neural operator can be extended probabilistically to represent parametric system uncertainty. Moreover, this adds a level of interpretability, surpassing the black-box paradigm of previous neural operator approaches and allowing for human understanding of complex systems. We demonstrate the capabilities of the proposed methodology by predicting continuous and discrete biomarkers in full-body haemodynamics simulations under different levels of missing information. In addition, we consider a test case for atmospheric large-eddy simulation of a two-dimensional dry cold bubble, where we infer both continuous time series and information about the system's conditions. We compare the performance of FUSE to several baseline models, showcasing significantly increased accuracy in both the inverse and the surrogate tasks.
Unifying Forward and Inverse Problems
The goal of supervised operator learning is to learn a parameterized family of neural operators approximating the map from input functions to output functions. As the input functions are themselves described by a finite set of discrete parameters, this map factorizes through the finite-dimensional parameter space.
In other words, the operator learning objective can be split into two separate objectives. As we show in our work, these two objectives can be used to learn the inverse and forward problems with two distinct model components. At inference time, we can then fuse these components together to emulate the forward problem based on the posterior distribution of parameters obtained from solving the associated inverse problem. This paves the way to understanding uncertainties in infinite-dimensional spaces via their relationship with finite-dimensional, human-interpretable parameters.
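Schematically, and in our own shorthand rather than the paper's exact notation (u denotes observed functions, ξ the discrete parameters, s the predicted output functions), the fused evaluation is a pushforward of the inverse posterior through the forward surrogate:

```latex
% Our shorthand, not the paper's exact notation.
\begin{align*}
  \xi &\sim p_\phi(\xi \mid u)
      && \text{inverse: posterior over discrete parameters} \\
  s &= \mathcal{G}_\theta(\xi)
      && \text{forward: deterministic surrogate} \\
  p(s \mid u) &\approx (\mathcal{G}_\theta)_{\#}\, p_\phi(\xi \mid u)
      && \text{fused: pushforward of the posterior}
\end{align*}
```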
Network Structure, Training, and Evaluation
The forward and inverse problems are each solved under the same principles: each requires learning a relationship between a finite-dimensional and an infinite-dimensional space. For learning on infinite-dimensional spaces, neural operators are a natural choice due to their discretization invariance. In this work, we instantiate the operator-learning component of our model with the Fourier Neural Operator, although any reasonably suitable neural operator would work. To bridge the gap between finite- and infinite-dimensional spaces, we rely on the space of band-limited functions, using a fixed set of low-frequency modes of a standard Fourier transform. The coefficients of the discrete Fourier transform are finite-dimensional scalars that each correspond to a function, offering a pathway to learn the relationship between finite-dimensional PDE parameters and the infinite-dimensional functions they describe. To learn a probabilistic estimate of the inverse problem, we use a conditional generative model for inference; we find the best performance with Flow Matching Posterior Estimation (FMPE), but we also investigate conditional denoising diffusion probabilistic models. For the forward problem, we learn a deterministic lifting operator that increases the dimensionality of the low-dimensional parameters to represent the Fourier coefficients of band-limited functions, which are subsequently passed to the neural operator.
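As a concrete illustration, here is a minimal PyTorch sketch of such a lifting operator. The class, its dimensions, and the MLP architecture are our own illustration, not the repository's FUSE.py implementation:

```python
import torch
import torch.nn as nn

class ParameterLift(nn.Module):
    """Illustrative lifting operator: maps a vector of discrete parameters to
    the low-frequency Fourier coefficients of a band-limited 1D function.
    (Our sketch, not the repository's FUSE.py implementation.)"""

    def __init__(self, n_params: int, n_modes: int, width: int = 128):
        super().__init__()
        self.n_modes = n_modes
        # An MLP predicts real and imaginary parts of the first n_modes coefficients.
        self.mlp = nn.Sequential(
            nn.Linear(n_params, width),
            nn.GELU(),
            nn.Linear(width, 2 * n_modes),
        )

    def forward(self, xi: torch.Tensor, n_grid: int) -> torch.Tensor:
        # xi: (batch, n_params) -> band-limited function sampled on n_grid points
        re_im = self.mlp(xi).view(-1, self.n_modes, 2)
        coeffs = torch.complex(re_im[..., 0], re_im[..., 1])
        # Place the predicted modes at the low frequencies and invert the real
        # FFT; higher frequencies stay zero, so the output is band-limited.
        full = torch.zeros(xi.shape[0], n_grid // 2 + 1,
                           dtype=coeffs.dtype, device=xi.device)
        full[:, :self.n_modes] = coeffs
        return torch.fft.irfft(full, n=n_grid)

# Example: lift 5 parameters to a function on a 256-point grid.
lift = ParameterLift(n_params=5, n_modes=16)
u = lift(torch.randn(8, 5), n_grid=256)  # shape (8, 256)
```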
The training processes for the forward and inverse problems are performed separately. Given access to continuous inputs, the corresponding discrete parameters, and continuous outputs, the forward component is trained with a supervised operator-learning loss, while the inverse component is trained with the flow-matching objective of FMPE.
For the sake of brevity, we refer the interested reader to the paper for more details on these loss objectives.
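For intuition only, a schematic training step for the forward component might look as follows. The relative L2 loss is a common operator-learning choice that we assume here for illustration, not necessarily the paper's exact objective, and all names are placeholders:

```python
import torch

def relative_l2(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Relative L2 error, averaged over the batch (a common operator-learning
    loss, assumed here for illustration)."""
    num = torch.linalg.vector_norm(pred - target, dim=-1)
    den = torch.linalg.vector_norm(target, dim=-1)
    return (num / den).mean()

def forward_step(forward_model, optimizer, xi, s):
    """One schematic training step: xi are discrete parameters, s the
    reference output functions, forward_model the lifting operator + FNO."""
    optimizer.zero_grad()
    loss = relative_l2(forward_model(xi), s)
    loss.backward()
    optimizer.step()
    return loss.item()
```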
At evaluation time, we can evaluate both the forward and inverse problems, using the pushforward of the posterior obtained from the inverse problem to quantify the propagated uncertainty in the continuous outputs. Alternatively, we can evaluate only the forward model if the system's parameters are known. Likewise, we can vary these parameters to explore their effect on the continuous outputs, a process known as fingerprinting.
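In code, the fused evaluation can look like the following sketch, where `posterior.sample` and `forward_model` stand in for the trained inverse and forward components; the names and the sampling API are our own assumptions:

```python
import torch

@torch.no_grad()
def fused_prediction(posterior, forward_model, u_obs, n_samples=256):
    """Push posterior samples through the forward surrogate to obtain
    pointwise uncertainty bands on the continuous outputs (illustrative)."""
    xi = posterior.sample(u_obs, n_samples)  # (n_samples, n_params), assumed API
    s = forward_model(xi)                    # (n_samples, n_grid)
    # Pointwise 5%/50%/95% quantiles across posterior samples.
    lo, med, hi = torch.quantile(
        s, torch.tensor([0.05, 0.5, 0.95], device=s.device), dim=0
    )
    return lo, med, hi
```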
Results from the Atmospheric Cold Bubble Experiment
The atmospheric cold bubble (ACB) experiment aims to learn the relationship between velocity measurements and the initial and system conditions. Velocities are measured at point locations, resembling measurements taken by turbulence towers. The initial condition of the ACB is parameterized by a small set of discrete parameters, including the height of the initial temperature perturbation.
The cold bubble test case: time evolution of velocities and the temperature anomaly. The triangles mark the measurement locations where the time series measurements are taken.
Combined Forward and Inverse Problem:
Given one set of continuous measurements, FUSE infers a posterior distribution over the discrete parameters and pushes samples from this posterior through the forward surrogate, providing uncertainty estimates on both the parameters and the predicted time series.
Fingerprinting: Sweeping through different parameters uncovers their effects on the continuous output functions. Since each parameter value would require a full model run, fingerprinting at this level of detail is not feasible with the full numerical model.
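Because the trained forward surrogate is cheap to evaluate, a fingerprinting sweep reduces to a batched forward pass. The sketch below varies a single (hypothetical) parameter index while holding the others fixed; the function and argument names are ours:

```python
import torch

@torch.no_grad()
def fingerprint(forward_model, xi_ref, index, values):
    """Evaluate the forward surrogate while sweeping one discrete parameter.
    xi_ref: (n_params,) reference parameters; index and values are hypothetical."""
    xi = xi_ref.repeat(len(values), 1)  # (n_values, n_params)
    xi[:, index] = values               # overwrite the swept parameter
    return forward_model(xi)            # (n_values, n_grid) output functions

# Example: sweep parameter 0 over 50 values between 0 and 1.
# outputs = fingerprint(model, xi_ref, 0, torch.linspace(0.0, 1.0, 50))
```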
Code Structure
├── ACB
│ ├── FUSE
│ │ ├── FUSE.py
│ │ ├── __pycache__
│ │ ├── config.yaml
│ │ ├── eval_fuse.py
│ │ └── train_fuse.py
│ ...
├── PWP
│ ├── FUSE
│ │ ├── FUSE.py
│ │ ├── __pycache__
│ │ ├── config.yaml
│ │ ├── eval_fuse.py
│ │ └── train_fuse.py
│ ...
├── _Models
│ ├── ACB
│ │ ├── FUSE_TurbTowers.pt
│ │ ├── GAROM_TurbTowers.pt
│ │ ├── InVAErt_TurbTowers_Decoder_model.pt
│ │ ├── InVAErt_TurbTowers_Encoder_model.pt
│ │ └── UNet_TurbTowers.pt
│ └── PWP
│ ├── FUSE_FullBody.pt
│ ├── GAROM_FullBody.pt
│ ├── InVAErt_Decoder_FullBody.pt
│ ├── InVAErt_Encoder_FullBody.pt
│ └── UNet_FullBody.pt
├── _Data
│ ├── ACB
│ │ ├── continuous.pt
│ │ ├── discrete.pt
│ │ ├── OOD_continuous.pt
│ │ └── OOD_discrete.pt
│ └── PWP
│ └── PW_input_data.npz
The directories `PWP` and `ACB` contain all the necessary files to recreate the experiments in our work. FUSE and any baselines may be trained by running `python3 train_<model>.py`. Likewise, trained models may be evaluated by running `python3 eval_<model>.py`.
The FMPE model employed in our FUSE code comes from the lampe library, which can be installed with
pip install lampe
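As a quick orientation, a minimal FMPE training and sampling loop with lampe might look as follows. The dimensions are placeholders, and the calls (`FMPE`, `FMPELoss`, and the `flow(x)` sampler) reflect our reading of the lampe interface at the time of writing, so please consult the library's documentation:

```python
import torch
from lampe.inference import FMPE, FMPELoss

# Placeholder dimensions: 5 discrete parameters, 64-dim observation embedding.
estimator = FMPE(theta_dim=5, x_dim=64)
loss_fn = FMPELoss(estimator)
optimizer = torch.optim.Adam(estimator.parameters(), lr=1e-3)

# One schematic training step on a batch (theta, x).
theta, x = torch.randn(32, 5), torch.randn(32, 64)
optimizer.zero_grad()
loss = loss_fn(theta, x)
loss.backward()
optimizer.step()

# Sampling from the learned posterior for one observation.
with torch.no_grad():
    samples = estimator.flow(x[:1]).sample((1024,))  # (1024, 1, 5)
```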
Data
All data for the experiments is located here. Please store `_Data` at the location specified in the file tree, or specify a new location in the code.
Cite As
L.E. Lingsch, D. Grund, S. Mishra, and G. Kissas (2024). FUSE: Fast Unified Simulation and Estimation for PDEs. doi: 10.48550/arXiv.2405.14558
@misc{lingsch2024,
title = {{FUSE}: Fast Unified Simulation and Estimation for {PDEs}},
author = {Lingsch, Levi E. and Grund, Dana and Mishra, Siddhartha and Kissas, Georgios},
url = {http://arxiv.org/abs/2405.14558},
doi = {10.48550/arXiv.2405.14558},
date = {2024-05-23}
}