Times are displayed in (UTC-07:00) Pacific Time (US & Canada)
2/4/2025 | 9:10 AM - 10:30 AM | Regency B
Nonlocal attention operator: Materializing hidden knowledge towards interpretable physics discovery
Author(s)
Ning Liu
Fei Lu | Johns Hopkins University
Tian Gao | IBM
Siavash Jafarzadeh | Lehigh University
Stewart Silling | Sandia National Labs
Abstract
Despite the recent popularity of attention-based neural architectures in core AI fields like natural language processing (NLP) and computer vision (CV), their potential in modeling complex physical systems remains under-explored. Learning problems in physical systems are often characterized as discovering operators that map between function spaces based on a few instances of function pairs. This task frequently presents a severely ill-posed PDE inverse problem. In this work, we propose a novel neural operator architecture based on the attention mechanism, which we coin Nonlocal Attention Operator (NAO), and explore its capability towards developing a foundation physical model. In particular, we show that the attention mechanism is equivalent to a double integral operator that enables nonlocal interactions among spatial tokens, with a data-dependent kernel characterizing the inverse mapping from data to the hidden parameter field of the underlying operator. As such, the attention mechanism extracts global prior information from training data generated by multiple systems, and suggests the exploratory space in the form of a nonlinear kernel map. Consequently, NAO can address ill-posedness and rank deficiency in inverse PDE problems by encoding regularization and achieving generalizability. On three material modeling problems, we empirically demonstrate the advantages of NAO over baseline neural models in terms of generalizability to unseen data resolutions and system states. Our work not only suggests a novel neural operator architecture for learning interpretable foundation models of physical systems, but also offers a new perspective towards understanding the attention mechanism.
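The abstract's central observation is that attention acts as a double integral operator: a data-dependent kernel K(x, y), built from the input function values, mediates nonlocal interactions among spatial tokens. The toy sketch below illustrates that reading of standard attention on a discretized function; the token count, feature dimension, and random projection matrices are illustrative assumptions, not the architecture of NAO itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discretization: n spatial tokens, each a d-dim sample of an input function
n, d = 8, 4
u = rng.normal(size=(n, d))          # function values u(x_i) at grid points

# Illustrative (random, untrained) query/key/value projections
Wq = rng.normal(size=(d, d))
Wk = rng.normal(size=(d, d))
Wv = rng.normal(size=(d, d))

q, k, v = u @ Wq, u @ Wk, u @ Wv

# Data-dependent kernel K(x_i, x_j): attention scores over all token pairs,
# normalized row-wise with a softmax
scores = q @ k.T / np.sqrt(d)
K = np.exp(scores - scores.max(axis=1, keepdims=True))
K /= K.sum(axis=1, keepdims=True)

# Attention output as a discrete double "integral":
# out(x_i) = sum_j K(x_i, x_j) v(x_j), i.e. a nonlocal kernel aggregation
out = K @ v

print(out.shape)  # (8, 4): one aggregated feature vector per spatial token
```

Because K is computed from the data u rather than fixed in advance, the same mechanism can, as the abstract argues, encode a learned inverse map from observed function pairs to a hidden parameter field.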
Description
Date and Location: 2/4/2025 | 09:10 AM - 09:30 AM | Regency B
Primary Session Chair:
Jeff Simmons | Air Force Research Laboratory
Session Co-Chairs:
Greg Buzzard | Purdue University
Megna Shah | Air Force Research Laboratory
Stephen Niezgoda | Ohio State University
Suhas Sreehari | Oak Ridge National Laboratory
Paper Number: COIMG-134