Description
In recent years, deep learning, and particularly operator learning, has emerged as a powerful paradigm for solving PDEs, driven by its promise of high-speed surrogate simulations once models are trained. In practice, however, many high-impact applications remain bottlenecked by the cost of generating large, high-fidelity training datasets and the substantial compute required to train expressive models, even with access to high-performance computing.
In this talk, we present Neural-HSS, a parameter-efficient architecture motivated by recent insights into the structure of Green’s functions for elliptic PDEs. Neural-HSS leverages the Hierarchical Semi-Separable (HSS) matrix representation to encode this structure directly, yielding models that are markedly more data-efficient while maintaining strong approximation capability. We provide theoretical justification for its data efficiency on a certain class of PDEs and demonstrate its performance empirically across a range of benchmark problems.
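For intuition, the sketch below illustrates the structural idea behind HSS that the abstract alludes to: diagonal blocks (near-field interactions) are kept dense, while off-diagonal blocks (far-field interactions) are compressed to low rank, mirroring the low-rank off-diagonal behavior of Green’s functions for elliptic PDEs. This is a hypothetical one-level NumPy illustration, not the Neural-HSS architecture itself; the class name HSSLayerSketch and all shapes are chosen here purely for illustration.

```python
import numpy as np

# Hypothetical one-level HSS-style operator (illustration only, not Neural-HSS).
# Diagonal blocks are dense; off-diagonal blocks are low-rank, A12 ~ U1 @ V2.T,
# mirroring the low-rank off-diagonal structure of elliptic Green's functions.
class HSSLayerSketch:
    def __init__(self, n, rank, seed=0):
        assert n % 2 == 0
        h = n // 2
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(h)
        # Dense diagonal blocks (near field); learnable parameters in a real model.
        self.A11 = rng.standard_normal((h, h)) * scale
        self.A22 = rng.standard_normal((h, h)) * scale
        # Low-rank factors for the off-diagonal blocks (far field).
        self.U1 = rng.standard_normal((h, rank)) * scale
        self.V2 = rng.standard_normal((h, rank)) * scale
        self.U2 = rng.standard_normal((h, rank)) * scale
        self.V1 = rng.standard_normal((h, rank)) * scale

    def apply(self, x):
        # Applies the block matrix  [A11        U1 V2^T]
        #                           [U2 V1^T    A22    ]  to x = [x1; x2].
        h = x.shape[0] // 2
        x1, x2 = x[:h], x[h:]
        y1 = self.A11 @ x1 + self.U1 @ (self.V2.T @ x2)  # far field costs O(h * rank)
        y2 = self.A22 @ x2 + self.U2 @ (self.V1.T @ x1)
        return np.concatenate([y1, y2])

layer = HSSLayerSketch(n=64, rank=4)
y = layer.apply(np.ones(64))
print(y.shape)  # (64,)
```

For n = 64 and rank 4, this sketch stores 2·32² + 4·32·4 = 2,560 entries versus 4,096 for a dense 64×64 operator; a full HSS representation additionally splits the diagonal blocks recursively, which is the source of the parameter savings that motivate an architecture like Neural-HSS.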