
The main objective of the workshop is to address recent advancements in the development of numerical methods for the solution of systems governed by evolutionary parametric partial differential equations (PDEs) of interest in applied sciences. In addition to classical fields such as fluid dynamics and kinetic theory, the workshop addresses PDEs relevant to environmental modeling, epidemiology, finance, social science, pharmacometrics, and other areas.
Developing efficient tools for the accurate solution of these systems is potentially of high impact in all such fields.
The workshop is organized on the occasion of the conclusion of the PRIN 2022 project "Advanced numerical methods for time-dependent parametric partial differential equations with applications" supported by the Italian MUR.
Organizers:
Local organizers:
Plenary Speakers:
Invited Lecturers:
Registration for the workshop is now open; it is free of charge and not mandatory.
The workshop is sponsored by:


Differential equations of fractional order (in space and/or time) model several well-known phenomena in nature, engineering, and other disciplines. From a mathematical point of view, they are also interesting because the underlying operators are non-local. This offers a natural arena for model reduction when the equation contains parameters and one is interested in real-time, multi-query, or slim computing scenarios. We introduce some recent results concerning the Reduced Basis Method for problems of fractional order.
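To make the reduced-basis idea concrete in the simplest parametric (non-fractional) setting, the following sketch builds a POD-Galerkin reduced model for a toy parametric linear system. The model, the function names, and the basis size are our own illustrative choices, not the method of the talk.

```python
import numpy as np

# Bare-bones POD-Galerkin reduced basis sketch (toy example): snapshots
# of the parametric system (M + mu*K) u = f are compressed by SVD, and
# unseen parameter values are solved in the small reduced space.
n = 200
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
M = np.eye(n)
f = np.ones(n)

def solve_full(mu):
    return np.linalg.solve(M + mu * K, f)

# offline stage: a few expensive full solves, then SVD compression
snapshots = np.column_stack([solve_full(mu) for mu in np.linspace(0.1, 2.0, 10)])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = V[:, :5]                                            # 5 basis vectors

def solve_reduced(mu):
    Ar = V.T @ (M + mu * K) @ V                         # 5x5 Galerkin system
    return V @ np.linalg.solve(Ar, V.T @ f)

# online stage: cheap solve at an unseen parameter
mu_test = 0.77
err = np.linalg.norm(solve_reduced(mu_test) - solve_full(mu_test)) \
      / np.linalg.norm(solve_full(mu_test))
print(err)  # small relative error from a 5-dimensional space
```

The smooth dependence of the solution on the parameter is what makes the snapshot matrix compressible; this is the same mechanism the talk exploits for fractional-order problems.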
Solving partial differential equations (PDEs) in domains with complex geometries is central to many scientific and engineering applications, yet it poses significant computational challenges, particularly in the accurate and stable imposition of boundary conditions. This talk presents high-accuracy discretization strategies for boundary conditions within the framework of unfitted boundary methods.
We introduce boundary discretization schemes based on the ghost point method. The approach extends the computational grid beyond the physical domain by introducing ghost points whose values are determined so as to enforce boundary conditions with high accuracy. In contrast to traditional techniques that assign ghost point values through local or one-sided extrapolation, our method adopts a coupled formulation in which ghost point values are solved simultaneously with neighboring interior and ghost points. This leads to an augmented linear system that significantly improves both accuracy and numerical stability.
To efficiently solve the resulting systems, we develop a specialized multigrid solver tailored to the presence of curved and irregular boundaries. The effectiveness of the proposed approach is demonstrated through numerical experiments on elliptic PDEs and its applicability is further illustrated in incompressible flow simulations, including dynamic configurations such as oscillating bubbles. In addition, we present recent extensions of the method to hyperbolic equations, highlighting its challenges for time-dependent, transport-dominated problems.
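A minimal one-dimensional sketch of the coupled ghost-point idea (our own toy construction, not the authors' scheme): the ghost value is an unknown of an augmented linear system, determined together with the interior values by an interpolation equation that enforces the Dirichlet condition at a boundary point falling between grid nodes.

```python
import numpy as np

# Solve u'' = f on (0, b) with Dirichlet data, where the right boundary
# b lies between grid nodes.  The ghost value is NOT extrapolated
# separately: it is coupled to the interior unknowns via a quadratic
# interpolation row enforcing u(b) = ub in one augmented system.
def solve_ghost_poisson(f, u0, ub, b, n):
    h = 1.0 / n
    g = int(np.ceil(b / h))          # index of the ghost node, x_g >= b
    x = h * np.arange(g + 1)         # nodes 0..g (node g is the ghost)
    A = np.zeros((g + 1, g + 1))
    rhs = np.zeros(g + 1)
    A[0, 0] = 1.0                    # left boundary lies on a node
    rhs[0] = u0
    for i in range(1, g):            # standard 3-point Laplacian inside
        A[i, i-1], A[i, i], A[i, i+1] = 1.0, -2.0, 1.0
        rhs[i] = h * h * f(x[i])
    # ghost row: quadratic Lagrange interpolation through the last two
    # interior nodes and the ghost node, evaluated at the boundary b
    xs = x[g-2:g+1]
    for k in range(3):
        w = 1.0
        for m in range(3):
            if m != k:
                w *= (b - xs[m]) / (xs[k] - xs[m])
        A[g, g-2+k] = w
    rhs[g] = ub
    return x, np.linalg.solve(A, rhs)

# manufactured solution u = sin(x), so f = -sin(x)
x, u = solve_ghost_poisson(lambda s: -np.sin(s), 0.0, np.sin(0.7), b=0.7, n=64)
err = np.max(np.abs(u[x <= 0.7] - np.sin(x[x <= 0.7])))
print(err)  # small: second-order accuracy up to the unfitted boundary
```

In higher dimensions the same coupling produces the augmented systems mentioned above, which the specialized multigrid solver then handles.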
In the talk I will review stochastic particle approximations of Boltzmann-like equations and present a novel quantitative error analysis. The convergence result is based on a coupling tailored to the collisional dynamics and on Wasserstein concentration inequalities. I will also discuss possible extensions to binary Monte Carlo methods for mean-field models. Joint work with Lorenzo Pareschi.
In this work, we investigate IMEX Runge–Kutta schemes of type I and II applied to stiff kinetic equations, with the goal of capturing the compressible Navier–Stokes limit without resolving the small scales associated with the Knudsen number ε. In particular, we focus on the ES-BGK model, a variant of the classical BGK model, which yields satisfactory transport coefficients at the first-order correction in ε, including the correct Prandtl number at the Navier–Stokes level.
The main objective is to extend the results obtained for IMEX-RK schemes in Boscarino–Pareschi (2017), which were originally developed and analyzed for a specific class of hyperbolic relaxation systems. In that work, an asymptotic expansion up to order O(ε) was carried out for IMEX-RK methods of type I and II. Here, we perform a similar asymptotic analysis in the context of the ES-BGK model.
We analyze the asymptotic behavior of IMEX-RK schemes of type I and II applied to the ES-BGK equations and prove that, for small values of ε, they are able to capture the compressible Navier–Stokes limit without resolving the small parameter ε, thereby yielding consistent limiting numerical schemes for the corresponding compressible Navier–Stokes equations.
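The core mechanism, implicit treatment of the stiff relaxation so that the time step is not constrained by ε, can be illustrated on a scalar toy problem (our own illustration, not the ES-BGK scheme; the function and parameter names are hypothetical):

```python
# First-order IMEX splitting for the relaxation ODE
#   u' = a(u) + (M - u)/eps,
# with the non-stiff part a(u) explicit and the stiff relaxation
# implicit.  The implicit step is linear in u, so no solver is needed:
#   (u_new - u)/dt = a(u) + (M - u_new)/eps.
def imex_relaxation(u0, M, eps, a, dt, nsteps):
    u = u0
    for _ in range(nsteps):
        u = (u + dt * a(u) + dt * M / eps) / (1.0 + dt / eps)
    return u

# dt = 0.1 is ten thousand times larger than eps = 1e-6, yet the
# scheme stays stable and drives u to the equilibrium M.
u = imex_relaxation(u0=2.0, M=1.0, eps=1e-6, a=lambda u: 0.0, dt=0.1, nsteps=50)
print(abs(u - 1.0))  # ~0: relaxed to equilibrium despite dt >> eps
```

The asymptotic analysis in the talk shows when this kind of implicit relaxation step also reproduces the correct O(ε) Navier–Stokes correction, not just the equilibrium limit.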
This presentation is concerned with eliminating some difficulties that can be encountered in the evolution of partial differential equations based on matrix-valued states, such as the inverse deformation gradient "tensor" in Eulerian hyperelasticity, sometimes called "distortion", as used in the unified model of continuum mechanics by Godunov, Peshkov, and Romenski.
In this framework, fluids are represented as visco-elasto-plastic solids with arbitrarily large deformations. We show the benefits of casting the governing equations in a new state space based on a quaternion field, motivated by the breakdown of traditional discretization schemes in under-resolved fluid simulations.
We present results employing both Cartesian and unstructured Voronoi meshes, showing that the proposed quaternion-field representation can capture extremely fine features in the distortion field even when standard second-order methods are used. This is thanks to the peculiar interaction of the quaternion PDE with Godunov-type flow solvers.
The results include numerical examples of low- and high-Reynolds number simulations in two and three dimensions, which could not be carried out with the previous formulation of the model.
U-Net-style architectures are widely adopted for modeling physical systems, as their multiscale structure enables efficient processing of high-resolution data while reflecting the hierarchical structure of many physical phenomena. The most successful U-Net-based neural physics simulators typically combine convolutions with transformers; however, when applied to regular grids, they depend on highly structured components that limit adaptability across different spatial and spatiotemporal dimensions. In this work, we revisit transformer-based U-Nets with the goal of maximizing flexibility without sacrificing multiscale efficiency. We introduce a U-Net composed entirely of transformer blocks operating on a one-dimensional latent sequence, making it easy to extend across spatial and spatiotemporal dimensions. Evaluated on seven challenging benchmarks (four 2D and three 3D), our model scales to resolutions of up to $512\times512$ in 2D and $256\times128\times256$ in 3D, while reducing training memory and accelerating training compared to state-of-the-art transformer baselines, all while achieving competitive or state-of-the-art predictive accuracy.
The study of the problem of plasma confinement in large devices, such as Tokamaks and Stellarators, has attracted a lot of attention in recent years. Strong magnetic fields in these systems can lead to instabilities, resulting in vortex formation. Due to the extremely high temperatures in plasma fusion, physical materials cannot be used for confinement, necessitating the use of external magnetic fields to control the plasma density. This approach involves studying the evolution of the plasma, made up of numerous particles, using the Vlasov-Poisson equations. In the first part of the talk, the case without uncertainty is explored. Particle dynamics are simulated using the Particle-in-Cell (PIC) method, known for its ability to capture kinetic effects and self-consistent interactions. The goal is to derive an instantaneous feedback control that forces the plasma density to achieve a desired distribution. Various numerical experiments are presented to validate the results. In the second part, uncertainty and collisions are introduced into the system, leading to the development of a different control strategy. This method is designed to steer the plasma towards a desired configuration even in the presence of uncertainty. The presentation concludes by outlining a future perspective focused on developing a robust control strategy based on neural networks.
In this talk, we present a new numerical technique for approximating solutions of the Navier–Stokes equations in arbitrarily moving domains. The proposed approach relies on a space discretization based on the ghost finite element method (ghost-FEM), which allows computations on unfitted meshes and avoids costly remeshing as the domain evolves in time. Time integration is performed using high-order implicit-explicit (IMEX) schemes, ensuring robustness and efficiency for incompressible flows. A key contribution of this work is the extension of the finite element formulation to moving domains through an extrapolation strategy based on the Aslam technique. Originally introduced in the context of finite difference methods, this technique is here carefully adapted to the finite element framework to transport and extend the numerical solution across the evolving computational domain. This approach enables a consistent and accurate treatment of the solution in newly activated regions of the mesh while maintaining the stability properties of the underlying discretization. The error introduced by the geometrical approximation of the moving domain is addressed using the shifted boundary method, which allows an accurate representation of boundary conditions on unfitted meshes. Dirichlet boundary conditions are imposed weakly by means of Nitsche's method. The associated stabilization parameter is chosen optimally by solving a generalized eigenvalue problem, ensuring stability and accuracy of the numerical scheme. We present a series of numerical experiments designed to validate the accuracy and convergence properties of the proposed method. The performance of the scheme is further assessed through comparisons with established benchmark problems involving moving boundaries. These results demonstrate the effectiveness of the method in handling complex domain motions while retaining high-order accuracy in both space and time.
We present a second-order numerical scheme to address the solution of optimal control problems constrained by the evolution of nonlinear Fokker-Planck equations arising from socio-economic dynamics. To design an appropriate numerical scheme for control realization, a coupled forward-backward system is derived based on the associated optimality conditions. The forward equation, corresponding to the Fokker-Planck dynamics, is discretized using a structure-preserving scheme able to capture steady states. The backward equation, modeled as a Hamilton-Jacobi-Bellman problem, is solved via a semi-Lagrangian scheme that supports large time steps while preserving stability. Coupling between the forward and backward problems is achieved through a gradient descent optimization strategy, ensuring convergence to the optimal control. Numerical experiments demonstrate second-order accuracy, computational efficiency, and effectiveness in controlling different examples across various scenarios in social dynamics.
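The forward-backward coupling via gradient descent can be sketched on a classical linear-quadratic toy problem (our own illustration under simple explicit Euler sweeps, not the Fokker-Planck/HJB discretization of the talk):

```python
import numpy as np

# Minimize J = 0.5 * int_0^T (x^2 + u^2) dt subject to x' = u, x(0)=1.
# Optimality system: adjoint p' = -x with p(T) = 0, gradient dJ/du = u + p.
T, n = 1.0, 200
dt = T / n

def forward(u):                      # forward sweep: x' = u
    x = np.empty(n + 1)
    x[0] = 1.0
    for k in range(n):
        x[k+1] = x[k] + dt * u[k]
    return x

def backward(x):                     # backward sweep: p' = -x, p(T) = 0
    p = np.zeros(n + 1)
    for k in range(n, 0, -1):
        p[k-1] = p[k] + dt * x[k]
    return p

u = np.zeros(n)                      # initial guess for the control
for _ in range(500):                 # gradient descent on the control
    x = forward(u)
    p = backward(x)
    u -= 0.5 * (u + p[:n])           # step along -dJ/du

x = forward(u)                       # final consistency check
p = backward(x)
print(np.max(np.abs(u + p[:n])))     # gradient norm ~0 at the optimum
```

Each iteration is one forward solve, one backward solve, and a control update, exactly the loop structure described above, with the Fokker-Planck and HJB solvers replaced by trivial sweeps.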
The choice of the time step is important not only for stability, but also for efficiency and consistency with the equations under consideration. In the context of kinetic equations, these aspects are particularly critical. On the one hand, efficiency must be optimized because of the high dimensionality of phase space, which may range from 2 to 6 depending on the problem, leading to a very large computational cost. On the other hand, the simulation of models, especially in Plasma Physics, often involves very fast dynamics that require an appropriate time discretization in order to be accurately resolved. The aim of this talk is to present recent results on time-step selection strategies applied to a numerical scheme for the Vlasov–Poisson equation in the quasi-neutral limit. The scheme is based on a Hermite spectral decomposition in velocity and a finite-difference discretization in space, and was originally proposed in 2025 by Blaustein, Dimarco, Filbet, and Vignal. It relies on a fully implicit, L-stable DIRK time discretization, yielding an Asymptotic-Preserving method in the quasi-neutral limit, at the expense of a significant computational cost per time step. A key issue is that, although the L-stable scheme can capture the oscillating behavior in time of the Vlasov–Poisson system (particularly that of the electric field and current density) in the asymptotic regime, its dissipative nature may eventually damp these oscillations if the time step is not properly chosen. In this work, we instead adopt an IMEX approach, treating the nonlinear part implicitly and the linear part explicitly. This substantially improves the efficiency while maintaining stability, although the Asymptotic-Preserving property is lost. Combined with this semi-implicit treatment, we aim to exploit the accuracy control performed by the time-step selection technique and use it as the main ingredient for recovering the fast, large-amplitude oscillations in time.
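A generic time-step selection loop based on a local error estimate can be sketched with step doubling on a scalar model ODE (a standard textbook controller, shown here for illustration only; it is not the Hermite/DIRK scheme of the talk):

```python
import math

def euler(u, t, dt, f):
    return u + dt * f(t, u)

# Step-doubling controller: compare one full explicit Euler step with
# two half steps, estimate the local error, accept/reject, and adapt dt
# toward a target tolerance.
def adaptive_integrate(f, u0, t0, T, dt, tol):
    u, t = u0, t0
    while t < T:
        dt = min(dt, T - t)
        big = euler(u, t, dt, f)
        half = euler(euler(u, t, 0.5 * dt, f), t + 0.5 * dt, 0.5 * dt, f)
        err = abs(half - big)                   # local error estimate
        if err <= tol:                          # accept the step
            u, t = 2 * half - big, t + dt       # Richardson-extrapolated value
        # adapt dt (safety factor 0.9, growth/shrink limits)
        dt *= min(2.0, max(0.2, 0.9 * math.sqrt(tol / max(err, 1e-16))))
    return u

# u' = cos(t), u(0) = 0, so u(1) = sin(1)
u = adaptive_integrate(lambda t, u: math.cos(t), 0.0, 0.0, 1.0, 0.1, 1e-6)
print(abs(u - math.sin(1.0)))  # within the requested tolerance
```

In the Vlasov-Poisson setting the same accuracy control, applied to a far richer scheme, is what keeps the time step small enough to resolve the plasma oscillations without resorting to a fixed, overly conservative choice.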
Exponential splitting methods are well-established tools for the numerical approximation of partial differential equations. They are commonly used for time discretisation in situations where the differential equation can be decomposed into two (or more) subproblems, each of which can be treated efficiently, either numerically or even analytically.
The most common approaches to their derivation and rigorous analysis rely on Taylor expansions or the Baker–Campbell–Hausdorff formula. However, the convergence of these methods is guaranteed only in the case of bounded operators. When unbounded domains are considered, this assumption is naturally violated.
In this talk, I will present the ideas behind the derivation and rigorous analysis of Strang splittings, compact splittings, and finally focus on symmetric Zassenhaus splittings. These methods are particularly advantageous for time-dependent linear Schrödinger equations, although the analysis will be carried out in the setting of general operators.
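The splitting idea is easiest to see on a scalar ODE whose two subproblems each have a closed-form flow (a toy example of ours; the talk concerns operator splittings for PDEs such as the Schrödinger equation):

```python
import math

# Strang splitting for u' = lam*u - u^3.  Subflows:
#   A: u' = lam*u     has exact flow u * exp(lam*t)
#   B: u' = -u^3      has exact flow u / sqrt(1 + 2*u^2*t)
def flow_linear(u, t, lam):
    return u * math.exp(lam * t)

def flow_cubic(u, t):
    return u / math.sqrt(1.0 + 2.0 * u * u * t)

def strang(u, h, lam):               # one step: A(h/2) B(h) A(h/2)
    u = flow_linear(u, 0.5 * h, lam)
    u = flow_cubic(u, h)
    return flow_linear(u, 0.5 * h, lam)

def integrate(u0, T, n, lam=1.0):
    u, h = u0, T / n
    for _ in range(n):
        u = strang(u, h, lam)
    return u

# Second-order convergence: halving h reduces the error ~4x.
ref = integrate(2.0, 1.0, 200000)    # fine reference solution
e1 = abs(integrate(2.0, 1.0, 20) - ref)
e2 = abs(integrate(2.0, 1.0, 40) - ref)
print(e1 / e2)  # close to 4
```

For bounded operators this second-order behavior follows from Taylor or Baker-Campbell-Hausdorff arguments; the point of the talk is precisely what survives of it for unbounded operators.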
In this talk, I present a contour integral method (CIM) for the efficient computation of outputs from parametric linear systems in control form over specified time intervals and under a user-prescribed accuracy. The CIM framework utilises a quadrature rule to approximate the inverse Laplace transform, evaluated along a modified integration contour. First, I discuss the non-parametric case and show how an accurate and extremely fast approximation of the output function can be achieved, without using any reduction approach, just by a suitable construction of the integration contour and the precomputation of small-size matrices. Then I discuss the parametric case and demonstrate how this method integrates effectively with projection-based model order reduction (MOR), where the projection spaces are constructed via a greedy-type algorithm guided by an appropriately derived error estimate, which relies on a suitably defined adjoint problem, while adhering to Hermite interpolation conditions. This methodology significantly reduces computational expenses when evaluating the input-output relations for varied parameters across the parametric domain, as well as for a wide class of input functions and initial conditions sufficiently captured by a low-dimensional subspace. The natural selection of the frequency points and interpolation parameters, together with the ability to target a prescribed time interval for different initial conditions, represents a distinctive feature compared with state-of-the-art methodologies. Finally, the precision and efficacy of the approach are demonstrated by numerical experiments on a range of benchmark test problems for both non-parametric and parametric linear control systems.
Many systems governed by evolutionary parametric partial differential equations involve parameters that evolve in time and must be inferred from partial, noisy observations. Accurately tracking these nonstationary parameters is essential for reliable prediction.
We present a continuous data assimilation framework for time-dependent parameter estimation based on the ensemble Kalman filter. Unknown coefficients are incorporated into an augmented state, enabling their online adaptation as new data become available in a computationally efficient manner.
The approach is illustrated mainly on age-structured epidemiological models, where surveillance data are assimilated to reconstruct evolving mortality and incidence trends.
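The state-augmentation idea can be sketched on a toy scalar model (our own minimal illustration with hypothetical names; the talk's setting is age-structured epidemiological models): the unknown parameter is appended to the state, so assimilating noisy observations of the state also updates the parameter through the ensemble cross-covariance.

```python
import numpy as np

# Augmented-state ensemble Kalman filter for the toy model x' = a,
# where the drift rate `a` is unknown and only x is observed.
rng = np.random.default_rng(0)
a_true, dt, obs_sd, n_ens, n_steps = 0.7, 0.1, 0.05, 200, 100

# ensemble of augmented states [x, a]
ens = np.stack([rng.normal(0.0, 0.5, n_ens),       # x
                rng.normal(0.0, 1.0, n_ens)])      # a (uncertain prior)
x_true = 0.0
for _ in range(n_steps):
    x_true += a_true * dt
    ens[0] += ens[1] * dt                          # forecast: x' = a
    y = x_true + rng.normal(0.0, obs_sd)           # noisy observation of x
    # Kalman update with observation operator H = [1, 0]
    C = np.cov(ens)                                # 2x2 sample covariance
    K = C[:, 0] / (C[0, 0] + obs_sd**2)            # gain for both x and a
    innov = y + rng.normal(0.0, obs_sd, n_ens) - ens[0]  # perturbed obs
    ens += K[:, None] * innov[None, :]

a_est = ens[1].mean()
print(abs(a_est - a_true))  # small: parameter inferred from x-data alone
```

Because the parameter is updated only through its sampled correlation with the observed state, the same loop continues to track the parameter online if it drifts in time, which is the nonstationary situation addressed in the talk.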
We study the Generalized Convolution Quadrature (gCQ) based on Runge-Kutta methods to approximate the solution of an important class of convolution equations. The gCQ generalizes Lubich's original Convolution Quadrature to variable steps. High-order versions of the gCQ have been developed in the last decade in a rather general setting, which includes applications to hyperbolic problems, where the convolution kernels are typically non-smooth. However, in the case of convolutions with smoother, sectorial kernels, recent developments show that the available stability and convergence theory was suboptimal, both in terms of convergence order and regularity requirements on the data. Optimal results for the first-order gCQ are now available for a special, important class of sectorial problems. We address here the generalization of this theory to high order and prove the same order of convergence as for the original Runge-Kutta based CQ with fixed steps, under the same regularity hypotheses on the data, and for arbitrary time meshes. Moreover, for data with known singularities of algebraic type, we show how to choose optimally graded time meshes in order to achieve the maximal order of convergence, overcoming the well-known order reduction of the original CQ in these situations. We also show that a fast and memory-reduced implementation of the gCQ is possible for this class of problems and illustrate our theoretical results with several numerical experiments.
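The effect of graded meshes on algebraic singularities can be demonstrated on the simplest possible quadrature problem (a deliberately elementary stand-in of ours, not the gCQ itself): the composite trapezoidal rule applied to f(t) = sqrt(t), whose derivative blows up at t = 0.

```python
import math

def trapezoid(f, nodes):
    return sum(0.5 * (nodes[j+1] - nodes[j]) * (f(nodes[j]) + f(nodes[j+1]))
               for j in range(len(nodes) - 1))

f, exact = math.sqrt, 2.0 / 3.0      # integral of sqrt(t) over [0, 1]

# Grading t_j = (j/N)^q clusters nodes near the singularity at t = 0.
def error(N, q):
    nodes = [(j / N) ** q for j in range(N + 1)]
    return abs(trapezoid(f, nodes) - exact)

# uniform mesh (q = 1): order reduced by the singularity (~ N^-1.5,
# so the error ratio under N -> 2N is about 2^1.5 ~ 2.8);
# graded mesh (q = 4): full second order restored (ratio ~ 4).
r_uni = error(64, 1) / error(128, 1)
r_graded = error(64, 4) / error(128, 4)
print(r_uni, r_graded)
```

The gCQ setting is analogous in spirit: for data with known algebraic singularities, a suitably graded time mesh recovers the maximal convergence order that a uniform mesh loses.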
Kinetic models offer a comprehensive description of the dynamics of plasmas in fusion, astro- and space physics. Since the description is set in phase space, its numerical discretization is very demanding, but it has become feasible in recent years thanks to the increase in hardware capabilities. However, simulations can routinely be performed only at rather coarse resolution, leaving microscopic scales under-resolved. Structure-preserving methods address this issue by focusing on recovering fundamental physical properties of the model exactly rather than on high-order convergence. In this talk, I will introduce structure-preserving discretization principles, such as discrete de Rham complexes and discrete action principles, and show their application to the kinetic equations of plasma physics.
In this talk, we further develop the multiscale model introduced in Astuto, Raudino and Russo (2023) by deriving a gradient-flow structure that incorporates the time-dependent boundary condition obtained therein. The model describes a drift-diffusion equation for ion transport in the presence of a trapping potential, focusing on surface traps whose attraction range, denoted by $\delta$, is much smaller than the characteristic length scale of the problem. The multiscale formulation is derived via an asymptotic expansion with respect to this small parameter, leading to an effective boundary condition that captures the ion adsorption process at the surface.
Using the same asymptotic framework, we derive the associated free energy and show that, at steady state, the ionic concentration concentrates on the trap surface in the sense of measures, giving rise to a Dirac-type contribution. This observation naturally motivates the study of the time evolution of the concentration in the space of positive measures. Numerical results for the drift-diffusion and the multiscale models are compared in one spatial dimension.
In the second part of the talk, we extend the multiscale approach to a Poisson-Nernst-Planck (MPNP) system, accounting for the correlated dynamics of positive and negative ions and for Coulomb interactions. In the regime of very small Debye lengths, the quasi-neutral limit reduces the system to an effective diffusion equation for a single carrier. While this simplification facilitates the analysis, it may fail to capture relevant behaviors close to the quasi-neutral regime. To overcome this limitation, we develop and validate an Asymptotic Preserving numerical scheme that remains stable as the Debye length tends to zero, without imposing restrictive time-step constraints.
In recent years, deep learning, and particularly operator learning, has emerged as a powerful paradigm for solving PDEs, driven by its promise of high-speed surrogate simulations once models are trained. In practice, however, many high-impact applications remain bottlenecked by the cost of generating large, high-fidelity training datasets and the substantial compute required to train expressive models, even with access to high-performance computing.
In this talk, we present Neural-HSS, a parameter-efficient architecture motivated by recent insights into the structure of Green’s functions for elliptic PDEs. Neural-HSS leverages the Hierarchical Semi-Separable (HSS) matrix representation to encode this structure directly, yielding models that are markedly more data-efficient while maintaining strong approximation capability. We provide theoretical justification for its data-efficiency on a certain class of PDEs and demonstrate its performance empirically across a range of benchmark problems.
Probabilistic Tsunami Hazard Assessment (PTHA) for landslide-generated events is often constrained by the prohibitive computational cost of simulating thousands of potential scenarios. To address this, we present a "Digital Twin" framework that integrates high-fidelity GPU-accelerated physical modeling with efficient statistical surrogate models.
The physical foundation is the Non-Hydrostatic Multilayer HySEA model. It employs a depth-averaged formulation capturing essential dispersive effects and vertical velocity profiles. Using very few vertical layers, the model realistically simulates the complex interaction between granular mass movements (modeled via a Savage-Hutter-type rheology) and the water column. The implementation is fully GPU-accelerated, significantly reducing simulation time.
To enable rapid uncertainty quantification, we employ generalized Polynomial Chaos Expansions (gPCE) to construct surrogate models. These surrogates approximate tsunami metrics (elevation, velocity, and arrival time) as continuous functions of uncertain parameters such as volume, location, and rheology. By using Delayed Gauss-Patterson (DGP) sparse grids, we achieve model convergence with a small set of simulations per zone.
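The gPCE surrogate construction can be sketched in one uncertain parameter (our own minimal illustration with full Gauss quadrature instead of the Delayed Gauss-Patterson sparse grids used in the talk; the "simulator" here is a cheap stand-in function):

```python
import numpy as np

# 1D generalized polynomial chaos: expand a model output g(xi), with
# xi uniform on [-1, 1], in Legendre polynomials.  Coefficients are
# computed by Gauss-Legendre quadrature from a handful of model runs;
# the resulting surrogate then evaluates in microseconds.
def gpce_fit(g, degree):
    nodes, weights = np.polynomial.legendre.leggauss(degree + 1)
    runs = g(nodes)                                 # the "expensive" calls
    coeffs = []
    for k in range(degree + 1):
        Pk = np.polynomial.legendre.Legendre.basis(k)(nodes)
        norm = 2.0 / (2 * k + 1)                    # integral of P_k^2 on [-1,1]
        coeffs.append(np.sum(weights * runs * Pk) / norm)
    return np.polynomial.legendre.Legendre(coeffs)

surrogate = gpce_fit(lambda xi: np.exp(0.5 * xi), degree=6)   # 7 "runs"
xi = np.linspace(-1, 1, 101)
err = np.max(np.abs(surrogate(xi) - np.exp(0.5 * xi)))
print(err)  # spectral accuracy from very few model evaluations
```

For smooth parameter-to-output maps the coefficients decay rapidly, which is why convergence is reached with a small set of simulations per zone; the multi-parameter case replaces the 1D quadrature with a sparse grid.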
We validate this framework in Mayotte (France), where a recent seismo-volcanic crisis has raised concerns regarding submarine slope stability. The resulting surrogates produce probabilistic hazard maps and density functions in less than 2 seconds, providing a critical tool for faster-than-real-time forecasting and risk mitigation.
Joint work with: Cléa Denamiel (Ruđer Bošković Institute/IACRK, Croatia), Alexis Marboeuf, Anne Mangeney, Anne Le Friant, Antoine Lucas (University Paris Cité, IPGP, France), Marc Peruzzetto (BRGM, France), and Enrique Fernández-Nieto (Universidad de Sevilla, Spain).
In this work, we present a high-order hierarchical dynamic domain decomposition method for the Boltzmann equation based on moment realizability matrices, a concept introduced by Levermore, Morokoff, and Nadiga. This criterion is used to dynamically partition the two-dimensional spatial domain into two regimes: the Euler regime and the kinetic regime. The key advantage of this approach lies in the use of Euler equations in regions where the flow is near hydrodynamic equilibrium, and the Boltzmann equation where strong non-equilibrium effects dominate, such as near shocks and boundary layers. This allows for both high accuracy and significant computational savings, as the Euler solver is considerably cheaper than the kinetic Boltzmann model.
We implement a coupling mechanism between the two regimes capable of preserving the high-order accuracy of both Euler and kinetic solvers, and we use state-of-the-art numerical techniques. This combination enables robust and scalable simulations of multi-scale kinetic flows with complex geometries.
Joint work with Lorenzo Pareschi (University of Ferrara & Heriot-Watt University), Thomas Rey (Université Côte-d'Azur) and Tommaso Tenna (Université Côte-d'Azur & University of Rome "La Sapienza").
The Serre-Green-Naghdi (SGN) equations provide a valuable framework for modelling fully nonlinear and weakly dispersive shallow-water flows. However, their elliptic formulation can considerably increase the computational cost compared to the Saint-Venant equations. To overcome this difficulty, hyperbolic models (hSGN) have been proposed that replace the elliptic operators with first-order hyperbolic formulations augmented by relaxation terms, which recover the original elliptic formulation in the stiff limit. Nevertheless, although explicit treatments of such models are relatively easy to implement, they suffer from severe time-step restrictions induced by the relaxation parameter, thereby making semi-implicit techniques necessary to achieve computational efficiency.
I will describe a new class of methods that is essentially explicit and provides global conservation of mass, momentum, and energy for some dispersive wave equations. The methods are based on Fourier Galerkin discretization in space along with a new relaxation-projection method in time. Numerical tests show that these methods outperform existing approaches by orders of magnitude and offer advantages for long-time simulations.
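The relaxation idea in time integration can be sketched on a harmonic oscillator (our own toy in the spirit of Ketcheson/Ranocha-style relaxation methods, without the Fourier Galerkin space discretization of the talk): after each Runge-Kutta step, the update is rescaled so that an invariant is conserved exactly.

```python
# Relaxation of the explicit midpoint method for the oscillator
#   q' = p,  p' = -q,  with energy E = 0.5*(q^2 + p^2).
def rk2_increment(q, p, h):            # midpoint-method increment (dq, dp)
    qm, pm = q + 0.5 * h * p, p - 0.5 * h * q
    return h * pm, -h * qm

def energy(q, p):
    return 0.5 * (q * q + p * p)

def relaxed_step(q, p, h):
    dq, dp = rk2_increment(q, p, h)
    # choose gamma so E(q + g*dq, p + g*dp) = E(q, p); since E is
    # quadratic this is a*g^2 + b*g = 0, whose nonzero root (near 1) is:
    a = 0.5 * (dq * dq + dp * dp)
    b = q * dq + p * dp
    g = -b / a if a > 0 else 1.0
    return q + g * dq, p + g * dp

q, p = 1.0, 0.0
for _ in range(1000):
    q, p = relaxed_step(q, p, 0.1)
print(abs(energy(q, p) - 0.5))  # energy conserved to rounding error
```

Each step remains essentially explicit (one scalar root per step), yet the invariant is preserved globally, which is the property exploited for long-time simulations of dispersive wave equations.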
This is joint work with Hendrik Ranocha.
This talk explores the interplay between geometric numerical integration and stochastic numerical methods. The focus is on recent advances in the design and analysis of numerical schemes that preserve qualitative and quantitative properties, as well as invariance laws, governing the dynamics of stochastic ordinary and partial differential equations.
The presentation is organized around two main themes. The first concerns geometric numerical integration for stochastic Hamiltonian systems. Two distinct settings are considered: when the noise is interpreted in the Itô sense, the expected value of the Hamiltonian exhibits a linear drift in time, whereas in the Stratonovich setting the Hamiltonian is preserved pathwise. For both cases, the behavior of selected numerical methods with respect to the preservation of these properties is discussed, together with a long-time analysis based on backward error analysis. The second theme addresses structure-preserving numerical methods for dissipative stochastic problems. In particular, the numerical preservation of mean-square contractivity in the time integration of dissipative systems is investigated using stochastic theta-methods. The analysis reveals that mean-square contractivity at the discrete level is ensured under suitable stepsize restrictions.
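For the dissipative theme, the stepsize restriction can be made explicit on the standard linear test SDE (an elementary illustration of ours, with our own parameter names): two solutions of dX = -λX dt + σ dW driven by the same Brownian path differ by a deterministic quantity, so the discrete contractivity of the stochastic theta-method reduces to a scalar amplification factor.

```python
# Stochastic theta-method applied to dX = -lam*X dt + sigma dW:
#   X_{n+1} = X_n + h*[(1-theta)*(-lam*X_n) + theta*(-lam*X_{n+1})] + sigma*dW_n.
# The gap D_n between two solutions sharing the same noise obeys
#   D_{n+1} = D_n * (1 - (1-theta)*h*lam) / (1 + theta*h*lam),
# so mean-square contractivity holds iff |factor| < 1.
def gap_factor(theta, h, lam):
    return (1.0 - (1.0 - theta) * h * lam) / (1.0 + theta * h * lam)

lam, h = 10.0, 0.3
# implicit endpoint (theta = 1): contractive even at this large stepsize
assert abs(gap_factor(1.0, h, lam)) < 1.0
# explicit endpoint (theta = 0): |1 - h*lam| = 2 > 1, the gap grows
print(abs(gap_factor(0.0, h, lam)))  # 2.0
```

This is the scalar prototype of the stepsize restrictions mentioned above: for theta below one half, contractivity at the discrete level survives only for sufficiently small h.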
A unifying perspective is provided by establishing a conceptual bridge between the numerical treatment of stochastic problems and that of their underlying deterministic counterparts.
This talk is based on joint work with H. Biscevic (GSSI), Chuchu Chen (Chinese Academy of Sciences), David Cohen (Chalmers University of Technology & University of Gothenburg), Stefano Di Giovacchino (University of L’Aquila), and Annika Lang (Chalmers University of Technology & University of Gothenburg).
In this talk, we present a second-order Strang splitting approach for efficiently solving a class of stiff matrix differential equations with Sylvester-type structure. The key idea is to decompose the dynamics into a stiff linear component, which can be handled exactly using matrix exponentials, and a nonlinear component, which is treated by a second-order dynamical low-rank method. We discuss the motivation behind this splitting strategy and highlight its advantages for stiff problems. A central result is that, under suitable assumptions, the scheme retains second-order accuracy. Numerical experiments illustrate the accuracy, robustness, and computational efficiency of the method. This is a joint work with N. Guglielmi.
In this talk, we will discuss the quasineutral limit in plasma physics and outline some ideas to address this challenge from a numerical perspective.