In many engineering applications, a partial differential equation (PDE) has to be solved very often (“multi-query”), extremely fast (“real-time”), and/or with restricted memory/CPU resources (“cold computing”). Moreover, the mathematical modeling yields complex systems in the sense that:
(i) each simulation is extremely costly, with a CPU time that may be on the order of several weeks;
(ii) we are...
One of the most fruitful tasks in data processing is to identify structures in the set where the data lie and to exploit them to design better models and reliable algorithms.
As a paradigm of this process, we show how the cone of positive definite matrices can be endowed with Riemannian geometries alternative to the customary Euclidean geometry. This can provide new tools for data scientists, in...
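As an illustration of one such alternative geometry, here is a minimal sketch (not from the talk) comparing the affine-invariant Riemannian distance on the SPD cone with the Euclidean one:

import numpy as np
from scipy.linalg import logm, sqrtm

def affine_invariant_distance(A, B):
    """d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F on the SPD cone."""
    A_isqrt = np.linalg.inv(sqrtm(A))
    return np.linalg.norm(logm(A_isqrt @ B @ A_isqrt), "fro")

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
A = X @ X.T + 4 * np.eye(4)        # a generic SPD matrix
B = 2.0 * A                        # rescaling: a "large" Euclidean move

print("Euclidean :", np.linalg.norm(A - B, "fro"))   # depends on the scale of A
print("Riemannian:", affine_invariant_distance(A, B))  # = 2 * log(2) here

The rescaling example highlights the point: the Euclidean distance grows with the scale of A, while the affine-invariant distance is invariant under congruence transformations.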
Numerical homogenization is a methodology for the computational solution of multiscale partial differential equations. It aims to compress the corresponding partial differential operators into finite-dimensional sparse surrogate models. The surrogates are valid on a given target scale of interest, thereby accounting for the impact of features on under-resolved scales. This talk shows...
When solving PDEs over tensorized 2D domains, regularity in the solution often appears in the form of an approximate low-rank structure in the solution vector, once it is properly reshaped into matrix form. This enables the use of low-rank methods such as Sylvester solvers (namely, rational Krylov methods and/or ADI), which can treat separable differential operators. We consider the setting where...
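As a minimal illustration of this matrix-equation viewpoint (assumptions: unit square, homogeneous Dirichlet boundary conditions, second-order finite differences; scipy's dense Sylvester solver stands in for the rational Krylov/ADI methods mentioned above):

import numpy as np
from scipy.linalg import solve_sylvester

n = 64
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)                   # interior grid points

# 1D negative second-difference matrix (tridiagonal)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# f(x, y) = sin(pi x) sin(pi y), sampled on the tensorized grid: rank 1
F = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))

# Discretising -u_xx - u_yy = f gives the Sylvester equation A X + X A^T = F,
# where X is the solution vector reshaped into matrix form.
X = solve_sylvester(A, A.T, F)

# Exact solution of this model problem is f / (2 pi^2)
print(np.abs(X - F / (2 * np.pi**2)).max())    # ~ discretisation error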
Neural networks are a fundamental tool for solving various machine learning tasks, such as supervised and unsupervised classification.
Despite this success, they still have a number of drawbacks, including a lack of interpretability and a large number of parameters.
In this work, we are particularly interested in learning neural network architectures with flexible activation functions (contrary...
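A minimal, hypothetical sketch of the idea of a flexible activation (illustrative names and setup, not the authors' method): a parametric ReLU whose negative slope is trained by gradient descent together with the weight.

import numpy as np

def prelu(x, a):
    """Parametric ReLU: x for x > 0, a * x otherwise."""
    return np.where(x > 0, x, a * x)

rng = np.random.default_rng(0)
X = rng.standard_normal(200)
y = prelu(1.5 * X, 0.1)                  # synthetic target

w, a, lr = 0.5, 0.25, 0.1
for _ in range(500):
    pred = prelu(w * X, a)
    err = pred - y                       # squared-loss residual
    grad_w = np.mean(err * np.where(w * X > 0, X, a * X))    # d pred / d w
    grad_a = np.mean(err * np.where(w * X > 0, 0.0, w * X))  # d pred / d a
    w -= lr * grad_w
    a -= lr * grad_a
print(round(w, 3), round(a, 3))          # approaches (1.5, 0.1)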
Tensor-structured linear operators play an important role in matrix equations and low-rank modelling. Motivated by this, we consider the problem of approximating a matrix by a sum of Kronecker products. It is known that an optimal approximation in the Frobenius norm can be obtained from the singular value decomposition of a rearranged matrix, but when the goal is to approximate the matrix as a...
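A minimal sketch of the SVD-based construction mentioned above (the Van Loan-Pitsianis rearrangement), assuming compatible block sizes: the best Frobenius-norm approximation of A by a sum of r Kronecker products comes from a rank-r truncated SVD of the rearranged matrix.

import numpy as np

def kron_approx(A, m1, n1, m2, n2, r):
    """Approximate A (m1*m2 x n1*n2) by sum_k B_k kron C_k, k = 1..r."""
    # Rearrangement: each (m2 x n2) block of A becomes one row of R.
    R = np.empty((m1 * n1, m2 * n2))
    for i in range(m1):
        for j in range(n1):
            block = A[i*m2:(i+1)*m2, j*n2:(j+1)*n2]
            R[i*n1 + j] = block.reshape(-1)   # row-major vec of the block
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    Bs = [np.sqrt(s[k]) * U[:, k].reshape(m1, n1) for k in range(r)]
    Cs = [np.sqrt(s[k]) * Vt[k].reshape(m2, n2) for k in range(r)]
    return Bs, Cs

rng = np.random.default_rng(1)
m1 = n1 = 3; m2 = n2 = 4
A = sum(np.kron(rng.standard_normal((m1, n1)),
                rng.standard_normal((m2, n2))) for _ in range(2))
Bs, Cs = kron_approx(A, m1, n1, m2, n2, r=2)
A2 = sum(np.kron(B, C) for B, C in zip(Bs, Cs))
print(np.linalg.norm(A - A2))   # ~ 0: this A has Kronecker rank 2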
Koopman operators are infinite-dimensional operators that globally linearise nonlinear dynamical systems, making their spectral information valuable for understanding dynamics. Their surge in popularity, dubbed “Koopmania”, has produced tens of thousands of articles over the last decade. However, Koopman operators can have continuous spectra and lack finite-dimensional invariant subspaces, making...
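As a minimal finite-dimensional illustration (extended DMD with a hand-picked dictionary; not the talk's methods, which target the continuous-spectrum case): fit a linear map on observables from snapshot pairs, then read off approximate Koopman eigenvalues.

import numpy as np

# A nonlinear map whose action is linear on the dictionary {x1, x2, x1^2}
def f(x):
    return np.array([0.9 * x[0], 0.8 * x[1] + 0.05 * x[0] ** 2])

# One trajectory, split into snapshot pairs (x_k, x_{k+1})
traj = [np.array([1.0, 0.5])]
for _ in range(50):
    traj.append(f(traj[-1]))
X0 = np.array(traj[:-1]).T
X1 = np.array(traj[1:]).T

def psi(X):                       # dictionary of observables
    return np.vstack([X[0], X[1], X[0] ** 2])

K = psi(X1) @ np.linalg.pinv(psi(X0))      # least-squares Koopman matrix
print(np.sort(np.linalg.eigvals(K).real))  # ~ [0.8, 0.81, 0.9]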
The application of neural networks (NNs) to the numerical solution of PDEs has seen growing popularity in the last five years: NNs have been used as an ansatz space for the solutions, with different training approaches (PINNs, deep Ritz methods, etc.); they have also been used to infer discretization parameters and strategies.
In this talk, I will focus on deep ReLU NN approximation theory. I...
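A standard building block from this theory, as a minimal sketch: the piecewise-linear hat function is represented exactly by a one-hidden-layer ReLU network with three neurons; composing it with itself yields the sawtooth functions used in classical approximation constructions.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    """hat(0) = hat(1) = 0, hat(1/2) = 1, piecewise linear, 0 outside [0, 1]."""
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

x = np.linspace(-0.5, 1.5, 9)
print(hat(x))   # rises 0 -> 1 on [0, 1/2], falls back to 0 on [1/2, 1]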
In this talk, I will review several recent results about the sparse optimization of infinite-dimensional variational problems. First, I will focus on the so-called representer theorems, which allow one to prove, in the case of finite-dimensional data, the existence of a solution given by a linear combination of suitably chosen atoms. In particular, I will try to convey the importance of such...
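One common schematic instance of such a theorem (total-variation regularisation over measures; the talk surveys more general settings):

% Schematic representer theorem for TV-regularised measures:
\begin{equation*}
  \min_{\mu \in \mathcal{M}(\Omega)} \|\mu\|_{\mathrm{TV}}
  \quad \text{s.t.} \quad \Phi \mu = y \in \mathbb{R}^m
  \qquad\Longrightarrow\qquad
  \mu^{\star} = \sum_{i=1}^{p} c_i\, \delta_{x_i}, \quad p \le m,
\end{equation*}
% i.e. with finite-dimensional data some solution is a linear
% combination of at most m atoms (here Dirac masses).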
The talk will be devoted to continuous-time affine control systems and their reachable sets. I will focus on the case when all eigenvalues of the linear part of the system have zero real part. In this case, the reachable sets usually exhibit non-exponential growth as $T \to \infty$, and the growth rate is typically polynomial. The simplest non-trivial example is the problem of stabilisation (or, conversely,...
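A schematic instance of the setting (notation assumed; the talk may treat more general affine systems):

% Schematic single-input affine system with critical linear part:
\begin{equation*}
  \dot{x}(t) = A x(t) + u(t)\, b, \qquad x \in \mathbb{R}^n, \quad |u(t)| \le 1,
  \qquad \operatorname{Re}\lambda = 0 \ \ \text{for all } \lambda \in \sigma(A).
\end{equation*}
% Under this spectral assumption the reachable set R(T) from the origin
% typically grows only polynomially in T, rather than exponentially.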
Graph $p$-Laplacian eigenpairs, and in particular the two limit cases $p=1$ and $p=\infty$, reveal important information about the topology of the graph. Indeed, the $1$-Laplacian eigenvalues approximate the Cheeger constants of the graph, while the $\infty$-eigenvalues can be related to distances among nodes, to the diameter of the graph, and more generally to the maximum radius that allows...
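As a minimal numerical illustration of the spectrum-to-Cheeger link (standard $p = 2$ case, tiny graph, brute-force cut enumeration): the Cheeger inequality $\lambda_2 / 2 \le h(G) \le \sqrt{2 \lambda_2}$ for the normalized Laplacian.

import itertools
import numpy as np

# Two triangles joined by one bridge edge: an obvious bottleneck
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
n = 6
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0
d = W.sum(axis=1)

# lambda_2 of the normalized Laplacian I - D^{-1/2} W D^{-1/2}
L = np.eye(n) - W / np.sqrt(np.outer(d, d))
lam2 = np.sort(np.linalg.eigvalsh(L))[1]

# Brute-force Cheeger constant: min_S cut(S) / min(vol(S), vol(S^c))
vol = d.sum()
h = min(
    W[np.ix_(S, [j for j in range(n) if j not in S])].sum()
    / min(d[S].sum(), vol - d[S].sum())
    for k in range(1, n)
    for S in map(list, itertools.combinations(range(n), k))
)
print(lam2 / 2, "<=", h, "<=", np.sqrt(2 * lam2))   # Cheeger inequality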
We discuss a reduced basis method for linear evolution PDEs, which is based on the application of the Laplace transform. The main advantage of this approach is that, unlike time-stepping methods such as Runge–Kutta integrators, the Laplace transform allows one to compute the solution directly at a given time instant, which can be done by approximating the contour integral...
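A minimal sketch of the contour-quadrature idea on a toy ODE system (loosely chosen parabolic-contour parameters): for u' = Au, u(0) = u0, the solution at a single instant t is the contour integral u(t) = (1 / 2 pi i) int_Gamma e^{zt} (zI - A)^{-1} u0 dz, approximated by the trapezoid rule; each node costs one independent, parallelisable resolvent solve.

import numpy as np
from scipy.linalg import expm

n, t = 50, 0.1
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # 1D diffusion
A *= (n + 1) ** 2
u0 = np.ones(n)

# Left-opening parabolic contour z(s) = mu (1 + i s)^2, truncated to |s| <= S
mu, S, N = 4.0 / t, 8.0, 80
s, h = np.linspace(-S, S, N, retstep=True)
z = mu * (1 + 1j * s) ** 2
dz = 2j * mu * (1 + 1j * s)                              # z'(s)

u = np.zeros(n, dtype=complex)
for zk, dzk in zip(z, dz):
    u += np.exp(zk * t) * dzk * np.linalg.solve(zk * np.eye(n) - A, u0)
u = (h / (2j * np.pi) * u).real

# Compare with the matrix exponential; the trapezoid rule on this
# contour is spectrally accurate, so the error should be tiny.
print(np.linalg.norm(u - expm(t * A) @ u0))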
The Transformer family of deep learning models is emerging as the dominant paradigm for both natural language processing and, more recently, computer vision applications.
An intrinsic limitation of this family of "fully-attentive" architectures arises from the computation of the dot-product attention, which grows in both memory consumption and number of operations as $O(n^2)$, where $n$...
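A minimal sketch making the quadratic cost explicit (single head, no masking or batching): the score matrix $QK^T$ is $n \times n$, so memory and work grow quadratically with the sequence length $n$.

import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V  for n x d inputs."""
    n, d = Q.shape
    S = Q @ K.T / np.sqrt(d)                  # n x n scores: the bottleneck
    S = np.exp(S - S.max(axis=1, keepdims=True))
    P = S / S.sum(axis=1, keepdims=True)      # row-wise softmax
    return P @ V                              # n x d output

n, d = 1024, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape, f"score matrix holds {n * n:,} entries")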
Given a regular matrix polynomial, an interesting problem is the computation of the nearest singular matrix polynomial, which determines its distance to singularity. We consider (only for simplicity) the quadratic case $\lambda^2 A_2 + \lambda A_1 + A_0$ with $A_2, A_1, A_0 \in \mathbb{C}^{n \times n}$ and look for the nearest singular quadratic matrix polynomial $\lambda^2 (A_2 +...
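As a minimal numerical illustration of the singularity notion itself (not the nearness computation): $\det P(\lambda)$ has degree at most $2n$, so sampling it at $2n + 1$ points decides whether it vanishes identically.

import numpy as np

def is_singular(A2, A1, A0, tol=1e-10):
    """Decide whether det(lam^2 A2 + lam A1 + A0) vanishes identically."""
    n = A0.shape[0]
    lams = np.exp(2j * np.pi * np.arange(2 * n + 1) / (2 * n + 1))
    dets = [np.linalg.det(lam**2 * A2 + lam * A1 + A0) for lam in lams]
    return max(abs(det) for det in dets) < tol

rng = np.random.default_rng(0)
A2, A1, A0 = (rng.standard_normal((3, 3)) for _ in range(3))
print(is_singular(A2, A1, A0))                # False: a generic P is regular

Z = np.eye(3)
Z[:, 0] = 0                                   # kill the first column
print(is_singular(A2 @ Z, A1 @ Z, A0 @ Z))    # True: P(lam) e_1 = 0 for all lam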
The topological structure of data is relevant in a wide range of applications, raising the question of the stability of topological features. In this talk we address the stability of 1-dimensional holes in a simplicial complex through the optimisation of a functional that combines the spectrum of the classical graph Laplacian with that of the higher-order Hodge Laplacian. The proposed...
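A minimal sketch of the objects involved (hollow vs. filled triangle): the 1-Hodge Laplacian is $L_1 = B_1^T B_1 + B_2 B_2^T$, with $B_1$ the node-to-edge incidence matrix and $B_2$ the edge-to-triangle boundary matrix, and the dimension of its kernel counts the 1-dimensional holes (first Betti number).

import numpy as np

# Vertices {0,1,2}, oriented edges (0,1), (0,2), (1,2)
B1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)

B2_filled = np.array([[1.0], [-1.0], [1.0]])   # the 2-cell fills the loop
B2_hollow = np.zeros((3, 0))                   # no 2-cell: the loop is a hole

for B2, name in [(B2_hollow, "hollow"), (B2_filled, "filled")]:
    L1 = B1.T @ B1 + B2 @ B2.T
    beta1 = np.sum(np.abs(np.linalg.eigvalsh(L1)) < 1e-12)
    print(name, "beta_1 =", beta1)             # hollow: 1, filled: 0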
The problem of hyperparameter optimization (HPO) in learning algorithms is an open issue of great interest, since it has a direct impact on the performance of the algorithms as well as on their reproducibility, especially in the context of unsupervised learning.
In this scenario one finds the well-known Matrix Decompositions (MDs), which are gaining attention in Data Science...
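A minimal, hypothetical sketch of the HPO issue for one such MD (a plain grid search over the rank of a nonnegative matrix factorization by reconstruction error; the unsupervised selection criteria debated in this line of work are more sophisticated):

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
W_true = rng.random((60, 5))
H_true = rng.random((5, 40))
X = W_true @ H_true + 0.01 * rng.random((60, 40))   # nonnegative, "true" rank 5

for r in range(2, 9):
    nmf = NMF(n_components=r, init="nndsvd", max_iter=1000, random_state=0)
    W = nmf.fit_transform(X)
    err = np.linalg.norm(X - W @ nmf.components_, "fro")
    print(r, round(err, 4))       # the error curve flattens near r = 5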
A new approach to solving eigenvalue optimization problems for large structured matrices is proposed and studied. The class of optimization problems considered relates to the computation of structured pseudospectra and their extremal points, and to structured matrix nearness problems such as computing the structured distance to instability. The structure can be a general linear structure and includes,...
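As a minimal illustration of the baseline (unstructured) object: the $\varepsilon$-pseudospectrum is $\{z : \sigma_{\min}(A - zI) \le \varepsilon\}$, computed here by evaluating the smallest singular value on a grid; the structured variants and their extremal points are the talk's target.

import numpy as np

A = np.array([[0.0, 4.0],
              [0.0, 0.5]])                   # non-normal 2 x 2 example
xs = np.linspace(-1.5, 2.0, 141)
ys = np.linspace(-1.5, 1.5, 121)
smin = np.array([[np.linalg.svd(A - (x + 1j * y) * np.eye(2),
                                compute_uv=False)[-1]
                  for x in xs] for y in ys])

eps = 0.1
inside = smin <= eps                         # eps-pseudospectrum mask
print("grid fraction inside:", inside.mean().round(4))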