Graph $p$-Laplacian eigenpairs, and in particular the two limit cases $p=1$ and $p=\infty$, reveal important information about the topology of the graph. Indeed, the $1$-Laplacian eigenvalues approximate the Cheeger constants of the graph, while the $\infty$-Laplacian eigenvalues can be related to distances among nodes, to the diameter of the graph, and more generally to the maximum radius that allows...
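(For reference, one common convention, stated here only to fix notation since normalisations vary across the literature, writes the $p$-Laplacian eigenpair equation and the Cheeger constant as
$$(\Delta_p f)(u) = \sum_{v \sim u} w_{uv}\,|f(u)-f(v)|^{p-2}\bigl(f(u)-f(v)\bigr) = \lambda\,|f(u)|^{p-2} f(u), \qquad h(G) = \min_{\emptyset \neq S \subsetneq V} \frac{|\partial S|}{\min\bigl(\mathrm{vol}(S),\, \mathrm{vol}(V \setminus S)\bigr)},$$
where $w_{uv}$ are the edge weights and $|\partial S|$ is the weight of the edges cut by $S$.)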
We discuss a reduced basis method for linear evolution PDEs, which is based on the application of the Laplace transform. The main advantage of this approach is that, unlike time-stepping methods such as Runge-Kutta integrators, the Laplace transform allows one to compute the solution directly at a given time instant, which can be done by approximating the contour integral...
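(As a sketch of the underlying mechanism, for the model problem $u_t = Au$, $u(0) = u_0$, with $A$ the discretized spatial operator (an assumed setting here), Laplace inversion along a suitable contour $\Gamma$ winding around the spectrum of $A$ gives
$$u(t) = \frac{1}{2\pi i}\int_\Gamma e^{zt}\,(zI - A)^{-1} u_0 \, dz \;\approx\; \sum_{k=1}^{N} w_k\, e^{z_k t}\,(z_k I - A)^{-1} u_0,$$
so that the solution at a single instant $t$ is recovered from a few shifted resolvent solves at quadrature nodes $z_k$ with weights $w_k$.)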
The Transformer family of Deep-Learning models is emerging as the dominant paradigm for both natural language processing and, more recently, computer vision applications.
An intrinsic limitation of this family of "fully-attentive" architectures arises from the computation of the dot-product attention, whose memory consumption and number of operations both grow as $O(n^2)$, where $n$...
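(For context, the standard scaled dot-product attention is
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d}}\right)V, \qquad Q, K, V \in \mathbb{R}^{n \times d},$$
where the score matrix $QK^\top \in \mathbb{R}^{n \times n}$ must be formed and stored, which is the source of the quadratic cost in the sequence length $n$.)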
Given a regular matrix polynomial, an interesting problem is the computation of the nearest singular matrix polynomial, which determines its distance to singularity. We consider, only for simplicity, the quadratic case $\lambda^2 A_2 + \lambda A_1 + A_0$ with $A_2, A_1, A_0 \in \mathbb{C}^{n \times n}$ and look for the nearest singular quadratic matrix polynomial $\lambda^2 (A_2 +...
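(In the standard formulation of the distance to singularity, stated here only to fix ideas, one minimizes the size of the coefficient perturbations,
$$\min\Bigl\{ \bigl\|[\Delta A_2,\, \Delta A_1,\, \Delta A_0]\bigr\| \;:\; \det\bigl(\lambda^2 (A_2+\Delta A_2) + \lambda (A_1+\Delta A_1) + (A_0+\Delta A_0)\bigr) \equiv 0 \Bigr\},$$
i.e. the perturbed polynomial must have identically zero determinant; the choice of norm, e.g. Frobenius, varies across formulations.)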
The topological structure of data is widely relevant in various applications, hence raising the question of the stability of topological features. In this talk we address the stability of 1-dimensional holes in a simplicial complex through the optimisation of a functional that combines the spectrum of the classical graph Laplacian with that of the higher-order Hodge Laplacian. The proposed...
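(For background: with $B_k$ the $k$-th boundary matrix of the complex, the $k$-th Hodge Laplacian is $L_k = B_k^\top B_k + B_{k+1} B_{k+1}^\top$, and by the discrete Hodge theorem
$$\dim \ker L_k = \beta_k,$$
the $k$-th Betti number; in particular $L_0$ recovers the classical graph Laplacian, while $\dim \ker L_1$ counts the 1-dimensional holes.)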
The problem of hyperparameter optimization (HPO) in learning algorithms is an open issue of great interest, since it has a direct impact both on the performance of the algorithms and on their reproducibility, especially in the context of unsupervised learning.
This scenario includes the well-known Matrix Decompositions (MDs), which are gaining attention in Data Science...
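(As a generic toy illustration of what HPO for an MD can look like, and not the method of this talk: one can select the rank hyperparameter of a nonnegative matrix factorization by comparing reconstruction errors; all data and candidate ranks below are illustrative.

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    X = np.abs(rng.standard_normal((100, 50)))  # nonnegative toy data matrix

    for k in (2, 5, 10, 20):  # candidate ranks: the hyperparameter being tuned
        model = NMF(n_components=k, init="nndsvda", random_state=0, max_iter=500)
        W = model.fit_transform(X)  # X is approximated by W @ model.components_
        err = np.linalg.norm(X - W @ model.components_, "fro")
        print(f"rank {k}: Frobenius reconstruction error {err:.3f}")

The candidate grid and the error criterion are arbitrary choices, made only to show where the hyperparameter enters an unsupervised decomposition.)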
A new approach to solve eigenvalue optimization problems for large structured matrices is proposed and studied. The class of optimization problems considered relates to the computation of structured pseudospectra and their extremal points, and to structured matrix nearness problems such as computing the structured distance to instability. The structure can be a general linear structure and includes,...
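(To fix notation: for a linear structure $\mathcal S$ and a prescribed norm, the structured $\varepsilon$-pseudospectrum and, for a stable matrix $A$, the structured distance to instability can be written as
$$\Lambda_\varepsilon^{\mathcal S}(A) = \bigcup_{\Delta \in \mathcal S,\ \|\Delta\| \le \varepsilon} \Lambda(A+\Delta), \qquad d^{\mathcal S}(A) = \min\bigl\{\|\Delta\| \;:\; \Delta \in \mathcal S,\ \Lambda(A+\Delta) \cap i\mathbb{R} \neq \emptyset\bigr\};$$
conventions for the norm and the stability region vary, so this is only one common choice.)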