Description
In this talk, I will review several recent results on the sparse optimization of infinite-dimensional variational problems. First, I will focus on so-called representer theorems, which allow one to prove, in the case of finite-dimensional data, the existence of a solution given by a linear combination of suitably chosen atoms. In particular, I will try to convey the importance of such statements for understanding sparsity in infinite-dimensional settings, and I will describe possible applications to several relevant problems.
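To fix ideas, the following is a schematic form of such a representer theorem, written here for total-variation regularization of measures; the notation (the operator A, the fidelity term F, the number of measurements m) is illustrative and not taken from the talk.

```latex
% Schematic representer theorem (illustrative setting, not the talk's exact statement):
% a variational problem over measures with a finite-dimensional measurement operator
\[
  \min_{\mu \in \mathcal{M}(\Omega)} \; F(A\mu) + \|\mu\|_{\mathcal{M}(\Omega)},
  \qquad A \colon \mathcal{M}(\Omega) \to \mathbb{R}^m \ \text{linear and continuous},
\]
% under suitable assumptions on F, admits a sparse minimizer
\[
  \mu^\star = \sum_{i=1}^{p} c_i \, \delta_{x_i},
  \qquad p \le m, \quad c_i \in \mathbb{R}, \quad x_i \in \Omega,
\]
```

that is, a minimizer given by a linear combination of at most m extremal points (atoms) of the unit ball of the regularizer, which in this setting are Dirac masses.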
In the second part of the talk, I will focus on a sparse optimization algorithm, known as the generalized conditional gradient method, that is built on this characterization of sparse objects for infinite-dimensional variational problems. The algorithm is a variant of the classical Frank-Wolfe algorithm and does not require an a priori discretization of the domain. I will show convergence results under general assumptions on the variational problem, and I will conclude with numerical examples in the context of dynamic inverse problems regularized with optimal transport energies.
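As a rough illustration of the kind of iteration involved, here is a minimal numerical sketch of a conditional gradient (Frank-Wolfe-type) loop that recovers a sparse measure from finitely many linear measurements: at each step it inserts one atom where a dual certificate peaks, then refits the weights. The Gaussian measurement kernel, the grid used to locate new atoms, and the plain least-squares refit are simplifications made for this sketch and are not the method presented in the talk; in particular, the talk's algorithm avoids an a priori discretization of the domain, whereas this sketch searches over a fixed grid for readability.

```python
# Sketch of a generalized conditional gradient iteration for recovering a
# sparse measure mu = sum_k c_k * delta_{x_k} from measurements y = A(mu).
# All modelling choices below (kernel, grid, refit) are illustrative.
import numpy as np

def kernel(x, t, sigma=0.05):
    # Measurement functional evaluated at atom position x and sample points t.
    return np.exp(-(x - t) ** 2 / (2 * sigma ** 2))

def forward(positions, weights, t_meas):
    # A(mu) for mu = sum_k weights[k] * delta_{positions[k]}.
    if not positions:
        return np.zeros_like(t_meas)
    return sum(w * kernel(x, t_meas) for x, w in zip(positions, weights))

def gcg(y, t_meas, alpha=0.1, n_iter=20, grid=None):
    """Insert one atom per iteration where the dual certificate peaks,
    then refit all weights by least squares (a simplified subproblem)."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 2001)
    positions, weights = [], []
    for _ in range(n_iter):
        residual = forward(positions, weights, t_meas) - y
        # eta(x) = -A*(residual)(x): candidate atoms maximize |eta|.
        eta = -np.array([kernel(x, t_meas) @ residual for x in grid])
        if np.max(np.abs(eta)) <= alpha:
            break  # heuristic stopping test based on the certificate
        positions.append(grid[np.argmax(np.abs(eta))])
        # Refit weights on the current atoms (the exact subproblem would
        # also include the sparsity-promoting penalty on the weights).
        Phi = np.stack([kernel(x, t_meas) for x in positions], axis=1)
        weights = list(np.linalg.lstsq(Phi, y, rcond=None)[0])
    return np.array(positions), np.array(weights)

# Toy usage: two spikes observed through the Gaussian kernel.
t_meas = np.linspace(0.0, 1.0, 100)
y = 1.0 * kernel(0.3, t_meas) - 0.7 * kernel(0.65, t_meas)
pos, w = gcg(y, t_meas, alpha=0.05)
print(np.round(pos, 3), np.round(w, 2))
```

The design choice worth noting is that the atoms are never fixed in advance: each iteration selects a new atom by maximizing the certificate over the whole domain, which is what allows such methods to operate without discretizing the underlying variational problem.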