Description
The Hadamard decomposition is a powerful technique for data analysis and compression. Given an $m\times n$ matrix $X$ and two ranks $r_1,r_2\ll \min(m,n)$, we look for two low-rank matrices $X_1$ and $X_2$ of the same size as $X$, with $\text{rank}(X_i)=r_i$, such that $X\approx X_1\circ X_2$, where $\circ$ denotes the element-wise (Hadamard) product. In contrast to the well-known SVD, this decomposition can represent higher-rank matrices with the same number of variables. Remarkably, it can be proved that any Hadamard factorization of $X$ can be rewritten as $X\approx WH^\top$ with $W$ of rank $r_1$ and $H$ of rank $r_2$. In this work we focus on the most interesting case $r=r_1=r_2$; we present new theoretical results that characterize this factorization and identify when it is efficient. Building on these results, we develop two new manifold-based gradient descent algorithms for computing the rank-$r$ Hadamard decomposition: the first works with the representation $X\approx X_1\circ X_2$, while the second focuses on $X\approx WH^\top$ and integrates a gradient system associated with the factors $W$ and $H$. We also derive new initial guesses that can be used by any iterative method for this problem. We compare our results with the SVD and with one of the few existing approaches in the literature [Wertz et al., 2025] for computing the rank-$r$ Hadamard decomposition. Numerical results show that the new methods are efficient and competitive, both on synthetic and real data. We also show that the proposed initializations usually improve the performance of the algorithms.
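For a concrete (if simplified) picture of the optimization problem behind these methods, the sketch below fits the rank-$r$ model by plain Euclidean gradient descent on the factored objective $\tfrac12\|X-(A_1B_1^\top)\circ(A_2B_2^\top)\|_F^2$, where each factor $X_i=A_iB_i^\top$ has rank at most $r$. This is only an illustrative baseline, not the manifold-based algorithms or the tailored initializations of the talk; the function name, step size, and random initialization are assumptions made here.

```python
import numpy as np


def hadamard_gd(X, r, steps=5000, lr=1e-3, seed=0):
    """Fit X ~= (A1 @ B1.T) * (A2 @ B2.T) by plain gradient descent.

    Toy Euclidean baseline (not the talk's manifold-based methods);
    the step size lr may need tuning for a given X.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    scale = 0.3  # small random init; the talk's initializations are not reproduced here
    A1 = scale * rng.standard_normal((m, r))
    B1 = scale * rng.standard_normal((n, r))
    A2 = scale * rng.standard_normal((m, r))
    B2 = scale * rng.standard_normal((n, r))
    for _ in range(steps):
        Y1, Y2 = A1 @ B1.T, A2 @ B2.T
        R = Y1 * Y2 - X              # residual of the Hadamard model
        G1 = R * Y2                  # df/dY1 for f = 0.5 * ||R||_F^2
        G2 = R * Y1                  # df/dY2
        dA1, dB1 = G1 @ B1, G1.T @ A1  # chain rule through Y_i = A_i B_i^T
        dA2, dB2 = G2 @ B2, G2.T @ A2
        A1 -= lr * dA1
        B1 -= lr * dB1
        A2 -= lr * dA2
        B2 -= lr * dB2
    return A1, B1, A2, B2


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m, n, r = 50, 40, 3
    # A matrix that admits an exact rank-(3,3) Hadamard decomposition.
    X = (rng.standard_normal((m, r)) @ rng.standard_normal((r, n))) * \
        (rng.standard_normal((m, r)) @ rng.standard_normal((r, n)))
    A1, B1, A2, B2 = hadamard_gd(X, r)
    err = np.linalg.norm((A1 @ B1.T) * (A2 @ B2.T) - X) / np.linalg.norm(X)
    print(f"relative error: {err:.3e}")
```

Since the objective depends only on the products $A_iB_i^\top$, this parameterization is highly non-unique; the manifold constraints and the gradient system mentioned in the abstract can be seen as structured refinements of this basic descent.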