Description
The problem of hyperparameter optimization (HPO) in learning algorithms is an open issue of great interest, since hyperparameters directly affect both the performance of the algorithms and their reproducibility, especially in the context of unsupervised learning.
The well-known Matrix Decompositions (MDs) fit into this scenario: they are gaining attention in Data Science as mathematical techniques capable of capturing latent information embedded in large datasets. Among the low-rank MDs, Nonnegative Matrix Factorization (NMF) is one of the most effective methods for analyzing real-life nonnegative data. To emphasize useful properties of the data matrix, penalized variants of NMF are often employed.
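As a point of reference (a generic template, not necessarily the exact formulation used in the talk), a penalized NMF problem for a nonnegative data matrix X can be written as

  \min_{W \ge 0,\; H \ge 0} \; \tfrac{1}{2}\,\|X - WH\|_F^2 \;+\; \lambda\, J(W, H),

where W and H are the low-rank nonnegative factors, J is a penalty term (for instance an \ell_1 norm promoting sparsity), and \lambda > 0 is the penalization hyperparameter whose automatic selection is at issue in what follows.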
How to automatically choose optimal penalization hyperparameters is an open question in this context. To the best of our knowledge, the literature lacks a general, non-black-box framework that addresses this problem.
In this work, we address the hyperparameter selection problem through a bi-level approach: the choice of the hyperparameters is embedded directly into the algorithm as part of its updating process. The problem is tackled from two perspectives: on the theoretical side, existence and convergence results for the numerical solutions are established under appropriate assumptions; on the algorithmic side, a new algorithm for tuning hyperparameters in NMF problems is proposed. The proposed approach yields competitive results in controlling sparsity on both synthetic and real datasets.
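To make the idea concrete, below is a minimal Python sketch of an NMF solver in which a sparsity-penalty weight is adjusted inside the iteration itself rather than by an outer grid search. Everything here is an illustrative assumption (the \ell_1 penalty on H, the multiplicative updates, and the heuristic rule nudging lam toward a target sparsity level); it is not the algorithm proposed in the talk.

# Illustrative sketch: hyperparameter tuning inside the NMF iteration.
# Assumed setup (not the authors' method): L1 penalty on H, standard
# multiplicative updates, and a heuristic update of the penalty weight lam.
import numpy as np

def sparsity(M):
    """Fraction of near-zero entries, a simple sparsity proxy."""
    return np.mean(M < 1e-8)

def nmf_autotuned(X, rank, lam0=0.1, target_sparsity=0.5,
                  n_iter=500, eps=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    lam = lam0
    for _ in range(n_iter):
        # Multiplicative updates for min ||X - WH||_F^2 + lam * ||H||_1,
        # subject to W, H >= 0.
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X) / (W.T @ W @ H + lam + eps)
        # Illustrative inner hyperparameter step: nudge lam so that the
        # achieved sparsity of H tracks the requested level.
        lam *= np.exp(0.1 * (target_sparsity - sparsity(H)))
    return W, H, lam

A call such as nmf_autotuned(X, rank=10, target_sparsity=0.7) would return the factors together with the tuned penalty weight; the point of the sketch is only that the hyperparameter update lives in the same loop as the factor updates, mirroring the bi-level structure described above.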