Description
Neural networks are a fundamental tool for solving various machine learning tasks, such as supervised and unsupervised classification.
Despite this success, they still have a number of drawbacks, including a lack of interpretability and a large number of parameters.
In this work, we are particularly interested in learning neural network architectures with flexible activation functions (in contrast to the fixed activation functions commonly used).
Our approach relies on a tensor-based framework for the decomposition of multivariate maps, developed in the context of nonlinear system identification.
We propose a new compression algorithm based on a constrained coupled matrix-tensor factorization (CMTF) of the Jacobian tensor and the matrix of function evaluations.