Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as CANDECOMP/PARAFAC (CP), which expresses a tensor as a sum of rank-one component tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing CP, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, but ALS is not accurate in the case of overfactoring, i.e., when more components are fit than are present in the data. High accuracy can be obtained with nonlinear least squares (NLS) methods, but they are much slower than ALS. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are more accurate than ALS and faster than NLS in terms of total computation time.
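For concreteness, the CP model and objective referenced in the abstract can be written as follows for a third-order tensor; this is the standard formulation, with notation chosen here for illustration rather than copied from the paper:

\[
\mathcal{X} \approx \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r,
\qquad
f(\mathbf{A},\mathbf{B},\mathbf{C}) = \frac{1}{2} \Bigl\| \mathcal{X} - \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r \Bigr\|^2,
\]

where \(\circ\) denotes the vector outer product and \(\mathbf{a}_r, \mathbf{b}_r, \mathbf{c}_r\) are the columns of the factor matrices \(\mathbf{A}, \mathbf{B}, \mathbf{C}\). Differentiating the matricized form \(f = \frac{1}{2}\|\mathbf{X}_{(1)} - \mathbf{A}(\mathbf{C} \odot \mathbf{B})^{\mathsf{T}}\|_F^2\) gives

\[
\frac{\partial f}{\partial \mathbf{A}} = \mathbf{A}\bigl(\mathbf{C}^{\mathsf{T}}\mathbf{C} \ast \mathbf{B}^{\mathsf{T}}\mathbf{B}\bigr) - \mathbf{X}_{(1)}(\mathbf{C} \odot \mathbf{B}),
\]

with \(\odot\) the Khatri-Rao product and \(\ast\) the Hadamard product, and analogous expressions for \(\mathbf{B}\) and \(\mathbf{C}\). The dominant cost is the matricized-tensor-times-Khatri-Rao product \(\mathbf{X}_{(1)}(\mathbf{C} \odot \mathbf{B})\), the same computation that drives an ALS update, which is why the gradient can be computed at the cost of one ALS iteration as stated above.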
tensor decomposition, tensor factorization, CANDECOMP, PARAFAC, optimization
Online version published January 2011.
@article{AcDuKo11,
author = {Evrim Acar and Daniel M. Dunlavy and Tamara G. Kolda},
title = {A Scalable Optimization Approach for Fitting Canonical Tensor Decompositions},
journal = {Journal of Chemometrics},
volume = {25},
number = {2},
pages = {67--86},
month = {February},
year = {2011},
doi = {10.1002/cem.1335},
}