The tensor eigenproblem has many important applications, generating both mathematical and application-specific interest in the properties of tensor eigenpairs and methods for computing them. A tensor is an m-way array, generalizing the concept of a matrix (a 2-way array). Kolda and Mayo have recently introduced a generalization of the matrix power method for computing real-valued eigenpairs of symmetric tensors. In this work, we present an efficient implementation of their algorithm, exploiting symmetry in order to save storage, data movement, and computation. For an application that requires repeatedly solving the tensor eigenproblem for many small tensors, we describe how a GPU can be used to accelerate the computations. On an NVIDIA Tesla C2050 (Fermi) GPU, we achieve 318 Gflops/s (31% of theoretical peak performance in single precision) on our test data set.
tensors, tensor eigenvalues, GPU computing
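The algorithm referenced in the abstract is Kolda and Mayo's shifted symmetric higher-order power method (SS-HOPM). A minimal NumPy sketch for a symmetric 3-way tensor might look like the following; the function name, shift value, and stopping rule here are illustrative, and this is a serial sketch, not the paper's GPU implementation:

```python
import numpy as np

def sshopm(A, x0, alpha=2.0, tol=1e-10, max_iter=1000):
    """Sketch of a shifted symmetric higher-order power iteration
    for a symmetric 3-way tensor A (all parameter defaults illustrative)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        # Symmetric tensor-vector product: (A x^2)_i = sum_{j,k} A[i,j,k] x[j] x[k]
        Ax = np.einsum('ijk,j,k->i', A, x, x)
        # Positive shift alpha stabilizes the iteration (SS-HOPM idea)
        x_new = Ax + alpha * x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    # Eigenvalue: lambda = A x^3 = x . (A x^2)
    lam = np.einsum('ijk,i,j,k->', A, x, x, x)
    return lam, x
```

At a fixed point, A x^2 = lambda x with ||x|| = 1, which is the symmetric tensor eigenpair definition used in this setting. For instance, symmetrizing a random 3-way array over all six axis permutations and running the iteration from a random start yields a pair satisfying that residual condition.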
@inproceedings{BaKoPl11,
author = {Grey Ballard and Tamara G. Kolda and Todd Plantenga},
title = {Efficiently Computing Tensor Eigenvalues on a {GPU}},
booktitle = {IPDPSW'11: Proceedings of the 2011 IEEE International Symposium on Parallel and Distributed Processing Workshops and PhD Forum},
eventtitle = {12th IEEE International Workshop on Parallel and Distributed Scientific and Engineering Computing (PDSEC-11)},
venue = {Anchorage, Alaska},
eventdate = {2011-05-16/2011-05-20},
publisher = {IEEE Computer Society},
pages = {1340--1348},
month = may,
year = {2011},
doi = {10.1109/IPDPS.2011.287},
}