Project Background

In the Matheon project D-SE10 we aim to recover higher order tensors from a relatively small number of measurements using a low rank assumption. To this end, we combine tools developed for low rank matrix recovery with recent tensor product approximation techniques.

Low rank matrix recovery builds on the ideas of compressive sensing and shows that matrices of small rank can be recovered exactly from incomplete measurements via efficient algorithms, see e.g. [Recht, Fazel, Parrilo 2010]. In order to extend these results to higher order tensors, we utilize the hierarchical Tucker (HT) representation and its special case, the tensor train (TT) representation, see [Hackbusch, Kühn, 2009]. These formats define the rank of a tensor as a multi-linear rank tuple \(\mathbf{r}\), where each entry corresponds to the matrix rank of a specific matricization of the tensor. This has the advantage that the rank \(\mathbf{r}\), as well as quasi-best rank-\(\mathbf{r}^*\) approximations of arbitrary tensors, can be computed efficiently using methods from matrix analysis. The complexity of common operations on low rank tensors in the HT format, as well as their storage requirements, scales only linearly in the order of the tensor, circumventing the curse of dimensionality. For the reconstruction we utilize the fact that the set \(\mathcal{M}_{\bf r}\) of tensors with fixed HT rank \(\mathbf{r}\) constitutes an embedded smooth manifold, allowing the application of differential-geometric optimization techniques.
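
To make these rank and truncation notions concrete, the following sketch computes a TT decomposition by sequential truncated SVDs (the standard TT-SVD), where truncating each SVD to a prescribed rank yields a quasi-best approximation. This is a minimal NumPy illustration under simplifying assumptions, not the project's implementation.

    import numpy as np

    def tt_svd(X, ranks):
        """Decompose a d-way array X into TT cores of shape (r_{k-1}, n_k, r_k)."""
        d, n = X.ndim, X.shape
        cores, r_prev = [], 1
        C = X.reshape(n[0], -1)                  # first matricization; its rank is r_1
        for k in range(d - 1):
            U, s, Vt = np.linalg.svd(C, full_matrices=False)
            r = min(ranks[k], len(s))            # truncate to the prescribed TT rank
            cores.append(U[:, :r].reshape(r_prev, n[k], r))
            C = (s[:r, None] * Vt[:r]).reshape(r * n[k + 1], -1)
            r_prev = r
        cores.append(C.reshape(r_prev, n[-1], 1))
        return cores

    def tt_to_full(cores):
        """Contract the TT cores back into a full array."""
        T = cores[0]
        for G in cores[1:]:
            T = np.tensordot(T, G, axes=1)       # sum over the shared rank index
        return T.reshape([G.shape[1] for G in cores])

    # A random (3, 4, 4, 3) tensor has full TT ranks (3, 12, 3), so decomposing
    # with these ranks is exact up to floating point error.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((3, 4, 4, 3))
    print(np.allclose(tt_to_full(tt_svd(X, ranks=(3, 12, 3))), X))  # True

Storing the cores requires O(d n r^2) numbers instead of the n^d entries of the full tensor, which is the linear scaling in the order d referred to above.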

Low rank matrix recovery rapidly found numerous practical applications, including quantum physics, collaborative filtering, computer vision and medical imaging. Interestingly, many of these problem settings actually exhibit an underlying tensor structure, which is usually reduced to a matrix structure for the reconstruction. These problems could substantially benefit from our tensor based approach, as exploiting the underlying tensor structure could further reduce the number of required measurements and improve the accuracy of the reconstruction. Furthermore, there are a number of high dimensional problems for which a matrix approach is not feasible at all, but which could nevertheless be tackled by our tensor approach.

Research

Analogous to the low rank matrix recovery setting, we consider tensor recovery as the problem of reconstructing a tensor \(\mathbf{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}\) with low HT rank \(\mathbf{r}\) from linear measurements \(\mathbf{y} = \mathcal{A}(\mathbf{X})\), where \(\mathcal{A} : \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d} \to \mathbb{R}^m\) and \(m \ll n_1 n_2 \cdots n_d\). We are particularly interested in the special case where single entries of \(\mathbf{X}\) are measured, which is referred to as tensor completion.
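
For tensor completion the measurement operator simply evaluates \(\mathbf{X}\) at m sampled multi-indices. A hedged sketch of this sampling operator (the helper names are illustrative and not taken from our code):

    import numpy as np

    def sample_indices(shape, m, rng):
        """Draw m multi-indices uniformly at random (with replacement)."""
        return tuple(rng.integers(0, s, size=m) for s in shape)

    def apply_A(X, idx):
        """The linear map A : R^{n_1 x ... x n_d} -> R^m for tensor completion."""
        return X[idx]                            # advanced indexing: one entry per sample

    rng = np.random.default_rng(0)
    X = np.arange(24, dtype=float).reshape(2, 3, 4)
    idx = sample_indices(X.shape, m=5, rng=rng)
    y = apply_A(X, idx)                          # y_j = X[idx[0][j], idx[1][j], idx[2][j]]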

Exemplary application: Uncertainty quantification of a Poisson equation with a random diffusion coefficient.

In order to solve these problems, we initially assume that the HT rank \(\mathbf{r}\) of \(\mathbf{X}\) is known a priori. In this case we search for the tensor \(\mathbf{Z}\) in our model class \(\mathcal{M}_{\bf r}\) of hierarchical tensors with the desired HT rank \(\mathbf{r}\) that best fits the given measurements. To solve this task we focus on Riemannian optimization methods, which exploit the fact that the set \(\mathcal{M}_{\bf r}\) forms an analytic manifold, and adapt various gradient based techniques using the concepts of retractions and vector transport introduced in [Absil, Mahony, Sepulchre, 2009]. The theoretical aspects of these Riemannian methods for tensor recovery, including local convergence guarantees, are analyzed in [1, 6]. A general convergence theory based on deep analytical techniques such as the Łojasiewicz inequality has been developed in [2]. On the numerical side we have implemented a repertoire of retractions and base algorithms to tackle reconstruction tasks in the Riemannian setting, as well as several non-Riemannian algorithms, mostly based on alternating optimization schemes. These results provide both theoretical and numerical evidence that higher order tensors can indeed be reconstructed from a number of measurements scaling only linearly in the order of the tensor.
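
To illustrate the two Riemannian ingredients, tangent space projection and retraction, the sketch below runs Riemannian gradient descent for the simplest instance of the problem: completion on the manifold of fixed rank matrices, with a truncated SVD as retraction. It is a toy version under simplifying assumptions (the function names are illustrative), not one of our production algorithms.

    import numpy as np

    def svd_retract(Y, r):
        """Retract onto the rank-r manifold via truncated SVD."""
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

    def riemannian_completion(M, mask, r, steps=500, alpha=1.0):
        """Fit a rank-r matrix to the observed entries mask * M."""
        X = svd_retract(mask * M, r)                 # crude initial guess
        for _ in range(steps):
            G = mask * (X - M)                       # Euclidean gradient of the loss
            U, _, Vt = np.linalg.svd(X, full_matrices=False)
            U, V = U[:, :r], Vt[:r].T
            # project the gradient onto the tangent space of the manifold at X
            PG = U @ (U.T @ G) + (G @ V) @ V.T - U @ (U.T @ G @ V) @ V.T
            X = svd_retract(X - alpha * PG, r)       # retraction step
        return X

    rng = np.random.default_rng(1)
    M = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 40))   # rank-5 truth
    mask = rng.random(M.shape) < 0.5                                  # observe ~50%
    X = riemannian_completion(M, mask, r=5)
    print(np.linalg.norm(X - M) / np.linalg.norm(M))  # small on this toy instance

The SVD-based retraction is the matrix analogue of the HT/TT rank truncation; the concepts of tangent space projection and retraction carry over to the manifold \(\mathcal{M}_{\bf r}\).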

People

Prof. Reinhold Schneider
Project Head

Sebastian Wolf
Research Member

Software

We believe that our results, and in particular the numerical algorithms, can be of great use to fellow scientists and practitioners from several application areas. We therefore provide efficient implementations of all our algorithms within a general C++ tensor library called xerus. The complete library is available as open source, licensed under the AGPL v3: Documentation, Source Code, GitHub.

Publications

Refereed publications

  1. Holger Rauhut, Reinhold Schneider, and Željka Stojanac.
    Tensor completion in hierarchical tensor representations.
    Compressed Sensing and Its Applications, 2015. arXiv:1404.3905 [math.NA]
  2. Reinhold Schneider and André Uschmajew.
    Convergence results for projected line-search methods on varieties of low-rank matrices via Łojasiewicz inequality.
    SIAM Journal on Optimization, 2015. arXiv:1402.5284 [math.OC]

Submitted articles

  1. Markus Bachmayr and Reinhold Schneider.
    Iterative methods based on soft thresholding of hierarchical tensors.
    2015. arXiv:1501.07714 [math.NA]
  2. Markus Bachmayr, Reinhold Schneider, and André Uschmajew.
    Tensor networks and hierarchical tensors for the solution of high-dimensional partial differential equations.
    2015. TU Preprint
  3. Martin Eigel, Max Pfeffer, and Reinhold Schneider.
    Adaptive stochastic Galerkin FEM with hierarchical tensor representations.
    2015. TU Preprint
  4. Holger Rauhut, Reinhold Schneider, and Željka Stojanac.
    Low rank tensor recovery via iterative hard thresholding.
    2016. arXiv:1602.05217 [cs.IT]