Comparison to other libraries
A brief overview of other libraries that support the Tensor Train decomposition (also known as Matrix Product State in the physics community).
| Library | Language | GPU | autodiff | Riemannian | DMRG / AMen / TT-cross |
|---|---|---|---|---|---|
| t3f | Python/TensorFlow | Yes | Yes | Yes | No |
| tntorch | Python/PyTorch | Yes | Yes | No | No |
| ttpy | Python | No | No | Yes | Yes |
| mpnum | Python | No | No | No | DMRG |
| scikit_tt | Python | No | No | No | No |
| mpys | Python | No | No | No | No |
| TT-Toolbox | Matlab | Partial | No | No | Yes |
| TENSORBOX | Matlab | Partial | No | ?? | ?? |
| Tensorlab | Matlab | Partial | No | ?? | ?? |
| ITensor | C++ | No | No | No | DMRG |
| libtt | C++ | No | No | No | TT-cross |
If you use Python, we would suggest t3f if you need extensive Riemannian optimization support, t3f or tntorch if you need GPU or autodiff support, and ttpy if you need advanced algorithms such as AMen.
The performance of these libraries is tricky to measure fairly, and in practice does not differ much between them, because they all rely on the same BLAS/MKL subroutines. However, a GPU can help a lot for operations that can be expressed as large matrix-by-matrix multiplications, e.g. computing the Gram matrix of a set of tensors. For more details on benchmarking t3f see Benchmark.
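To illustrate why such operations map well onto GPUs, here is a minimal NumPy sketch (not t3f code; the shapes and names are illustrative assumptions): computing all pairwise inner products of a batch of flattened tensors as one large matrix-by-matrix multiplication, instead of many small dot products.

```python
import numpy as np

# Illustrative setup: a batch of 8 tensors, each flattened into a row of X.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 1000))

# The Gram matrix G[i, j] = <x_i, x_j> is a single large matmul,
# which BLAS (and a GPU) can execute far faster than a loop of
# 8 * 8 individual dot products.
G = X @ X.T

# Reference: the same result computed entry by entry.
G_ref = np.array([[np.dot(xi, xj) for xj in X] for xi in X])
assert np.allclose(G, G_ref)
```

In the TT setting the tensors are never materialized as dense rows, but the dominant cost of the Gram-matrix computation can still be batched into large matrix products, which is where the GPU support of t3f and tntorch pays off.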