API reference
Unified backend interface (tensorly)
There are several libraries for multi-dimensional array computation, including NumPy, PyTorch, TensorFlow, JAX, CuPy and Paddle. They all have strengths and weaknesses, e.g. some are better on CPU, others on GPU, etc. Therefore, TensorLy lets you use our algorithms (and any code you write using the library) with any of these libraries.
However, while they all loosely follow the API introduced and popularized by NumPy, there are differences. To make switching from one backend to another completely transparent, TensorLy provides a thin wrapper around these libraries.
So instead of using PyTorch or NumPy functions directly (torch.tensor or numpy.array, for instance), you should only use functions through the backend (tensorly.tensor in this case).
Setting the backend
You can simply call set_backend('pytorch')
to set the backend to PyTorch, and similarly for NumPy, JAX, etc.
You can also use the context manager backend_context
if you want to execute a block of code with a different backend.
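For instance, a minimal sketch of both approaches (assuming PyTorch is installed):

```python
import tensorly as tl

# Make PyTorch the global backend
tl.set_backend('pytorch')

# Run a single block of code with NumPy instead
with tl.backend_context('numpy'):
    tensor = tl.tensor([[1.0, 2.0], [3.0, 4.0]])  # a NumPy-backed tensor

print(tl.get_backend())  # back to 'pytorch' outside the context
```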
The relevant functions are:
- set_backend: Changes the backend to the specified one
- get_backend: Returns the name (str) of the currently used backend
- backend_context: Context manager to set the backend for TensorLy
- use_dynamic_dispatch: Dispatches all backend functions dynamically, to enable changing the backend during runtime
- use_static_dispatch: Switches to static dispatching: backend functions will no longer be dynamically dispatched
Context of a tensor
In TensorLy, we provide convenient functions to manipulate the backend-specific information attached to a tensor (that tensor's context), including its dtype (e.g. float32, float64), its device (e.g. CPU or GPU) where applicable, etc. We also provide functions to check whether a tensor belongs to the current backend, to convert it to NumPy, etc.
- eps: Returns the machine epsilon for a given floating point dtype
- finfo: Machine limits for floating point types
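For example, a small sketch of manipulating a tensor's context using tensorly.context, tensorly.tensor and tensorly.to_numpy:

```python
import tensorly as tl

tensor = tl.tensor([[1.0, 2.0], [3.0, 4.0]])

# The context holds backend-specific information (dtype, device, ...)
context = tl.context(tensor)

# Create a new tensor sharing the same context as the original
new_tensor = tl.tensor([[0.0, 1.0], [1.0, 0.0]], **context)

# Convert back to a NumPy array
array = tl.to_numpy(tensor)
```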
Index assignment (“NumPy style”)
While some backends (e.g. NumPy) let you directly combine indexing and assignment, not all backends support this. Instead of tensor[indices] = values, you should use tensor = tensorly.index_update(tensor, tensorly.index[indices], values).
- index_update: Updates the value of tensors in the specified indices
- index: Convenience class used as an array, to be used with index_update
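For example:

```python
import tensorly as tl

tensor = tl.zeros((3, 3))

# Backend-agnostic equivalent of tensor[1, :] = 2
tensor = tl.index_update(tensor, tl.index[1, :], 2)
```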
Available backend functions
For each backend, TensorLy provides the following uniform functions:
Array creation
- ones: Return a new array of given shape and type, filled with ones.
- zeros: Return a new array of given shape and type, filled with zeros.
- zeros_like: Return an array of zeros with the same shape and type as a given array.
- eye: Return a 2-D array with ones on the diagonal and zeros elsewhere.
- diag: Extract a diagonal or construct a diagonal array.
- check_random_state: Returns a valid RandomState
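For example:

```python
import tensorly as tl

identity = tl.eye(3)          # 3x3 identity matrix
ones = tl.ones((2, 3))        # 2x3 tensor of ones
zeros = tl.zeros_like(ones)   # zeros with the same shape and dtype as `ones`
```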
Array manipulation
- shape: Return the shape of an array.
- copy: Return an array copy of the given object.
- concatenate: Join a sequence of arrays along an existing axis.
- conj: Return the complex conjugate, element-wise.
- reshape: Gives a new shape to an array without changing its data.
- transpose: Returns an array with axes transposed.
- moveaxis: Move axes of an array to new positions.
- arange: Return evenly spaced values within a given interval.
- where: Return elements chosen from x or y depending on condition.
- max: Return the maximum of an array or maximum along an axis.
- min: Return the minimum of an array or minimum along an axis.
- argmax: Returns the indices of the maximum values along an axis.
- argmin: Returns the indices of the minimum values along an axis.
- all: Test whether all array elements along a given axis evaluate to True.
- mean: Compute the arithmetic mean along the specified axis.
- sum: Sum of array elements over a given axis.
- prod: Return the product of array elements over a given axis.
- sign: Returns an element-wise indication of the sign of a number.
- abs: Calculate the absolute value element-wise.
- sqrt: Return the non-negative square-root of an array, element-wise.
- norm: Computes the l-order norm of a tensor.
- stack: Join a sequence of arrays along a new axis.
- sort: Return a sorted copy of an array.
Algebraic operations
- dot: Dot product of two arrays.
- matmul: Matrix product of two arrays.
- tensordot: Compute tensor dot product along specified axes.
- kron: Kronecker product of two arrays.
- solve: Solve a linear matrix equation, or system of linear scalar equations.
- qr: Compute the qr factorization of a matrix.
Core functions (tensorly.base)
- unfold: Returns the unfolding of a tensor along the given mode, with modes starting at 0
- fold: Refolds a mode-wise unfolding into a tensor of the given shape
- tensor_to_vec: Vectorises a tensor
- vec_to_tensor: Folds a vectorised tensor back into a tensor of the given shape
- partial_unfold: Partially unfolds a tensor while ignoring the specified number of dimensions at the beginning and the end
- partial_fold: Re-folds a partially unfolded tensor
- partial_tensor_to_vec: Partially vectorises a tensor
- partial_vec_to_tensor: Refolds a partially vectorised tensor into a full one
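For instance, a small sketch of unfolding and refolding a tensor:

```python
import numpy as np
import tensorly as tl

tensor = tl.tensor(np.arange(24).reshape((3, 4, 2)))

# Unfold along mode 0: the result has shape (3, 4 * 2)
unfolded = tl.unfold(tensor, mode=0)

# Fold back to the original shape
folded = tl.fold(unfolded, mode=0, shape=(3, 4, 2))
```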
Tensors in CP form (tensorly.cp_tensor)
Core operations on CP tensors.
- cp_to_tensor: Turns the Khatri-Rao product of matrices into a full tensor
- cp_to_unfolded: Turns the Khatri-Rao product of matrices into an unfolded tensor
- cp_to_vec: Turns the Khatri-Rao product of matrices into a vector
- cp_normalize: Returns a cp_tensor with factors normalised to unit length
- cp_norm: Returns the l2 norm of a CP tensor
- cp_mode_dot: n-mode product of a CP tensor and a matrix or vector at the specified mode
- cp_permute_factors: Compares the factors of a reference CP tensor with those of another tensor (or list of tensors) in order to match component order
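For example, a sketch combining these with tensorly.random (documented under Sampling tensors below):

```python
import tensorly as tl
from tensorly import random

# A random rank-3 CP tensor (weights and factors) of shape (4, 5, 6)
cp = random.random_cp((4, 5, 6), rank=3)

# Reconstruct the dense tensor it represents
full = tl.cp_to_tensor(cp)
```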
Tensors in Tucker form (tensorly.tucker_tensor)
Core operations on Tucker tensors.
- tucker_to_tensor: Converts the Tucker tensor into a full tensor
- tucker_to_unfolded: Converts the Tucker decomposition into an unfolded tensor (i.e. a matrix)
- tucker_to_vec: Converts a Tucker decomposition into a vectorised tensor
- tucker_mode_dot: n-mode product of a Tucker tensor and a matrix or vector at the specified mode
Tensors in TT (MPS) form (tensorly.tt_tensor)
Core operations on tensors in the Tensor-Train (TT) format, also known as Matrix-Product-State (MPS).
- tt_to_tensor: Returns the full tensor whose TT decomposition is given by 'factors'
- tt_to_unfolded: Returns the unfolding matrix of a tensor given in TT (or Tensor-Train) format
- tt_to_vec: Returns the tensor defined by its TT format ('factors') in vectorised format
- pad_tt_rank: Pads the factors of a Tensor-Train so as to increase its rank without changing its reconstruction
Matrices in TT form (tensorly.tt_matrix)
Module for matrices in the TT format
- tt_matrix_to_tensor: Returns the full tensor whose TT-Matrix decomposition is given by 'factors'
- tt_matrix_to_unfolded: Returns the unfolding matrix of a tensor given in TT-Matrix format
- tt_matrix_to_vec: Returns the tensor defined by its TT-Matrix format ('factors') in vectorised format
Tensors in PARAFAC2 form (tensorly.parafac2_tensor)
Core operations on PARAFAC2 tensors, whose second mode evolves over their first.
- parafac2_to_tensor: Construct a full tensor from a PARAFAC2 decomposition.
- parafac2_to_slice: Generate a single slice along the first mode from the PARAFAC2 tensor.
- parafac2_to_slices: Generate all slices along the first mode from a PARAFAC2 tensor.
- parafac2_to_unfolded: Construct an unfolded tensor from a PARAFAC2 decomposition.
- parafac2_to_vec: Construct a vectorized tensor from a PARAFAC2 decomposition.
Tensor Algebra (tensorly.tenalg)
Available functions
TensorLy provides all the tensor algebra functions you need: the tensorly.tenalg module contains utilities for tensor algebra operations such as the Khatri-Rao and Kronecker products, the n-mode product, etc.
A unified SVD interface:
- svd_interface: Dispatching function to various SVD algorithms, alongside additional properties such as resolving sign invariance, imputation, and non-negativity.
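A minimal sketch (svd_interface also accepts a method argument to select the SVD algorithm; the call below assumes the default):

```python
import numpy as np
import tensorly as tl
from tensorly.tenalg import svd_interface

matrix = tl.tensor(np.random.random_sample((8, 5)))

# Truncated SVD keeping the top 3 singular triplets
U, S, V = svd_interface(matrix, n_eigenvecs=3)
```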
Other tensor algebraic functionalities:
- khatri_rao: Khatri-Rao product of a list of matrices
- unfolding_dot_khatri_rao: mode-n unfolding times Khatri-Rao product of factors
- kronecker: Kronecker product of a list of matrices
- mode_dot: n-mode product of a tensor and a matrix or vector at the specified mode
- multi_mode_dot: n-mode product of a tensor and several matrices or vectors over several modes
- inner: Generalised inner products between tensors
- outer: Returns a generalized outer product of the two tensors
- batched_outer: Returns a generalized outer product of the two tensors, batched over their first mode
- tensordot: Batched tensor contraction between two tensors on specified modes
- higher_order_moment: Computes the Higher-Order Moment
- soft_thresholding: Soft-thresholding operator
- svd_thresholding: Singular value thresholding operator
- procrustes: Procrustes operator
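For instance, a small sketch of the n-mode product:

```python
import numpy as np
import tensorly as tl
from tensorly.tenalg import mode_dot

tensor = tl.tensor(np.random.random_sample((3, 4, 5)))
matrix = tl.tensor(np.random.random_sample((2, 4)))

# Contract `matrix` with mode 1 of `tensor`: the result has shape (3, 2, 5)
result = mode_dot(tensor, matrix, mode=1)
```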
Tensor Algebra Backend
Advanced users may want to dispatch all computation to einsum (when available) instead of using our manually optimized functions. TensorLy makes this easy through the tensor algebra backend. If you have your own library implementing tensor algebraic functions, you can even plug it in this way!
- set_backend: Changes the backend to the specified one
- get_backend: Returns the name (str) of the currently used backend
- backend_context: Context manager to set the backend for TensorLy
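A minimal sketch, assuming the default tensor algebra backend is named 'core':

```python
from tensorly import tenalg

# Dispatch all tensor algebra functions to einsum
tenalg.set_backend('einsum')

# ... TensorLy code runs unchanged ...

# Restore the manually optimized implementations
# ('core' is assumed to be the default tenalg backend name)
tenalg.set_backend('core')
```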
Tensor Decomposition (tensorly.decomposition)
The tensorly.decomposition
module includes utilities for performing
tensor decomposition such as CANDECOMP-PARAFAC and Tucker.
Classes
Note that these are currently experimental and may change in the future.
- CP: CANDECOMP-PARAFAC decomposition via Alternating Least Squares (ALS)
- RandomizedCP: Randomised CP decomposition via sampled ALS
- CPPower: CP decomposition via Robust Tensor Power Iteration
- CP_NN: Non-negative CANDECOMP-PARAFAC decomposition via Alternating Least Squares (ALS)
- Tucker: Tucker decomposition via Higher Order Orthogonal Iteration (HOI)
- TensorTrainMatrix: Decompose a tensor into a matrix in TT-format
- Parafac2: PARAFAC2 decomposition of a third order tensor via alternating least squares (ALS)
- SymmetricCP: Symmetric CP decomposition via Robust Symmetric Tensor Power Iteration
- ConstrainedCP: CANDECOMP-PARAFAC decomposition via alternating optimization with the alternating direction method of multipliers (AO-ADMM)
- TensorRing: Tensor Ring decomposition via recursive SVD
- TensorTrain: TT decomposition via recursive SVD
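For example, a minimal sketch using the scikit-learn-like class API:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import CP

tensor = tl.tensor(np.random.random_sample((10, 10, 10)))

# Instantiate, then fit_transform to obtain the decomposition
cp = CP(rank=3)
cp_tensor = cp.fit_transform(tensor)
```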
Functions
- parafac: CANDECOMP/PARAFAC decomposition via alternating least squares (ALS); computes a decomposition of the given rank such that tensor = [|weights; factors[0], ..., factors[-1]|]
- power_iteration: A single Robust Tensor Power Iteration
- parafac_power_iteration: CP decomposition via Robust Tensor Power Iteration
- symmetric_power_iteration: A single Robust Symmetric Tensor Power Iteration
- symmetric_parafac_power_iteration: Symmetric CP decomposition via Robust Symmetric Tensor Power Iteration
- non_negative_parafac: Non-negative CP decomposition
- non_negative_parafac_hals: Non-negative CP decomposition via HALS
- sample_khatri_rao: Random subsample of the Khatri-Rao product of the given list of matrices
- randomised_parafac: Randomised CP decomposition via sampled ALS
- tucker: Tucker decomposition via Higher Order Orthogonal Iteration (HOI)
- partial_tucker: Partial Tucker decomposition via Higher Order Orthogonal Iteration (HOI)
- non_negative_tucker: Non-negative Tucker decomposition
- non_negative_tucker_hals: Non-negative Tucker decomposition with HALS
- robust_pca: Robust Tensor PCA via ALM with support for missing values
- tensor_train: TT decomposition via recursive SVD
- tensor_train_matrix: Decompose a tensor into a matrix in TT-format
- tensor_ring: Tensor Ring decomposition via recursive SVD
- parafac2: PARAFAC2 decomposition of a third order tensor via alternating least squares (ALS)
- constrained_parafac: CANDECOMP/PARAFAC decomposition via alternating optimization with the alternating direction method of multipliers (AO-ADMM)
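For example, a minimal sketch using the functional API:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

tensor = tl.tensor(np.random.random_sample((10, 10, 10)))

# Rank-3 CP decomposition: returns weights and a list of factor matrices
weights, factors = parafac(tensor, rank=3)

# Rebuild the full tensor from the decomposition
reconstruction = tl.cp_to_tensor((weights, factors))
```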
Preprocessing (tensorly.preprocessing)
- svd_compress_tensor_slices: Compress data with the SVD for running PARAFAC2
- svd_decompress_parafac2_tensor: Decompress the factors obtained by fitting PARAFAC2 on SVD-compressed data
Tensor Regression (tensorly.regression)
The tensorly.regression
module includes classes for performing Tensor
Regression.
- TuckerRegressor: Tucker tensor regression
- CPRegressor: CP tensor regression
Solvers (tensorly.solvers)
TensorLy provides efficient solvers for nonnegative least squares problems, which are crucial for nonnegative tensor decomposition, as well as a generic ADMM solver useful for constrained decompositions. Several proximal (projection) operators are located in tenalg.
- hals_nnls: Non Negative Least Squares (NNLS)
- fista: Fast Iterative Shrinkage Thresholding Algorithm (FISTA)
- active_set_nnls: Active set algorithm for a non-negative least squares solution
- admm: Alternating direction method of multipliers (ADMM) algorithm to minimize a quadratic function under convex constraints
Performance measures (tensorly.metrics)
The tensorly.metrics
module includes utilities to measure performance
(e.g. regression error).
- MSE: Returns the mean squared error between the two predictions
- RMSE: Returns the root mean squared error between the two predictions (the square root is applied to the mean squared error)
- congruence_coefficient: Compute the optimal mean (Tucker) congruence coefficient between the columns of two matrices
- correlation_index: CorrIndex implementation to assess tensor decomposition outputs
Sampling tensors (tensorly.random)
- random_cp: Generates a random CP tensor
- random_tucker: Generates a random Tucker tensor
- random_tt: Generates a random TT/MPS tensor
- random_tt_matrix: Generates a random tensor in TT-Matrix format
- random_parafac2: Generates a random PARAFAC2 tensor
Datasets (tensorly.datasets)
The tensorly.datasets
module includes utilities to load datasets and
create synthetic data, e.g. for testing purposes.
- gen_image: Generates an image for regression testing
- load_IL2data: Loads the tensor of IL-2 mutein treatment responses
- load_covid19_serology: Loads an example dataset of COVID-19 systems serology
- load_indian_pines: Loads the Indian Pines hyperspectral data from the tensorly datasets and returns it as a bunch
- load_kinetic: Loads the kinetic fluorescence dataset (X60t) as a tensorly tensor
Plugin functionalities (tensorly.plugins)
Automatically cache the optimal contraction path when using the einsum tensor algebra backend:
- use_opt_einsum: Plugin to use opt-einsum to precompute (and cache) a better contraction path
- use_default_einsum: Reverts to the original einsum for the current backend
- use_cuquantum: Plugin to use cuQuantum to precompute (and cache) a better contraction path
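A minimal sketch, assuming opt-einsum is installed:

```python
from tensorly import tenalg
from tensorly.plugins import use_opt_einsum

# Use the einsum tensor algebra backend...
tenalg.set_backend('einsum')

# ...and let opt-einsum precompute and cache contraction paths
# ('auto' is the opt-einsum path optimization strategy)
use_opt_einsum('auto')
```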
Experimental features (tensorly.contrib)
A module for experimental functions, which makes it possible to quickly add and test new functions for which the API is not yet fixed.
- tensor_train_cross: TT (tensor-train) decomposition via cross-approximation (TTcross) [1]
- tensor_train_OI: Perform tensor-train orthogonal iteration (TTOI) for tensor-train decomposition
Sparse tensors
The tensorly.contrib.sparse
module enables tensor operations on sparse tensors.
Currently, the following decomposition methods are supported (for the NumPy backend, using the PyData Sparse library):
- tucker: Tucker decomposition via Higher Order Orthogonal Iteration (HOI)
- partial_tucker: Partial Tucker decomposition via Higher Order Orthogonal Iteration (HOI)
- non_negative_tucker: Non-negative Tucker decomposition
- robust_pca: Robust Tensor PCA via ALM with support for missing values
- parafac: CANDECOMP/PARAFAC decomposition via alternating least squares (ALS)
- non_negative_parafac: Non-negative CP decomposition
- symmetric_parafac_power_iteration: Symmetric CP decomposition via Robust Symmetric Tensor Power Iteration