# API Status & Roadmap

Current capabilities and planned features for `ttglow`.

**Overall progress:** ~65% complete. Status key: Implemented · Partial · Planned.

## Current Capabilities

### `tensortrain.py`

| Function | Description | Status |
| --- | --- | --- |
| `TensorTrain` | Core class with dims, ranks, cores | Done |
| `.to_tensor()` | Convert to dense tensor | Done |
| `.random()` | Random TT with specified ranks | Done |
| `dot()` | Inner product of two TTs | Done |
| `hadamard()` | Elementwise (Hadamard) product | Done |
| `add()` | TT addition (rank grows: r1 + r2) | Done |
| `scale()` | Scalar multiplication | Done |
| `+`, `-`, `*`, `@` | Operator overloads | Done |
| `.shape` | Logical tensor shape `(n_1, ..., n_d)` | Done |
| `.ttshape` | TT core shapes `[(r_0, n_1, r_1), ...]` | Done |
| `.numel` | Logical element count | Done |
| `.ttnumel` | Actual storage size | Done |
| `.ndim` | Number of dimensions | Done |
| `.reshape()` | Reshape via merge/split of adjacent modes | Done |
| `.merge_modes()` | Merge adjacent modes by core contraction | Done |
| `.split_mode()` | Split a mode into a target shape using SVD | Done |
| `.swap_adjacent()` | Swap adjacent modes i and i+1 | Done |
| `.transpose()` | Swap any two modes | Done |
| `.unsqueeze()` | Insert a singleton dimension | Done |
| `.squeeze()` | Remove singleton dimension(s) | Done |
| `.flatten()` | Flatten a range of dimensions | Done |
| `cat()` | Concatenate tensors along an existing dimension | Done |
| `stack()` | Stack tensors along a new dimension | Done |
| `.to()` | Device/dtype conversion | Done |
| `.clone()` | Deep copy | Done |
| `.detach()` | Detach from the computation graph | Done |
| `.from_dense()` | Convert a dense tensor to TT (TT-SVD) | Done |
| `tt[i, j, k]` | Element access | Done |
| `tt[:, 2, 1:3]` | Slicing with ranges and index contraction | Done |
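
A quick tour of the surface above. This is a minimal sketch only: the import path, the `TensorTrain.random(...)` keyword names, and the mapping of `*` to the Hadamard product are assumptions rather than confirmed API; only the method and function names come from the table.

```python
import torch
from ttglow.tensortrain import TensorTrain, dot  # hypothetical import path

# Two random 4-way TTs; the dims/ranks keyword names are assumptions.
x = TensorTrain.random(dims=(8, 8, 8, 8), ranks=(1, 4, 4, 4, 1))
y = TensorTrain.random(dims=(8, 8, 8, 8), ranks=(1, 4, 4, 4, 1))

print(x.shape)             # logical shape: (8, 8, 8, 8)
print(x.ttshape)           # core shapes: [(1, 8, 4), (4, 8, 4), (4, 8, 4), (4, 8, 1)]
print(x.numel, x.ttnumel)  # 4096 logical entries vs. actual TT storage

z = x + y      # TT addition: ranks grow to r1 + r2
h = x * y      # assuming * maps to hadamard(); ranks multiply
s = dot(x, y)  # inner product <x, y>, a scalar

# Round-trip through a dense tensor (only viable for small shapes).
dense = x.to_tensor()
x2 = TensorTrain.from_dense(dense)  # TT-SVD decomposition

# Indexing: integer indices contract cores, slices keep modes.
val = x[0, 1, 2, 3]    # single element
sub = x[:, 2, 1:3, 0]  # mixed slicing / fiber extraction
```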
### `ttmatrix.py`

| Function | Description | Status |
| --- | --- | --- |
| `TTMatrix` | TT-matrix / MPO data structure | Done |
| `.to_tensor()` | Convert to dense matrix | Done |
| `.random()` | Random TTMatrix | Done |
| `.identity()` | Identity operator | Done |
| `matmul()` | Matrix-matrix product | Done |
| `apply()` | Apply TTMatrix to TensorTrain | Done |
| `add()` | Matrix addition | Done |
| `scale()` | Scalar multiplication | Done |
| `kron()` | Kronecker product | Done |
| `kronadd()` | Add operators on different sites: (H⊗I) + (I⊗H) | Done |
| `transpose()` | Matrix transpose | Done |
| `adjoint()` | Hermitian conjugate | Done |
| `trace()` | Matrix trace | Done |
| `diag()` | Extract diagonal as TensorTrain | Done |
| `norm()` | Frobenius norm | Done |
| `round()` | Rank truncation via SVD | Done |
| `LocatedOp` / `.on()` | Apply operator to specific sites | Done |
| `Trunc` | Truncation wrapper for rank control | Done |
| `Circuit` | Composable circuit builder | Done |
| `.shape` | Logical matrix shape `(total_rows, total_cols)` | Done |
| `.ttshape` | TT core shapes `[(r_0, n_row, n_col, r_1), ...]` | Done |
| `.numel` | Logical element count | Done |
| `.ttnumel` | Actual storage size | Done |
| `.ndim` | Number of TT sites | Done |
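
The operator side in use. Again a sketch: import paths and keyword names are assumptions, and the table does not say which of these are methods versus free functions (method style is assumed for `trace`, `diag`, and `round` below).

```python
from ttglow.tensortrain import TensorTrain
from ttglow.ttmatrix import TTMatrix, apply, kronadd  # hypothetical import path

# An operator and a vector on three 2-dimensional sites (an 8x8 matrix
# overall); the row_dims/col_dims/ranks keyword names are assumptions.
A = TTMatrix.random(row_dims=(2, 2, 2), col_dims=(2, 2, 2), ranks=(1, 3, 3, 1))
x = TensorTrain.random(dims=(2, 2, 2), ranks=(1, 2, 2, 1))

y = apply(A, x)  # y = A x, still in TT form
t = A.trace()    # scalar trace
d = A.diag()     # diagonal extracted as a TensorTrain

# Sum of a one-site operator acting on each of two sites: (H ⊗ I) + (I ⊗ H).
H = TTMatrix.random(row_dims=(2,), col_dims=(2,), ranks=(1, 1))
H_total = kronadd(H, H)

# Products and additions grow ranks; truncate back via SVD rounding.
A_small = A.round(max_rank=8)  # keyword name is an assumption
```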
### `linalg.py`

| Function | Description | Status |
| --- | --- | --- |
| `qr()` | QR decomposition (left/right sweep) | Done |
| `svd()` | SVD with rank truncation | Done |
| `norm()` | TT norm | Done |
| `sum()` | Sum over dimension(s) | Done |
| `mean()` | Mean over dimension(s) | Done |
| `contract()` | Contract two TTs | Done |
| `tensordot()` | General tensor contraction | Done |
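
The reductions in context. The module path, the `dim`/`max_rank` keyword names, and the return conventions are assumptions based on the descriptions above.

```python
from ttglow import linalg  # hypothetical import path
from ttglow.tensortrain import TensorTrain

x = TensorTrain.random(dims=(6, 6, 6), ranks=(1, 3, 3, 1))

nrm = linalg.norm(x)            # TT norm
s = linalg.sum(x, dim=1)        # sum out mode 1 -> a 2-way TT
m = linalg.mean(x, dim=(0, 2))  # mean over several modes

# General contraction between two TTs; the dims convention is assumed
# to mirror torch.tensordot.
y = TensorTrain.random(dims=(6, 6, 6), ranks=(1, 2, 2, 1))
c = linalg.tensordot(x, y, dims=([2], [0]))

# Orthogonalize / truncate via sweeping QR and SVD; both are assumed
# to return a new TT.
xq = linalg.qr(x)               # left/right orthogonalization sweep
xt = linalg.svd(x, max_rank=2)  # rank truncation
```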
### `ttcross.py`

| Function | Description | Status |
| --- | --- | --- |
| `tt_cross()` | TT-Cross interpolation from a function or dense tensor | Done |
| `maxvol()` | Maximal-volume submatrix selection | Done |
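
TT-cross builds a TT approximation from black-box entry evaluations, so the dense tensor is never materialized. A sketch: the `tt_cross(f, dims, max_rank)` signature and the batched-index calling convention for `f` are assumptions.

```python
import torch
from ttglow.ttcross import tt_cross  # hypothetical import path

def f(idx):
    # Assumed convention: idx is a batch of integer multi-indices,
    # one grid point per row; returns one value per row.
    i = idx.to(torch.float64)
    return torch.exp(-(i ** 2).sum(dim=-1) / 100.0)

# 16^4 = 65,536 logical entries approximated from a few sampled fibers.
approx = tt_cross(f, dims=(16, 16, 16, 16), max_rank=8)
```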
### `tensor_complete.py`

| Function | Description | Status |
| --- | --- | --- |
| `silrtc_tt()` | Tensor completion via SiLRTC-TT (nuclear-norm minimization on TT-style matricizations) | Done |
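
Completion from a partially observed dense tensor. The `(observed, mask)` calling convention is an assumption; only the function name and method come from the table.

```python
import torch
from ttglow.tensor_complete import silrtc_tt  # hypothetical import path

shape = (10, 10, 10)
full = torch.randn(shape)
mask = torch.rand(shape) < 0.3  # True where an entry is observed
observed = full * mask

# Nuclear-norm minimization on the TT-style matricizations (SiLRTC-TT);
# argument names and order are assumptions.
completed = silrtc_tt(observed, mask)
```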
### `riemannian_complete.py`

| Function | Description | Status |
| --- | --- | --- |
| `rgrad_tt()` | Tensor completion via Riemannian optimization on the TT manifold (ALS & gradient with line search); scales polynomially as O(n·d·r²) | Done |
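
The same completion task posed on the fixed-rank TT manifold. The `ranks` keyword and the argument order are assumptions.

```python
import torch
from ttglow.riemannian_complete import rgrad_tt  # hypothetical import path

shape = (12, 12, 12)
full = torch.randn(shape)
mask = torch.rand(shape) < 0.2
observed = full * mask

# Riemannian gradient / ALS with line search on the rank-(1,4,4,1) TT
# manifold; the keyword names are assumptions.
completed = rgrad_tt(observed, mask, ranks=(1, 4, 4, 1))
```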

## 🗺 Roadmap

### Tier 0: PyTorch-like Feel (In Progress)

Make `ttglow` feel like a first-class PyTorch object with familiar APIs.

| Function | Description | Status |
| --- | --- | --- |
| `shape`, `ndim`, `numel` | Logical tensor metadata | Done |
| `ttshape`, `ttnumel` | TT storage metadata | Done |
| `reshape` / `view` | Merge/split adjacent modes | Done |
| `merge_modes` | Merge adjacent modes into one | Done |
| `split_mode` | Split one mode into a target shape (SVD-based) | Done |
| `swap_adjacent` | Swap adjacent modes via SVD | Done |
| `transpose` | Swap any two modes (via adjacent swaps) | Done |
| `permute` | Arbitrary mode reordering | Planned |
| `unsqueeze` | Insert a singleton dimension | Done |
| `squeeze` | Remove singleton dimension(s) | Done |
| `flatten` | Merge a range of modes into one | Done |
| `cat` / `stack` | Concatenate along a mode / stack as a new mode | Done |
| `from_dense()` | Convert a dense tensor to TT (TT-SVD) | Done |
| `to(device/dtype)` | Device/dtype conversion | Done |
| `clone`, `detach` | Copy and gradient control | Done |
### Tier 1: Reductions & Contractions (Partial)

Enable practical computation with daily-use operations.

| Function | Description | Status |
| --- | --- | --- |
| `sum(dim=...)` | Sum reduction over dimensions | Done |
| `mean(dim=...)` | Mean via sum + scaling | Done |
| `tensordot` | General tensor contraction | Done |
| `einsum` | Restricted einsum patterns | Planned |
| `outer` | Outer product | Planned |
### Tier 2: Elementwise Nonlinearities (Planned)

Support ML workflows with elementwise operations (exact or approximate).

| Function | Description | Status |
| --- | --- | --- |
| `abs`, `neg` | Absolute value, negation | Planned |
| `sqrt`, `rsqrt` | Square-root operations | Planned |
| `exp`, `log` | Exponential and logarithm | Planned |
| `tanh`, `sigmoid` | Activation functions | Planned |
| `relu` | ReLU activation | Planned |
| `where` | Conditional selection | Planned |

> **Note:** Elementwise operations may use TT-cross, sampling, or polynomial approximations with explicit rank control. One possible route is sketched below.
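
One way such operations could be realized with today's primitives: wrap entrywise evaluation of an existing TT in the nonlinearity and hand it to `tt_cross`. A sketch under the same signature assumptions as the earlier examples; batched integer indexing `x[idx]` is also an assumption (the tables only document scalar element access).

```python
import torch
from ttglow.tensortrain import TensorTrain
from ttglow.ttcross import tt_cross  # hypothetical import paths

def tt_elementwise(x, fn, max_rank=16):
    """Approximate fn applied entrywise to the TT `x` via TT-cross (sketch)."""
    def f(idx):
        return fn(x[idx])  # assumes batched multi-index access
    return tt_cross(f, dims=x.shape, max_rank=max_rank)

x = TensorTrain.random(dims=(8, 8, 8), ranks=(1, 3, 3, 1))
y = tt_elementwise(x, torch.exp)  # rank-controlled approximation of exp(x)
```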

### Tier 3: Advanced Indexing (Planned)

Widen compatibility with existing PyTorch codebases.

| Function | Description | Status |
| --- | --- | --- |
| `split` / `chunk` | Split tensor along a dimension | Planned |
| `gather` | Gather along a dimension | Planned |
| slicing | Partial slicing / fiber extraction | Done |
| `logsumexp` | Log-sum-exp (approximate) | Planned |
| `softmax` | Softmax (built from `logsumexp`) | Planned |
### Tier 4: Advanced Linear Algebra (Planned)

Unlock iterative solvers and matrix-free operations for large-scale problems. A truncated-CG prototype built from existing primitives is sketched after the table.

| Function | Description | Status |
| --- | --- | --- |
| `matvec` / `matmul` | Matrix-free `A @ x` interface | Partial |
| `solve` (CG) | Conjugate-gradient solver | Planned |
| `solve` (GMRES) | GMRES iterative solver | Planned |
| `eigsh` | Eigenvalue solver (Lanczos/DMRG-style) | Planned |
| `linear_operator` | `LinearOperator`-style interface | Planned |
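
These solvers can already be prototyped from the Tier 0/1 primitives: a conjugate-gradient loop that keeps every iterate in TT form and rounds after each update. A sketch only, assuming `apply()`, `dot()`, a scalar-times-TT `*` overload, and `linalg.svd(t, max_rank=...)` as the rounding step (per its "SVD with rank truncation" description); `A` must be symmetric positive definite for CG to apply.

```python
from ttglow import linalg
from ttglow.tensortrain import TensorTrain, dot
from ttglow.ttmatrix import TTMatrix, apply  # hypothetical import paths

def tt_cg(A, b, max_rank=16, tol=1e-8, max_iter=50):
    """Solve A x = b with conjugate gradients in TT arithmetic (sketch)."""
    def trunc(t):
        # Ranks grow under every add/apply; round back after each update.
        return linalg.svd(t, max_rank=max_rank)

    x = 0.0 * b  # zero initial guess; scalar * TT overload assumed
    r, p = b.clone(), b.clone()
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = apply(A, p)
        alpha = rs / dot(p, Ap)
        x = trunc(x + alpha * p)
        r = trunc(r - alpha * Ap)
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:
            break
        p = trunc(r + (rs_new / rs) * p)
        rs = rs_new
    return x
```

Rounding after every update is the standard trade-off in TT Krylov methods: it caps memory at the chosen `max_rank` but perturbs the exact CG recurrence, so looser ranks generally mean faster convergence.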