vortex_torch.abs.tensor

Functions

as_vtensor(x[, _format])

Wrap an input as vTensor without copying storage.

Classes

FORMAT(*values)

Tensor storage/layout format.

vTensor(data[, _format])

Tensor subclass with a _format metadata field.

class vortex_torch.abs.tensor.FORMAT(*values)[source]

Bases: Enum

Tensor storage/layout format.

BATCHED

Standard dense batched tensors (e.g., [B, N, D]).

RAGGED

Ragged tensors with variable-length sequences or elements per batch.

PAGED

Paged tensors used for large or streaming data split into pages/chunks.

BATCHED = 0
RAGGED = 1
PAGED = 2
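
The enum can be sketched with the standard library to show how the documented values line up; this is a minimal stand-in consistent with the values listed above, not the library's actual source.

```python
from enum import Enum

class FORMAT(Enum):
    """Tensor storage/layout format (sketch matching the documented values)."""
    BATCHED = 0  # standard dense batched tensors, e.g. [B, N, D]
    RAGGED = 1   # variable-length sequences/elements per batch
    PAGED = 2    # large or streaming data split into pages/chunks
```

Member names and integer values round-trip the usual `Enum` way, e.g. `FORMAT(1)` is `FORMAT.RAGGED` and `FORMAT.PAGED.value` is `2`.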
class vortex_torch.abs.tensor.vTensor(data, _format=FORMAT.BATCHED, **kwargs)[source]

Bases: Tensor

Tensor subclass with a _format metadata field.

Rules:

  • Torch ops do NOT change _format; it must be consistent across all vTensors in the op.

  • vTensor CANNOT participate in ops with plain torch.Tensors (a RuntimeError is raised).

  • vTensor CAN participate in ops with Python scalars (int/float/bool).

Parameters:

_format (FORMAT)
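
The three rules above can be illustrated with a minimal pure-Python stand-in that needs no torch install; `VTensorLike` and its `__add__` are hypothetical names for illustration only, not the library's API.

```python
from enum import Enum

class FORMAT(Enum):
    BATCHED = 0
    RAGGED = 1
    PAGED = 2

class VTensorLike:
    """Minimal stand-in mimicking the documented vTensor op rules."""

    def __init__(self, data, _format=FORMAT.BATCHED):
        self.data = data
        self._format = _format

    def __add__(self, other):
        if isinstance(other, VTensorLike):
            # Rule 1: _format must agree across operands and is preserved.
            if other._format is not self._format:
                raise RuntimeError("inconsistent _format across operands")
            return VTensorLike(self.data + other.data, self._format)
        if isinstance(other, (int, float, bool)):
            # Rule 3: Python scalars are allowed; _format is unchanged.
            return VTensorLike(self.data + other, self._format)
        # Rule 2: plain tensors (or anything else) are rejected.
        raise RuntimeError("vTensor cannot mix with plain tensors")
```

For example, `VTensorLike(1.0) + 2` succeeds and keeps `FORMAT.BATCHED`, while adding a non-scalar raises `RuntimeError`, matching the rules stated above.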

vortex_torch.abs.tensor.as_vtensor(x, _format=FORMAT.BATCHED)[source]

Wrap an input as vTensor without copying storage.

If x is already a vTensor, this returns the same object (or an equivalent wrapper) after updating its format to _format when needed.

Parameters:
  • x (torch.Tensor | Any) – Input to wrap. Typically a torch.Tensor. If a vTensor is passed, it will be returned (format may be updated).

  • _format (FORMAT, optional) – Desired tensor storage/layout format. Defaults to FORMAT.BATCHED.

Returns:

A vTensor that references the same underlying storage as x (no data copy). Device, dtype, and shape are preserved unless the target _format requires metadata-only adjustments.

Return type:

vTensor

Example

>>> import torch
>>> from vortex_torch.abs.tensor import as_vtensor, FORMAT
>>> t = torch.randn(2, 4, 8)
>>> vt = as_vtensor(t)  # FORMAT.BATCHED by default
>>> vt_ragged = as_vtensor(vt, _format=FORMAT.RAGGED)