vortex_torch.abs.tensor¶

Functions

as_vtensor — Wrap an input as vTensor without copying storage.

Classes

FORMAT — Tensor storage/layout format.

vTensor — Tensor subclass with a _format metadata field.
- class vortex_torch.abs.tensor.FORMAT(*values)[source]¶
Bases: Enum
Tensor storage/layout format.
- BATCHED¶
Standard dense batched tensors (e.g., [B, N, D]).
- RAGGED¶
Ragged tensors with variable-length sequences or elements per batch.
- PAGED¶
Paged tensors used for large or streaming data split into pages/chunks.
- BATCHED = 0¶
- RAGGED = 1¶
- PAGED = 2¶
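The three members carry the integer values listed above. As a self-contained illustration (names and values taken from this page; the sketch uses the stdlib `enum` module and does not depend on vortex_torch), an equivalent enum looks like:

```python
from enum import Enum

class FORMAT(Enum):
    """Mirrors vortex_torch.abs.tensor.FORMAT (illustrative stand-in)."""
    BATCHED = 0  # standard dense batched tensors, e.g. [B, N, D]
    RAGGED = 1   # variable-length sequences/elements per batch
    PAGED = 2    # large or streaming data split into pages/chunks

# BATCHED is the default format used by vTensor and as_vtensor:
print(FORMAT.BATCHED.value)  # -> 0
```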
- class vortex_torch.abs.tensor.vTensor(data, _format=FORMAT.BATCHED, **kwargs)[source]¶
Bases: Tensor
Tensor subclass with a _format metadata field.
Rules:
- Torch ops do NOT change _format; it must be consistent across all vTensors in the op.
- vTensor CANNOT participate in ops with plain torch.Tensors (a RuntimeError is raised).
- vTensor CAN participate in ops with Python scalars (int/float/bool).
- Parameters:
_format (FORMAT)
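The three rules above can be sketched as a standalone operand check. The following is an illustrative stand-in only, not vortex_torch's implementation: `PlainTensorStandIn`, `VTensorStandIn`, and `check_op_operands` are hypothetical names, and plain Python classes replace torch tensors so the sketch runs without torch.

```python
from enum import Enum

class FORMAT(Enum):
    BATCHED = 0
    RAGGED = 1
    PAGED = 2

class PlainTensorStandIn:
    """Stand-in for a plain torch.Tensor (hypothetical)."""
    pass

class VTensorStandIn(PlainTensorStandIn):
    """Stand-in for vTensor: a tensor carrying a _format field (hypothetical)."""
    def __init__(self, fmt=FORMAT.BATCHED):
        self._format = fmt

def check_op_operands(*operands):
    """Validate an op's operands per the documented vTensor rules (sketch).

    - All vTensors must share one _format (ops never change it).
    - Mixing vTensors with plain tensors raises RuntimeError.
    - Python scalars (int/float/bool) are always allowed.
    Returns the common FORMAT, or None if no vTensor participates.
    """
    formats = set()
    for x in operands:
        if isinstance(x, VTensorStandIn):  # check subclass before base class
            formats.add(x._format)
        elif isinstance(x, PlainTensorStandIn):
            raise RuntimeError("vTensor cannot mix with plain torch.Tensor")
        elif isinstance(x, (int, float, bool)):
            continue  # scalars pass through
    if len(formats) > 1:
        raise RuntimeError("inconsistent _format across vTensors in this op")
    return formats.pop() if formats else None
```

For example, `check_op_operands(VTensorStandIn(), 2.0)` returns `FORMAT.BATCHED`, while mixing in a `PlainTensorStandIn` raises `RuntimeError`, matching the rules listed above.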
- vortex_torch.abs.tensor.as_vtensor(x, _format=FORMAT.BATCHED)[source]¶
Wrap an input as vTensor without copying storage.
If x is already a vTensor, this returns the same object (or an equivalent wrapper) after updating its format to _format when needed.
- Parameters:
x (torch.Tensor | Any) – Input to wrap. Typically a torch.Tensor. If a vTensor is passed, it will be returned (format may be updated).
_format (FORMAT, optional) – Desired tensor storage/layout format. Defaults to FORMAT.BATCHED.
- Returns:
A vTensor that references the same underlying storage as x (no data copy). Device, dtype, and shape are preserved; only metadata is adjusted when the target _format requires it.
- Return type:
vTensor
Example
>>> t = torch.randn(2, 4, 8)
>>> vt = as_vtensor(t)  # FORMAT.BATCHED by default
>>> vt_ragged = as_vtensor(vt, _format=FORMAT.RAGGED)