vortex_torch.abs¶
- class vortex_torch.abs.ContextBase[source]¶
Bases: ABC
Abstract base class for runtime contexts.
This class defines the minimal contract that all context implementations must follow. It exposes two primary public attributes:
- mode (str): The current operating mode, e.g. "profile" or "execute".
- _created (bool): Whether the context has been populated via create().
Subclasses are responsible for implementing the lifecycle behavior and may carry additional internal state as needed, but the public surface should stay minimal and consistent.
- add_aux_memory(obj)[source]¶
Accumulate auxiliary memory usage and return the number of bytes added.
- Parameters:
obj (int | torch.Tensor) – If an int, it is treated as a number of bytes to add. If a torch.Tensor, its size in bytes is computed via _tensor_nbytes() and added.
- Returns:
The number of bytes that were added to the auxiliary total.
- Return type:
int
- Raises:
TypeError – If obj is neither an int nor a torch.Tensor.
ValueError – If the computed number of bytes is negative.
Notes
This is a simple accumulator. Calling it multiple times on tensors that share storage (or on the same tensor) will double-count.
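The accumulator semantics above (including the double-counting caveat) can be sketched in plain Python. This is a hypothetical stand-in, not the real ContextBase: the class name AuxMemoryAccumulator and the attribute aux_bytes are illustrative, and the tensor branch duck-types on numel()/element_size() rather than importing torch.

```python
class AuxMemoryAccumulator:
    """Toy sketch of the documented add_aux_memory semantics (not the real class)."""

    def __init__(self):
        self.aux_bytes = 0  # running total of auxiliary bytes

    def add_aux_memory(self, obj):
        if isinstance(obj, int):
            nbytes = obj  # an int is treated directly as a byte count
        elif hasattr(obj, "numel") and hasattr(obj, "element_size"):
            # Tensor-like path: size in bytes = element count * bytes per element
            nbytes = obj.numel() * obj.element_size()
        else:
            raise TypeError("obj must be an int byte count or a tensor")
        if nbytes < 0:
            raise ValueError("computed byte count is negative")
        # Simple accumulator: repeated calls on the same tensor double-count.
        self.aux_bytes += nbytes
        return nbytes
```

Note how the return value is the per-call delta, while the running total lives on the context; this matches the documented "number of bytes that were added" contract.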
- class vortex_torch.abs.vOp[source]¶
Bases: ABC
Base class for defining virtual operators that support profiling and execution modes.
This abstract base class provides a unified interface for defining virtual operators that have two main phases:
Profiling phase – used to pre-compute shapes, allocate buffers, or collect statistics.
Execution phase – performs the actual operator computation.
Subclasses must implement profile() and execute(). The __call__() method automatically dispatches between these modes based on the provided context.
- abstractmethod execute(*args, ctx=None, **kwargs)[source]¶
Abstract method: execute.
Called during the normal execution phase.
This method implements the actual operator logic. Subclasses must provide their own implementation.
- Parameters:
*args (Any) – Positional arguments for the operation.
ctx (ContextBase, optional) – The execution context.
**kwargs – Additional keyword arguments.
- Returns:
The result of the operator execution.
- Return type:
Any
- Raises:
NotImplementedError – If the subclass does not implement this method.
- abstractmethod profile(*args, ctx=None, **kwargs)[source]¶
Abstract method: profile.
Called during the profiling or preparation phase.
- Typical use cases:
Allocate persistent output buffers.
Compute static shapes.
Collect performance statistics.
Subclasses must implement this method.
- Parameters:
*args (Any) – Positional arguments.
ctx (ContextBase, optional) – The execution context.
**kwargs – Additional keyword arguments.
- Returns:
The result of the profiling operation.
- Return type:
Any
- Raises:
NotImplementedError – If the subclass does not implement this method.
- class vortex_torch.abs.vTensor(data, _format=FORMAT.BATCHED, **kwargs)[source]¶
Bases: Tensor
Tensor subclass with a _format metadata field.
Rules:
- Torch ops do NOT change _format; it must be consistent across all vTensors in the op.
- vTensor CANNOT participate in ops with plain torch.Tensor objects (a RuntimeError is raised).
- vTensor CAN participate in ops with Python scalars (int/float/bool).
- Parameters:
_format (FORMAT)
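The three mixing rules can be made concrete with a toy wrapper. MiniVTensor below is purely illustrative: it uses a plain string for _format instead of the FORMAT enum, implements only addition, and does not model how the real vTensor hooks into torch dispatch.

```python
class MiniVTensor:
    """Toy stand-in for vTensor that enforces the documented mixing rules."""

    def __init__(self, data, _format="BATCHED"):
        self.data = list(data)
        self._format = _format

    def __add__(self, other):
        if isinstance(other, MiniVTensor):
            # Rule 1: _format must be consistent across all vTensors in the op,
            # and the op itself never changes it.
            if other._format != self._format:
                raise RuntimeError("inconsistent _format in op")
            return MiniVTensor(
                (a + b for a, b in zip(self.data, other.data)), self._format
            )
        if isinstance(other, (int, float, bool)):
            # Rule 3: Python scalars may participate.
            return MiniVTensor((a + other for a in self.data), self._format)
        # Rule 2: anything else (e.g. a plain torch.Tensor) is rejected.
        raise RuntimeError("vTensor cannot mix with plain tensors")
```

The design intent mirrored here is that _format is metadata carried through ops unchanged, so a mismatch is always a programming error worth failing loudly on rather than silently coercing.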
- vortex_torch.abs.as_vtensor(x, _format=FORMAT.BATCHED)[source]¶
Wrap an input as
vTensorwithout copying storage.If
xis already avTensor, this returns the same object (or an equivalent wrapper) after updating its format to_formatwhen needed.- Parameters:
x (torch.Tensor | Any) – Input to wrap. Typically a torch.Tensor. If a vTensor is passed, it will be returned (its format may be updated).
_format (FORMAT, optional) – Desired tensor storage/layout format. Defaults to FORMAT.BATCHED.
- Returns:
A vTensor that references the same underlying storage as x (no data copy). Device, dtype, and shape are preserved unless the target _format requires metadata-only adjustments.
- Return type:
vTensor
Example
>>> t = torch.randn(2, 4, 8)
>>> vt = as_vtensor(t)  # FORMAT.BATCHED by default
>>> vt_ragged = as_vtensor(vt, _format=FORMAT.RAGGED)
- class vortex_torch.abs.FORMAT(*values)[source]¶
Bases: Enum
Tensor storage/layout format.
- BATCHED¶
Standard dense batched tensors (e.g., [B, N, D]).
- RAGGED¶
Ragged tensors with variable-length sequences or elements per batch.
- PAGED¶
Paged tensors used for large or streaming data split into pages/chunks.
- BATCHED = 0¶
- RAGGED = 1¶
- PAGED = 2¶
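For quick experimentation the enum can be replicated standalone; this mirrors the documented members and integer values but is not the real vortex_torch.abs.FORMAT class:

```python
from enum import Enum


class FORMAT(Enum):
    # Values match the documented members of vortex_torch.abs.FORMAT.
    BATCHED = 0  # dense batched tensors, e.g. [B, N, D]
    RAGGED = 1   # variable-length sequences or elements per batch
    PAGED = 2    # large or streaming data split into pages/chunks


# Enum members are singletons and can be recovered from their value or name:
assert FORMAT(1) is FORMAT.RAGGED
assert FORMAT["PAGED"] is FORMAT.PAGED
```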