vortex_torch.abs

class vortex_torch.abs.ContextBase[source]

Bases: ABC

Abstract base class for runtime contexts.

This class defines the minimal contract that all context implementations must follow. It exposes two primary public attributes:

  • mode (str): The current operating mode, e.g. "profile" or "execute".

  • created (bool): Whether the context has been populated via create(); exposed through the created property.

Subclasses are responsible for implementing the lifecycle behavior and may carry additional internal state as needed, but the public surface should stay minimal and consistent.

name: str

Human-readable context name.

mode: Literal['profile', 'execute']

Current operating mode.

add_aux_flops(nflops)[source]

Accumulate auxiliary FLOPs (counterpart to add_aux_memory()).

Parameters:

nflops (int)

Return type:

int

add_aux_memory(obj)[source]

Accumulate auxiliary memory usage and return the number of bytes added.

Parameters:

obj (int | torch.Tensor) – If an int, it is treated as a number of bytes to add. If a torch.Tensor, its size in bytes is computed via _tensor_nbytes() and added.

Returns:

The number of bytes that were added to the auxiliary total.

Return type:

int

Raises:
  • TypeError – If obj is neither an int nor a torch.Tensor.

  • ValueError – If the computed number of bytes is negative.

Notes

This is a simple accumulator. Calling it multiple times on tensors that share storage (or on the same tensor) will double-count.

assert_created()[source]

Assert that the context has been populated via create().

Return type:

None

clear_aux_flops()[source]

Reset the auxiliary FLOP count to zero.

Return type:

None

clear_aux_memory()[source]

Reset the total auxiliary memory to zero.

Return type:

None

abstractmethod create(*args, **kwargs)[source]

Populate the context (idempotency/overwrite rules are up to the subclass).

Return type:

ContextBase

property created: bool

True once the context has been populated via create().

execute()[source]

Switch the context's mode to "execute".

Return type:

None

missing()[source]

Return the names of fields that have not yet been populated.

Return type:

list[str]

profile()[source]

Switch the context's mode to "profile".

Return type:

None

summary()[source]

Print the context's fields; tensor fields are shown as shape/dtype/device, followed by memory totals including auxiliary memory.

Return type:

None

class vortex_torch.abs.vOp[source]

Bases: ABC

Base class for defining virtual operators that support profiling and execution modes.

This abstract base class provides a unified interface for defining virtual operators that have two main phases:

  • Profiling phase – used to pre-compute shapes, allocate buffers, or collect statistics.

  • Execution phase – performs the actual operator computation.

Subclasses must implement profile() and execute(). The __call__() method automatically dispatches between these modes based on the provided context.

abstractmethod execute(*args, ctx=None, **kwargs)[source]

Abstract method: execute.

Called during the normal execution phase.

This method implements the actual operator logic. Subclasses must provide their own implementation.

Parameters:
  • *args (Any) – Positional arguments for the operation.

  • ctx (ContextBase, optional) – The execution context.

  • **kwargs – Additional keyword arguments.

Returns:

The result of the operator execution.

Return type:

Any

Raises:

NotImplementedError – If the subclass does not implement this method.

abstractmethod profile(*args, ctx=None, **kwargs)[source]

Abstract method: profile.

Called during the profiling or preparation phase.

Typical use cases:
  • Allocate persistent output buffers.

  • Compute static shapes.

  • Collect performance statistics.

Subclasses must implement this method.

Parameters:
  • *args (Any) – Positional arguments.

  • ctx (ContextBase, optional) – The execution context.

  • **kwargs – Additional keyword arguments.

Returns:

The result of the profiling operation.

Return type:

Any

Raises:

NotImplementedError – If the subclass does not implement this method.

class vortex_torch.abs.vTensor(data, _format=FORMAT.BATCHED, **kwargs)[source]

Bases: Tensor

Tensor subclass with a _format metadata field.

Rules:

  • Torch ops do NOT change _format; it must be consistent across all vTensors in the op.

  • vTensor CANNOT participate in ops with plain torch.Tensors (raises RuntimeError).

  • vTensor CAN participate in ops with Python scalars (int/float/bool).

Parameters:

_format (FORMAT)

vortex_torch.abs.as_vtensor(x, _format=FORMAT.BATCHED)[source]

Wrap an input as vTensor without copying storage.

If x is already a vTensor, this returns the same object (or an equivalent wrapper) after updating its format to _format when needed.

Parameters:
  • x (torch.Tensor | Any) – Input to wrap. Typically a torch.Tensor. If a vTensor is passed, it will be returned (format may be updated).

  • _format (FORMAT, optional) – Desired tensor storage/layout format. Defaults to FORMAT.BATCHED.

Returns:

A vTensor that references the same underlying storage as x (no data copy). Device, dtype, and shape are preserved unless the target _format requires metadata-only adjustments.

Return type:

vTensor

Example

>>> t = torch.randn(2, 4, 8)
>>> vt = as_vtensor(t)  # FORMAT.BATCHED by default
>>> vt_ragged = as_vtensor(vt, _format=FORMAT.RAGGED)
class vortex_torch.abs.FORMAT(*values)[source]

Bases: Enum

Tensor storage/layout format.

BATCHED

Standard dense batched tensors (e.g., [B, N, D]).

RAGGED

Ragged tensors with variable-length sequences or elements per batch.

PAGED

Paged tensors used for large or streaming data split into pages/chunks.

BATCHED = 0
RAGGED = 1
PAGED = 2
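For readers without the package installed, the documented members can be mirrored in a stdlib-only sketch (same names and values as listed above):

```python
from enum import Enum


class FORMAT(Enum):
    BATCHED = 0  # dense [B, N, D] batched tensors
    RAGGED = 1   # variable-length sequences/elements per batch
    PAGED = 2    # data split into pages/chunks for large or streaming use

assert FORMAT.BATCHED.value == 0
assert FORMAT(1) is FORMAT.RAGGED
assert [f.name for f in FORMAT] == ["BATCHED", "RAGGED", "PAGED"]
```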

Modules