vortex_torch.indexer.reduce

Classes

L2Norm([dim])

L2-norm reduction over a single logical axis.

Max([dim])

Maximum reduction over a single logical axis.

Mean([dim])

Mean reduction over a single logical axis.

Min([dim])

Minimum reduction over a single logical axis.

Reduce([dim])

Generic reduction dispatcher for rank-3 logical tensors [N, D_0, D_1].

Sum([dim])

Sum reduction over a single logical axis.

class vortex_torch.indexer.reduce.Reduce(dim=1)[source]

Bases: vOp

Generic reduction dispatcher for rank-3 logical tensors [N, D_0, D_1].

This operator performs a 1D reduction over either the D_0 or D_1 axis of a 3D tensor. The leading dimension N is generic and may represent a batch axis (B) or a sequence/page axis (S); the reduction is applied independently for each of the N slices.

Given an input tensor

\[X \in \mathbb{R}^{N \times D_0 \times D_1},\]

the output logical shape depends on the configured reduction dimension dim:

  • dim == 1 (reduce over \(D_0\)):

    \[\text{out} \in \mathbb{R}^{N \times 1 \times D_1}.\]
  • dim == 2 (reduce over \(D_1\)):

    \[\text{out} \in \mathbb{R}^{N \times D_0 \times 1}.\]

The specific reduction operation (e.g. mean, max, min, L2-norm, sum) is selected via reduce_type.

Dispatch is keyed only by the input format x._format.
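The output-shape contract can be illustrated with a plain-torch sketch (this bypasses the format-keyed dispatch and kernels entirely; torch.sum with keepdim=True stands in only to show the dim == 1 / dim == 2 shape semantics):

```python
import torch

# Illustration of the logical shape contract, not the dispatched kernels:
# a rank-3 input [N, D_0, D_1] reduced with keepdim=True, so the reduced
# axis collapses to size 1 instead of being dropped.
x = torch.randn(4, 8, 16)            # [N=4, D_0=8, D_1=16]

out1 = x.sum(dim=1, keepdim=True)    # dim == 1: reduce D_0 -> [4, 1, 16]
out2 = x.sum(dim=2, keepdim=True)    # dim == 2: reduce D_1 -> [4, 8, 1]

print(out1.shape, out2.shape)
```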

Parameters:

dim (int)

_impl_map

Dispatch table keyed by x._format. Each entry maps to (callable_impl, resolved_output_format).

Type:

Dict[FORMAT, Tuple[Callable, FORMAT]]

dim

Reduction dimension in the logical 3D tensor: must be either

  • 1 for reduction over the \(D_0\) axis, or

  • 2 for reduction over the \(D_1\) axis.

Type:

int

reduce_type

The type of reduction to perform (e.g. mean, max, min, L2-norm, sum).

Type:

Optional[ReduceType]

impl

The resolved implementation selected during profile().

Type:

Optional[Callable]

output_format

The output tensor format as determined in profile().

Type:

Optional[FORMAT]

output_buffer

Preallocated output tensor buffer with logical shape [N, out_D0, out_D1], where out_D0 and out_D1 depend on dim as described above.

Type:

Optional[torch.Tensor]

profile(x, ctx)[source]

Validate the input, select an implementation based on x._format, allocate the output buffer, and return a vTensor view.

The input tensor is expected to have logical shape [N, D_0, D_1], where the leading dimension N may represent either a batch size or a sequence/page count. The runtime uses ctx.max_num_pages to define the leading dimension of the output, in line with other operators that treat the first axis as the logical N axis.

According to dim, the output logical shape is:

  • dim == 1: [N, 1, D_1]

  • dim == 2: [N, D_0, 1]

Parameters:
  • x (vTensor) – Input tensor with logical shape [N, D_0, D_1].

  • ctx (Context) – Execution context providing ctx.max_num_pages for the leading dimension and tracking auxiliary memory usage.

Returns:

A vTensor view wrapping the allocated output buffer with the resolved output format.

Return type:

vTensor

Raises:

AssertionError – If x is not a vTensor, if its rank is not 3, or if no implementation is registered for x._format.

execute(x, ctx)[source]

Run the selected reduction implementation into the internal buffer and return the result.

The underlying implementation is expected to follow the signature:

impl(x, output, dim, reduce_type, ctx)

where dim specifies which logical axis to reduce and reduce_type selects the reduction operation.

Parameters:
  • x (torch.Tensor) – Input tensor with shape [N, D_0, D_1] on the same device as the preallocated output buffer.

  • ctx (Context) – Execution context passed through to the implementation.

Returns:

The output tensor stored in self.output_buffer, with logical shape determined by dim as described in profile().

Return type:

torch.Tensor

Raises:

AssertionError – If profile() has not been called (no implementation or output buffer).

class vortex_torch.indexer.reduce.Max(dim=1)[source]

Bases: Reduce

Maximum reduction over a single logical axis.

Given an input tensor

\[X \in \mathbb{R}^{N \times D_0 \times D_1},\]

this operator computes, depending on dim:

  • dim == 1 (reduce over \(D_0\)):

    \[\text{out}[n, 0, d_1] = \max_{0 \le d_0 < D_0} X[n, d_0, d_1],\]

    with shape \([N, 1, D_1]\).

  • dim == 2 (reduce over \(D_1\)):

    \[\text{out}[n, d_0, 0] = \max_{0 \le d_1 < D_1} X[n, d_0, d_1],\]

    with shape \([N, D_0, 1]\).

The leading dimension \(N\) may represent either a batch axis (B) or a sequence/page axis (S); the reduction is applied independently for each slice along this dimension.

Parameters:

dim (int, optional) – Reduction dimension in the logical 3D tensor (1 for \(D_0\), 2 for \(D_1\)). Default is 1.
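The formula above corresponds to torch.amax with keepdim=True; a minimal numeric sketch (outside the operator's dispatch machinery):

```python
import torch

x = torch.randn(4, 8, 16)             # [N, D_0, D_1]

# Max(dim=1) reduces over D_0, keeping the axis with size 1.
out = x.amax(dim=1, keepdim=True)     # shape [4, 1, 16]
assert out.shape == (4, 1, 16)

# Each output slice is the elementwise max over the D_0 axis of that slice.
assert torch.equal(out[0, 0], x[0].amax(dim=0))
```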

class vortex_torch.indexer.reduce.Min(dim=1)[source]

Bases: Reduce

Minimum reduction over a single logical axis.

Given an input tensor

\[X \in \mathbb{R}^{N \times D_0 \times D_1},\]

this operator computes, depending on dim:

  • dim == 1 (reduce over \(D_0\)):

    \[\text{out}[n, 0, d_1] = \min_{0 \le d_0 < D_0} X[n, d_0, d_1],\]

    with shape \([N, 1, D_1]\).

  • dim == 2 (reduce over \(D_1\)):

    \[\text{out}[n, d_0, 0] = \min_{0 \le d_1 < D_1} X[n, d_0, d_1],\]

    with shape \([N, D_0, 1]\).

The leading dimension \(N\) may represent either a batch axis (B) or a sequence/page axis (S); the reduction is applied independently for each slice along this dimension.

Parameters:

dim (int, optional) – Reduction dimension in the logical 3D tensor (1 for \(D_0\), 2 for \(D_1\)). Default is 1.

class vortex_torch.indexer.reduce.Mean(dim=1)[source]

Bases: Reduce

Mean reduction over a single logical axis.

Given an input tensor

\[X \in \mathbb{R}^{N \times D_0 \times D_1},\]

this operator computes, depending on dim:

  • dim == 1 (reduce over \(D_0\)):

    \[\text{out}[n, 0, d_1] = \frac{1}{D_0} \sum_{d_0=0}^{D_0-1} X[n, d_0, d_1],\]

    with shape \([N, 1, D_1]\).

  • dim == 2 (reduce over \(D_1\)):

    \[\text{out}[n, d_0, 0] = \frac{1}{D_1} \sum_{d_1=0}^{D_1-1} X[n, d_0, d_1],\]

    with shape \([N, D_0, 1]\).

The leading dimension \(N\) may represent either a batch axis (B) or a sequence/page axis (S); the reduction is applied independently for each slice along this dimension.

Parameters:

dim (int, optional) – Reduction dimension in the logical 3D tensor (1 for \(D_0\), 2 for \(D_1\)). Default is 1.
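The normalization in the formula can be sanity-checked against a plain sum, again with ordinary torch ops rather than the operator's dispatched kernels:

```python
import torch

x = torch.randn(4, 8, 16)                  # [N, D_0, D_1]
out = x.mean(dim=1, keepdim=True)          # Mean(dim=1): [4, 1, 16]

# Matches the definition: mean = (1 / D_0) * sum over D_0.
assert torch.allclose(out, x.sum(dim=1, keepdim=True) / x.shape[1])
```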

class vortex_torch.indexer.reduce.L2Norm(dim=1)[source]

Bases: Reduce

L2-norm reduction over a single logical axis.

Given an input tensor

\[X \in \mathbb{R}^{N \times D_0 \times D_1},\]

this operator computes, depending on dim:

  • dim == 1 (reduce over \(D_0\)):

    \[\text{out}[n, 0, d_1] = \sqrt{\sum_{d_0=0}^{D_0-1} X[n, d_0, d_1]^2},\]

    with shape \([N, 1, D_1]\).

  • dim == 2 (reduce over \(D_1\)):

    \[\text{out}[n, d_0, 0] = \sqrt{\sum_{d_1=0}^{D_1-1} X[n, d_0, d_1]^2},\]

    with shape \([N, D_0, 1]\).

The leading dimension \(N\) may represent either a batch axis (B) or a sequence/page axis (S); the reduction is applied independently for each slice along this dimension.

Parameters:

dim (int, optional) – Reduction dimension in the logical 3D tensor (1 for \(D_0\), 2 for \(D_1\)). Default is 1.
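The formula above is the per-slice Euclidean norm, equivalent to torch.linalg.vector_norm with keepdim=True; a minimal sketch outside the dispatch machinery:

```python
import torch

x = torch.randn(4, 8, 16)                                       # [N, D_0, D_1]
out = torch.linalg.vector_norm(x, ord=2, dim=1, keepdim=True)   # [4, 1, 16]

# Matches the elementwise definition sqrt(sum of squares) over D_0.
assert torch.allclose(out, (x ** 2).sum(dim=1, keepdim=True).sqrt())
assert out.shape == (4, 1, 16)
```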

class vortex_torch.indexer.reduce.Sum(dim=1)[source]

Bases: Reduce

Sum reduction over a single logical axis.

Given an input tensor

\[X \in \mathbb{R}^{N \times D_0 \times D_1},\]

this operator computes, depending on dim:

  • dim == 1 (reduce over \(D_0\)):

    \[\text{out}[n, 0, d_1] = \sum_{d_0=0}^{D_0-1} X[n, d_0, d_1],\]

    with shape \([N, 1, D_1]\).

  • dim == 2 (reduce over \(D_1\)):

    \[\text{out}[n, d_0, 0] = \sum_{d_1=0}^{D_1-1} X[n, d_0, d_1],\]

    with shape \([N, D_0, 1]\).

The leading dimension \(N\) may represent either a batch axis (B) or a sequence/page axis (S); the reduction is applied independently for each slice along this dimension.

Parameters:

dim (int, optional) – Reduction dimension in the logical 3D tensor (1 for \(D_0\), 2 for \(D_1\)). Default is 1.