vortex_torch.indexer.reduce¶
Classes

- L2Norm: L2-norm reduction over a single logical axis.
- Max: Maximum reduction over a single logical axis.
- Mean: Mean reduction over a single logical axis.
- Min: Minimum reduction over a single logical axis.
- Reduce: Generic reduction dispatcher for rank-3 logical tensors.
- Sum: Sum reduction over a single logical axis.
- class vortex_torch.indexer.reduce.Reduce(dim=1)[source]¶
Bases: vOp

Generic reduction dispatcher for rank-3 logical tensors [N, D_0, D_1].

This operator performs a 1D reduction over either the D_0 or D_1 axis of a 3D tensor. The leading dimension N is generic and may represent a batch axis (B) or a sequence/page axis (S); the reduction is applied independently for each of the N slices.

Given an input tensor

\[X \in \mathbb{R}^{N \times D_0 \times D_1},\]

the output logical shape depends on the configured reduction dimension dim:

- dim == 1 (reduce over \(D_0\)): \[\text{out} \in \mathbb{R}^{N \times 1 \times D_1}.\]
- dim == 2 (reduce over \(D_1\)): \[\text{out} \in \mathbb{R}^{N \times D_0 \times 1}.\]

The specific reduction operation (e.g. mean, max, min, L2-norm, sum) is selected via reduce_type. Dispatch is keyed only by the input format x._format.

- Parameters:
  dim (int)
- _impl_map¶
  Dispatch table keyed by x._format. Each entry maps to (callable_impl, resolved_output_format).
- dim¶
  Reduction dimension in the logical 3D tensor: must be either 1 for reduction over the \(D_0\) axis, or 2 for reduction over the \(D_1\) axis.
  - Type:
    int
- reduce_type¶
The type of reduction to perform (e.g. mean, max, min, L2-norm, sum).
- Type:
Optional[ReduceType]
- output_buffer¶
  Preallocated output tensor buffer with logical shape [N, out_D0, out_D1], where out_D0 and out_D1 depend on dim as described above.
  - Type:
Optional[torch.Tensor]
- profile(x, ctx)[source]¶
  Validate the input, select an implementation based on x._format, allocate the output buffer, and return a vTensor view.

  The input tensor is expected to have logical shape [N, D_0, D_1], where the leading dimension N may represent either a batch size or a sequence/page count. The runtime uses ctx.max_num_pages to define the leading dimension of the output, in line with other operators that treat the first axis as the logical N axis.

  According to dim, the output logical shape is:

  - dim == 1 → [N, 1, D_1]
  - dim == 2 → [N, D_0, 1]

  - Parameters:
    - x (vTensor) – Input tensor with logical shape [N, D_0, D_1].
    - ctx (Context) – Execution context; ctx.max_num_pages defines the leading dimension of the output.
  - Returns:
    A vTensor view wrapping the allocated output buffer with the resolved output format.
  - Return type:
    vTensor
  - Raises:
    AssertionError – If x is not a vTensor, if its rank is not 3, or if no implementation is registered for x._format.
- execute(x, ctx)[source]¶
Run the selected reduction implementation into the internal buffer and return the result.
  The underlying implementation is expected to follow the signature:

  impl(x, output, dim, reduce_type, ctx)

  where dim specifies which logical axis to reduce and reduce_type selects the reduction operation.

  - Parameters:
    - x (torch.Tensor) – Input tensor with shape [N, D_0, D_1] on the same device as the preallocated output buffer.
    - ctx (Context) – Execution context passed through to the implementation.
  - Returns:
    The output tensor stored in self.output_buffer, with logical shape determined by dim as described in profile().
  - Return type:
    torch.Tensor
  - Raises:
    AssertionError – If profile() has not been called (no implementation or output buffer).
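The dim semantics of the dispatcher can be sketched in plain PyTorch. This is a hypothetical stand-in for illustration only: the real operator's vTensor views, format-keyed dispatch, and preallocated buffer reuse are omitted, and `reduce_like` is not part of the library API.

```python
import torch

# Hypothetical stand-in for the Reduce dispatcher's shape semantics:
# reduce over D_0 (dim == 1) or D_1 (dim == 2), keeping the reduced
# axis as size 1, independently for each of the N leading slices.
def reduce_like(x: torch.Tensor, dim: int, reduce_type: str) -> torch.Tensor:
    assert x.ndim == 3, "expected logical shape [N, D_0, D_1]"
    assert dim in (1, 2), "dim must be 1 (D_0) or 2 (D_1)"
    ops = {
        "sum": lambda t: t.sum(dim=dim, keepdim=True),
        "mean": lambda t: t.mean(dim=dim, keepdim=True),
        "max": lambda t: t.amax(dim=dim, keepdim=True),
        "min": lambda t: t.amin(dim=dim, keepdim=True),
        "l2": lambda t: t.pow(2).sum(dim=dim, keepdim=True).sqrt(),
    }
    return ops[reduce_type](x)

x = torch.randn(4, 3, 5)                          # [N, D_0, D_1]
out1 = reduce_like(x, dim=1, reduce_type="mean")  # [4, 1, 5]
out2 = reduce_like(x, dim=2, reduce_type="max")   # [4, 3, 1]
```

Keeping the reduced axis as size 1 preserves rank 3, matching the output logical shapes documented above.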
- class vortex_torch.indexer.reduce.Max(dim=1)[source]¶
Bases: Reduce

Maximum reduction over a single logical axis.

Given an input tensor

\[X \in \mathbb{R}^{N \times D_0 \times D_1},\]

this operator computes, depending on dim:

- dim == 1 (reduce over \(D_0\)): \[\text{out}[n, 0, d_1] = \max_{0 \le d_0 < D_0} X[n, d_0, d_1],\] with shape \([N, 1, D_1]\).
- dim == 2 (reduce over \(D_1\)): \[\text{out}[n, d_0, 0] = \max_{0 \le d_1 < D_1} X[n, d_0, d_1],\] with shape \([N, D_0, 1]\).

The leading dimension \(N\) may represent either a batch axis (B) or a sequence/page axis (S); the reduction is applied independently for each slice along this dimension.

- Parameters:
  dim (int, optional) – Reduction dimension in the logical 3D tensor (1 for \(D_0\), 2 for \(D_1\)). Default is 1.
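Numerically, the dim == 1 case matches torch.amax with keepdim=True. A plain-torch sketch (the operator's vTensor/format machinery is not shown):

```python
import torch

# Max over D_0 (dim == 1) for a [N, D_0, D_1] tensor, keeping the
# reduced axis as size 1; mirrors this operator's dim == 1 formula.
x = torch.tensor([[[1., 5.],
                   [3., 2.]]])       # shape [1, 2, 2]
out = x.amax(dim=1, keepdim=True)    # shape [1, 1, 2]
# out[0, 0, :] is [3., 5.]: the max over d_0 for each d_1
```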
- class vortex_torch.indexer.reduce.Min(dim=1)[source]¶
Bases: Reduce

Minimum reduction over a single logical axis.

Given an input tensor

\[X \in \mathbb{R}^{N \times D_0 \times D_1},\]

this operator computes, depending on dim:

- dim == 1 (reduce over \(D_0\)): \[\text{out}[n, 0, d_1] = \min_{0 \le d_0 < D_0} X[n, d_0, d_1],\] with shape \([N, 1, D_1]\).
- dim == 2 (reduce over \(D_1\)): \[\text{out}[n, d_0, 0] = \min_{0 \le d_1 < D_1} X[n, d_0, d_1],\] with shape \([N, D_0, 1]\).

The leading dimension \(N\) may represent either a batch axis (B) or a sequence/page axis (S); the reduction is applied independently for each slice along this dimension.

- Parameters:
  dim (int, optional) – Reduction dimension in the logical 3D tensor (1 for \(D_0\), 2 for \(D_1\)). Default is 1.
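The dim == 2 case matches torch.amin with keepdim=True. A plain-torch sketch (vTensor/format handling omitted):

```python
import torch

# Min over D_1 (dim == 2) for a [N, D_0, D_1] tensor, keeping the
# reduced axis as size 1; mirrors this operator's dim == 2 formula.
x = torch.tensor([[[1., 5.],
                   [3., 2.]]])       # shape [1, 2, 2]
out = x.amin(dim=2, keepdim=True)    # shape [1, 2, 1]
# out[0, :, 0] is [1., 2.]: the min over d_1 for each d_0
```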
- class vortex_torch.indexer.reduce.Mean(dim=1)[source]¶
Bases: Reduce

Mean reduction over a single logical axis.

Given an input tensor

\[X \in \mathbb{R}^{N \times D_0 \times D_1},\]

this operator computes, depending on dim:

- dim == 1 (reduce over \(D_0\)): \[\text{out}[n, 0, d_1] = \frac{1}{D_0} \sum_{d_0=0}^{D_0-1} X[n, d_0, d_1],\] with shape \([N, 1, D_1]\).
- dim == 2 (reduce over \(D_1\)): \[\text{out}[n, d_0, 0] = \frac{1}{D_1} \sum_{d_1=0}^{D_1-1} X[n, d_0, d_1],\] with shape \([N, D_0, 1]\).

The leading dimension \(N\) may represent either a batch axis (B) or a sequence/page axis (S); the reduction is applied independently for each slice along this dimension.

- Parameters:
  dim (int, optional) – Reduction dimension in the logical 3D tensor (1 for \(D_0\), 2 for \(D_1\)). Default is 1.
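The dim == 1 case matches torch.mean with keepdim=True. A plain-torch sketch (vTensor/format handling omitted):

```python
import torch

# Mean over D_0 (dim == 1) for a [N, D_0, D_1] tensor, keeping the
# reduced axis as size 1; mirrors this operator's dim == 1 formula.
x = torch.tensor([[[1., 5.],
                   [3., 2.]]])       # shape [1, 2, 2]
out = x.mean(dim=1, keepdim=True)    # shape [1, 1, 2]
# out[0, 0, :] is [2.0, 3.5]: (1+3)/2 and (5+2)/2
```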
- class vortex_torch.indexer.reduce.L2Norm(dim=1)[source]¶
Bases: Reduce

L2-norm reduction over a single logical axis.

Given an input tensor

\[X \in \mathbb{R}^{N \times D_0 \times D_1},\]

this operator computes, depending on dim:

- dim == 1 (reduce over \(D_0\)): \[\text{out}[n, 0, d_1] = \sqrt{\sum_{d_0=0}^{D_0-1} X[n, d_0, d_1]^2},\] with shape \([N, 1, D_1]\).
- dim == 2 (reduce over \(D_1\)): \[\text{out}[n, d_0, 0] = \sqrt{\sum_{d_1=0}^{D_1-1} X[n, d_0, d_1]^2},\] with shape \([N, D_0, 1]\).

The leading dimension \(N\) may represent either a batch axis (B) or a sequence/page axis (S); the reduction is applied independently for each slice along this dimension.

- Parameters:
  dim (int, optional) – Reduction dimension in the logical 3D tensor (1 for \(D_0\), 2 for \(D_1\)). Default is 1.
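The dim == 1 case is the square root of the sum of squares along axis 1; torch.linalg.vector_norm computes the same quantity. A plain-torch sketch (vTensor/format handling omitted):

```python
import torch

# L2 norm over D_0 (dim == 1) for a [N, D_0, D_1] tensor, keeping the
# reduced axis as size 1; mirrors this operator's dim == 1 formula.
x = torch.tensor([[[3., 0.],
                   [4., 0.]]])                     # shape [1, 2, 2]
out = x.pow(2).sum(dim=1, keepdim=True).sqrt()     # shape [1, 1, 2]
# out[0, 0, :] is [5., 0.]: sqrt(3^2 + 4^2) = 5 for the first column
# torch.linalg.vector_norm(x, dim=1, keepdim=True) gives the same result
```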
- class vortex_torch.indexer.reduce.Sum(dim=1)[source]¶
Bases: Reduce

Sum reduction over a single logical axis.

Given an input tensor

\[X \in \mathbb{R}^{N \times D_0 \times D_1},\]

this operator computes, depending on dim:

- dim == 1 (reduce over \(D_0\)): \[\text{out}[n, 0, d_1] = \sum_{d_0=0}^{D_0-1} X[n, d_0, d_1],\] with shape \([N, 1, D_1]\).
- dim == 2 (reduce over \(D_1\)): \[\text{out}[n, d_0, 0] = \sum_{d_1=0}^{D_1-1} X[n, d_0, d_1],\] with shape \([N, D_0, 1]\).

The leading dimension \(N\) may represent either a batch axis (B) or a sequence/page axis (S); the reduction is applied independently for each slice along this dimension.

- Parameters:
  dim (int, optional) – Reduction dimension in the logical 3D tensor (1 for \(D_0\), 2 for \(D_1\)). Default is 1.
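The dim == 2 case matches torch.sum with keepdim=True. A plain-torch sketch (vTensor/format handling omitted):

```python
import torch

# Sum over D_1 (dim == 2) for a [N, D_0, D_1] tensor, keeping the
# reduced axis as size 1; mirrors this operator's dim == 2 formula.
x = torch.tensor([[[1., 5.],
                   [3., 2.]]])      # shape [1, 2, 2]
out = x.sum(dim=2, keepdim=True)    # shape [1, 2, 1]
# out[0, :, 0] is [6., 5.]: 1+5 and 3+2
```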