vortex_torch.indexer.scan¶
Classes

- `Normalize` – In-place normalization dispatcher over a packed leading axis.
- `Softmax` – In-place softmax dispatcher over a packed leading axis.
- class vortex_torch.indexer.scan.Softmax(dim=0, scale=1.0)[source]¶
Bases: `vOp`

In-place softmax dispatcher over a packed leading axis.
The input is treated as a rank-3 tensor
\[X \in \mathbb{R}^{S_{\text{pack}} \times D_0 \times D_1},\]

where the leading dimension \(S_{\text{pack}}\) is a packed concatenation of \(B\) segments:

\[\begin{split}S_{\text{pack}} = \sum_{b=0}^{B-1} S_b, \qquad X = \begin{bmatrix} X_0 \\ X_1 \\ \vdots \\ X_{B-1} \end{bmatrix},\end{split}\]

with

\[X_b \in \mathbb{R}^{S_b \times D_0 \times D_1}.\]

For each segment \(b\) and each fixed pair \((d_0, d_1)\), this operator applies a scaled softmax along the packed axis within that segment:

\[\text{out}[s, d_0, d_1] = \frac{\exp\bigl(\text{scale} \cdot X[s, d_0, d_1]\bigr)} {\sum_{s' \in \mathcal{I}_b} \exp\bigl(\text{scale} \cdot X[s', d_0, d_1]\bigr)}, \quad s \in \mathcal{I}_b,\]

where \(\mathcal{I}_b\) denotes the index range in \([0, S_{\text{pack}})\) corresponding to the \(b\)-th segment of length \(S_b\).
In other words, `dim == 0` is a packed S axis, and softmax is applied independently within each segment of that axis.

Key properties¶

- Only `dim == 0` is supported.
- Dispatch is keyed by the input tensor format `x._format`.
- No output buffer is allocated; the operation is performed in-place on `x` and the same tensor is returned.
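As a reference for these semantics, here is a pure-PyTorch sketch of a segment-wise scaled softmax over a packed leading axis. The helper name `packed_softmax` and the explicit `seg_lens` list are illustrative assumptions; the real operator recovers segment boundaries from `x._format` and works in place.

```python
import torch

def packed_softmax(x: torch.Tensor, seg_lens: list[int], scale: float = 1.0) -> torch.Tensor:
    """Out-of-place reference: scaled softmax along dim 0, per segment.

    x has shape [S_pack, D_0, D_1] with S_pack == sum(seg_lens); each
    segment is soft-maxed independently for every fixed (d_0, d_1) pair.
    """
    assert x.dim() == 3 and x.shape[0] == sum(seg_lens)
    out = torch.empty_like(x)
    start = 0
    for length in seg_lens:
        seg = x[start:start + length]
        # Scaled softmax within this segment only, along the packed axis.
        out[start:start + length] = torch.softmax(scale * seg, dim=0)
        start += length
    return out
```

Within each segment, the result sums to one along the packed axis for every `(d_0, d_1)` pair.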
- _impl_map¶
Dispatch table keyed by `x._format`. Each entry maps to `(callable_impl, resolved_output_format)`.
- scale¶
Multiplicative factor applied to `x` before the softmax (i.e. it computes the softmax of `x * scale`).
- Type: float
- profile(x, ctx)[source]¶
Validate the input and select an implementation.
Since the operation is in-place, no output buffer is allocated and this method simply returns the input `vTensor` unmodified.

The input is expected to have logical shape `[S_pack, D_0, D_1]`, where `S_pack` is understood as a packed concatenation of \(B\) segments of lengths \(S_b\). The softmax is applied along `dim == 0` within each such segment.

- Parameters:
  - x (vTensor)
  - ctx
- Returns:
The same object as `x`, returned as the output view.
- Return type:
vTensor
- Raises:
AssertionError – If `x` is not a `vTensor`, if its rank is not 3, or if no implementation is registered for `x._format`.
- execute(x, ctx)[source]¶
Execute the in-place scaled softmax and return the input tensor.
Conceptually, this computes a segment-wise softmax of `x * scale` along the packed axis `dim == 0`, where the segments along that axis correspond to \(S_0, S_1, \dots, S_{B-1}\) and are treated independently.

- Parameters:
  - x (vTensor)
  - ctx
- Returns:
The same tensor instance `x`, after the in-place softmax.
- Return type:
torch.Tensor
- Raises:
AssertionError – If `profile()` has not been called and no implementation is available.
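The in-place contract above (overwrite `x`, return the very same tensor object) can be illustrated with plain PyTorch for the single-segment case. `softmax_inplace_` is a hypothetical helper, not part of the vortex_torch API.

```python
import torch

def softmax_inplace_(x: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """Overwrite x with softmax(scale * x) along dim 0 and return x itself.

    Mirrors the operator's contract for a single segment spanning all of
    dim 0; the real execute() applies this independently per packed segment.
    """
    x.copy_(torch.softmax(scale * x, dim=0))
    return x

x = torch.randn(4, 2, 3)
out = softmax_inplace_(x, scale=0.5)
assert out is x  # no output buffer was allocated
```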
- class vortex_torch.indexer.scan.Normalize(dim=0)[source]¶
Bases: `vOp`

In-place normalization dispatcher over a packed leading axis.
The input is treated as a rank-3 tensor
\[X \in \mathbb{R}^{S_{\text{pack}} \times D_0 \times D_1},\]

where the leading dimension \(S_{\text{pack}}\) is a packed concatenation of \(B\) segments:

\[\begin{split}S_{\text{pack}} = \sum_{b=0}^{B-1} S_b, \qquad X = \begin{bmatrix} X_0 \\ X_1 \\ \vdots \\ X_{B-1} \end{bmatrix},\end{split}\]

with

\[X_b \in \mathbb{R}^{S_b \times D_0 \times D_1}.\]

For each segment \(b\) and each fixed pair \((d_0, d_1)\), this operator normalizes the values along the packed axis within that segment. A common interpretation is L2 normalization:

\[\hat{X}[s, d_0, d_1] = \frac{X[s, d_0, d_1]} {\sqrt{\sum_{s' \in \mathcal{I}_b} X[s', d_0, d_1]^2}}, \quad s \in \mathcal{I}_b,\]

where \(\mathcal{I}_b\) denotes the index range in \([0, S_{\text{pack}})\) corresponding to the \(b\)-th segment of length \(S_b\).
In other words, `dim == 0` is a packed S axis, and normalization is applied independently within each segment of that axis.

Key properties¶

- Only `dim == 0` is supported.
- Dispatch is keyed by the input tensor format `x._format`.
- No output buffer is allocated; the operation is performed in-place on `x` and the same tensor is returned.
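For the L2 interpretation given above, a pure-PyTorch out-of-place sketch looks as follows. `packed_l2_normalize` and the explicit `seg_lens` list are illustrative assumptions, since the real operator reads segment boundaries from `x._format` and works in place.

```python
import torch

def packed_l2_normalize(x: torch.Tensor, seg_lens: list[int]) -> torch.Tensor:
    """Out-of-place reference: L2-normalize along dim 0, per segment.

    For every fixed (d_0, d_1) pair, each segment's values along the
    packed axis are divided by their L2 norm within that segment.
    """
    assert x.dim() == 3 and x.shape[0] == sum(seg_lens)
    out = torch.empty_like(x)
    start = 0
    for length in seg_lens:
        seg = x[start:start + length]
        # Norm is taken within the segment only, along the packed axis.
        out[start:start + length] = seg / seg.norm(dim=0, keepdim=True)
        start += length
    return out
```

After normalization, every `(d_0, d_1)` column has unit L2 norm within each segment.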
- _impl_map¶
Dispatch table keyed by `x._format`. Each entry maps to `(callable_impl, resolved_output_format)`.
- dim¶
Normalization axis. Must be `0` and corresponds to the packed \(S_{\text{pack}}\) dimension.
- Type: int
- profile(x, ctx)[source]¶
Validate the input and select an implementation.
Since the operation is in-place, no output buffer is allocated and this method simply returns the input `vTensor` unmodified.

The input is expected to have logical shape `[S_pack, D_0, D_1]`, where `S_pack` is understood as a packed concatenation of \(B\) segments of lengths \(S_b\). The normalization is applied along `dim == 0` within each such segment.

- Parameters:
  - x (vTensor)
  - ctx
- Returns:
The same object as `x`, returned as the output view.
- Return type:
vTensor
- Raises:
AssertionError – If `x` is not a `vTensor`, if its rank is not 3, or if no implementation is registered for `x._format`.
- execute(x, ctx)[source]¶
Execute the in-place normalization and return the input tensor.
Conceptually, this performs segment-wise normalization along the packed axis `dim == 0`, where the segments along that axis correspond to \(S_0, S_1, \dots, S_{B-1}\) and are treated independently.

- Parameters:
  - x (vTensor)
  - ctx
- Returns:
The same tensor instance `x`, after in-place normalization.
- Return type:
torch.Tensor
- Raises:
AssertionError – If `profile()` has not been called and no implementation is available.
- Parameters:
  - dim (int)
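Both operators share the same dispatch pattern: apply an op independently to each segment of the packed axis. Under the assumption that segment lengths are known up front, this is equivalent to splitting, applying, and re-concatenating, as this generic sketch shows (`segmentwise` is a hypothetical helper, not part of the vortex_torch API):

```python
import torch

def segmentwise(op, x: torch.Tensor, seg_lens: list[int]) -> torch.Tensor:
    """Apply op to each segment of the packed dim 0 and re-pack the results.

    op = softmax reproduces Softmax's semantics; op = L2-normalize
    reproduces Normalize's. The real operators avoid the split and the
    concatenation by working in place on the packed tensor.
    """
    parts = torch.split(x, seg_lens, dim=0)
    return torch.cat([op(seg) for seg in parts], dim=0)

x = torch.arange(24.0).reshape(6, 2, 2)
sm = segmentwise(lambda s: torch.softmax(0.5 * s, dim=0), x, [2, 4])
nm = segmentwise(lambda s: s / s.norm(dim=0, keepdim=True), x, [2, 4])
```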