oneAPI Deep Neural Network Library (oneDNN)
Performance library for Deep Learning
1.96.0
Shuffle

General

The shuffle primitive shuffles data along the shuffle axis (here designated as \(C\)) with the group parameter \(G\). Namely, the shuffle axis is viewed as a 2D tensor of size \((\frac{C}{G} \times G)\), which is transposed to \((G \times \frac{C}{G})\). Variable names follow the standard Naming Conventions.

The formal definition is shown below:

Forward

\[ \dst(\overline{ou}, c, \overline{in}) = \src(\overline{ou}, c', \overline{in}) \]

where

  • the \(c\) dimension is called the shuffle axis,
  • \(G\) is the group_size,
  • \(\overline{ou}\) denotes the outermost indices (to the left of the shuffle axis),
  • \(\overline{in}\) denotes the innermost indices (to the right of the shuffle axis), and
  • \(c'\) and \(c\) relate to each other as defined by the system:

    \[ \begin{cases} c &= u + v\frac{C}{G}, \\ c' &= uG + v, \\ \end{cases} \]

Here, \(0 \leq u < \frac{C}{G}\) and \(0 \leq v < G\).
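
For example, with \(C = 6\) and \(G = 2\), \(\dst\) channels \((0, 1, 2, 3, 4, 5)\) are taken from \(\src\) channels \((0, 2, 4, 1, 3, 5)\). The following plain C++ sketch of the index mapping is for illustration only and is not the library implementation:

    #include <cstdio>
    #include <vector>

    // Reference channel shuffle along a single axis: dst[c] = src[c'],
    // where c = u + v * (C / G) and c' = u * G + v.
    std::vector<int> shuffle_axis(const std::vector<int> &src, int G) {
        const int C = static_cast<int>(src.size());
        std::vector<int> dst(C);
        for (int v = 0; v < G; ++v)
            for (int u = 0; u < C / G; ++u)
                dst[u + v * (C / G)] = src[u * G + v];
        return dst;
    }

    int main() {
        // C = 6, G = 2: prints "0 2 4 1 3 5".
        for (int c : shuffle_axis({0, 1, 2, 3, 4, 5}, 2)) std::printf("%d ", c);
        std::printf("\n");
    }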

Difference Between Forward Training and Forward Inference

There is no difference between the dnnl_forward_training and dnnl_forward_inference propagation kinds.

Backward

The backward propagation computes \(\diffsrc(\overline{ou}, c, \overline{in})\) based on \(\diffdst(\overline{ou}, c, \overline{in})\).

Essentially, backward propagation is the same as forward propagation with \(G\) replaced by \(C / G\).
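
As a sketch, creating the backward primitive with the v1.x C++ API referenced on this page looks as follows; here eng, strm, diff_data_md, axis, group_size, the memory objects, and the forward primitive descriptor fwd_pd are assumed to already exist:

    // Sketch only: shuffle backward is built from diff_data_md and a hint
    // from the forward primitive descriptor (fwd_pd).
    dnnl::shuffle_backward::desc bwd_d(diff_data_md, axis, group_size);
    dnnl::shuffle_backward::primitive_desc bwd_pd(bwd_d, eng, fwd_pd);
    auto bwd = dnnl::shuffle_backward(bwd_pd);
    bwd.execute(strm, {{DNNL_ARG_DIFF_DST, diff_dst_mem},
            {DNNL_ARG_DIFF_SRC, diff_src_mem}});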

Execution Arguments

When executed, the inputs and outputs should be mapped to an execution argument index as specified by the following table.

Primitive input/output   Execution argument index
\(\src\)                 DNNL_ARG_SRC
\(\dst\)                 DNNL_ARG_DST
\(\diffsrc\)             DNNL_ARG_DIFF_SRC
\(\diffdst\)             DNNL_ARG_DIFF_DST
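
As an illustration, below is a minimal forward sketch, assuming the v1.x C++ API referenced on this page (see dnnl::shuffle_forward::desc::desc() below); error handling is omitted:

    #include "dnnl.hpp"

    // Minimal sketch of a forward shuffle, assuming the v1.x C++ API
    // referenced on this page; not a complete application.
    void shuffle_example() {
        using namespace dnnl;
        engine eng(engine::kind::cpu, 0);
        stream strm(eng);

        const int axis = 1, group_size = 2; // shuffle channels in groups of 2
        memory::desc data_md({2, 16, 7, 7}, memory::data_type::f32,
                memory::format_tag::nchw);

        shuffle_forward::desc fwd_d(
                prop_kind::forward_inference, data_md, axis, group_size);
        shuffle_forward::primitive_desc fwd_pd(fwd_d, eng);

        memory src_mem(data_md, eng), dst_mem(data_md, eng);
        shuffle_forward(fwd_pd).execute(
                strm, {{DNNL_ARG_SRC, src_mem}, {DNNL_ARG_DST, dst_mem}});
        strm.wait();
    }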

Implementation Details

General Notes

  1. The memory format and data type for src and dst are assumed to be the same, and in the API they are typically referred to as data (e.g., see data_desc in dnnl::shuffle_forward::desc::desc()). The same holds for diff_src and diff_dst; their common memory descriptor is referred to as diff_data_desc.

Data Types

The shuffle primitive supports the following combinations of data types:

Propagation          Source / Destination
forward / backward   f32, bf16
forward              s32, s8, u8

Warning
There might be hardware- and/or implementation-specific restrictions. Check the Implementation Limitations section below.

Data Layouts

The shuffle primitive works with arbitrary data tensors. There is no special meaning associated with any logical dimensions. However, the shuffle axis is typically referred to as channels (hence in formulas we use \(c\)).

Shuffle operations typically appear in CNN topologies. Hence, in the library the shuffle primitive is optimized for the corresponding memory formats:

Spatial   Logical tensor   Shuffle axis   Implementations optimized for memory formats
2D        NCHW             1 (C)          dnnl_nchw (dnnl_abcd), dnnl_nhwc (dnnl_acdb), optimized^
3D        NCDHW            1 (C)          dnnl_ncdhw (dnnl_abcde), dnnl_ndhwc (dnnl_acdeb), optimized^

Here optimized^ means the format that comes out of any preceding compute-intensive primitive.
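
For instance, a sketch of describing the data in one of the optimized formats (NHWC) from the table above:

    // Sketch: an f32 tensor with logical NCHW dimensions laid out as
    // dnnl_nhwc (dnnl_acdb), one of the optimized formats listed above.
    dnnl::memory::desc data_md({2, 16, 7, 7}, dnnl::memory::data_type::f32,
            dnnl::memory::format_tag::nhwc);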

Post-ops and Attributes

The shuffle primitive does not support any post-ops or attributes.

Implementation Limitations

  1. Refer to Data Types for limitations related to data types support.

Performance Tips

N/A

Examples

Engine    Name                        Comments
CPU/GPU   Shuffle Primitive Example   This C++ API example demonstrates how to create and execute a Shuffle primitive. Key optimizations included in this example: shuffle along axis 1 (channels).