API reference: C, C++
The softmax primitive performs softmax along a particular axis on data with arbitrary dimensions. All other axes are treated as independent (batch).
In general form, the operation is defined by the following formulas:
Forward
\[ dst(\overline{ou}, c, \overline{in}) = \frac {e^{src(\overline{ou}, c, \overline{in}) - \nu(\overline{ou}, \overline{in})}} { \sum\limits_{ic} e^{src(\overline{ou}, ic, \overline{in}) - \nu(\overline{ou}, \overline{in})} }, \]
where
- \(c\) is the softmax axis,
- \(\overline{ou}\) denotes the outermost indices (to the left of the softmax axis),
- \(\overline{in}\) denotes the innermost indices (to the right of the softmax axis), and
- \(\nu\) is used to produce more numerically accurate results and is defined as:
\[ \nu(\overline{ou}, \overline{in}) = \max\limits_{ic} src(\overline{ou}, ic, \overline{in}) \]
There is no difference between the mkldnn_forward_training and mkldnn_forward_inference propagation kinds.
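For orientation, here is a minimal sketch of creating and executing a forward softmax primitive, assuming the v1.x mkldnn C++ API; the engine setup, tensor sizes, and uninitialized source data are illustrative, not prescriptive:

```cpp
#include <mkldnn.hpp>

using namespace mkldnn;

int main() {
    // CPU engine and stream; device index 0 is an assumption of this sketch.
    engine eng(engine::kind::cpu, 0);
    stream s(eng);

    // 2D tensor A x B, softmax along axis 1 (B). Sizes are illustrative.
    const memory::dim A = 2, B = 1000;
    memory::desc data_md({A, B}, memory::data_type::f32,
            memory::format_tag::ab);

    // Descriptor -> primitive descriptor -> primitive.
    softmax_forward::desc sm_d(prop_kind::forward_inference, data_md,
            /* softmax_axis = */ 1);
    softmax_forward::primitive_desc sm_pd(sm_d, eng);
    softmax_forward sm(sm_pd);

    // Source and destination share the same memory descriptor.
    memory src_m(data_md, eng), dst_m(data_md, eng);

    // Execute; source contents are left uninitialized in this sketch.
    sm.execute(s, {{MKLDNN_ARG_SRC, src_m}, {MKLDNN_ARG_DST, dst_m}});
    s.wait();
    return 0;
}
```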
Backward
The backward propagation computes \(diff\_src(\overline{ou}, c, \overline{in})\) based on \(diff\_dst(\overline{ou}, c, \overline{in})\) and \(dst(\overline{ou}, c, \overline{in})\).
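The formula is not spelled out above; for reference, the standard chain-rule gradient of softmax, which is what one should expect the primitive to compute, is:
\[ diff\_src(\overline{ou}, c, \overline{in}) = dst(\overline{ou}, c, \overline{in}) \cdot \left( diff\_dst(\overline{ou}, c, \overline{in}) - \sum\limits_{ic} diff\_dst(\overline{ou}, ic, \overline{in}) \cdot dst(\overline{ou}, ic, \overline{in}) \right) \]
Note that the backward pass consumes \(dst\) rather than \(src\), so the forward result must be kept around for training.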
Implementation Details
General Notes
N/A
Post-ops and Attributes
The softmax primitive doesn't support any post-ops or attributes.
Data Type Support
The softmax primitive supports the following combinations of data types:
| Propagation        | Source / Destination |
|--------------------|----------------------|
| forward / backward | f32                  |
| forward            | f16                  |
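The data type is selected through the memory descriptor used to create the primitive. A tiny sketch of an f16 forward-inference descriptor, again assuming the v1.x C++ API (actual f16 support also depends on the hardware):

```cpp
#include <mkldnn.hpp>

using namespace mkldnn;

int main() {
    // f16 is listed above for forward only; backward requires f32.
    memory::desc f16_md({2, 1000}, memory::data_type::f16,
            memory::format_tag::ab);
    softmax_forward::desc sm_d(prop_kind::forward_inference, f16_md,
            /* softmax_axis = */ 1);
    (void)sm_d;
    return 0;
}
```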
Data Representation
Source, Destination, and Their Gradients
The softmax primitive works with arbitrary data tensors. There is no special meaning associated with any logical dimensions. However, the softmax axis is typically referred to as channels (hence in formulas we use \(c\)).
Implementation Limitations
- No primitive-specific limitations. Refer to Data Types for limitations related to data type support.
Performance Tips
- Currently, the softmax primitive is optimized for the cases in which the dimension of the softmax axis is physically dense (a sketch contrasting dense and strided layouts follows this list). For instance:
- Optimized: 2D case, tensor \(A \times B\), softmax axis 1 (B), format tag mkldnn_ab
- Optimized: 4D case, tensor \(A \times B \times C \times D\), softmax axis 3 (D), format tag mkldnn_abcd
- Optimized: 4D case, tensor \(A \times B \times C \times D\), softmax axis 1 (B), format tag mkldnn_abcd, and \(C = D = 1\)
- Non-optimized: 2D case, tensor \(A \times B\), softmax axis 0 (A), format tag mkldnn_ab, and \(B \ne 1\)
- Non-optimized: 2D case, tensor \(A \times B\), softmax axis 1 (B), format tag mkldnn_ba, and \(A \ne 1\)
- Non-optimized: 4D case, tensor \(A \times B \times C \times D\), softmax axis 2 (C), format tag mkldnn_acdb, and \(D \cdot B \ne 1\)
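As a sketch of the dense vs. strided distinction above (v1.x C++ API assumed; sizes are illustrative):

```cpp
#include <mkldnn.hpp>

using namespace mkldnn;

int main() {
    const memory::dim A = 128, B = 1000; // illustrative sizes

    // Optimized: format tag ab gives softmax axis 1 (B) a stride of 1,
    // i.e. the softmax axis is physically dense.
    memory::desc dense_md({A, B}, memory::data_type::f32,
            memory::format_tag::ab);

    // Non-optimized: format tag ba gives axis 1 a stride of A, so the
    // softmax axis is strided whenever A != 1.
    memory::desc strided_md({A, B}, memory::data_type::f32,
            memory::format_tag::ba);

    (void)dense_md;
    (void)strided_md;
    return 0;
}
```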