LogSoftmax


General

The logsoftmax primitive performs softmax along a particular axis on data with arbitrary dimensions and then applies the natural logarithm. All other axes are treated as independent (batch).

Forward

In general form, the operation is defined by the following formulas (the variable names follow the standard Naming Conventions). The second form is used because it is more numerically stable; a scalar sketch of it follows the definitions below:

\[ \dst(\overline{ou}, c, \overline{in}) = \ln\left({\frac { e^{\src(\overline{ou}, c, \overline{in}) - \nu(\overline{ou}, \overline{in})} } { \sum\limits_{ic} e^{\src(\overline{ou}, ic, \overline{in}) - \nu(\overline{ou}, \overline{in})} }}\right) = \left(\src(\overline{ou}, c, \overline{in}) - \nu(\overline{ou}, \overline{in})\right) - \ln\left( \sum\limits_{ic} e^{\src(\overline{ou}, ic, \overline{in}) - \nu(\overline{ou}, \overline{in})} \right), \]

where

  • \(c\) is the axis over which the logsoftmax computation is performed,
  • \(\overline{ou}\) is the outermost index (to the left of the logsoftmax axis),
  • \(\overline{in}\) is the innermost index (to the right of the logsoftmax axis), and
  • \(\nu\) is used to produce more accurate results and is defined as:

    \[ \nu(\overline{ou}, \overline{in}) = \max\limits_{ic} \src(\overline{ou}, ic, \overline{in}) \]
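For illustration only, here is a minimal scalar C++ sketch of the second (numerically stable) form applied to one slice along the logsoftmax axis. This is not oneDNN code, just the formula above written out, and the function name is made up:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Numerically stable logsoftmax over one 1D slice, i.e., one fixed
    // (ou, in) combination with the logsoftmax axis laid out contiguously:
    //   dst[c] = (src[c] - nu) - ln(sum_ic exp(src[ic] - nu)),  nu = max_ic src[ic].
    std::vector<float> logsoftmax_1d(const std::vector<float> &src) {
        const float nu = *std::max_element(src.begin(), src.end());

        float sum = 0.f;
        for (float s : src) sum += std::exp(s - nu);
        const float log_sum = std::log(sum);

        std::vector<float> dst(src.size());
        for (std::size_t c = 0; c < src.size(); ++c)
            dst[c] = (src[c] - nu) - log_sum;
        return dst;
    }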

Difference Between Forward Training and Forward Inference

There is no difference between the dnnl_forward_training and dnnl_forward_inference propagation kinds.

Backward

The backward propagation computes \(\diffsrc(ou, c, in)\), based on \(\diffdst(ou, c, in)\) and \(\dst(ou, c, in)\).
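The reference text does not spell out the formula. As a sketch (not normative), applying the chain rule to the forward definition above and using the fact that \(e^{\dst}\) equals the softmax of \(\src\), it reduces to:

\[ \diffsrc(ou, c, in) = \diffdst(ou, c, in) - e^{\dst(ou, c, in)} \cdot \sum\limits_{ic} \diffdst(ou, ic, in). \]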

Execution Arguments

When executed, the inputs and outputs should be mapped to an execution argument index as specified by the following table.

Primitive input/output | Execution argument index
\(\src\)               | DNNL_ARG_SRC
\(\dst\)               | DNNL_ARG_DST
\(\diffsrc\)           | DNNL_ARG_DIFF_SRC
\(\diffdst\)           | DNNL_ARG_DIFF_DST
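As a hedged illustration of the table above using the C++ API, the argument indices become keys of the map passed to dnnl::primitive::execute(). The primitive and memory object names below (logsoftmax_fwd, logsoftmax_bwd, src_mem, and so on) are placeholders assumed to be created elsewhere:

    // Forward: src -> dst.
    logsoftmax_fwd.execute(engine_stream,
            {{DNNL_ARG_SRC, src_mem}, {DNNL_ARG_DST, dst_mem}});

    // Backward: dst and diff_dst -> diff_src.
    logsoftmax_bwd.execute(engine_stream,
            {{DNNL_ARG_DST, dst_mem},
             {DNNL_ARG_DIFF_DST, diff_dst_mem},
             {DNNL_ARG_DIFF_SRC, diff_src_mem}});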

Implementation Details

General Notes

  1. Both forward and backward propagation support in-place operations, meaning that src can be used as input and output for forward propagation, and diff_dst can be used as input and output for backward propagation. In case of in-place operation, the original data will be overwritten.
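A minimal sketch of the in-place forward case, assuming a memory object src_mem created elsewhere: the same object is passed for both arguments, and its contents are overwritten with the result.

    // In-place forward: the destination overwrites the source buffer.
    logsoftmax_fwd.execute(engine_stream,
            {{DNNL_ARG_SRC, src_mem}, {DNNL_ARG_DST, src_mem}});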

Post-ops and Attributes

The logsoftmax primitive does not support any post-ops or attributes.

Data Type Support

The logsoftmax primitive supports the following combinations of data types:

Propagation        | Source / Destination
forward / backward | bf16, f32

Data Representation

Source, Destination, and Their Gradients

The logsoftmax primitive works with arbitrary data tensors. There is no special meaning associated with any logical dimensions. However, the logsoftmax axis is typically referred to as channels (hence in formulas we use \(c\)).

Implementation Limitations

  1. No primitive-specific limitations. Refer to Data Types for limitations related to data type support.
  2. GPU
    • No support.

Performance Tips

  1. Use in-place operations whenever possible.
  2. Currently, the logsoftmax primitive is optimized for the cases where the dimension of the logsoftmax axis is physically dense. For instance (see the sketch after this list):
    • Optimized: 2D case, tensor \(A \times B\), logsoftmax axis 1 (B), format tag dnnl_ab
    • Optimized: 4D case, tensor \(A \times B \times C \times D\), logsoftmax axis 3 (D), format tag dnnl_abcd
    • Optimized: 4D case, tensor \(A \times B \times C \times D\), logsoftmax axis 1 (B), format tag dnnl_abcd, and \(C = D = 1\)
    • Optimized: 4D case, tensor \(A \times B \times C \times D\), logsoftmax axis 1 (B), format tag dnnl_acdb or dnnl_aBcd16b, and \(C \cdot D \ne 1\)
    • Non-optimized: 2D case, tensor \(A \times B\), logsoftmax axis 0 (A), format tag dnnl_ab, and \(B \ne 1\)
    • Non-optimized: 2D case, tensor \(A \times B\), logsoftmax axis 1 (B), format tag dnnl_ba, and \(A \ne 1\)
    • Non-optimized: 4D case, tensor \(A \times B \times C \times D\), logsoftmax axis 2 (C), format tag dnnl_acdb, and \(D \cdot B \ne 1\)
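As a sketch of the difference between the first optimized case and the dnnl_ba non-optimized case above (the dimension values 128 and 1000 are arbitrary), the memory descriptors would look like this; whether the logsoftmax axis is physically dense is determined entirely by the format tag:

    // Optimized: format tag ab keeps axis 1 (B) innermost and physically dense.
    dnnl::memory::desc dense_md({128, 1000},
            dnnl::memory::data_type::f32, dnnl::memory::format_tag::ab);

    // Non-optimized: format tag ba gives axis 1 (B) a stride of A = 128,
    // so the data along the logsoftmax axis is not physically dense.
    dnnl::memory::desc strided_md({128, 1000},
            dnnl::memory::data_type::f32, dnnl::memory::format_tag::ba);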

Examples

Logsoftmax Primitive Example (CPU/GPU)

This C++ API example demonstrates how to create and execute a Logsoftmax primitive in forward training propagation mode.

Key optimizations included in this example:

  • In-place primitive execution;
  • Logsoftmax along axis 1 (C) for 2D tensors.
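A condensed sketch of such an example, assuming the C++ API from dnnl.hpp with the v2.x-era logsoftmax_forward class, a CPU engine, and arbitrary tensor sizes (error handling and result verification are omitted):

    #include <vector>
    #include "dnnl.hpp"

    int main() {
        using namespace dnnl;

        engine eng(engine::kind::cpu, 0);
        stream strm(eng);

        // 2D tensor N x C; logsoftmax along axis 1 (C).
        const memory::dim N = 3, C = 1000;
        memory::desc data_md({N, C}, memory::data_type::f32, memory::format_tag::nc);
        memory data_mem(data_md, eng);

        // Fill the source buffer directly (valid for a CPU engine).
        float *src_ptr = static_cast<float *>(data_mem.get_data_handle());
        for (memory::dim i = 0; i < N * C; ++i)
            src_ptr[i] = float(i % 13) - 6.f;

        // Create the logsoftmax primitive for forward training propagation.
        auto lsm_d = logsoftmax_forward::desc(
                prop_kind::forward_training, data_md, /*axis=*/1);
        auto lsm_pd = logsoftmax_forward::primitive_desc(lsm_d, eng);
        auto lsm = logsoftmax_forward(lsm_pd);

        // In-place execution: the same memory object is source and destination.
        lsm.execute(strm, {{DNNL_ARG_SRC, data_mem}, {DNNL_ARG_DST, data_mem}});
        strm.wait();

        return 0;
    }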