Softmax

API reference: C, C++

The softmax primitive performs softmax along a particular axis on data with arbitrary dimensions. All other axes are treated as independent (batch).
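The sketch below shows one way to create and execute a forward softmax with the C++ API; the 2D nc shape, the sizes, and the variable names are illustrative assumptions rather than anything mandated by the primitive.

    #include <algorithm>
    #include <vector>
    #include "mkldnn.hpp"

    using namespace mkldnn;

    int main() {
        engine eng(engine::kind::cpu, 0);
        stream s(eng);

        // Illustrative 2D data: 8 independent rows, softmax over 16 channels (axis 1).
        const int N = 8, C = 16;
        memory::desc data_md({N, C}, memory::data_type::f32, memory::format_tag::nc);
        memory src_mem(data_md, eng), dst_mem(data_md, eng);

        // Fill the source buffer with some values.
        std::vector<float> src_data(N * C, 1.0f);
        std::copy(src_data.begin(), src_data.end(),
                  static_cast<float *>(src_mem.get_data_handle()));

        // Descriptor -> primitive descriptor -> primitive.
        auto softmax_d = softmax_forward::desc(
                prop_kind::forward_inference, data_md, /* softmax_axis = */ 1);
        auto softmax_pd = softmax_forward::primitive_desc(softmax_d, eng);
        auto softmax = softmax_forward(softmax_pd);

        // Execute and wait for the result.
        softmax.execute(s, {{MKLDNN_ARG_SRC, src_mem}, {MKLDNN_ARG_DST, dst_mem}});
        s.wait();
        return 0;
    }

The same pattern applies to tensors of any number of dimensions; only the softmax axis argument selects which dimension the normalization runs over.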

In general form, the operation is defined by the following formulas:

Forward

\[ dst(\overline{ou}, c, \overline{in}) = \frac {e^{src(\overline{ou}, c, \overline{in}) - \nu(\overline{ou}, \overline{in})}} { \sum\limits_{ic} e^{src(\overline{ou}, ic, \overline{in}) - \nu(\overline{ou}, \overline{in})} }, \]

where

  - \(c\) is the axis over which the softmax is computed (typically the channels axis),
  - \(\overline{ou}\) is the outermost index (to the left of the softmax axis),
  - \(\overline{in}\) is the innermost index (to the right of the softmax axis), and
  - \(\nu\) is used to produce numerically stable results and is defined as \( \nu(\overline{ou}, \overline{in}) = \max\limits_{c} src(\overline{ou}, c, \overline{in}) \).
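For a concrete illustration with a single softmax axis of three elements, \(src = (1, 2, 3)\) gives \(\nu = 3\) and

\[ dst = \frac{(e^{-2}, e^{-1}, e^{0})}{e^{-2} + e^{-1} + e^{0}} \approx (0.090, 0.245, 0.665). \]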

Difference Between Forward Training and Forward Inference

There is no difference between the mkldnn_forward_training and mkldnn_forward_inference propagation kinds.

Backward

The backward propagation computes \(diff\_src(ou, c, in)\), based on \(diff\_dst(ou, c, in)\) and \(dst(ou, c, in)\).
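For reference, the standard softmax gradient that relates these tensors (a textbook derivation, not a statement about the library's internal implementation) is

\[ diff\_src(ou, c, in) = dst(ou, c, in) \cdot \left( diff\_dst(ou, c, in) - \sum\limits_{ic} diff\_dst(ou, ic, in) \cdot dst(ou, ic, in) \right). \]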

Implementation Details

General Notes

N/A

Post-ops and Attributes

The softmax primitive doesn't support any post-ops or attributes.

Data Type Support

The softmax primitive supports the following combinations of data types:

Propagation          Source / Destination
forward / backward   f32
forward              f16

Data Representation

Source, Destination, and Their Gradients

The softmax primitive works with arbitrary data tensors. There is no special meaning associated with any logical dimensions. However, the softmax axis is typically referred to as channels (hence in formulas we use \(c\)).
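To make the axis convention concrete, the sketch below (with an assumed 4D nchw tensor and illustrative sizes) shows that the softmax axis is simply an integer argument: picking axis 1 treats C as the softmax axis and N, H, and W as independent batch dimensions.

    #include "mkldnn.hpp"

    using namespace mkldnn;

    int main() {
        engine eng(engine::kind::cpu, 0);

        // Illustrative 4D activations in nchw layout.
        memory::desc data_md({2, 1000, 4, 4}, memory::data_type::f32,
                             memory::format_tag::nchw);

        // axis = 1: softmax over C; N, H, and W act as independent batch dims.
        auto over_c_d = softmax_forward::desc(prop_kind::forward_inference, data_md, 1);
        auto over_c_pd = softmax_forward::primitive_desc(over_c_d, eng);

        // Any other logical dimension works the same way, e.g. axis = 3 (W).
        auto over_w_d = softmax_forward::desc(prop_kind::forward_inference, data_md, 3);
        auto over_w_pd = softmax_forward::primitive_desc(over_w_d, eng);

        (void)over_c_pd;
        (void)over_w_pd;
        return 0;
    }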

Implementation Limitations

  1. No primitive-specific limitations. Refer to Data Types for limitations related to data type support.

Performance Tips