Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN)  1.0.4
Performance library for Deep Learning
Eltwise

API reference: C, C++

The eltwise primitive applies an operation to every element of the tensor:

\[ dst(\overline{x}) = Operation(src(\overline{x})), \]

where \(\overline{x} = (x_n, \ldots, x_0)\).

The following operations are supported:

| Operation | MKL-DNN algorithm kind | Formula |
| --- | --- | --- |
| abs | mkldnn_eltwise_abs | \( f(x) = \begin{cases} x & \text{if}\ x > 0 \\ -x & \text{if}\ x \leq 0 \end{cases} \) |
| bounded_relu | mkldnn_eltwise_bounded_relu | \( f(x) = \begin{cases} \alpha & \text{if}\ x > \alpha \\ x & \text{if}\ 0 < x \leq \alpha \\ 0 & \text{if}\ x \leq 0 \end{cases} \) |
| elu | mkldnn_eltwise_elu | \( f(x) = \begin{cases} x & \text{if}\ x > 0 \\ \alpha (e^x - 1) & \text{if}\ x \leq 0 \end{cases} \) |
| exp | mkldnn_eltwise_exp | \( f(x) = e^x \) |
| gelu | mkldnn_eltwise_gelu | \( f(x) = 0.5 x (1 + \tanh[\sqrt{\frac{2}{\pi}} (x + 0.044715 x^3)]) \) |
| linear | mkldnn_eltwise_linear | \( f(x) = \alpha x + \beta \) |
| logistic | mkldnn_eltwise_logistic | \( f(x) = \frac{1}{1+e^{-x}} \) |
| relu | mkldnn_eltwise_relu | \( f(x) = \begin{cases} x & \text{if}\ x > 0 \\ \alpha x & \text{if}\ x \leq 0 \end{cases} \) |
| soft_relu | mkldnn_eltwise_soft_relu | \( f(x) = \log_{e}(1+e^x) \) |
| sqrt | mkldnn_eltwise_sqrt | \( f(x) = \sqrt{x} \) |
| square | mkldnn_eltwise_square | \( f(x) = x^2 \) |
| tanh | mkldnn_eltwise_tanh | \( f(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}} \) |
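
As an illustration only, the following C++ sketch creates and executes a forward ReLU eltwise primitive. The tensor dimensions, memory format, and variable names are hypothetical; only the eltwise-related calls correspond to the C++ API referenced above.

```cpp
#include <unordered_map>
#include "mkldnn.hpp"
using namespace mkldnn;

int main() {
    engine eng(engine::kind::cpu, 0); // CPU engine, index 0
    stream strm(eng);

    // Hypothetical 4D activations in NCHW layout.
    memory::dims dims = {2, 16, 7, 7};
    auto data_md = memory::desc(dims, memory::data_type::f32,
                                memory::format_tag::nchw);
    auto src_mem = memory(data_md, eng);
    auto dst_mem = memory(data_md, eng);

    // Operation descriptor: ReLU with alpha = 0 (negative slope); beta is unused.
    // forward_training is chosen so the primitive descriptor can also serve as a
    // hint for a backward primitive.
    auto eltwise_d = eltwise_forward::desc(prop_kind::forward_training,
            algorithm::eltwise_relu, data_md, /*alpha=*/0.f, /*beta=*/0.f);
    auto eltwise_pd = eltwise_forward::primitive_desc(eltwise_d, eng);

    // Create and execute the primitive.
    auto eltwise = eltwise_forward(eltwise_pd);
    eltwise.execute(strm, {{MKLDNN_ARG_SRC, src_mem},
                           {MKLDNN_ARG_DST, dst_mem}});
    strm.wait();
    return 0;
}
```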

Difference Between Forward Training and Forward Inference

There is no difference between the mkldnn_forward_training and mkldnn_forward_inference propagation kinds.

Backward

The backward propagation computes \(diff\_src(\overline{x})\), based on \(diff\_dst(\overline{x})\) and \(src(\overline{x})\).
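
As a rough sketch (not a complete program), the backward primitive could be set up as follows with the C++ API. The function and variable names are illustrative; fwd_pd is assumed to be an eltwise_forward::primitive_desc created with prop_kind::forward_training (such as the one in the forward sketch above), and data_md is used for both the data and the diff data descriptors (see the notes below).

```cpp
// Sketch: set up and run backward eltwise given the forward primitive
// descriptor (used as a hint) and a common memory descriptor.
void relu_backward_example(engine &eng, stream &strm,
        const eltwise_forward::primitive_desc &fwd_pd,
        const memory::desc &data_md,
        memory &src_mem, memory &diff_dst_mem, memory &diff_src_mem) {
    auto bwd_d = eltwise_backward::desc(algorithm::eltwise_relu,
            /*diff_data_desc=*/data_md, /*data_desc=*/data_md,
            /*alpha=*/0.f, /*beta=*/0.f);
    auto bwd_pd = eltwise_backward::primitive_desc(bwd_d, eng, fwd_pd);

    eltwise_backward(bwd_pd).execute(strm,
            {{MKLDNN_ARG_SRC, src_mem},             // original forward input
             {MKLDNN_ARG_DIFF_DST, diff_dst_mem},   // gradient w.r.t. dst
             {MKLDNN_ARG_DIFF_SRC, diff_src_mem}}); // gradient w.r.t. src
}
```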

Implementation Details

General Notes

  1. All eltwise primitives have a common initialization function (e.g., mkldnn::eltwise_forward::desc::desc()) which takes both parameters \(\alpha\) and \(\beta\). A parameter is ignored if the operation does not use it.
  2. The memory format and data type for src and dst are assumed to be the same, and in the API they are typically referred to as data (e.g., see data_desc in mkldnn::eltwise_forward::desc::desc()). The same holds for diff_src and diff_dst; the corresponding memory descriptor is referred to as diff_data_desc.
  3. Both forward and backward propagation support in-place operations, meaning that src can be used as both input and output for forward propagation, and diff_dst can be used as both input and output for backward propagation. In the case of an in-place operation, the original data is overwritten (see the sketch after the note below).
  4. For some operations, computing the backward propagation based on \(dst(\overline{x})\) rather than on \(src(\overline{x})\) can be beneficial for performance. For other operations, however, this is simply impossible, so for generality the library always requires \(src\).
Note
For the ReLU operation with \(\alpha = 0\), \(dst\) can be used instead of \(src\) when backward propagation is computed. This enables several performance optimizations (see the tips below).
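
For example, the in-place operation mentioned in item 3 above simply means that the same memory object is passed for both the source and the destination. A minimal sketch, reusing the hypothetical eltwise, strm, and src_mem objects from the forward example earlier:

```cpp
// In-place forward eltwise: src_mem is both input and output,
// so its original contents are overwritten with the result.
eltwise.execute(strm, {{MKLDNN_ARG_SRC, src_mem},
                       {MKLDNN_ARG_DST, src_mem}});
strm.wait();
```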

Data Type Support

The eltwise primitive supports the following combinations of data types:

| Propagation | Source / Destination | Intermediate data type |
| --- | --- | --- |
| forward / backward | f32 | f32 |
| forward | f16 | f16 |
| forward | s32 / s8 / u8 | f32 |
Warning
There might be hardware and/or implementation specific restrictions. Check Implementation Limitations section below.

Here the intermediate data type means that the values coming in are first converted to the intermediate data type, then the operation is applied, and finally the result is converted to the output data type.
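
Informally, for the int8 configuration in the table above this means

\[ dst(\overline{x}) = convert\_dst\_type\left(Operation\left(convert\_f32(src(\overline{x}))\right)\right), \]

where \(convert\_f32\) and \(convert\_dst\_type\) are descriptive names (not API entities) for the conversion to the f32 intermediate data type and back to the destination data type.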

Data Representation

The eltwise primitive works with arbitrary data tensors. There is no special meaning associated with any logical dimensions.

Post-ops and Attributes

The eltwise primitive doesn't support any post-ops or attributes.

Implementation Limitations

  1. No primitive-specific limitations. Refer to Data Types for limitations related to data type support.

Performance Tips

  1. For backward propagation, use the same memory format for src, diff_dst, and diff_src (the formats of diff_dst and diff_src are always the same because of the API). Different formats are functionally supported but lead to highly suboptimal performance.
  2. Use in-place operations whenever possible.
  3. As mentioned above, for the ReLU operation with \(\alpha = 0\), one can use the \(dst\) tensor instead of \(src\). This enables the following potential optimizations for training:
    • ReLU can be safely done in-place.
    • Moreover, ReLU can be fused as a post-op with the previous operation if that operation does not require its \(dst\) to compute the backward propagation (for example, convolution satisfies this condition); see the sketch below.
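
A sketch of what the fusion in the last bullet might look like with the C++ API; conv_d stands for an already created mkldnn::convolution_forward::desc and is not shown here.

```cpp
// Sketch: fuse ReLU into a convolution as a post-op via primitive attributes.
post_ops ops;
ops.append_eltwise(/*scale=*/1.f, algorithm::eltwise_relu,
                   /*alpha=*/0.f, /*beta=*/0.f);

primitive_attr attr;
attr.set_post_ops(ops);

// The attribute is passed when creating the convolution primitive descriptor,
// so ReLU is applied to the convolution result before it is written to dst.
auto conv_pd = convolution_forward::primitive_desc(conv_d, attr, eng);
```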