Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN)  0.21.0
Performance library for Deep Learning
Batch Normalization

A primitive to perform batch normalization.

Functions

mkldnn_status_t MKLDNN_API mkldnn_batch_normalization_forward_desc_init (mkldnn_batch_normalization_desc_t *bnrm_desc, mkldnn_prop_kind_t prop_kind, const mkldnn_memory_desc_t *data_desc, float epsilon, unsigned flags)
 Initializes a batch normalization descriptor bnrm_desc for forward propagation using prop_kind (possible values are mkldnn_forward_training and mkldnn_forward_inference), memory descriptor data_desc, normalization parameter epsilon, and flags set using bit flags of type mkldnn_batch_normalization_flag_t.
 
mkldnn_status_t MKLDNN_API mkldnn_batch_normalization_backward_desc_init (mkldnn_batch_normalization_desc_t *bnrm_desc, mkldnn_prop_kind_t prop_kind, const mkldnn_memory_desc_t *diff_data_desc, const mkldnn_memory_desc_t *data_desc, float epsilon, unsigned flags)
 Initializes a batch normalization descriptor bnrm_desc for backward propagation with respect to data and scale-shift parameters using prop_kind (possible values are mkldnn_backward and mkldnn_backward_data), memory descriptors data_desc and diff_data_desc, normalization parameter epsilon, and flags set using bit flags of type mkldnn_batch_normalization_flag_t.
 

Detailed Description

A primitive to perform batch normalization.

\[dst[n][c][h][w] = \gamma[c] \frac{src[n][c][h][w] - \mu[c]}{\sqrt{\sigma[c] + \varepsilon}} + \beta[c],\]

where $\gamma[c]$ and $\beta[c]$ are the per-channel scale and shift (weights and bias),

$\mu[c] = \frac{1}{NHW} \sum\limits_{nhw} src[n][c][h][w]$ and $\sigma[c] = \frac{1}{NHW} \sum\limits_{nhw} (src[n][c][h][w] - \mu[c])^2$ are the per-channel mean and variance,

and $\varepsilon$ is a constant to improve numerical stability.

Both the forward and backward passes support in-place operation; that is, src and dst may point to the same memory for the forward pass, and diff_dst and diff_src may point to the same memory for the backward pass.

Batch normalization supports several flavors controlled by the flags stored in mkldnn_batch_normalization_desc_t. For example, batch normalization can compute the mean and variance on its own or take them as inputs. It can either perform scaling and shifting using the gamma and beta parameters or not. Optionally, it can also perform a fused ReLU, which in the case of training also requires a workspace.
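As a concrete illustration of these flavors (a minimal sketch, not library code), the flag bits defined in mkldnn_types.h can be combined as follows; the helper function and its arguments are illustrative only:

    #include "mkldnn.h"

    /* Illustrative helper (not part of the library): build the flags
     * argument for the batch normalization descriptor initializers. */
    static unsigned bnrm_flags(int use_global_stats, int use_scaleshift,
            int fuse_relu)
    {
        unsigned flags = 0u;
        if (use_global_stats)
            flags |= mkldnn_use_global_stats; /* mean/variance are inputs */
        if (use_scaleshift)
            flags |= mkldnn_use_scaleshift;   /* apply gamma and beta */
        if (fuse_relu)
            flags |= mkldnn_fuse_bn_relu;     /* fused ReLU; training also
                                               * requires a workspace */
        return flags;
    }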

See also
mkldnn_batch_normalization_desc_t

Function Documentation

◆ mkldnn_batch_normalization_forward_desc_init()

mkldnn_status_t MKLDNN_API mkldnn_batch_normalization_forward_desc_init (
        mkldnn_batch_normalization_desc_t *bnrm_desc,
        mkldnn_prop_kind_t prop_kind,
        const mkldnn_memory_desc_t *data_desc,
        float epsilon,
        unsigned flags )

Initializes a batch normalization descriptor bnrm_desc for forward propagation using prop_kind (possible values are mkldnn_forward_training and mkldnn_forward_inference), memory descriptor data_desc, normalization parameter epsilon, and flags set using bit flags of type mkldnn_batch_normalization_flag_t.

Order of inputs:

- src
- mean and variance, if the mkldnn_use_global_stats bit is set in flags
- scale_and_shift, if the mkldnn_use_scaleshift bit is set in flags

Order of outputs:

- dst
- mean and variance, if mkldnn_use_global_stats is not set in flags and prop_kind is mkldnn_forward_training
- workspace, if mkldnn_fuse_bn_relu is set in flags and prop_kind is mkldnn_forward_training

Note
In-place operation is supported; that is, dst points to the same memory as src.
See also
mkldnn_batch_normalization_desc_t
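
A minimal usage sketch, assuming an f32 tensor in nchw format; the shape (2x16x32x32) and epsilon (1e-5f) are arbitrary placeholder values, and only the two init calls are library API:

    #include <stdio.h>
    #include "mkldnn.h"

    int main(void)
    {
        /* Describe the data tensor: N=2, C=16, H=32, W=32, f32, nchw. */
        mkldnn_dims_t dims = {2, 16, 32, 32};
        mkldnn_memory_desc_t data_md;
        if (mkldnn_memory_desc_init(&data_md, 4, dims, mkldnn_f32,
                    mkldnn_nchw) != mkldnn_success)
            return 1;

        /* Forward training flavor: compute mean/variance inside the
         * primitive and apply the gamma/beta scale-shift. */
        mkldnn_batch_normalization_desc_t bnrm_desc;
        if (mkldnn_batch_normalization_forward_desc_init(&bnrm_desc,
                    mkldnn_forward_training, &data_md, 1e-5f,
                    mkldnn_use_scaleshift) != mkldnn_success)
            return 1;

        printf("forward batch normalization descriptor initialized\n");
        return 0;
    }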

◆ mkldnn_batch_normalization_backward_desc_init()

mkldnn_status_t MKLDNN_API mkldnn_batch_normalization_backward_desc_init (
        mkldnn_batch_normalization_desc_t *bnrm_desc,
        mkldnn_prop_kind_t prop_kind,
        const mkldnn_memory_desc_t *diff_data_desc,
        const mkldnn_memory_desc_t *data_desc,
        float epsilon,
        unsigned flags )

Initializes a batch normalization descriptor bnrm_desc for backward propagation with respect to data and scale-shift parameters using prop_kind (possible values are mkldnn_backward and mkldnn_backward_data), memory descriptors data_desc and diff_data_desc, normalization parameter epsilon, and flags set using bit flags of type mkldnn_batch_normalization_flag_t.

Order of inputs:

- src
- mean and variance
- diff_dst
- scale_and_shift, if the mkldnn_use_scaleshift bit is set in flags
- workspace, if the mkldnn_fuse_bn_relu bit is set in flags

Order of outputs:

- diff_src
- diff_scale_and_shift, if the mkldnn_use_scaleshift bit is set in flags and prop_kind is mkldnn_backward

Note
In-place operation is supported; that is, diff_src points to the same memory as diff_dst.
See also
mkldnn_batch_normalization_desc_t
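
A matching sketch for the backward descriptor, reusing the same placeholder shape and epsilon as the forward sketch above; the wrapper function is illustrative, not part of the library:

    #include "mkldnn.h"

    /* Illustrative wrapper (not part of the library): initialize a backward
     * batch normalization descriptor for a 2x16x32x32 f32 nchw tensor. */
    static mkldnn_status_t init_bnrm_bwd(
            mkldnn_batch_normalization_desc_t *bwd_desc)
    {
        mkldnn_dims_t dims = {2, 16, 32, 32};
        mkldnn_memory_desc_t data_md, diff_data_md;
        mkldnn_status_t s;

        s = mkldnn_memory_desc_init(&data_md, 4, dims, mkldnn_f32,
                mkldnn_nchw);
        if (s != mkldnn_success) return s;
        s = mkldnn_memory_desc_init(&diff_data_md, 4, dims, mkldnn_f32,
                mkldnn_nchw);
        if (s != mkldnn_success) return s;

        /* mkldnn_backward also computes diff_scale_and_shift when
         * mkldnn_use_scaleshift is set; mkldnn_backward_data would propagate
         * gradients with respect to the data only. */
        return mkldnn_batch_normalization_backward_desc_init(bwd_desc,
                mkldnn_backward, &diff_data_md, &data_md, 1e-5f,
                mkldnn_use_scaleshift);
    }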