Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN)  0.21.0
Performance library for Deep Learning
A primitive to compute the common recurrent layer. More...

Functions

mkldnn_status_t MKLDNN_API mkldnn_rnn_cell_desc_init (mkldnn_rnn_cell_desc_t *rnn_cell_desc, mkldnn_alg_kind_t kind, mkldnn_alg_kind_t f, unsigned int flags, float alpha, float clipping)
 Initializes a recurrent cell descriptor rnn_cell_desc using kind (possible values are mkldnn_vanilla_rnn, mkldnn_vanilla_lstm, mkldnn_vanilla_gru, and mkldnn_gru_linear_before_reset), f (possible values are mkldnn_eltwise_relu and mkldnn_eltwise_tanh), flags, alpha, and clipping. More...
 
int MKLDNN_API mkldnn_rnn_cell_get_gates_count (const mkldnn_rnn_cell_desc_t *rnn_cell_desc)
 Returns the number of gates of a particular rnn_cell_desc. More...
 
int MKLDNN_API mkldnn_rnn_cell_get_states_count (const mkldnn_rnn_cell_desc_t *rnn_cell_desc)
 Returns the number of states of a particular rnn_cell_desc. More...
 
mkldnn_status_t MKLDNN_API mkldnn_primitive_attr_set_rnn_data_qparams (mkldnn_primitive_attr_t attr, const float scale, const float shift)
 Sets quantization scale and shift for RNN data tensors. More...
 
mkldnn_status_t MKLDNN_API mkldnn_primitive_attr_set_rnn_weights_qparams (mkldnn_primitive_attr_t attr, int count, int mask, const float *weights_scales)
 Sets quantization scales weights_scales for RNN weights tensors. More...
 
mkldnn_status_t MKLDNN_API mkldnn_rnn_forward_desc_init (mkldnn_rnn_desc_t *rnn_desc, mkldnn_prop_kind_t prop_kind, const mkldnn_rnn_cell_desc_t *rnn_cell_desc, const mkldnn_rnn_direction_t direction, const mkldnn_memory_desc_t *src_layer_desc, const mkldnn_memory_desc_t *src_iter_desc, const mkldnn_memory_desc_t *weights_layer_desc, const mkldnn_memory_desc_t *weights_iter_desc, const mkldnn_memory_desc_t *bias_desc, const mkldnn_memory_desc_t *dst_layer_desc, const mkldnn_memory_desc_t *dst_iter_desc)
 Initializes an RNN descriptor rnn_desc for forward propagation using prop_kind, rnn_cell_desc, direction, and memory descriptors. More...
 
mkldnn_status_t MKLDNN_API mkldnn_rnn_backward_desc_init (mkldnn_rnn_desc_t *rnn_desc, mkldnn_prop_kind_t prop_kind, const mkldnn_rnn_cell_desc_t *rnn_cell_desc, const mkldnn_rnn_direction_t direction, const mkldnn_memory_desc_t *src_layer_desc, const mkldnn_memory_desc_t *src_iter_desc, const mkldnn_memory_desc_t *weights_layer_desc, const mkldnn_memory_desc_t *weights_iter_desc, const mkldnn_memory_desc_t *bias_desc, const mkldnn_memory_desc_t *dst_layer_desc, const mkldnn_memory_desc_t *dst_iter_desc, const mkldnn_memory_desc_t *diff_src_layer_desc, const mkldnn_memory_desc_t *diff_src_iter_desc, const mkldnn_memory_desc_t *diff_weights_layer_desc, const mkldnn_memory_desc_t *diff_weights_iter_desc, const mkldnn_memory_desc_t *diff_bias_desc, const mkldnn_memory_desc_t *diff_dst_layer, const mkldnn_memory_desc_t *diff_dst_iter_desc)
 Initializes an RNN descriptor rnn_desc for backward propagation using prop_kind, rnn_cell_desc, direction, and memory descriptors. More...
 

Detailed Description

A primitive to compute the common recurrent layer.

Todo:
add additional description for the group

Function Documentation

◆ mkldnn_rnn_cell_desc_init()

mkldnn_status_t MKLDNN_API mkldnn_rnn_cell_desc_init ( mkldnn_rnn_cell_desc_t *rnn_cell_desc,
mkldnn_alg_kind_t  kind,
mkldnn_alg_kind_t  f,
unsigned int  flags,
float  alpha,
float  clipping 
)

Initializes a recurrent cell descriptor rnn_cell_desc using kind (possible values are mkldnn_vanilla_rnn, mkldnn_vanilla_lstm, mkldnn_vanilla_gru, and mkldnn_gru_linear_before_reset), f (possible values are mkldnn_eltwise_relu and mkldnn_eltwise_tanh), flags, alpha, and clipping.
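For illustration only, a minimal sketch of creating a vanilla RNN cell descriptor with a tanh activation; the variable name cell_d and the zero flags, alpha, and clipping values are illustrative assumptions, not requirements of the API:

// hypothetical example: vanilla RNN cell with tanh activation, no flags, no clipping
mkldnn_rnn_cell_desc_t cell_d;
mkldnn_status_t status = mkldnn_rnn_cell_desc_init(&cell_d,
        mkldnn_vanilla_rnn, mkldnn_eltwise_tanh,
        0u /* flags */, 0.f /* alpha */, 0.f /* clipping */);
if (status != mkldnn_success) {
    // handle the error
}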

◆ mkldnn_rnn_cell_get_gates_count()

int MKLDNN_API mkldnn_rnn_cell_get_gates_count ( const mkldnn_rnn_cell_desc_t *rnn_cell_desc)

Returns the number of gates of a particular rnn_cell_desc.

◆ mkldnn_rnn_cell_get_states_count()

int MKLDNN_API mkldnn_rnn_cell_get_states_count ( const mkldnn_rnn_cell_desc_t *rnn_cell_desc)

Returns the number of states of a particular rnn_cell_desc.
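
For example, the two query functions can be used to size gate-dependent tensors; a hedged sketch (cell_d is assumed to be a previously initialized mkldnn_rnn_cell_desc_t, and the counts in the comments are for reference):

int n_gates  = mkldnn_rnn_cell_get_gates_count(&cell_d);  // 1 for vanilla RNN, 3 for GRU, 4 for LSTM
int n_states = mkldnn_rnn_cell_get_states_count(&cell_d); // 1 for vanilla RNN and GRU, 2 for LSTM
// e.g., a bias tensor in ldgo layout holds l * d * n_gates * dic elements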

◆ mkldnn_primitive_attr_set_rnn_data_qparams()

mkldnn_status_t MKLDNN_API mkldnn_primitive_attr_set_rnn_data_qparams ( mkldnn_primitive_attr_t  attr,
const float  scale,
const float  shift 
)

Sets quantization scale and shift for RNN data tensors.

For performance reasons, the low-precision configuration of the RNN primitive expects input activations to have the unsigned int8 data type. The scale and shift used to quantize floating-point data to unsigned integers must be passed to the RNN primitive using attributes. Example usage:

// rnn parameters
int l = 2, t = 2, mb = 32, sic = 32, slc = 32, dic = 32, dlc = 32;
// activations quantization parameters
float scale = ..., shift = ...;
// create default attributes
mkldnn_primitive_attr_t attr;
mkldnn_primitive_attr_create(&attr);
// set scale and shift for int8 quantization of activation
mkldnn_primitive_attr_set_rnn_data_qparams(attr, scale, shift);
// create & configure rnn op_desc
mkldnn_rnn_desc_t rnn_d;
mkldnn_primitive_desc_t rnn_pd;
mkldnn_primitive_desc_create_v2(&rnn_pd, &rnn_d, attr, NULL);
Note
Quantization scale and shift are common for src_layer, src_iter, dst_iter and dst_layer.

◆ mkldnn_primitive_attr_set_rnn_weights_qparams()

mkldnn_status_t MKLDNN_API mkldnn_primitive_attr_set_rnn_weights_qparams ( mkldnn_primitive_attr_t  attr,
int  count,
int  mask,
const float *  weights_scales 
)

Sets quantization scales weights_scales for RNN weights tensors.

The low-precision configuration of the RNN primitive expects input weights to have the signed int8 data type. The scales used to quantize floating-point data to signed integers must be passed to the RNN primitive using attributes. The mask argument defines the correspondence between the output tensor dimensions and the weights_scales array: set the i-th bit of mask to 1 to use a dedicated scaling factor for each slice of the output tensor over the i-th dimension, or set mask to 0 to use a common scaling factor for the whole output tensor. Example usage:

// rnn parameters
int l = 2, t = 2, mb = 32, sic = 32, slc = 32, dic = 32, dlc = 32;
// unique output scales per output channel
float weights_scales[dic * n_gates] = { ... };
// mask that specifies the last two dimensions of the ldigo format
int mask = 0x3;
// create default attributes
mkldnn_primitive_attr_t attr;
mkldnn_primitive_attr_create(&attr);
// set output channel-wise weights scales
mkldnn_primitive_attr_set_rnn_weights_qparams(attr, dic * n_gates, mask,
        weights_scales);
// create & configure rnn op_desc
mkldnn_rnn_desc_t rnn_d;
mkldnn_primitive_desc_t rnn_pd;
mkldnn_primitive_desc_create_v2(&rnn_pd, &rnn_d, attr, NULL);
Note
The dimension order is always native and does not depend on the actual layout used. For example, 5 dimensional weights always have (l, d, i, g, o) logical dimension ordering.
Quantization scales are common for weights_layer and weights_iter.
There is no way to check that count corresponds to mask until an actual primitive descriptor is created, so it is the user's responsibility to set proper values. The following formula must hold:

\[count = \prod\limits_{d \in mask} output.dims[d]\]
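For the snippet above, where the scales are per output channel and the mask covers the last two (g and o) dimensions of the ldigo weights, this reduces to count = dims[g] * dims[o] = n_gates * dic, which is exactly the length of the weights_scales array passed to the call.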

◆ mkldnn_rnn_forward_desc_init()

mkldnn_status_t MKLDNN_API mkldnn_rnn_forward_desc_init ( mkldnn_rnn_desc_t *rnn_desc,
mkldnn_prop_kind_t  prop_kind,
const mkldnn_rnn_cell_desc_t *rnn_cell_desc,
const mkldnn_rnn_direction_t  direction,
const mkldnn_memory_desc_t *src_layer_desc,
const mkldnn_memory_desc_t *src_iter_desc,
const mkldnn_memory_desc_t *weights_layer_desc,
const mkldnn_memory_desc_t *weights_iter_desc,
const mkldnn_memory_desc_t *bias_desc,
const mkldnn_memory_desc_t *dst_layer_desc,
const mkldnn_memory_desc_t *dst_iter_desc 
)

Initializes an RNN descriptor rnn_desc for forward propagation using prop_kind, rnn_cell_desc, direction, and memory descriptors.

Note
If prop_kind equals mkldnn_forward_training, you must query a workspace memory descriptor before creating the primitive.

src_iter_desc, bias_desc, and dst_iter_desc are allowed to either be NULL or point to a zero memory descriptor, which would indicate that the RNN primitive should not use them.

Note
All memory descriptors except src_iter_desc are allowed to be initialized with mkldnn_any value of format_kind.

Order of inputs:

Order of outputs:
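
As an illustration, a minimal sketch of creating a forward inference descriptor for a single-layer, unidirectional vanilla RNN; all sizes, variable names, and the choice of f32 data with the tnc/any formats are illustrative assumptions, and error handling is omitted:

// illustrative sizes: 10 time steps, batch 32, 64 input and hidden channels,
// 1 layer, 1 direction, 1 gate (vanilla RNN)
mkldnn_memory_desc_t src_layer_md, weights_layer_md, weights_iter_md, dst_layer_md;
mkldnn_dims_t src_dims = {10, 32, 64};          // {t, n, c}
mkldnn_dims_t weights_dims = {1, 1, 64, 1, 64}; // {l, d, input_c, gates, output_c}
mkldnn_dims_t dst_dims = {10, 32, 64};          // {t, n, c}
mkldnn_memory_desc_init(&src_layer_md, 3, src_dims, mkldnn_f32, mkldnn_tnc);
mkldnn_memory_desc_init(&weights_layer_md, 5, weights_dims, mkldnn_f32, mkldnn_any);
mkldnn_memory_desc_init(&weights_iter_md, 5, weights_dims, mkldnn_f32, mkldnn_any);
mkldnn_memory_desc_init(&dst_layer_md, 3, dst_dims, mkldnn_f32, mkldnn_tnc);

mkldnn_rnn_cell_desc_t cell_d;
mkldnn_rnn_cell_desc_init(&cell_d, mkldnn_vanilla_rnn, mkldnn_eltwise_tanh, 0u, 0.f, 0.f);

mkldnn_rnn_desc_t rnn_fwd_d;
mkldnn_rnn_forward_desc_init(&rnn_fwd_d, mkldnn_forward_inference, &cell_d,
        mkldnn_unidirectional_left2right,
        &src_layer_md, NULL /* src_iter */, &weights_layer_md, &weights_iter_md,
        NULL /* bias */, &dst_layer_md, NULL /* dst_iter */);

Since prop_kind is mkldnn_forward_inference here, no workspace query is needed; with mkldnn_forward_training, a workspace memory descriptor would have to be queried before creating the primitive, as noted above.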

◆ mkldnn_rnn_backward_desc_init()

mkldnn_status_t MKLDNN_API mkldnn_rnn_backward_desc_init ( mkldnn_rnn_desc_t *rnn_desc,
mkldnn_prop_kind_t  prop_kind,
const mkldnn_rnn_cell_desc_t *rnn_cell_desc,
const mkldnn_rnn_direction_t  direction,
const mkldnn_memory_desc_t *src_layer_desc,
const mkldnn_memory_desc_t *src_iter_desc,
const mkldnn_memory_desc_t *weights_layer_desc,
const mkldnn_memory_desc_t *weights_iter_desc,
const mkldnn_memory_desc_t *bias_desc,
const mkldnn_memory_desc_t *dst_layer_desc,
const mkldnn_memory_desc_t *dst_iter_desc,
const mkldnn_memory_desc_t *diff_src_layer_desc,
const mkldnn_memory_desc_t *diff_src_iter_desc,
const mkldnn_memory_desc_t *diff_weights_layer_desc,
const mkldnn_memory_desc_t *diff_weights_iter_desc,
const mkldnn_memory_desc_t *diff_bias_desc,
const mkldnn_memory_desc_t *diff_dst_layer,
const mkldnn_memory_desc_t *diff_dst_iter_desc 
)

Initializes an RNN descriptor rnn_desc for backward propagation using prop_kind, rnn_cell_desc, direction, and memory descriptors.

Note
All memory descriptors are allowed to be initialized with mkldnn_any value of format_kind.

src_iter_desc (simultaneously with diff_src_iter_desc), bias_desc (simultaneously with diff_bias_desc), and dst_iter_desc (simultaneously with diff_dst_iter_desc) are allowed to either be NULL or point to a zero memory descriptor, which would indicate that the RNN primitive should not use them.

Order of inputs:

Order of outputs:
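
For completeness, a hedged sketch of the backward call shape, reusing cell_d and the forward memory descriptors assumed in the forward example above and adding analogously created diff_* descriptors for the gradients (all of which may also be left as mkldnn_any and queried from the resulting primitive descriptor):

// diff_*_md are assumed to be initialized like their forward counterparts
mkldnn_rnn_desc_t rnn_bwd_d;
mkldnn_rnn_backward_desc_init(&rnn_bwd_d, mkldnn_backward, &cell_d,
        mkldnn_unidirectional_left2right,
        &src_layer_md, NULL /* src_iter */, &weights_layer_md, &weights_iter_md,
        NULL /* bias */, &dst_layer_md, NULL /* dst_iter */,
        // diff descriptors mirror the forward ones
        &diff_src_layer_md, NULL /* diff_src_iter */, &diff_weights_layer_md,
        &diff_weights_iter_md, NULL /* diff_bias */, &diff_dst_layer_md,
        NULL /* diff_dst_iter */);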