Classes | |
---|---|
struct dnnl_version_t | Version type. |
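The version structure can be queried at run time through dnnl_version(). A minimal sketch (error handling omitted):

```c
#include <stdio.h>
#include "dnnl.h"

int main(void) {
    /* dnnl_version() returns a pointer to a dnnl_version_t describing
     * the library version loaded at run time. */
    const dnnl_version_t *v = dnnl_version();
    printf("DNNL v%d.%d.%d (hash: %s)\n",
            v->major, v->minor, v->patch, v->hash);
    return 0;
}
```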
enum dnnl_status_t |
Status values returned by the library functions.
enum dnnl_data_type_t |
Data type specification.
enum dnnl_format_kind_t |
Memory format kind.
Enumerator | |
---|---|
dnnl_format_kind_undef | Undefined memory format kind, used for empty memory descriptors. |
dnnl_format_kind_any | Unspecified format kind. The primitive selects a format automatically. |
dnnl_blocked | A tensor in a generic format described by the stride and blocking values in each dimension. See dnnl_blocking_desc_t for more information. |
dnnl_format_kind_wino | Weights format used in 8-bit Winograd convolution. |
dnnl_format_kind_rnn_packed | Packed weights format used in RNN. |
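The format kind of a memory descriptor can be inspected after initialization. A minimal sketch using the C API (error checking omitted):

```c
#include <assert.h>
#include "dnnl.h"

void format_kind_example(void) {
    dnnl_memory_desc_t md;
    dnnl_dims_t dims = {2, 16, 7, 7}; /* N, C, H, W */

    /* A concrete tag such as dnnl_nchw yields a blocked (strided) layout. */
    dnnl_memory_desc_init_by_tag(&md, 4, dims, dnnl_f32, dnnl_nchw);
    assert(md.format_kind == dnnl_blocked);

    /* dnnl_format_tag_any leaves the format unspecified so that a
     * primitive can select it later. */
    dnnl_memory_desc_init_by_tag(&md, 4, dims, dnnl_f32, dnnl_format_tag_any);
    assert(md.format_kind == dnnl_format_kind_any);
}
```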
enum dnnl_format_tag_t |
Memory format tag specification.
DNNL formats describe the physical data layout. The physical layout is described as a sequence of the dimensions as they are laid out in memory (from the outermost to the innermost). Note that this order does not affect the logical order of the dimensions, which is kept in the dims field of the dnnl_memory_desc_t structure. The logical order of the dimensions is specified by the primitive that uses the tensor.
For example, a CNN 5D tensor always has its logical dimensions in the order (batch, channels, depth, height, width), while the physical layout might be NCDHW (corresponds to the dnnl_ncdhw format tag) or NDHWC (corresponds to the dnnl_ndhwc format tag).
Memory format tags can be further divided into two categories:
- Domain-agnostic names, i.e. names that do not depend on how the tensor is used in a specific primitive. These names use the letters a to l to denote logical dimensions 1 to 12, and form the order in which the dimensions are laid out in memory. For instance, dnnl_ab denotes a 2D tensor where the second logical dimension (b) is the innermost, i.e. has stride = 1, and the first logical dimension (a) is laid out in memory with a stride equal to the size of the second dimension. dnnl_ba is the transposed version of the same tensor: the first dimension (a) becomes the innermost one.
- Domain-specific names, i.e. names that make sense only in the context of a certain domain, such as CNN. These names are aliases for the corresponding domain-agnostic tags and exist mostly for convenience. For example, dnnl_nc denotes a 2D CNN activations tensor where channels are the innermost dimension and batch is the outermost one. It is an alias for dnnl_ab, since for CNN primitives batch corresponds to the first logical dimension (a) and channels correspond to the second one (b).
The following domain-specific notation applies to memory format tags:
- 'n' denotes the mini-batch dimension
- 'c' denotes a channels dimension
- 'i' and 'o' denote the dimensions of input and output channels
- 'd', 'h', and 'w' denote the spatial depth, height, and width respectively

Upper-case letters indicate that the data is laid out in blocks for a particular dimension. In such cases, the format name contains both upper- and lower-case letters for that dimension, with the lower-case letter preceded by the block size. For example, dnnl_nChw8c describes a format where the outermost dimension is mini-batch, followed by the channel block number, followed by the spatial height and width, and finally followed by 8-element channel blocks (see the offset sketch below).
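The sketch below computes physical offsets by hand for a plain format pair (dnnl_ab vs. dnnl_ba) and for the blocked dnnl_nChw8c layout. The helper functions are illustrative only and assume, for simplicity, a channel count divisible by the block size:

```c
#include <stdint.h>

/* Element (a, b) of an A x B tensor in dnnl_ab. */
int64_t offset_ab(int64_t a, int64_t b, int64_t A, int64_t B) {
    return a * B + b;      /* b is innermost: stride(b) = 1, stride(a) = B */
}

/* Same element in dnnl_ba (the transposed layout). */
int64_t offset_ba(int64_t a, int64_t b, int64_t A, int64_t B) {
    return b * A + a;      /* a is innermost: stride(a) = 1, stride(b) = A */
}

/* Element (n, c, h, w) of an N x C x H x W tensor in dnnl_nChw8c,
 * assuming C is divisible by 8 (otherwise channels are padded). */
int64_t offset_nChw8c(int64_t n, int64_t c, int64_t h, int64_t w,
        int64_t C, int64_t H, int64_t W) {
    int64_t c_blk = c / 8, c_in = c % 8;
    return (((n * (C / 8) + c_blk) * H + h) * W + w) * 8 + c_in;
}
```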
Enumerator | |
---|---|
dnnl_format_tag_undef | Undefined memory format tag. |
dnnl_format_tag_any | Unspecified memory format tag. The primitive selects a format automatically. |
dnnl_a | plain 1D tensor |
dnnl_ab | plain 2D tensor |
dnnl_abc | plain 3D tensor |
dnnl_abcd | plain 4D tensor |
dnnl_abcde | plain 5D tensor |
dnnl_abcdef | plain 6D tensor |
dnnl_abdec | permuted 5D tensor |
dnnl_acb | permuted 3D tensor |
dnnl_acbde | permuted 5D tensor |
dnnl_acdb | permuted 4D tensor |
dnnl_acdeb | permuted 5D tensor |
dnnl_ba | permuted 2D tensor |
dnnl_bac | permuted 3D tensor |
dnnl_bacd | permuted 4D tensor |
dnnl_bca | permuted 3D tensor |
dnnl_bcda | permuted 4D tensor |
dnnl_bcdea | permuted 5D tensor |
dnnl_cba | permuted 3D tensor |
dnnl_cdba | permuted 4D tensor |
dnnl_cdeba | permuted 5D tensor |
dnnl_decab | permuted 5D tensor |
dnnl_aBc16b | 3D tensor blocked by 2nd dimension with block size 16 |
dnnl_aBc4b | 3D tensor blocked by 2nd dimension with block size 4 |
dnnl_aBc8b | 3D tensor blocked by 2nd dimension with block size 8 |
dnnl_aBcd16b | 4D tensor blocked by 2nd dimension with block size 16 |
dnnl_aBcd4b | 4D tensor blocked by 2nd dimension with block size 4 |
dnnl_aBcd8b | 4D tensor blocked by 2nd dimension with block size 8 |
dnnl_ABcd8b8a | 4D tensor blocked by 1st and 2nd dimension with block size 8 |
dnnl_aBcde16b | 5D tensor blocked by 2nd dimension with block size 16 |
dnnl_aBcde4b | 5D tensor blocked by 2nd dimension with block size 4 |
dnnl_aBcde8b | 5D tensor blocked by 2nd dimension with block size 8 |
dnnl_aBcdef16b | 6D tensor blocked by 2nd dimension with block size 16 |
dnnl_aBcdef4b | 6D tensor blocked by 2nd dimension with block size 4 |
dnnl_format_tag_last | Just a sentinel, not a real memory format tag. Must be updated after a new format tag is added. |
dnnl_x | 1D tensor, an alias to dnnl_a |
dnnl_nc | 2D CNN activations tensor, an alias to dnnl_ab |
dnnl_cn | 2D CNN activations tensor, an alias to dnnl_ba |
dnnl_tn | 2D RNN statistics tensor, an alias to dnnl_ab |
dnnl_nt | 2D RNN statistics tensor, an alias to dnnl_ba |
dnnl_ncw | 3D CNN activations tensor, an alias to dnnl_abc |
dnnl_nwc | 3D CNN activations tensor, an alias to dnnl_acb |
dnnl_nchw | 4D CNN activations tensor, an alias to dnnl_abcd |
dnnl_nhwc | 4D CNN activations tensor, an alias to dnnl_acdb |
dnnl_chwn | 4D CNN activations tensor, an alias to dnnl_bcda |
dnnl_ncdhw | 5D CNN activations tensor, an alias to dnnl_abcde |
dnnl_ndhwc | 5D CNN activations tensor, an alias to dnnl_acdeb |
dnnl_oi | 2D CNN weights tensor, an alias to dnnl_ab |
dnnl_io | 2D CNN weights tensor, an alias to dnnl_ba |
dnnl_oiw | 3D CNN weights tensor, an alias to dnnl_abc |
dnnl_owi | 3D CNN weights tensor, an alias to dnnl_acb |
dnnl_wio | 3D CNN weights tensor, an alias to dnnl_cba |
dnnl_iwo | 3D CNN weights tensor, an alias to dnnl_bca |
dnnl_oihw | 4D CNN weights tensor, an alias to dnnl_abcd |
dnnl_hwio | 4D CNN weights tensor, an alias to dnnl_cdba |
dnnl_ohwi | 4D CNN weights tensor, an alias to dnnl_acdb |
dnnl_ihwo | 4D CNN weights tensor, an alias to dnnl_bcda |
dnnl_iohw | 4D CNN weights tensor, an alias to dnnl_bacd |
dnnl_oidhw | 5D CNN weights tensor, an alias to dnnl_abcde |
dnnl_dhwio | 5D CNN weights tensor, an alias to dnnl_cdeba |
dnnl_odhwi | 5D CNN weights tensor, an alias to dnnl_acdeb |
dnnl_idhwo | 5D CNN weights tensor, an alias to dnnl_bcdea |
dnnl_goiw | 4D CNN weights tensor (incl. groups), an alias to dnnl_abcd |
dnnl_goihw | 5D CNN weights tensor (incl. groups), an alias to dnnl_abcde |
dnnl_hwigo | 5D CNN weights tensor (incl. groups), an alias to dnnl_decab |
dnnl_giohw | 5D CNN weights tensor (incl. groups), an alias to dnnl_acbde |
dnnl_goidhw | 6D CNN weights tensor (incl. groups), an alias to dnnl_abcdef |
dnnl_tnc | 3D RNN data tensor in the format (seq_length, batch, input channels). |
dnnl_ntc | 3D RNN data tensor in the format (batch, seq_length, input channels). |
dnnl_ldnc | 4D RNN states tensor in the format (num_layers, num_directions, batch, state channels). |
dnnl_ldigo | 5D RNN weights tensor in the format (num_layers, num_directions, input_channels, num_gates, output_channels). |
dnnl_ldgoi | 5D RNN weights tensor in the format (num_layers, num_directions, num_gates, output_channels, input_channels). |
dnnl_ldgo | 4D RNN bias tensor in the format (num_layers, num_directions, num_gates, output_channels). |
dnnl_nCdhw16c | 5D CNN activations tensor blocked by channels with block size 16, an alias to dnnl_aBcde16b |
dnnl_nCdhw4c | 5D CNN activations tensor blocked by channels with block size 4, an alias to dnnl_aBcde4b |
dnnl_nCdhw8c | 5D CNN activations tensor blocked by channels with block size 8, an alias to dnnl_aBcde8b |
dnnl_nChw16c | 4D CNN activations tensor blocked by channels with block size 16, an alias to dnnl_aBcd16b |
dnnl_nChw4c | 4D CNN activations tensor blocked by channels with block size 4, an alias to dnnl_aBcd4b |
dnnl_nChw8c | 4D CNN activations tensor blocked by channels with block size 8, an alias to dnnl_aBcd8b |
dnnl_nCw16c | 3D CNN activations tensor blocked by channels with block size 16, an alias to dnnl_aBc16b |
dnnl_nCw4c | 3D CNN activations tensor blocked by channels with block size 4, an alias to dnnl_aBc4b |
dnnl_nCw8c | 3D CNN activations tensor blocked by channels with block size 8, an alias to dnnl_aBc8b |
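These aliases are typically what gets passed to dnnl_memory_desc_init_by_tag(). A sketch describing the same logical CNN activations tensor with two different physical layouts (error checking omitted):

```c
#include "dnnl.h"

void activations_layouts(void) {
    /* Logical dimensions are always (batch, channels, height, width)... */
    dnnl_dims_t dims = {32, 64, 28, 28};
    dnnl_memory_desc_t md_nchw, md_nhwc;

    /* ...the tag only selects the physical order in memory. */
    dnnl_memory_desc_init_by_tag(&md_nchw, 4, dims, dnnl_f32, dnnl_nchw);
    dnnl_memory_desc_init_by_tag(&md_nhwc, 4, dims, dnnl_f32, dnnl_nhwc);
}
```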
enum dnnl_prop_kind_t |
Kinds of propagation.
enum dnnl_primitive_kind_t
Kinds of primitives. Used to implement a way to extend the library with new primitives without changing the ABI.
enum dnnl_alg_kind_t |
Kinds of algorithms.
Enumerator | |
---|---|
dnnl_convolution_direct | Direct convolution. |
dnnl_convolution_winograd | Winograd convolution. |
dnnl_convolution_auto | Convolution algorithm (either direct or Winograd) is chosen just in time. |
dnnl_deconvolution_direct | Direct deconvolution. |
dnnl_deconvolution_winograd | Winograd deconvolution. |
dnnl_eltwise_relu | Eltwise: ReLU. |
dnnl_eltwise_tanh | Eltwise: hyperbolic tangent non-linearity (tanh) |
dnnl_eltwise_elu | Eltwise: parametric exponential linear unit (elu) |
dnnl_eltwise_square | Eltwise: square. |
dnnl_eltwise_abs | Eltwise: abs. |
dnnl_eltwise_sqrt | Eltwise: square root. |
dnnl_eltwise_linear | Eltwise: linear. |
dnnl_eltwise_bounded_relu | Eltwise: bounded_relu. |
dnnl_eltwise_soft_relu | Eltwise: soft_relu. |
dnnl_eltwise_logistic | Eltwise: logistic. |
dnnl_eltwise_exp | Eltwise: exponent. |
dnnl_eltwise_gelu | Eltwise: gelu. |
dnnl_eltwise_swish | Eltwise: swish. |
dnnl_pooling_max | Max pooling. |
dnnl_pooling_avg_include_padding | Average pooling including padding. |
dnnl_pooling_avg_exclude_padding | Average pooling excluding padding. |
dnnl_lrn_across_channels | Local response normalization (LRN) across multiple channels. |
dnnl_lrn_within_channel | LRN within a single channel. |
dnnl_vanilla_rnn | RNN cell. |
dnnl_vanilla_lstm | LSTM cell. |
dnnl_vanilla_gru | GRU cell. |
dnnl_lbr_gru | GRU cell with linear-before-reset, a modification of the original GRU cell that differs from dnnl_vanilla_gru in how the new memory gate is calculated: \( c_t = \tanh(W_c \cdot x_t + b_{c_x} + r_t \cdot (U_c \cdot h_{t-1} + b_{c_h})) \). The primitive expects 4 biases on input: \([b_u, b_r, b_{c_x}, b_{c_h}]\) |
dnnl_binary_add | Binary add. |
dnnl_binary_mul | Binary mul. |
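Algorithm kinds are passed to the operation descriptor initializers of the corresponding primitives. For example, a forward ReLU eltwise descriptor could be initialized as in the sketch below, where alpha carries the negative slope and beta is unused for ReLU (error checking omitted):

```c
#include "dnnl.h"

void relu_desc_example(const dnnl_memory_desc_t *data_md) {
    dnnl_eltwise_desc_t relu_d;
    /* alpha = 0.f gives plain ReLU; a non-zero alpha gives leaky ReLU. */
    dnnl_eltwise_forward_desc_init(&relu_d, dnnl_forward_inference,
            dnnl_eltwise_relu, data_md, /*alpha=*/0.f, /*beta=*/0.f);
}
```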
enum dnnl_normalization_flags_t
Flags for batch normalization primitive.
Enumerator | |
---|---|
dnnl_use_global_stats | Use global statistics. If specified: on forward propagation, use the mean and variance provided by the user (as inputs); on backward propagation, the amount of computation is reduced, since mean and variance are treated as constants. If not specified: on forward propagation, the mean and variance are computed and stored as outputs; on backward propagation, the full derivative with respect to the data is computed. |
dnnl_use_scaleshift | Use scale and shift parameters. If specified: on forward propagation, use scale and shift for the batch normalization results; on backward propagation (for prop_kind == dnnl_backward), compute the diff with respect to scale and shift, hence one extra output is used. If not specified: on backward propagation, prop_kind == dnnl_backward_data has the same behavior as prop_kind == dnnl_backward. |
dnnl_fuse_norm_relu | Fuse with ReLU. The flag implies a negative slope of 0. On training this is the only configuration supported. For inference, to use a non-zero negative slope consider using Primitive Attributes: Post-ops. If specified: on inference, this option behaves the same as if the primitive were fused with ReLU using the post-ops API with zero negative slope; on training, the primitive requires a workspace (needed to perform the backward pass). |
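The flags are combined with bitwise OR when initializing a batch normalization operation descriptor. A sketch (assuming a prepared data memory descriptor; error checking omitted):

```c
#include "dnnl.h"

void bnorm_desc_example(const dnnl_memory_desc_t *data_md) {
    dnnl_batch_normalization_desc_t bn_d;
    /* Inference with user-provided statistics, scale/shift parameters,
     * and a fused ReLU with zero negative slope. */
    unsigned flags = dnnl_use_global_stats | dnnl_use_scaleshift
            | dnnl_fuse_norm_relu;
    dnnl_batch_normalization_forward_desc_init(&bn_d,
            dnnl_forward_inference, data_md, /*epsilon=*/1e-5f, flags);
}
```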