Deep Neural Network Library (DNNL)  1.3.0
Performance library for Deep Learning
dnnl_memory_desc_t Struct Reference

Memory descriptor. More...

#include <dnnl_types.h>


Public Attributes

int ndims
 Number of dimensions.
 
dnnl_dims_t dims
 Dimensions in the following order; see the detailed description below.
 
dnnl_data_type_t data_type
 Data type of the tensor elements.
 
dnnl_dims_t padded_dims
 Size of the data including padding in each dimension.
 
dnnl_dims_t padded_offsets
 Per-dimension offset from the padding to the actual data; the top-level tensor with the offsets applied must lie within the padding area.
 
dnnl_dim_t offset0
 Offset from the memory origin to the current block; non-zero only in a description of a memory sub-block.
 
dnnl_format_kind_t format_kind
 Memory format kind.
 
dnnl_blocking_desc_t blocking
 Description of the data layout for memory formats that use blocking.
 
dnnl_wino_desc_t wino_desc
 Tensor of weights for 8-bit integer Winograd convolution.
 
dnnl_rnn_packed_desc_t rnn_packed_desc
 Tensor of packed weights for RNN.
 

Detailed Description

Memory descriptor.

The description is based on the number of dimensions, the dimensions themselves, the element data type, and the memory format kind. Additionally, it contains format-specific descriptions of the data layout.

Examples:
cnn_inference_f32.c, cpu_cnn_training_f32.c, and cross_engine_reorder.c.
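A minimal sketch of how such a descriptor is typically initialized through the C API; the concrete sizes (2x3x227x227), the f32 data type, and the dnnl_nchw format tag below are illustrative assumptions, not part of this page:

#include <assert.h>
#include <stdio.h>
#include "dnnl.h"

int main(void) {
    dnnl_dims_t dims = {2, 3, 227, 227}; /* {N, C, H, W} */
    dnnl_memory_desc_t md;

    /* Fills ndims, dims, data_type, format_kind, and the layout description. */
    dnnl_status_t s = dnnl_memory_desc_init_by_tag(
            &md, 4, dims, dnnl_f32, dnnl_nchw);
    assert(s == dnnl_success);

    printf("ndims = %d\n", md.ndims);            /* 4 */
    printf("size = %zu bytes\n",
            dnnl_memory_desc_get_size(&md));     /* 2 * 3 * 227 * 227 * 4 */
    return 0;
}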

Member Data Documentation

◆ dims

dnnl_dims_t dnnl_memory_desc_t::dims

Dimensions in the following order:

  • CNN data tensors: mini-batch, channel, spatial ({N, C, [[D,] H,] W})
  • CNN weight tensors: group (optional), output channel, input channel, spatial ({[G,] O, I, [[D,] H,] W})
  • RNN data tensors: time, mini-batch, channels ({T, N, C}) or layers, directions, states, mini-batch, channels ({L, D, S, N, C})
  • RNN weight tensor: layers, directions, input channel, gates, output channels ({L, D, I, G, O}).
Note
The order of dimensions does not depend on the memory format: whether the data is laid out as dnnl_nchw or dnnl_nhwc, the dims for a 4D CNN data tensor are {N, C, H, W}, as illustrated by the sketch below.
Examples:
cpu_matmul_quantization.cpp.
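A short sketch of the note above (the sizes below are illustrative assumptions): two descriptors created with different format tags report identical dims, because dims always follow the logical {N, C, H, W} order.

#include <assert.h>
#include "dnnl.h"

void dims_are_layout_independent(void) {
    dnnl_dims_t dims = {2, 16, 7, 7}; /* {N, C, H, W} */
    dnnl_memory_desc_t md_nchw, md_nhwc;

    dnnl_memory_desc_init_by_tag(&md_nchw, 4, dims, dnnl_f32, dnnl_nchw);
    dnnl_memory_desc_init_by_tag(&md_nhwc, 4, dims, dnnl_f32, dnnl_nhwc);

    /* Same logical dims regardless of the physical layout. */
    for (int d = 0; d < 4; ++d)
        assert(md_nchw.dims[d] == md_nhwc.dims[d]);

    /* The descriptors themselves still differ: their strides do not match. */
    assert(!dnnl_memory_desc_equal(&md_nchw, &md_nhwc));
}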

◆ padded_offsets

dnnl_dims_t dnnl_memory_desc_t::padded_offsets

Per-dimension offset from the padding to the actual data; the top-level tensor with the offsets applied must lie within the padding area.

◆ offset0

dnnl_dim_t dnnl_memory_desc_t::offset0

Offset from the memory origin to the current block; non-zero only in a description of a memory sub-block.
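A sketch of one way offset0 becomes non-zero (the sizes and offsets below are illustrative assumptions): a sub-block descriptor created with dnnl_memory_desc_init_submemory() refers to a region inside its parent, and offset0 records that region's element offset from the parent's origin.

#include <stdio.h>
#include "dnnl.h"

void submemory_offset0(void) {
    dnnl_dims_t dims = {2, 16, 8, 8};     /* parent {N, C, H, W} */
    dnnl_dims_t sub_dims = {1, 16, 8, 8}; /* one image of the mini-batch */
    dnnl_dims_t offsets = {1, 0, 0, 0};   /* starting at the second image */

    dnnl_memory_desc_t parent, sub;
    dnnl_memory_desc_init_by_tag(&parent, 4, dims, dnnl_f32, dnnl_nchw);
    dnnl_memory_desc_init_submemory(&sub, &parent, sub_dims, offsets);

    /* Expected to be 16 * 8 * 8 elements: one full image past the origin. */
    printf("offset0 = %lld\n", (long long)sub.offset0);
}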

◆ blocking

dnnl_blocking_desc_t dnnl_memory_desc_t::blocking

Description of the data layout for memory formats that use blocking.
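A sketch of when the padding-related members and the blocking description come into play (the channel count of 3 and the dnnl_nChw16c tag below are illustrative assumptions): for blocked layouts the channel dimension is rounded up to the block size, so padded_dims diverges from dims while padded_offsets typically remains zero.

#include <stdio.h>
#include "dnnl.h"

void blocked_padding(void) {
    dnnl_dims_t dims = {2, 3, 5, 5}; /* {N, C, H, W}, C is not a multiple of 16 */
    dnnl_memory_desc_t md;

    dnnl_memory_desc_init_by_tag(&md, 4, dims, dnnl_f32, dnnl_nChw16c);

    /* format_kind is dnnl_blocked, so the blocking member is the one that
     * describes the physical layout (strides and inner blocks). */
    printf("blocked: %d\n", md.format_kind == dnnl_blocked);

    printf("dims[1] = %lld, padded_dims[1] = %lld, padded_offsets[1] = %lld\n",
            (long long)md.dims[1],            /* 3: logical channels */
            (long long)md.padded_dims[1],     /* 16: rounded up to the block size */
            (long long)md.padded_offsets[1]); /* 0: data starts at the padding origin */
}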


The documentation for this struct was generated from the following file: dnnl_types.h