Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN)  1.0.4
Performance library for Deep Learning
Public Member Functions | List of all members
mkldnn::primitive_attr Struct Reference

Primitive attributes. More...

#include <mkldnn.hpp>


Public Member Functions

 primitive_attr ()
 Creates a default primitive attribute.
 
scratchpad_mode get_scratchpad_mode () const
 Returns the scratchpad mode.
 
void set_scratchpad_mode (scratchpad_mode mode)
 Sets scratchpad mode.
 
void get_output_scales (int &mask, std::vector< float > &scales) const
 Returns the correspondence scale mask and the vector of output scales previously set by set_output_scales. More...
 
void set_output_scales (int mask, const std::vector< float > &scales)
 Sets output scales for primitive operations. More...
 
const post_ops get_post_ops () const
 Returns post_ops previously set by set_post_ops.
 
void set_post_ops (post_ops ops)
 Sets post_ops for future use.
 
void set_rnn_data_qparams (float scale, float shift)
 Sets quantization scale and shift for RNN data tensors. More...
 
void set_rnn_weights_qparams (int mask, const std::vector< float > &scales)
 Sets quantization scales weights_scales for RNN weights tensors. More...
 
- Public Member Functions inherited from mkldnn::handle< mkldnn_primitive_attr_t >
 handle (mkldnn_primitive_attr_t t, bool weak=false)
 Constructs a C handle wrapper. More...
 
 handle ()
 Empty constructor. More...
 
void reset (mkldnn_primitive_attr_t t, bool weak=false)
 Resets the value of a C handle. More...
 
mkldnn_primitive_attr_t get (bool allow_emtpy=false) const
 Returns the value of the underlying C handle.
 

Detailed Description

Primitive attributes.

See also
Primitive Attributes
Examples:
cpu_cnn_inference_int8.cpp, cpu_performance_profiling.cpp, and cpu_rnn_inference_int8.cpp.

Member Function Documentation

◆ get_output_scales()

void mkldnn::primitive_attr::get_output_scales (int &mask, std::vector< float > &scales) const
inline

Returns the correspondence scale mask and the vector of output scales previously set by set_output_scales.

◆ set_output_scales()

void mkldnn::primitive_attr::set_output_scales (int mask, const std::vector< float > &scales)
inline

Sets output scales for primitive operations.

The correspondence scale mask is stored for future use.

The mask argument defines the correspondence between the output tensor dimensions and the scales vector. Set the i-th bit of mask to 1 to use a dedicated scaling factor for each slice of the output tensor over the i-th dimension. Set mask to 0 to use a common scaling factor for the whole output tensor.

Note
The dimension order is always native and does not depend on the actual layout used. Examples:
  • For 2D data, the order of dimensions is always (n, c)
  • For 4D data, the order is always (n, c, h, w)
  • For 5D weights, the order is always (g, oc, ic, kh, kw)
Examples:
cpu_cnn_inference_int8.cpp.

◆ set_rnn_data_qparams()

void mkldnn::primitive_attr::set_rnn_data_qparams (float scale, float shift)
inline

Sets quantization scale and shift for RNN data tensors.

For performance reasons, the low-precision configuration of the RNN primitive expects input activations to have the unsigned int8 data type. The scale and shift used to quantize floating-point data to unsigned integers must be passed to the RNN primitive via attributes.

Note
Quantization scale and shift are common for src_layer, src_iter, dst_iter, and dst_layer.
Examples:
cpu_rnn_inference_int8.cpp.

◆ set_rnn_weights_qparams()

void mkldnn::primitive_attr::set_rnn_weights_qparams (int mask, const std::vector< float > &scales)
inline

Sets quantization scales weights_scales for RNN weights tensors.

The low-precision configuration of the RNN primitive expects input weights to have the signed int8 data type. The scales used to quantize floating-point data to signed integers must be passed to the RNN primitive via attributes. The mask argument defines the correspondence between the output tensor dimensions and the weights_scales array: set the i-th bit of mask to 1 to use a dedicated scaling factor for each slice of the output tensor over the i-th dimension, or set mask to 0 to use a common scaling factor for the whole output tensor.

Note
The dimension order is always native and does not depend on the actual layout used. For example, five-dimensional weights always have the (l, d, i, g, o) logical dimension ordering.
Quantization scales are common for weights_layer and weights_iteration.
There is no way to check whether count corresponds to mask until an actual primitive descriptor is created, so it is the user's responsibility to set proper values. The following formula must hold:

\[count = \prod\limits_{d \in mask} output.dims[d]\]

Examples:
cpu_rnn_inference_int8.cpp.

The documentation for this struct was generated from the following file:
mkldnn.hpp