struct dnnl::post_ops
Overview
Post-ops. More…
#include <dnnl.hpp>

struct post_ops: public dnnl::handle
{
    // construction

    post_ops();
    post_ops(dnnl_post_ops_t post_ops);

    // methods

    int len() const;
    primitive::kind kind(int index) const;

    void append_sum(
        float scale = 1.f,
        int32_t zero_point = 0,
        memory::data_type data_type = memory::data_type::undef
    );

    void get_params_sum(int index, float& scale) const;
    void get_params_sum(int index, float& scale, memory::data_type& data_type) const;
    void get_params_sum(
        int index,
        float& scale,
        int32_t& zero_point,
        memory::data_type& data_type
    ) const;

    void append_eltwise(algorithm aalgorithm, float alpha, float beta);
    void get_params_eltwise(
        int index,
        algorithm& aalgorithm,
        float& alpha,
        float& beta
    ) const;

    void append_dw(
        memory::data_type weights_data_type,
        memory::data_type bias_data_type,
        memory::data_type dst_data_type,
        memory::dim kernel_size,
        memory::dim stride_size,
        memory::dim padding_l_size
    );
    void get_params_dw(
        int index,
        memory::data_type& weights_data_type,
        memory::data_type& bias_data_type,
        memory::data_type& dst_data_type,
        memory::dim& kernel_size,
        memory::dim& stride_size,
        memory::dim& padding_l_size
    ) const;

    void append_binary(algorithm aalgorithm, const memory::desc& src1_desc);
    void get_params_binary(
        int index,
        algorithm& aalgorithm,
        memory::desc& src1_desc
    ) const;

    void append_prelu(int mask);
    void get_params_prelu(int index, int& mask) const;
};
Inherited Members
public:
    // methods

    handle<T, traits>& operator = (const handle<T, traits>&);
    handle<T, traits>& operator = (handle<T, traits>&&);
    void reset(T t, bool weak = false);
    T get(bool allow_empty = false) const;
    operator T () const;
    operator bool () const;
    bool operator == (const handle<T, traits>& other) const;
    bool operator != (const handle& other) const;
Detailed Documentation
Post-ops.
Post-ops are computations executed after the main primitive computations and are attached to the primitive via primitive attributes.
See also:
Primitive Attributes: Post-ops
Construction
post_ops()
Constructs an empty sequence of post-ops.
post_ops(dnnl_post_ops_t post_ops)
Constructs a post-ops object from a C API dnnl_post_ops_t handle.
The resulting handle is not weak and the C handle will be destroyed during the destruction of the C++ object.
Parameters:
post_ops - The C API post-ops primitive attribute.
Methods
int len() const
Returns the number of post-ops entries.
primitive::kind kind(int index) const
Returns the primitive kind of the post-op at the entry with the given index.
Parameters:
index - Index of the post-op to return the kind for.
Returns:
Primitive kind of the post-op at the specified index.
void append_sum( float scale = 1.f, int32_t zero_point = 0, memory::data_type data_type = memory::data_type::undef )
Appends an accumulation (sum) post-op.
Prior to accumulating the result, the previous value is reduced by the zero point zero_point and multiplied by the scaling factor scale.

The kind of this post-op is dnnl::primitive::kind::sum.

This feature may improve performance for cases such as dequantizing an asymmetrically quantized src1 tensor of the sum to the f32 domain before performing the sum operation, by subtracting zero_point before the scaling.

In the simplest case, when the accumulation is the only post-op, the computations are dst[:] := scale * (dst[:] - zero_point) + op(...) instead of dst[:] := op(...).

If data_type is specified, the original dst tensor is reinterpreted as a tensor with the provided data type. Because this is a reinterpretation, data_type and the dst data type must have the same size. As a result, the computations are dst[:] := scale * (as_data_type(dst[:]) - zero_point) + op(...) instead of dst[:] := op(...).
Note
This post-op executes in-place and does not change the destination layout.
Parameters:
scale - Scaling factor.
zero_point - Zero point.
data_type - Data type.
void get_params_sum(int index, float& scale) const
Returns the parameters of an accumulation (sum) post-op.
Parameters:
index - Index of the sum post-op.
scale - Scaling factor of the sum post-op.
void get_params_sum(int index, float& scale, memory::data_type& data_type) const
Returns the parameters of an accumulation (sum) post-op.
Parameters:
index - Index of the sum post-op.
scale - Scaling factor of the sum post-op.
data_type - Data type of the sum post-op.
void get_params_sum( int index, float& scale, int32_t& zero_point, memory::data_type& data_type ) const
Returns the parameters of an accumulation (sum) post-op.
Parameters:
index - Index of the sum post-op.
scale - Scaling factor of the sum post-op.
zero_point - Single scalar int32_t zero point value of the sum post-op.
data_type - Data type of the sum post-op.
void append_eltwise(algorithm aalgorithm, float alpha, float beta)
Appends an elementwise post-op.
The kind of this post-op is dnnl::primitive::kind::eltwise.
In the simplest case, when the elementwise operation is the only post-op, the computations are dst[:] := eltwise_op(op(...)) instead of dst[:] := op(...), where eltwise_op is configured with the given parameters.
Parameters:
aalgorithm - Elementwise algorithm.
alpha - Alpha parameter for the elementwise algorithm.
beta - Beta parameter for the elementwise algorithm.
void get_params_eltwise( int index, algorithm& aalgorithm, float& alpha, float& beta ) const
Returns parameters of an elementwise post-op.
Parameters:
index - Index of the post-op.
aalgorithm - Output elementwise algorithm kind.
alpha - Output alpha parameter for the elementwise algorithm.
beta - Output beta parameter for the elementwise algorithm.
void append_dw( memory::data_type weights_data_type, memory::data_type bias_data_type, memory::data_type dst_data_type, memory::dim kernel_size, memory::dim stride_size, memory::dim padding_l_size )
Appends a depthwise convolution post-op.

This post-op can only be fused with a 2D 1x1 convolution (a convolution whose weights spatial dimensions are equal to 1, i.e., kh = kw = 1).

The kind of this post-op is dnnl::primitive::kind::convolution.
The number of outputs of the primitive remains the same as before. The output spatial sizes can be derived as follows:

output_height = ceil(output_height_1x1_convolution / stride)
output_width = ceil(output_width_1x1_convolution / stride)
See dev_guide_attributes_post_ops_depthwise and dev_guide_attributes_post_ops_depthwise_fusion for more info.
Parameters:
weights_data_type - Weights data type of the depthwise post-op.
bias_data_type - Bias data type of the depthwise post-op.
dst_data_type - Output data type of the depthwise post-op.
kernel_size - Size of the kernel of the depthwise post-op.
stride_size - Size of the stride of the depthwise post-op.
padding_l_size - Size of the left and top paddings of the depthwise post-op.
void get_params_dw( int index, memory::data_type& weights_data_type, memory::data_type& bias_data_type, memory::data_type& dst_data_type, memory::dim& kernel_size, memory::dim& stride_size, memory::dim& padding_l_size ) const
Returns the parameters of a depthwise post-op.
Parameters:
index - Index of the depthwise post-op.
weights_data_type - Weights data type of the depthwise post-op.
bias_data_type - Bias data type of the depthwise post-op.
dst_data_type - Output data type of the depthwise post-op.
kernel_size - Size of the kernel of the depthwise post-op.
stride_size - Size of the stride of the depthwise post-op.
padding_l_size - Size of the left and top paddings of the depthwise post-op.
void append_binary(algorithm aalgorithm, const memory::desc& src1_desc)
Appends a binary post-op.
The kind of this post-op is dnnl::primitive::kind::binary.

In the simplest case, when the binary operation is the only post-op, the computations are:

dst[:] := binary_op(dst[:], another_input[:])

where binary_op is configured with the given parameters. binary_op supports broadcast semantics for the second operand.
Parameters:
aalgorithm - Binary algorithm for the post-op.
src1_desc - Memory descriptor of a second operand.
void get_params_binary( int index, algorithm& aalgorithm, memory::desc& src1_desc ) const
Returns the parameters of a binary post-op.
Parameters:
index - Index of the binary post-op.
aalgorithm - Output binary algorithm kind.
src1_desc - Output memory descriptor of a second operand.
void append_prelu(int mask)
Appends a prelu forward post-op.
The kind of this post-op is dnnl::primitive::kind::prelu.
The post-op can be defined as:

dst[:] := prelu(dst[:], weights[:])

where prelu is defined as:

dst[:] := dst[:] if dst[:] > 0
dst[:] := dst[:] * weights[:] if dst[:] <= 0
Example usage (note that append_prelu is a post_ops method, so the chain is attached to the attributes via set_post_ops; the convolution descriptor arguments, the engine eng, and the dt/tag aliases are elided or assumed here):

    int mb = 32, oc = 32, oh = 14, ow = 14; // convolution output params
    // unique weights per output channel
    std::vector<float> weights = { ... };
    int oc_dim = 1; // mb_dim = 0, channel_dim = 1, height_dim = 2, ...

    // append the prelu post-op and attach it to the attributes
    dnnl::post_ops po;
    po.append_prelu(1 << oc_dim);
    dnnl::primitive_attr attr;
    attr.set_post_ops(po);

    // construct a convolution primitive descriptor with the attributes
    // (the remaining constructor arguments are elided)
    dnnl::convolution_forward::primitive_desc conv_pd(/* ..., */ attr);

    // per-channel weights: one value per output channel
    // (dt = memory::data_type, tag = memory::format_tag, eng = engine)
    memory prelu_weights({{1, oc, 1, 1}, dt::f32, tag::nchw}, eng, weights.data());

    std::unordered_map<int, memory> conv_args;
    conv_args.insert(
            {DNNL_ARG_ATTR_MULTIPLE_POST_OP(0) | DNNL_ARG_WEIGHTS, prelu_weights});
Note
The order of dimensions does not depend on how elements are laid out in memory. For example:
for a 2D CNN activations tensor the order is always (n, c)
for a 4D CNN activations tensor the order is always (n, c, h, w)
for a 5D CNN weights tensor the order is always (g, oc, ic, kh, kw)
The prelu weights tensor is passed at execution time. Its data type is implicitly assumed to be f32 with a plain layout (a, ab, acb, acdb, acdeb).
Parameters:
mask - Defines the correspondence between the output tensor dimensions and the prelu weights tensor. Setting the i-th bit indicates that a dedicated weights value is used for each index along that dimension. Set the mask to 0 to use a common weights value for the whole output tensor.
void get_params_prelu(int index, int& mask) const
Returns the parameters of a prelu post-op.
Parameters:
index - Index of the prelu post-op.
mask - Weights mask of the prelu post-op.