Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN)  1.0.4
Performance library for Deep Learning
Public Member Functions | List of all members
mkldnn::post_ops Struct Reference

Post operations. More...

#include <mkldnn.hpp>


Public Member Functions

 post_ops ()
 Creates an empty sequence of post operations.
 
int len () const
 Returns the number of post operations in the sequence.
 
primitive::kind kind (int index) const
 Returns the kind of the post operation with index index.
 
void append_sum (float scale=1.)
 Appends accumulation (sum) post operation. More...
 
void get_params_sum (int index, float &scale) const
 Gets the parameters of the accumulation (sum) post operation with index index. More...
 
void append_eltwise (float scale, algorithm alg, float alpha, float beta)
 Appends eltwise post operation. More...
 
void get_params_eltwise (int index, float &scale, algorithm &alg, float &alpha, float &beta) const
 Gets the eltwise parameters of the post operation with index index.
 
- Public Member Functions inherited from mkldnn::handle< mkldnn_post_ops_t >
 handle (mkldnn_post_ops_t t, bool weak=false)
 Constructs a C handle wrapper. More...
 
 handle ()
 Empty constructor. More...
 
void reset (mkldnn_post_ops_t t, bool weak=false)
 Resets the value of a C handle. More...
 
mkldnn_post_ops_t get (bool allow_emtpy=false) const
 Returns the value of the underlying C handle.
 

Detailed Description

Post operations.

See also
Primitive Attributes: Post-ops
Examples:
cpu_cnn_inference_int8.cpp, and cpu_performance_profiling.cpp.

Member Function Documentation

◆ append_sum()

void mkldnn::post_ops::append_sum ( float  scale = 1.)
inline

Appends accumulation (sum) post operation.

Prior to accumulating the result, the previous value is multiplied by scale.

The kind of this post operation is mkldnn_sum.

This feature can improve performance for cases like residual learning blocks, where the result of a convolution is accumulated into previously computed activations. The scale parameter is particularly relevant for integer-based computations, where the result and the previous activations may have different logical scaling factors.

In the simplest case, when the accumulation is the only post operation, the computation is:

    dst[] <- scale * dst[] + op(...)  // instead of dst[] <- op(...)

Note
This post operation (as well as all the others) disregards the original layout of the destination; that is, the layout of the original destination is expected to be the same as the layout of the stored destination.

◆ get_params_sum()

void mkldnn::post_ops::get_params_sum ( int  index,
float &  scale 
) const
inline

Gets the parameters of the accumulation (sum) post operation with index index.

◆ append_eltwise()

void mkldnn::post_ops::append_eltwise ( float  scale,
algorithm  alg,
float  alpha,
float  beta 
)
inline

Appends eltwise post operation.

The kind of this post operation is mkldnn_eltwise.

In the simplest case, when the eltwise operation is the only post operation, the computation is:

    dst[] <- scale * eltwise_op( op(...) )  // instead of dst[] <- op(...)

where eltwise_op is configured with the given parameters.

Examples:
cpu_cnn_inference_int8.cpp, and cpu_performance_profiling.cpp.

The documentation for this struct was generated from the following file: mkldnn.hpp