enum dnnl_alg_kind_t

Overview

Kinds of algorithms.

#include <dnnl_types.h>

enum dnnl_alg_kind_t
{
    dnnl_alg_kind_undef,
    dnnl_convolution_direct               = 0x1,
    dnnl_convolution_winograd             = 0x2,
    dnnl_convolution_auto                 = 0x3,
    dnnl_deconvolution_direct             = 0xa,
    dnnl_deconvolution_winograd           = 0xb,
    dnnl_eltwise_relu                     = 0x20,
    dnnl_eltwise_tanh,
    dnnl_eltwise_elu,
    dnnl_eltwise_square,
    dnnl_eltwise_abs,
    dnnl_eltwise_sqrt,
    dnnl_eltwise_linear,
    dnnl_eltwise_soft_relu,
    dnnl_eltwise_hardsigmoid,
    dnnl_eltwise_logistic,
    dnnl_eltwise_exp,
    dnnl_eltwise_gelu_tanh,
    dnnl_eltwise_swish,
    dnnl_eltwise_log,
    dnnl_eltwise_clip,
    dnnl_eltwise_clip_v2,
    dnnl_eltwise_pow,
    dnnl_eltwise_gelu_erf,
    dnnl_eltwise_round,
    dnnl_eltwise_mish,
    dnnl_eltwise_hardswish,
    dnnl_eltwise_relu_use_dst_for_bwd     = 0x100,
    dnnl_eltwise_tanh_use_dst_for_bwd,
    dnnl_eltwise_elu_use_dst_for_bwd,
    dnnl_eltwise_sqrt_use_dst_for_bwd,
    dnnl_eltwise_logistic_use_dst_for_bwd,
    dnnl_eltwise_exp_use_dst_for_bwd,
    dnnl_eltwise_clip_v2_use_dst_for_bwd,
    dnnl_pooling_max                      = 0x1ff,
    dnnl_pooling_avg_include_padding      = 0x2ff,
    dnnl_pooling_avg_exclude_padding      = 0x3ff,
    dnnl_lrn_across_channels              = 0xaff,
    dnnl_lrn_within_channel               = 0xbff,
    dnnl_vanilla_rnn                      = 0x1fff,
    dnnl_vanilla_lstm                     = 0x2fff,
    dnnl_vanilla_gru                      = 0x3fff,
    dnnl_lbr_gru                          = 0x4fff,
    dnnl_vanilla_augru                    = 0x5fff,
    dnnl_lbr_augru                        = 0x6fff,
    dnnl_binary_add                       = 0x1fff0,
    dnnl_binary_mul                       = 0x1fff1,
    dnnl_binary_max                       = 0x1fff2,
    dnnl_binary_min                       = 0x1fff3,
    dnnl_binary_div                       = 0x1fff4,
    dnnl_binary_sub                       = 0x1fff5,
    dnnl_binary_ge                        = 0x1fff6,
    dnnl_binary_gt                        = 0x1fff7,
    dnnl_binary_le                        = 0x1fff8,
    dnnl_binary_lt                        = 0x1fff9,
    dnnl_binary_eq                        = 0x1fffa,
    dnnl_binary_ne                        = 0x1fffb,
    dnnl_resampling_nearest               = 0x2fff0,
    dnnl_resampling_linear                = 0x2fff1,
    dnnl_reduction_max,
    dnnl_reduction_min,
    dnnl_reduction_sum,
    dnnl_reduction_mul,
    dnnl_reduction_mean,
    dnnl_reduction_norm_lp_max,
    dnnl_reduction_norm_lp_sum,
    dnnl_reduction_norm_lp_power_p_max,
    dnnl_reduction_norm_lp_power_p_sum,
    dnnl_softmax_accurate                 = 0x30000,
    dnnl_softmax_log,
};

Detailed Documentation

Kinds of algorithms.
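
The algorithm kind is passed when creating a primitive descriptor and selects the operation the primitive performs. Below is a minimal sketch, assuming the oneDNN v3.x C API (the shapes are arbitrary examples and error handling is omitted; none of this is part of the reference itself):

#include <dnnl.h>

int main(void) {
    dnnl_engine_t engine;
    dnnl_engine_create(&engine, dnnl_cpu, 0);

    /* A 1x16x32x32 f32 tensor in NCHW layout. */
    dnnl_dims_t dims = {1, 16, 32, 32};
    dnnl_memory_desc_t md;
    dnnl_memory_desc_create_with_tag(&md, 4, dims, dnnl_f32, dnnl_nchw);

    /* The alg kind selects the elementwise function; alpha and beta
     * parameterize it (for ReLU, alpha is the negative slope). Replacing
     * dnnl_eltwise_relu with, say, dnnl_eltwise_gelu_tanh changes only
     * this one argument. */
    dnnl_primitive_desc_t pd;
    dnnl_eltwise_forward_primitive_desc_create(&pd, engine,
            dnnl_forward_inference, dnnl_eltwise_relu, md, md,
            /*alpha=*/0.f, /*beta=*/0.f, NULL);

    dnnl_primitive_desc_destroy(pd);
    dnnl_memory_desc_destroy(md);
    dnnl_engine_destroy(engine);
    return 0;
}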

Enum Values

dnnl_convolution_direct

Direct convolution.

dnnl_convolution_winograd

Winograd convolution.

dnnl_convolution_auto

The convolution algorithm (either direct or Winograd) is chosen just in time.
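
A sketch of how the automatic kind can be used, assuming the oneDNN v3.x C API and that the resolved algorithm is exposed through the dnnl_query_alg_kind query (the helper function, shapes, and output below are illustrative only):

#include <dnnl.h>
#include <stdio.h>

static void pick_conv_alg(void) {
    dnnl_engine_t engine;
    dnnl_engine_create(&engine, dnnl_cpu, 0);

    /* 1x16x32x32 input, 32 filters of 16x3x3, stride 1, padding 1. */
    dnnl_dims_t src_dims = {1, 16, 32, 32};
    dnnl_dims_t wei_dims = {32, 16, 3, 3};
    dnnl_dims_t dst_dims = {1, 32, 32, 32};
    dnnl_dims_t strides = {1, 1}, dilates = {0, 0}, padding = {1, 1};

    dnnl_memory_desc_t src_md, wei_md, dst_md;
    dnnl_memory_desc_create_with_tag(&src_md, 4, src_dims, dnnl_f32, dnnl_format_tag_any);
    dnnl_memory_desc_create_with_tag(&wei_md, 4, wei_dims, dnnl_f32, dnnl_format_tag_any);
    dnnl_memory_desc_create_with_tag(&dst_md, 4, dst_dims, dnnl_f32, dnnl_format_tag_any);

    /* dnnl_convolution_auto defers the direct-vs-Winograd decision to the
     * library, which resolves it while building the primitive descriptor. */
    dnnl_primitive_desc_t pd;
    dnnl_convolution_forward_primitive_desc_create(&pd, engine,
            dnnl_forward_inference, dnnl_convolution_auto,
            src_md, wei_md, NULL /* no bias */, dst_md,
            strides, dilates, padding, padding, NULL);

    /* Query which algorithm was actually chosen. */
    dnnl_alg_kind_t chosen;
    dnnl_primitive_desc_query(pd, dnnl_query_alg_kind, 0, &chosen);
    printf("chosen: %s\n",
            chosen == dnnl_convolution_direct ? "direct" : "winograd");

    dnnl_primitive_desc_destroy(pd);
    dnnl_memory_desc_destroy(src_md);
    dnnl_memory_desc_destroy(wei_md);
    dnnl_memory_desc_destroy(dst_md);
    dnnl_engine_destroy(engine);
}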

dnnl_deconvolution_direct

Direct deconvolution.

dnnl_deconvolution_winograd

Winograd deconvolution.

dnnl_eltwise_relu

Eltwise: ReLU.

dnnl_eltwise_tanh

Eltwise: hyperbolic tangent non-linearity (tanh).

dnnl_eltwise_elu

Eltwise: exponential linear unit (elu).

dnnl_eltwise_square

Eltwise: square.

dnnl_eltwise_abs

Eltwise: abs.

dnnl_eltwise_sqrt

Eltwise: square root.

dnnl_eltwise_linear

Eltwise: linear.

dnnl_eltwise_soft_relu

Eltwise: soft_relu.

dnnl_eltwise_hardsigmoid

Eltwise: hardsigmoid.

dnnl_eltwise_logistic

Eltwise: logistic.

dnnl_eltwise_exp

Eltwise: exponent.

dnnl_eltwise_gelu_tanh

Eltwise: gelu.

Note

The tanh approximation formula is used here to approximate the cumulative distribution function of a Gaussian.

dnnl_eltwise_swish

Eltwise: swish.

dnnl_eltwise_log

Eltwise: natural logarithm.

dnnl_eltwise_clip

Eltwise: clip.

dnnl_eltwise_clip_v2

Eltwise: clip version 2.

dnnl_eltwise_pow

Eltwise: pow.

dnnl_eltwise_gelu_erf

Eltwise: erf-based gelu.

dnnl_eltwise_round

Eltwise: round.

dnnl_eltwise_mish

Eltwise: mish.

dnnl_eltwise_hardswish

Eltwise: hardswish.

dnnl_eltwise_relu_use_dst_for_bwd

Eltwise: ReLU (dst for backward).

dnnl_eltwise_tanh_use_dst_for_bwd

Eltwise: hyperbolic tangent non-linearity (tanh) (dst for backward).

dnnl_eltwise_elu_use_dst_for_bwd

Eltwise: exponential linear unit (elu) (dst for backward).

dnnl_eltwise_sqrt_use_dst_for_bwd

Eltwise: square root (dst for backward).

dnnl_eltwise_logistic_use_dst_for_bwd

Eltwise: logistic (dst for backward).

dnnl_eltwise_exp_use_dst_for_bwd

Eltwise: exp (dst for backward).

dnnl_eltwise_clip_v2_use_dst_for_bwd

Eltwise: clip version 2 (dst for backward).
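
All of the _use_dst_for_bwd kinds compute the backward pass from the primitive's output (dst) rather than its input, so the source tensor does not have to be preserved between the forward and backward passes. This is possible because these functions have derivatives expressible through their own value; for example, with \(d\) denoting the forward output (standard identities, added here for illustration):

\[\tanh'(x) = 1 - \tanh^2(x) = 1 - d^2, \qquad \mathrm{logistic}'(x) = \mathrm{logistic}(x)(1 - \mathrm{logistic}(x)) = d(1 - d)\]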

dnnl_pooling_max

Max pooling.

dnnl_pooling_avg_include_padding

Average pooling with padding included in the averaging.

dnnl_pooling_avg_exclude_padding

Average pooling with padding excluded from the averaging.

dnnl_lrn_across_channels

Local response normalization (LRN) across multiple channels.

dnnl_lrn_within_channel

LRN within a single channel.

dnnl_vanilla_rnn

RNN cell.

dnnl_vanilla_lstm

LSTM cell.

dnnl_vanilla_gru

GRU cell.

dnnl_lbr_gru

GRU cell with linear before reset.

A modification of the original GRU cell. It differs from dnnl_vanilla_gru in how the new memory gate is calculated:

\[c_t = \tanh(W_c x_t + b_{c_x} + r_t * (U_c h_{t-1} + b_{c_h}))\]

The primitive expects four biases on input: \([b_{u}, b_{r}, b_{c_x}, b_{c_h}]\).
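
For comparison, the vanilla GRU (dnnl_vanilla_gru) applies the reset gate inside the recurrent projection; this is the standard GRU formulation, shown here only for contrast:

\[c_t = \tanh(W_c x_t + U_c (r_t * h_{t-1}) + b_c)\]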

dnnl_vanilla_augru

AUGRU cell.

dnnl_lbr_augru

AUGRU cell with linear before reset.

dnnl_binary_add

Binary add.

dnnl_binary_mul

Binary mul.

dnnl_binary_max

Binary max.

dnnl_binary_min

Binary min.

dnnl_binary_div

Binary div.

dnnl_binary_sub

Binary sub.

dnnl_binary_ge

Binary greater or equal.

dnnl_binary_gt

Binary greater than.

dnnl_binary_le

Binary less or equal.

dnnl_binary_lt

Binary less than.

dnnl_binary_eq

Binary equal.

dnnl_binary_ne

Binary not equal.
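
A minimal sketch of selecting a binary algorithm, assuming the oneDNN v3.x C API (the helper function and shapes are illustrative only). Swapping the alg kind is the only change needed to switch from, e.g., addition to an elementwise comparison whose outputs are 0 or 1:

#include <dnnl.h>

static void make_binary_add(dnnl_engine_t engine) {
    dnnl_dims_t dims = {1, 16, 32, 32};
    dnnl_memory_desc_t md;
    dnnl_memory_desc_create_with_tag(&md, 4, dims, dnnl_f32, dnnl_nchw);

    /* dnnl_binary_add selects elementwise addition of src0 and src1;
     * dnnl_binary_ge here would instead emit 1 where src0 >= src1, else 0. */
    dnnl_primitive_desc_t pd;
    dnnl_binary_primitive_desc_create(&pd, engine, dnnl_binary_add,
            md /* src0 */, md /* src1 */, md /* dst */, NULL);

    dnnl_primitive_desc_destroy(pd);
    dnnl_memory_desc_destroy(md);
}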

dnnl_resampling_nearest

Nearest-neighbor resampling method.

dnnl_resampling_linear

Linear resampling method.

dnnl_reduction_max

Reduction using max.

dnnl_reduction_min

Reduction using min.

dnnl_reduction_sum

Reduction using sum.

dnnl_reduction_mul

Reduction using mul.

dnnl_reduction_mean

Reduction using mean.

dnnl_reduction_norm_lp_max

Reduction using the Lp norm.

dnnl_reduction_norm_lp_sum

Reduction using the Lp norm.

dnnl_reduction_norm_lp_power_p_max

Reduction using the Lp norm without the final pth root.

dnnl_reduction_norm_lp_power_p_sum

Reduction using the Lp norm without the final pth root.
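
The four Lp-norm kinds differ in how the eps parameter enters the accumulation (the max/sum suffix) and in whether the final pth root is applied (the power_p variants skip it). Paraphrasing the oneDNN reduction primitive documentation rather than quoting it:

\[\mathrm{norm\_lp\_max} = \sqrt[p]{\sum_i \max(|s_i|, \varepsilon)^p}, \qquad \mathrm{norm\_lp\_sum} = \sqrt[p]{\sum_i (|s_i| + \varepsilon)^p}\]

\[\mathrm{norm\_lp\_power\_p\_max} = \sum_i \max(|s_i|, \varepsilon)^p, \qquad \mathrm{norm\_lp\_power\_p\_sum} = \sum_i (|s_i| + \varepsilon)^p\]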

dnnl_softmax_accurate

Softmax.

dnnl_softmax_log

Logsoftmax.
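
For reference, the two softmax kinds correspond to the conventional numerically stabilized definitions (standard identities, not taken from this header):

\[\mathrm{softmax}(s)_i = \frac{e^{s_i - \max_k s_k}}{\sum_j e^{s_j - \max_k s_k}}, \qquad \mathrm{logsoftmax}(s)_i = \log \mathrm{softmax}(s)_i\]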