Supported Fusion Patterns

Fusion Patterns

The following fusion patterns are subgraphs that the oneDNN Graph API recognizes as candidates for fusion. The patterns are described using oneDNN Graph operation (op) names with the following conventions.

Note

oneDNN Graph performs limited input validation to minimize performance overhead. The application is responsible for sanitizing inputs passed to the library. Because large u8 or s8 inputs may lead to accumulator overflow, you can use floating-point patterns instead of quantized patterns.

"+" describes a chain of two ops. The preceding op produces an output tensor, which is consumed by the following op as its first operand.

"[]" describes a component of the overall pattern description. For example, it could include a subgraph or all the op choices within the bracket.

"|" describes choices of multiple operations, say A+[B|C] means the graph partition contains A followed by B or C.

"," describes a graph composed of multiple subgraphs, each subgraph marks its output tensor explicitly, which is consumed by other subgraphs.

Superscript denotes the number of times a pattern repeats. For example, A+[B|C] \(^{3}\) means the graph partition contains A followed by three ops, each of which is either B or C. The superscript can also be a range, allowing the pattern to repeat any number of times within that range. If the range is between 0 and 1, the superscript "?" is used.

Subscript denotes the input and output tensors that explicitly mark the producer-consumer relation within one graph partition. For example, A \(_{>t1}\) +B+C \(_{<t1}\) refers to a pattern that starts with A, followed by B and C, where C takes an implicit input tensor from B and an extra input tensor t1 produced by A. ">" marks an output tensor and "<" marks an input tensor. Input and output tensors between neighboring ops are not explicitly marked; for example, B consumes t1 implicitly in the example above.

Subscript "out" marks the output tensor of a certain op to be the output of a graph partition. For example, in A \(_{>t1}\) +B \(_{>out}\) +C \(_{<t1,>out}\), B’s output and C’s output are marked as output tensors.

Subscript "in" marks the input tensor of a certain op to be the input of a graph partition. For example, in A \(_{<in1}\) +B \(_{<in1}\) A’s input and B’s second input are graph partition input, and they share the same input tensor in1. Most input tensors of a graph partition are not explicitly marked. For example, the input tensors of the first op are implicitly regarded as graph partition inputs. Besides, for input tensors of other ops, if they are not produced by any proceeding ops, they are regarded as implicit graph partition inputs. In the example A \(_{>t1}\) +B+C \(_{<t1}\), A’s inputs are regarded as implicit graph partition inputs, and if B is a binary operation, the second input tensor is an implicit graph partition input.

The following categories will be used to describe the fusion patterns.

  • Unary = [Abs | Clamp | Elu | Exp | GELU | HardSwish | LeakyReLU | Log | Sigmoid | SoftPlus | Pow | ReLU | Round | Sqrt | Square | Tanh]

  • Binary = [Add | Divide | Maximum | Minimum | Multiply | Subtract]

  • Reduction = [ReduceL1 | ReduceL2 | ReduceMax | ReduceMean | ReduceMin | ReduceProd | ReduceSum]

Inference

Floating Point Patterns

  • Convolution + BiasAdd \(^?\) + BatchNormInference \(^?\) + [Unary | Binary] \(^{0-3}\) \(_{>out}\) : Widely used in Convolutional Neural Networks, for example ResNet, ResNeXt, and SSD (see the code sketch after this list).

  • ConvTranspose + BiasAdd \(^?\) + [Unary | Binary] \(^{0-3}\) \(_{>out}\) : Widely used in Generative Adversarial Networks.

  • Interpolate + [Unary | Binary] \(^{0-3}\) \(_{>out}\) : Widely used for image processing.

  • MatMul + BiasAdd \(^?\) + [Unary | Binary] \(^{0-3}\) + Select \(^?\) \(_{>out}\) : Widely used in language models and recommendation models, for example BERT and DLRM.

  • Reduction + [Unary | Binary] \(^{0-3}\) \(_{>out}\) : Widely used for data processing, for example loss reduction.

  • Unary + Binary \(^{0-3}\) \(_{>out}\) : Widely used in Convolutional Neural Networks.

  • Binary + [Unary | Binary] \(^{0-3}\) \(_{>out}\) : Widely used in Generative Adversarial Networks, for example ParallelWaveGAN.

  • [AvgPool | MaxPool] + Binary \(^{0-3}\) \(_{>out}\) : Widely used in Convolutional Neural Networks.

  • BatchNormInference + ReLU \(_{>out}\) : Widely used in Convolutional Neural Networks, for example DenseNet.

  • Reciprocal + Multiply \(_{>out}\)

  • Reorder + Add \(_{>out}\)
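
As an illustration of how such a pattern reaches the library, the following sketch builds the Convolution + BiasAdd + ReLU chain (the first pattern above, with ReLU as the Unary post-op) using the oneDNN Graph C++ API and queries the resulting partitions. This is a minimal sketch: the tensor shapes, ids, and attribute values are illustrative assumptions, and whether the three ops come back as a single partition depends on the backend and engine kind.

    // Sketch: build Convolution + BiasAdd + ReLU and ask for partitions.
    #include <iostream>
    #include <vector>
    #include "oneapi/dnnl/dnnl_graph.hpp"

    using namespace dnnl::graph;

    int main() {
        using dt = logical_tensor::data_type;
        using lt = logical_tensor::layout_type;

        // Logical tensors in NCX / OIX layouts; shapes are examples only.
        logical_tensor src      {0, dt::f32, {1, 64, 56, 56}, lt::strided};
        logical_tensor wei      {1, dt::f32, {64, 64, 3, 3},  lt::strided};
        logical_tensor conv_dst {2, dt::f32, {1, 64, 56, 56}, lt::strided};
        logical_tensor bias     {3, dt::f32, {64},            lt::strided};
        logical_tensor bias_dst {4, dt::f32, {1, 64, 56, 56}, lt::strided};
        logical_tensor relu_dst {5, dt::f32, {1, 64, 56, 56}, lt::strided};

        // Convolution + BiasAdd + ReLU, matching
        // Convolution + BiasAdd^? + [Unary | Binary]^{0-3}.
        op conv {0, op::kind::Convolution, {src, wei}, {conv_dst}, "conv"};
        conv.set_attr<std::vector<int64_t>>(op::attr::strides, {1, 1});
        conv.set_attr<std::vector<int64_t>>(op::attr::pads_begin, {1, 1});
        conv.set_attr<std::vector<int64_t>>(op::attr::pads_end, {1, 1});
        conv.set_attr<std::vector<int64_t>>(op::attr::dilations, {1, 1});
        conv.set_attr<int64_t>(op::attr::groups, 1);
        conv.set_attr<std::string>(op::attr::data_format, "NCX");
        conv.set_attr<std::string>(op::attr::weights_format, "OIX");

        op bias_add {1, op::kind::BiasAdd, {conv_dst, bias}, {bias_dst}, "bias"};
        op relu     {2, op::kind::ReLU,    {bias_dst},       {relu_dst}, "relu"};

        graph g {dnnl::engine::kind::cpu};
        g.add_op(conv);
        g.add_op(bias_add);
        g.add_op(relu);
        g.finalize();

        // If the backend recognizes the fusion, all three ops are returned
        // in a single partition (typically one partition here).
        auto parts = g.get_partitions();
        std::cout << "partitions: " << parts.size() << "\n";
        return 0;
    }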

Quantized Patterns

  • Quantize \(^?\) + Dequantize \(_{>t1}\) , Dequantize \(_{>t2}\) \(^{0-3}\) , Dequantize + Convolution \(_{<t1}\) + BiasAdd \(^?\) + [Unary | Binary \(_{<t2}\) ] \(^{0-3}\) + Quantize \(^?\) \(_{>out}\) (see the int8 sketch after this list)

  • Quantize \(^?\) + Dequantize \(_{>t1}\) , Dequantize \(_{>t2}\) \(^{0-3}\) , Dequantize + ConvTranspose \(_{<t1}\) + BiasAdd \(^?\) + [Unary | Binary \(_{<t2}\) ] \(^{0-3}\) + Quantize \(^?\) \(_{>out}\)

  • Quantize \(^?\) + Dequantize \(_{>t1}\) , Dequantize \(_{>t2}\) \(^{0-3}\) , Dequantize + MatMul \(_{<t1}\) + BiasAdd \(^?\) + [Unary | Binary \(_{<t2}\) ] \(^{0-3}\) + Select \(^?\) + Quantize \(^?\) \(_{>out}\)

  • Dequantize + [AvgPool | MaxPool] + Quantize \(_{>out}\)

  • Dequantize \(_{>t1}\) , Dequantize + [AvgPool | MaxPool] + Add \(_{<t1}\) + Quantize \(_{>out}\)

  • Dequantize + Reorder + Quantize \(_{>out}\)

  • Dequantize \(_{>t1}\) , Dequantize + Reorder + Add \(_{<t1}\) + Quantize \(_{>out}\)
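
The sketch below shows a reduced form of the first quantized pattern above: Dequantize for the activations, Dequantize for the weights (the t1 producer), Convolution, and Quantize, with the optional BiasAdd and post-ops omitted. Shapes, scales, and zero points are illustrative assumptions; a backend that supports the quantized pattern may return the four ops as one partition, otherwise they come back separately.

    // Sketch: int8 Dequantize + Dequantize + Convolution + Quantize.
    #include <iostream>
    #include <vector>
    #include "oneapi/dnnl/dnnl_graph.hpp"

    using namespace dnnl::graph;

    int main() {
        using dt = logical_tensor::data_type;
        using lt = logical_tensor::layout_type;

        logical_tensor src_u8  {0, dt::u8,  {1, 8, 14, 14}, lt::strided};
        logical_tensor src_f32 {1, dt::f32, {1, 8, 14, 14}, lt::strided};
        logical_tensor wei_s8  {2, dt::s8,  {8, 8, 3, 3},   lt::strided};
        logical_tensor wei_f32 {3, dt::f32, {8, 8, 3, 3},   lt::strided};
        logical_tensor dst_f32 {4, dt::f32, {1, 8, 14, 14}, lt::strided};
        logical_tensor dst_u8  {5, dt::u8,  {1, 8, 14, 14}, lt::strided};

        // Dequantize the u8 activations and s8 weights to f32.
        op deq_src {0, op::kind::Dequantize, {src_u8}, {src_f32}, "deq_src"};
        deq_src.set_attr<std::string>(op::attr::qtype, "per_tensor");
        deq_src.set_attr<std::vector<float>>(op::attr::scales, {0.1f});
        deq_src.set_attr<std::vector<int64_t>>(op::attr::zps, {0});

        op deq_wei {1, op::kind::Dequantize, {wei_s8}, {wei_f32}, "deq_wei"};
        deq_wei.set_attr<std::string>(op::attr::qtype, "per_tensor");
        deq_wei.set_attr<std::vector<float>>(op::attr::scales, {0.05f});
        deq_wei.set_attr<std::vector<int64_t>>(op::attr::zps, {0});

        // f32 Convolution between the two dequantized tensors.
        op conv {2, op::kind::Convolution, {src_f32, wei_f32}, {dst_f32}, "conv"};
        conv.set_attr<std::vector<int64_t>>(op::attr::strides, {1, 1});
        conv.set_attr<std::vector<int64_t>>(op::attr::pads_begin, {1, 1});
        conv.set_attr<std::vector<int64_t>>(op::attr::pads_end, {1, 1});
        conv.set_attr<std::vector<int64_t>>(op::attr::dilations, {1, 1});
        conv.set_attr<int64_t>(op::attr::groups, 1);
        conv.set_attr<std::string>(op::attr::data_format, "NCX");
        conv.set_attr<std::string>(op::attr::weights_format, "OIX");

        // Quantize the result back to u8.
        op quant {3, op::kind::Quantize, {dst_f32}, {dst_u8}, "quant"};
        quant.set_attr<std::string>(op::attr::qtype, "per_tensor");
        quant.set_attr<std::vector<float>>(op::attr::scales, {0.2f});
        quant.set_attr<std::vector<int64_t>>(op::attr::zps, {0});

        graph g {dnnl::engine::kind::cpu};
        g.add_op(deq_src);
        g.add_op(deq_wei);
        g.add_op(conv);
        g.add_op(quant);
        g.finalize();

        std::cout << "partitions: " << g.get_partitions().size() << "\n";
        return 0;
    }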

Training

  • ConvolutionBackwardWeights + BiasAddBackward \(_{>out}\)

  • ReLUBackward + BatchNormTrainingBackward \(_{>out}\)

All the above fusion patterns are supported by default.

Aggressive Fusion Patterns

Aggressive fusion patterns also follow the pattern description convention defined in the Fusion Patterns section.

Note

Aggressive fusion patterns are only supported when Graph Compiler is enabled.

The following categories will also be used to describe aggressive fusion patterns.

  • ReshapeTranspose = [StaticReshape + StaticTranspose \(^{1-2}\)]

  • Activation = [ReLU | Sigmoid | GELU]

  • ActivationBackward = [ReLUBackward | SigmoidBackward | GELUBackward]

Inference

Floating Point Patterns

  • MatMul + [Multiply | Divide] + Add + Softmax + MatMul + StaticTranspose + Reorder \(_{>out}\) : Multi-head Attention. Widely used in models containing encoder-decoder structures, for example BERT.

  • ReshapeTranspose \(_{>t1}\) , ReshapeTranspose \(_{>t2}\) , ReshapeTranspose + MatMul \(_{<t1}\) + [Multiply | Divide] + Add + Softmax + MatMul \(_{<t2}\) + StaticTranspose + StaticReshape \(_{>out}\) : Multi-head Attention.

  • MatMul + Activation \(_{>t1}\) , [MatMul \(_{<t1}\) + Activation \(_{>t1}\) ] \(^{0-4}\) , MatMul \(_{<t1}\) + Activation \(_{>out}\) : Multi-layer Perceptron. Widely used in recommendation models, for example DLRM (see the sketch after this list).

  • [Convolution + BiasAdd \(^{?}\) + ReLU] \(^{1-3}\) + Convolution + BiasAdd \(^{?}\) + Add + ReLU \(_{>out}\) : Identical Bottleneck. Enabled only in the single-thread runtime scenario. Widely used in Convolutional Neural Networks, for example ResNet.

  • Convolution + BiasAdd \(^{?}\) \(_{>t1}\) , [Convolution + BiasAdd \(^{?}\) + ReLU] \(^{1-3}\) + Convolution + BiasAdd \(^{?}\) + Add \(_{<t1}\) + ReLU \(_{>out}\) : Convolutional Bottleneck. Enabled only in the single-thread runtime scenario. Widely used in Convolutional Neural Networks, for example ResNet.

Quantized Patterns

  • Dequantize \(_{>t1}\) , Dequantize \(_{>t2}\) , Dequantize + MatMul \(_{<t1}\) + [Multiply | Divide] + Add + Softmax + Quantize + Dequantize + MatMul \(_{<t2}\) + StaticTranspose + Reorder + Quantize \(_{>out}\) : Quantized Multi-head Attention.

  • Dequantize + ReshapeTranspose \(_{>t1}\) , Dequantize + ReshapeTranspose \(_{>t2}\) , Dequantize + MatMul \(_{<t1}\) + [Multiply | Divide] + Add + Softmax + Quantize + Dequantize + MatMul \(_{<t2}\) + StaticTranspose + StaticReshape + Quantize \(_{>out}\) : Quantized Multi-head Attention.

  • Dequantize \(_{>t1}\) , Dequantize + MatMul \(_{<t1}\) + Activation + Quantize \(_{>t2}\) , [Dequantize \(_{>t3}\) , Dequantize \(_{<t2}\) + MatMul \(_{<t3}\) + Activation + Quantize \(_{>t2}\) ] \(^{0-4}\) , Dequantize \(_{>t4}\) , Dequantize \(_{<t2}\) + MatMul \(_{<t4}\) + Activation + Quantize \(_{>out}\) : Quantized Multi-layer Perceptron.

  • Dequantize \(_{>t2}\) , Dequantize \(_{>t3}\) , [Dequantize \(_{>t1}\) , Dequantize + Convolution \(_{<t1}\) + BiasAdd \(^{?}\) + ReLU + Quantize] \(^{1-3}\) + Dequantize + Convolution \(_{<t2}\) + BiasAdd \(^{?}\) + Add \(_{<t3}\) + ReLU + Quantize \(_{>out}\) : Quantized Identical Bottleneck. Enabled only in the single-thread runtime scenario.

  • [Dequantize \(_{>t1}\) , Dequantize + Convolution \(_{<t1}\) + BiasAdd \(^{?}\) + Quantize + Dequantize] \(_{>t2}\) , Dequantize \(_{>t4}\) , [Dequantize \(_{>t3}\) , Dequantize + Convolution \(_{<t3}\) + BiasAdd \(^{?}\) + ReLU + Quantize] \(^{1-3}\) + Dequantize + Convolution \(_{<t4}\) + BiasAdd \(^{?}\) + Add \(_{<t2}\) + ReLU + Quantize \(_{>out}\) : Quantized Convolutional Bottleneck. Enabled only in the single-thread runtime scenario.

Training

  • Dequantize \(_{>t1}\) , Dequantize \(_{>t2}\) , Dequantize + MatMul \(_{<t1}\) + [Multiply | Divide] + Add + Softmax + Quantize + Dequantize + MatMul \(_{<t2}\) + StaticTranspose + Reorder + Quantize \(_{>out}\) : Multi-head Attention Training Forward Pattern.

  • StaticReshape + StaticTranspose \(_{>t1}\) + MatMul + Multiply \(_{>t2}\) + Subtract \(_{<t3}\) + Multiply \(^{?}\) + [Multiply | Divide] \(_{>t4}\) + MatMul \(_{>out1}\) , Multiply \(_{<t2}\) + ReduceSum \(_{>t3}\) , MatMul \(_{<t1,>out2}\) , MatMul \(_{<t4,>out3}\) : Multi-head Attention Training Backward Pattern.

  • MatMul \(_{>out1}\) + Activation \(_{>t1,>out2}\) , [MatMul \(_{<t1,>out3}\) + Activation \(_{>t1,>out4}\) ] \(^{0-4}\) , MatMul \(_{<t1,>out5}\) + Activation \(_{>out6}\) : Multi-layer Perceptron Training Forward Pattern.

  • StaticTranspose \(^{?}\) \(_{>t0}\) , ActivationBackward \(_{>t2}\) + MatMul \(_{<t0,>t1}\) , ReduceSum \(^{?}\) \(_{<t2,>out1}\) , StaticTranspose \(^{?}\) + MatMul \(_{<t2,>out2}\) , [StaticTranspose \(^{?}\) \(_{>t3}\) , ActivationBackward \(_{>t4,<t1}\) + MatMul \(_{<t3,>t1}\) , ReduceSum \(^{?}\) \(_{<t4,>out3}\) , StaticTranspose \(^{?}\) + MatMul \(_{<t4,>out4}\) ] \(^{0-4}\) , StaticTranspose \(^{?}\) \(_{>t5}\) , ActivationBackward \(_{>t6,<t1}\) + MatMul \(_{<t5,>out5}\) , ReduceSum \(^{?}\) \(_{<t6,>out6}\) , StaticTranspose \(^{?}\) + MatMul \(_{<t6,>out7}\) : Multi-layer Perceptron Training Backward Pattern.

  • Convolution \(_{>out1}\) + BatchNormForwardTraining \(_{>out2}\) + ReLU \(_{>out3}\) + Convolution \(_{>out4}\) + BatchNormForwardTraining \(_{>out5}\) + ReLU \(_{>out6}\) + Convolution \(_{>out7}\) + BatchNormForwardTraining \(_{>out8}\) + Add + ReLU \(_{>out9}\) : Identical Bottleneck Training Forward Pattern.

  • Convolution \(_{>out1}\) + BatchNormForwardTraining \(_{>t1,>out2}\) , Convolution \(_{>out3}\) + BatchNormForwardTraining \(_{>out4}\) + ReLU \(_{>out5}\) + Convolution \(_{>out6}\) + BatchNormForwardTraining \(_{>out7}\) + ReLU \(_{>out8}\) + Convolution \(_{>out9}\) + BatchNormForwardTraining \(_{>out10}\) + Add \(_{<t1}\) + ReLU \(_{>out11}\) : Convolutional Bottleneck Training Forward Pattern.

  • ReLUBackward \(_{>t1}\) + BatchNormTrainingBackward \(_{>t2,>out1}\) + ConvolutionBackwardData + ReLUBackward + BatchNormTrainingBackward \(_{>t3,>out2}\) + ConvolutionBackwardData + ReLUBackward + BatchNormTrainingBackward \(_{>t4,>out3}\) + ConvolutionBackwardData + Add \(_{<t1,>out4}\) , ConvolutionBackwardWeights \(_{<t2,>out5}\) , ConvolutionBackwardWeights \(_{<t3,>out6}\) , ConvolutionBackwardWeights \(_{<t4,>out7}\) : Identical Bottleneck Training Backward Pattern.

  • ReLUBackward \(_{>t1}\) + BatchNormTrainingBackward \(_{>t2,>out1}\) + ConvolutionBackwardData + ReLUBackward + BatchNormTrainingBackward \(_{>t3,>out2}\) + ConvolutionBackwardData + ReLUBackward + BatchNormTrainingBackward \(_{>t4,>out3}\) + ConvolutionBackwardData + Add \(_{<t6,>out4}\) , BatchNormTrainingBackward \(_{<t1,>t5,>out5}\) + ConvolutionBackwardData \(_{>t6}\) , ConvolutionBackwardWeights \(_{<t2,>out6}\) , ConvolutionBackwardWeights \(_{<t3,>out7}\) , ConvolutionBackwardWeights \(_{<t4,>out8}\) , ConvolutionBackwardWeights \(_{<t5,>out9}\) : Convolutional Bottleneck Training Backward Pattern.