Primitive Attributes: Quantization

Introduction

Some primitives in the library support input/output tensors with the INT8 (either signed or unsigned) data type. The primary goal is to support reduced-precision inference on compatible hardware.

Related materials:

Quantization Model

The primary quantization model that the library assumes is the following:

\[ x_{f32}(:) = scale_{f32} \cdot (x_{int8}(:) - 0_{x\_int8}) \]

where \(scale_{f32}\) is a scaling factor that is somehow known in advance (typically, the process of obtaining these scale factors is called calibration). This might be counter-intuitive, but the library cannot compute any of the scale factors at run time dynamically. Hence, the model is sometimes called a static quantization model. The main rationale for supporting only static quantization out of the box is higher performance. Those who want to use dynamic quantization can do so in a few steps (a plain sketch of these steps follows the list):

  1. Compute the result in higher precision, like mkldnn::memory::data_type::s32.
  2. Find the required characteristics, like min and max values, and derive the scale factor.
  3. Re-quantize to the lower precision data type.
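Below is a minimal plain C++ sketch of these three steps. It does not use the library API; the s32 accumulators, the combined input scale, and the symmetric max-abs heuristic for deriving the destination scale are assumptions made purely for illustration.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Dynamic re-quantization sketch: the s32 result and the combined input
// scale (e.g. scale_src * scale_weights) are assumed to be given.
std::vector<int8_t> requantize_dynamic(const std::vector<int32_t> &acc_s32,
        float combined_input_scale, float &scale_dst) {
    // 1. Compute the result in higher precision (f32 here).
    std::vector<float> result_f32(acc_s32.size());
    for (size_t i = 0; i < acc_s32.size(); ++i)
        result_f32[i] = combined_input_scale * acc_s32[i];

    // 2. Find the required characteristics (here: max absolute value)
    //    and derive the destination scale factor.
    float max_abs = 0.f;
    for (float v : result_f32) max_abs = std::max(max_abs, std::fabs(v));
    scale_dst = max_abs > 0.f ? max_abs / 127.f : 1.f;

    // 3. Re-quantize to the lower precision data type with saturation.
    std::vector<int8_t> result_s8(result_f32.size());
    for (size_t i = 0; i < result_f32.size(); ++i) {
        float q = std::nearbyint(result_f32[i] / scale_dst);
        q = std::max(-128.f, std::min(127.f, q));
        result_s8[i] = static_cast<int8_t>(q);
    }
    return result_s8;
}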

It is also worth mentioning that the library supports a fixed zero position. For most of the primitives, the real zero value is mapped to zero in the quantized values; that is, \(0_{x\_int8} = 0\). For example, this is the only model that Convolution and Inner Product currently support. The RNN primitives have limited support for a shifted zero point (for details, refer to the corresponding section in RNN).

For the rest of this guide, we will assume that \(0_{x\_int8} = 0\).

Warning
Depending on the architecture, the behavior of int8 computations might slightly vary. For more details, refer to Int8 Computation Aspects.

This guide doesn't cover how the appropriate scaling factor can be found. Refer to the materials in the Introduction.

Example: Convolution Quantization Workflow

Let's consider a simple example: a convolution without bias. The tensors are represented as:

  - \(src_{f32}(:) = scale_{src} \cdot src_{int8}(:)\)
  - \(weights_{f32}(:) = scale_{weights} \cdot weights_{int8}(:)\)
  - \(dst_{f32}(:) = scale_{dst} \cdot dst_{int8}(:)\)

Here \(src_{f32}, weights_{f32}, dst_{f32}\) are not computed at all; the whole work happens with the INT8 tensors. As mentioned above, we also somehow know all the scaling factors: \(scale_{src}, scale_{weights}, scale_{dst}\).

So the task is to compute the \(dst_{int8}\) tensor.

Mathematically, the computations are pretty straightforward:

\[ dst_{int8}(:) = downconvert\_f32\_to\_int8( output\_scale \cdot conv_{s32}(src_{int8}, weights_{int8}) ), \]

where:

  - \(output\_scale := \frac{scale_{src} \cdot scale_{weights}}{scale_{dst}}\);
  - \(conv_{s32}(\cdot, \cdot)\) is a regular convolution that operates on the INT8 inputs and accumulates the result in the int32 data type;
  - \(downconvert\_f32\_to\_int8(\cdot)\) converts an f32 value to int8 with rounding and saturation.

Note that in order to perform the operation, one doesn't need to know the exact scaling factors for all the tensors; it is enough to know only the \(output\_scale\). The library utilizes this fact; a user needs to provide only this one extra parameter (see the Output Scaling Attribute section below) to perform the convolution.
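For instance, with illustrative values \(scale_{src} = 0.1\), \(scale_{weights} = 0.05\), and \(scale_{dst} = 0.2\) (numbers made up for the example), the single extra parameter would be:

\[ output\_scale = \frac{0.1 \cdot 0.05}{0.2} = 0.025. \]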

Per-Channel Scaling

Some of the primitives have limited support for multiple scales per quantized tensor. The most popular use case is the Convolution primitive, which supports per-output-channel scaling factors for the weights, meaning that the actual convolution computations need to scale different output channels differently. This is possible without a significant performance drop because the per-output-channel re-quantization is required only at the very end of the computations. It would be impossible to implement the same trick for the input channels, since that would require re-quantization for every input data point.

Assume we have (the scales are designated as \(\alpha\) to simplify reading):

  - \(src_{f32}(n, ic, ih, iw) = \alpha_{src} \cdot src_{int8}(n, ic, ih, iw)\)
  - \(weights_{f32}(oc, ic, kh, kw) = \alpha_{weights}(oc) \cdot weights_{int8}(oc, ic, kh, kw)\)
  - \(dst_{f32}(n, oc, oh, ow) = \alpha_{dst} \cdot dst_{int8}(n, oc, oh, ow)\)

Note that now the weights' scaling factor depends on the \(oc\).

To compute the \(dst_{int8}\) we need to perform the following:

\[ dst_{int8}(n, oc, oh, ow) = downconvert\_f32\_to\_int8( output\_scale(oc) \cdot conv_{s32}(src_{int8}, weights_{int8})|_{(n, oc, oh, ow)} ), \]

where now:

  - \(output\_scale(oc) := \frac{\alpha_{src} \cdot \alpha_{weights}(oc)}{\alpha_{dst}}\).

It is worth mentioning that a user has to prepare the quantized weights accordingly. To that end, Intel MKL-DNN provides reorders that can perform per-channel scaling:

\[ weights_{int8}(oc, ic, kh, kw) = downconvert\_f32\_to\_int8( output\_scale(oc) \cdot weights_{f32}(oc, ic, kh, kw) ), \]

where:

  - \(output\_scale(oc) := \frac{1}{\alpha_{weights}(oc)}\).
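To make the per-channel quantization concrete, here is a minimal plain C++ sketch of what such a reorder computes. The plain OIHW layout, the rounding, and the saturation bounds are simplifying assumptions; the library reorder additionally handles the memory format conversion.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Per-output-channel weights quantization sketch (plain OIHW layout assumed).
void quantize_weights_per_oc(const std::vector<float> &wei_f32,
        const std::vector<float> &alpha_weights, // one scale per output channel
        int OC, int IC, int KH, int KW, std::vector<int8_t> &wei_s8) {
    wei_s8.resize(wei_f32.size());
    for (int oc = 0; oc < OC; ++oc) {
        const float output_scale = 1.f / alpha_weights[oc]; // per-channel scale
        for (int i = 0; i < IC * KH * KW; ++i) {
            const size_t off = (size_t)oc * IC * KH * KW + i;
            float q = std::nearbyint(output_scale * wei_f32[off]);
            wei_s8[off] = (int8_t)std::max(-128.f, std::min(127.f, q));
        }
    }
}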

API

The library API to support INT8 was designed for the model described above. However, it does not require users to follow exactly this model: as long as users can fit their model into the given functionality, everything should work fine. With this in mind, we tried to design a minimal and simple, yet powerful enough, quantization API.

The most common data types for data tensors during INT8 inference are mkldnn::memory::data_type::s8 and mkldnn::memory::data_type::u8. The scaling factors related to the tensors are not attached in any way to the Intel MKL-DNN memory objects and must be maintained by the user.

The library essentially extends the ability of the primitives to scale the output before storing the result in the memory with the destination data type. That is exactly the minimum needed to support INT8 inference (see the equations above; only \(output\_scale\) is non-standard).

The scaling happens in the single precision floating point data type (mkldnn::memory::data_type::f32). Before storing, the result is downconverted to the destination data type with saturation if required. The rounding happens according to the current HW setting (for instance, on CPU according to the MXCSR register).
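As an illustration of what this downconversion amounts to, here is a small sketch. The actual rounding follows the hardware setting, as noted above; std::nearbyint with the default round-to-nearest-even mode is an assumption made for the example.

#include <algorithm>
#include <cmath>
#include <cstdint>

// Scale in f32, then downconvert to s8 with saturation.
inline int8_t downconvert_f32_to_s8(float acc_f32, float output_scale) {
    float scaled = output_scale * acc_f32;                 // scaling in f32
    float rounded = std::nearbyint(scaled);                // rounding
    rounded = std::max(-128.f, std::min(127.f, rounded));  // saturation to s8
    return static_cast<int8_t>(rounded);
}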

Output Scaling Attribute

The library uses the Primitive Attributes API for setting the scaling factors for most of the primitives. The supported attributes can be found in the documentation for each primitive. Unsupported cases are handled according to the attributes error handling section.

API:

  - C: mkldnn_primitive_attr_set_output_scales()
  - C++: mkldnn::primitive_attr::set_output_scales()

The primitives do not support output scales if the source (and weights) tensors are not of an int8 data type. In other words, a regular f32 convolution cannot scale the output result.

The parameters (C++ API for simplicity):

void mkldnn::primitive_attr::set_output_scales(
        int mask,
        const std::vector<float> &scales
        );

In the simplest case, when there is only one common scale, the attribute changes the op behavior from

\[ dst(:) = Op(...) \]

to

\[ dst(:) = scale \cdot Op(...). \]
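For example, a single common scale can be set as follows (a minimal sketch; the value 0.3f is arbitrary):

mkldnn::primitive_attr attr;
attr.set_output_scales(/* mask = */ 0, {0.3f}); // one common scale for the whole dst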

To support scales per one or several dimensions, users must set the appropriate mask.

Say the destination is a \(D_0 \times ... \times D_{n-1}\) tensor and we want to have output scales per the \(d_i\) dimensions (where \(0 \le d_i < n\)).

Then the mask should be set to:

\[ mask = \sum_i 2^{d_i}, \]

and the number of scales should be:

\[ \mathtt{scales.size()} = \prod_i D_{d_i}. \]

For instance, for an \(N \times C \times H \times W\) destination with per-channel scales (dimension #1), \(mask = 2^1 = 2\) and the number of scales is \(C\).

Example 1: weights quantization with per-output-channel-and-group scaling

// weights dimensions
const int G, OC, IC, KH, KW;
// original f32 weights in the user's format
mkldnn::memory::desc wei_user_f32_md(
        {G, OC/G, IC/G, KH, KW},          // dims
        mkldnn::memory::data_type::f32,   // the data originally in f32
        mkldnn::memory::format_tag::hwigo // the memory format the user uses
        );
// the scaling factors for the quantized weights:
// a unique scale for each group and output channel
std::vector<float> wei_scales(G * OC/G) = {...};
// ...
// int8 convolution primitive descriptor conv_pd (created in the next example)
// query the convolution weights memory descriptor
mkldnn::memory::desc wei_conv_s8_md = conv_pd.weights_desc();
// prepare the inverse of the scales (f32 = scale * int8 --> int8 = 1/scale * f32)
std::vector<float> inv_wei_scales(wei_scales.size());
for (size_t i = 0; i < wei_scales.size(); ++i)
    inv_wei_scales[i] = 1.f / wei_scales[i];
// prepare the attributes for the reorder
mkldnn::primitive_attr attr;
const int mask = 0
    | (1 << 0)  // scale per  G dimension, which is the dim #0
    | (1 << 1); // scale per OC dimension, which is the dim #1
attr.set_output_scales(mask, inv_wei_scales);
// create a reorder that would perform:
//   wei_s8(g, oc, ic, kh, kw) <- 1/scale(g, oc) * wei_f32(g, oc, ic, kh, kw)
// including the data format transformation
auto wei_reorder_pd = mkldnn::reorder::primitive_desc(
        engine, wei_user_f32_md, // source
        engine, wei_conv_s8_md,  // destination
        attr);
auto wei_reorder = mkldnn::reorder(wei_reorder_pd);
// ...
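To actually perform the quantization, the reorder is then executed on a stream. This is a sketch; the memory objects wei_user_f32_mem and wei_conv_s8_mem (created from wei_user_f32_md and wei_conv_s8_md, respectively) are assumed to exist.

mkldnn::stream s(engine);
wei_reorder.execute(s,
        {{MKLDNN_ARG_FROM, wei_user_f32_mem},
         {MKLDNN_ARG_TO, wei_conv_s8_mem}});
s.wait();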

Example 2: convolution with groups, with per-output-channel quantization

This example is complementary to the previous one; in practice the convolution primitive descriptor created here is needed first, because Example 1 queries it for the weights memory descriptor. Let's say we want an INT8 convolution with per-output-channel scaling.

const float src_scale; // source scale factor:      src_f32[:] = src_scale * src_s8[:]
const float dst_scale; // destination scale factor: dst_f32[:] = dst_scale * dst_s8[:]
// the scaling factors for the quantized weights (as declared above):
// a unique scale for each group and output channel
std::vector<float> wei_scales(G * OC/G) = {...};
// src, weights, and dst memory descriptors for the convolution,
// with memory format tag == any to allow a convolution implementation
// to choose the appropriate memory format
mkldnn::memory::desc src_conv_s8_any_md(
        {BATCH, IC, IH, IW},            // dims
        mkldnn::memory::data_type::s8,  // the data is in s8
        mkldnn::memory::format_tag::any // let the convolution choose
        );
mkldnn::memory::desc wei_conv_s8_any_md(
        {G, OC/G, IC/G, KH, KW},        // dims
        mkldnn::memory::data_type::s8,  // the data is in s8
        mkldnn::memory::format_tag::any // let the convolution choose
        );
mkldnn::memory::desc dst_conv_s8_any_md(...); // ditto
// create a convolution operation descriptor
mkldnn::convolution_forward::desc conv_d(
        mkldnn::prop_kind::forward_inference,
        mkldnn::algorithm::convolution_direct,
        src_conv_s8_any_md, // what's important is that
        wei_conv_s8_any_md, // we specified that we want
        dst_conv_s8_any_md, // computations in s8
        strides, padding_l, padding_r);
// prepare the attributes for the convolution
mkldnn::primitive_attr attr;
const int mask = 0
    | (1 << 1); // scale per OC dimension, which is the dim #1 on the dst tensor:
                // (BATCH, OC, OH, OW)
                //    0     1   2   3
std::vector<float> conv_output_scales(G * OC/G);
for (int g_oc = 0; g_oc < G * OC/G; ++g_oc)
    conv_output_scales[g_oc] = src_scale * wei_scales[g_oc] / dst_scale;
attr.set_output_scales(mask, conv_output_scales);
// create a convolution primitive descriptor with the scaling factors
auto conv_pd = mkldnn::convolution_forward::primitive_desc(
        conv_d, // general (non-customized) operation descriptor
        attr,   // the attributes contain the output scaling
        engine);
// ...
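From here, creating and executing the primitive follows the usual flow. This is a sketch; the memory objects src_s8_mem, wei_s8_mem (produced by the reorder from Example 1), and dst_s8_mem are assumed to have been created with the memory descriptors queried from conv_pd.

auto conv = mkldnn::convolution_forward(conv_pd);
mkldnn::stream s(engine);
conv.execute(s, {
        {MKLDNN_ARG_SRC, src_s8_mem},
        {MKLDNN_ARG_WEIGHTS, wei_s8_mem},
        {MKLDNN_ARG_DST, dst_s8_mem}});
s.wait();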

Interplay of output scales with post-ops

In general, the post-ops are independent of the output scales: the output scales are applied to the result first, and then the post-ops take effect.

For details, refer to the Tanh -> Sum -> ScaleShift example.

However, this has implications for the scaling factors passed to the library. Consider the following example of a convolution with \(\tanh\) as a post-op:

\[ dst_{s8}(:) = \frac{1}{scale_{dst}} \cdot \tanh( scale_{src} \cdot scale_{weights} \cdot conv_{s32}(src_{s8}, wei_{s8}) ) \]

As you can see:

  - the convolution output scales are now \(output\_scale = scale_{src} \cdot scale_{weights}\) only; there is no division by \(scale_{dst}\);
  - the \(\frac{1}{scale_{dst}}\) factor has to be applied after the \(\tanh\), that is, as part of the post-op itself (for the eltwise post-op, this is its scale parameter).
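A minimal sketch of the corresponding attribute setup follows; the scalar scales and the single common output scale (mask = 0) are assumptions made for this illustration.

const float scale_src = ..., scale_weights = ..., scale_dst = ...;
mkldnn::primitive_attr attr;
// output scale: scale_src * scale_weights (no division by scale_dst here)
attr.set_output_scales(/* mask = */ 0, {scale_src * scale_weights});
// tanh post-op; its scale parameter carries the 1 / scale_dst factor
mkldnn::post_ops po;
po.append_eltwise(1.f / scale_dst, mkldnn::algorithm::eltwise_tanh,
        0.f, 0.f); // alpha and beta are not used by tanh
attr.set_post_ops(po);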