Inner Product

API reference: C, C++

The inner product primitive (sometimes called fully connected) treats each activation in the minibatch as a vector and computes its product with a 2D weights tensor, producing a 2D output tensor.

More precisely, let \(src\), \(weights\), \(bias\), and \(dst\) be \(N \times IC\), \(OC \times IC\), \(OC\), and \(N \times OC\) tensors, respectively (the variable names follow the standard Naming Conventions). Then:

\[dst(n, oc) = bias(oc) + \sum_{ic=0}^{IC-1} src(n, ic) \cdot weights(oc, ic)\]

If the \(src\) tensor has spatial dimensions, it is flattened to 2D. For example, if it is a 4D \(N \times IC' \times IH \times IW\) tensor, the formula above is applied with \(IC = IC' \cdot IH \cdot IW\).
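For illustration only, a naive reference implementation of the formula above could look like the following plain C++ sketch (hypothetical row-major buffers; this is not the library's actual implementation):

```cpp
#include <cstddef>
#include <vector>

// Reference inner product: dst(n, oc) = bias(oc) + sum_ic src(n, ic) * weights(oc, ic).
// All tensors are dense, row-major, with hypothetical sizes N, IC, OC.
void inner_product_ref(const std::vector<float> &src,     // N x IC
                       const std::vector<float> &weights, // OC x IC
                       const std::vector<float> &bias,    // OC
                       std::vector<float> &dst,           // N x OC
                       std::size_t N, std::size_t IC, std::size_t OC) {
    for (std::size_t n = 0; n < N; ++n)
        for (std::size_t oc = 0; oc < OC; ++oc) {
            float acc = bias[oc];
            for (std::size_t ic = 0; ic < IC; ++ic)
                acc += src[n * IC + ic] * weights[oc * IC + ic];
            dst[n * OC + oc] = acc;
        }
}
```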

Difference Between Forward Training and Forward Inference

There is no difference between the dnnl::prop_kind::forward_training and dnnl::prop_kind::forward_inference propagation kinds.

Backward

The backward propagation computes \(diff\_src\) based on \(diff\_dst\) and \(weights\).

The weights update computes \(diff\_weights\) and \(diff\_bias\) based on \(diff\_dst\) and \(src\).

Note
The optimized memory formats of \(src\) and \(weights\) might differ on forward propagation, backward propagation, and weights update.
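As a minimal sketch (hypothetical f32 shapes, all formats left to the library via any), the backward primitive descriptors could be created as follows. Both take the forward primitive descriptor as a hint, which lets the library reconcile the possibly different optimized formats mentioned in the note above:

```cpp
#include "dnnl.hpp"
using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);
    const memory::dim N = 32, IC = 256, OC = 1000; // hypothetical sizes

    auto md = [](memory::dims dims) {
        return memory::desc(dims, memory::data_type::f32, memory::format_tag::any);
    };

    // Forward primitive descriptor; also serves as a hint for the backward ones.
    inner_product_forward::primitive_desc fwd_pd(
            {prop_kind::forward_training,
             md({N, IC}), md({OC, IC}), md({OC}), md({N, OC})}, eng);

    // Backward propagation: diff_src from diff_dst and weights.
    inner_product_backward_data::primitive_desc bwd_data_pd(
            {md({N, IC}), md({OC, IC}), md({N, OC})}, eng, fwd_pd);

    // Weights update: diff_weights and diff_bias from src and diff_dst.
    inner_product_backward_weights::primitive_desc bwd_weights_pd(
            {md({N, IC}), md({OC, IC}), md({OC}), md({N, OC})}, eng, fwd_pd);

    (void)bwd_data_pd;
    (void)bwd_weights_pd;
    return 0;
}
```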

Implementation Details

General Notes

N/A.

Data Types

The inner product primitive supports the following combinations of data types for source, destination, weights, and bias:

| Propagation | Source | Weights | Destination | Bias |
|---|---|---|---|---|
| forward / backward | f32 | f32 | f32 | f32 |
| forward | f16 | f16 | f16 | f16 |
| forward | u8, s8 | s8 | u8, s8, s32, f32 | u8, s8, s32, f32 |
| forward | bf16 | bf16 | f32, bf16 | f32, bf16 |
| backward | f32, bf16 | bf16 | bf16 | |
| weights update | bf16 | f32, bf16 | bf16 | f32, bf16 |
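For example, the int8 row above can be expressed with descriptors like the following minimal sketch (hypothetical sizes; the quantization scale attributes that a real int8 setup needs are omitted for brevity):

```cpp
#include "dnnl.hpp"
using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);
    const memory::dim N = 32, IC = 256, OC = 64; // hypothetical sizes

    // u8 source with s8 weights and an s32 destination and bias:
    // one of the int8 combinations listed in the table above.
    memory::desc src_md({N, IC}, memory::data_type::u8, memory::format_tag::any);
    memory::desc wei_md({OC, IC}, memory::data_type::s8, memory::format_tag::any);
    memory::desc bia_md({OC}, memory::data_type::s32, memory::format_tag::a);
    memory::desc dst_md({N, OC}, memory::data_type::s32, memory::format_tag::any);

    inner_product_forward::primitive_desc pd(
            {prop_kind::forward_inference, src_md, wei_md, bia_md, dst_md}, eng);
    (void)pd;
    return 0;
}
```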

Data Representation

Like other CNN primitives, the inner product primitive expects the following tensors:

| Spatial | Source | Destination | Weights |
|---|---|---|---|
| 1D | \(N \times C \times W\) | \(N \times C\) | \(OC \times IC \times KW\) |
| 2D | \(N \times C \times H \times W\) | \(N \times C\) | \(OC \times IC \times KH \times KW\) |
| 3D | \(N \times C \times D \times H \times W\) | \(N \times C\) | \(OC \times IC \times KD \times KH \times KW\) |

The memory format of data and weights memory objects is critical for inner product primitive performance. In the DNNL programming model, the inner product primitive is one of the few primitives that support the placeholder format dnnl::memory::format_tag::any (shortened to any from now on) and can choose the data and weights memory object formats based on the primitive parameters. When using any, it is necessary to first create an inner product primitive descriptor and then query it for the actual data and weights memory object formats.
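A minimal sketch of this flow with hypothetical 4D shapes (note that the weights spatial dimensions match the source spatial dimensions, per the table above):

```cpp
#include "dnnl.hpp"
using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);
    const memory::dim N = 2, IC = 3, IH = 13, IW = 13, OC = 96; // hypothetical sizes

    // Let the library pick the formats by specifying format_tag::any.
    memory::desc src_md({N, IC, IH, IW}, memory::data_type::f32, memory::format_tag::any);
    memory::desc wei_md({OC, IC, IH, IW}, memory::data_type::f32, memory::format_tag::any);
    memory::desc bia_md({OC}, memory::data_type::f32, memory::format_tag::a);
    memory::desc dst_md({N, OC}, memory::data_type::f32, memory::format_tag::any);

    inner_product_forward::primitive_desc pd(
            {prop_kind::forward_inference, src_md, wei_md, bia_md, dst_md}, eng);

    // Query the formats the implementation actually chose. User data kept in a
    // plain format may require a reorder to match them.
    memory user_src({{N, IC, IH, IW}, memory::data_type::f32, memory::format_tag::nchw}, eng);
    bool src_needs_reorder = pd.src_desc() != user_src.get_desc();
    (void)src_needs_reorder;
    return 0;
}
```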

The table below shows the combinations of plain memory formats for which the inner product primitive is optimized. For the destination tensor (which is always \(N \times C\)) the memory format is always dnnl::memory::format_tag::nc (dnnl::memory::format_tag::ab).

| Spatial | Source / Weights logical tensor | Implementation optimized for memory formats |
|---|---|---|
| 0D | NC / OI | dnnl_nc (dnnl_ab) / dnnl_oi (dnnl_ab) |
| 0D | NC / OI | dnnl_nc (dnnl_ab) / dnnl_io (dnnl_ba) |
| 1D | NCW / OIW | dnnl_ncw (dnnl_abc) / dnnl_oiw (dnnl_abc) |
| 1D | NCW / OIW | dnnl_nwc (dnnl_acb) / dnnl_wio (dnnl_cba) |
| 2D | NCHW / OIHW | dnnl_nchw (dnnl_abcd) / dnnl_oihw (dnnl_abcd) |
| 2D | NCHW / OIHW | dnnl_nhwc (dnnl_acdb) / dnnl_hwio (dnnl_cdba) |
| 3D | NCDHW / OIDHW | dnnl_ncdhw (dnnl_abcde) / dnnl_oidhw (dnnl_abcde) |
| 3D | NCDHW / OIDHW | dnnl_ndhwc (dnnl_acdeb) / dnnl_dhwio (dnnl_cdeba) |

Post-ops and Attributes

Post-ops and attributes enable you to modify the behavior of the inner product primitive by chaining certain operations after the inner product operation. The following post-ops are supported by inner product primitives:

| Propagation | Type | Operation | Description |
|---|---|---|---|
| forward | post-op | eltwise | Applies an Eltwise operation to the result |
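For instance, a ReLU can be fused after the inner product through the attributes mechanism, as in this minimal sketch (hypothetical f32 shapes):

```cpp
#include "dnnl.hpp"
using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);
    const memory::dim N = 32, IC = 512, OC = 128; // hypothetical sizes

    memory::desc src_md({N, IC}, memory::data_type::f32, memory::format_tag::any);
    memory::desc wei_md({OC, IC}, memory::data_type::f32, memory::format_tag::any);
    memory::desc bia_md({OC}, memory::data_type::f32, memory::format_tag::a);
    memory::desc dst_md({N, OC}, memory::data_type::f32, memory::format_tag::any);

    // Chain an eltwise ReLU post-op: dst = ReLU(inner_product(src, weights) + bias).
    post_ops po;
    po.append_eltwise(1.f /*scale*/, algorithm::eltwise_relu, 0.f /*alpha*/, 0.f /*beta*/);
    primitive_attr attr;
    attr.set_post_ops(po);

    inner_product_forward::primitive_desc pd(
            {prop_kind::forward_inference, src_md, wei_md, bia_md, dst_md}, attr, eng);
    auto ip = inner_product_forward(pd);
    (void)ip;
    return 0;
}
```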

Implementation Limitations

  1. Check Data Types.

Performance Tips

Use dnnl::memory::format_tag::any for the source, weights, and destination memory format tags when creating the inner product primitive, so that the library can choose the most appropriate memory formats (see Data Representation above).