Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN)  0.21.0
Performance library for Deep Learning

The Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) is an open-source performance library for Deep Learning (DL) applications intended to accelerate DL frameworks on Intel(R) architecture. Intel MKL-DNN includes highly vectorized and threaded building blocks for implementing convolutional neural networks (CNNs) and recurrent neural networks (RNNs) with C and C++ interfaces. This project was created to help the DL community innovate on the Intel(R) processor family.

The library provides optimized implementations for the most common computational functions (also called primitives) used in deep neural networks covering a wide range of applications, including image recognition, object detection, semantic segmentation, neural machine translation, and speech recognition. The table below summarizes the list of supported functions and their variants.

| Primitive class | Primitive | fp32 training | fp32 inference | int8 inference |
|:--|:--|:-:|:-:|:-:|
| Convolution | 1D direct convolution | x | x | |
| | 2D direct convolution | x | x | x |
| | 2D direct deconvolution | x | x | x |
| | 2D winograd convolution | x | x | x |
| | 3D direct convolution | x | x | |
| | 3D direct deconvolution | x | x | |
| Inner Product | 2D inner product | x | x | x |
| | 3D inner product | x | x | |
| RNN | Vanilla RNN | x | x | |
| | LSTM | x | x | x |
| | GRU | x | x | |
| Pooling | 2D maximum pooling | x | x | x |
| | 2D average pooling | x | x | x |
| | 3D maximum pooling | x | x | |
| | 3D average pooling | x | x | |
| Normalization | 2D LRN (within channel) | x | x | |
| | 2D LRN (across channels) | x | x | |
| | 2D batch normalization | x | x | |
| | 3D batch normalization | x | x | |
| Activation and elementwise functions | ReLU | x | x | x |
| | Tanh | x | x | |
| | ELU | x | x | |
| | Square | x | x | |
| | Sqrt | x | x | |
| | Abs | x | x | |
| | Linear | x | x | |
| | Bounded ReLU | x | x | |
| | Soft ReLU | x | x | |
| | Logistic | x | x | |
| | Softmax | x | x | |
| Data manipulation | Reorder/quantization | x | x | x |
| | Sum | x | x | x |
| | Concat | x | x | x |
| | Shuffle | x | x | x |

Programming Model

Intel MKL-DNN models memory as a primitive similar to an operation primitive. This allows the graph of computations to be reconstructed at run time.

Basic Terminology

Intel MKL-DNN operates on the following main objects:

  - Primitive: any operation, such as convolution, a reorder between memory formats, or even memory itself; a primitive can store references to other primitives and to memory.
  - Engine: an execution device, such as a CPU; every primitive is mapped to a specific engine.
  - Stream: an execution context; primitives are submitted to a stream and executed on its engine.

A typical workflow is to create a set of primitives to run, push them to a stream all at once or one at a time, and wait for completion.
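This workflow can be sketched with the library's C++ API (mkldnn.hpp). The function name and the `net` variable below are illustrative, not part of the library:

```cpp
// Sketch of the typical workflow using the Intel MKL-DNN 0.x C++ API.
// Assumes `net` has already been filled with created primitives.
#include <vector>
#include "mkldnn.hpp"
using namespace mkldnn;

void execute(std::vector<primitive> &net) {
    // Push the whole set of primitives to a stream at once and
    // block until all of them have completed.
    stream(stream::kind::eager).submit(net).wait();
}
```

Primitives can also be submitted one at a time; submitting the whole vector at once simply batches the same calls.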

Creating Primitives

In Intel MKL-DNN, creating primitives involves three levels of abstraction:

  - Operation/memory descriptor: a lightweight, logical description of an operation or of a memory layout, independent of any engine.
  - Primitive descriptor: a complete description of a primitive, combining an operation or memory descriptor with a target engine.
  - Primitive: a concrete instance that can be executed and that holds references to its inputs and outputs.

To create a memory primitive:

  1. Create a memory descriptor. The memory descriptor contains the dimensions, precision, and format of the data layout in memory. The data layout can be either user-specified or set to any. The any format allows the operation primitives (convolution and inner product) to choose the best memory format for optimal performance.
  2. Create a memory primitive descriptor. The memory primitive descriptor contains the memory descriptor and the target engine.
  3. Create a memory primitive. The memory primitive requires allocating a memory buffer and attaching the data handle to the memory primitive descriptor. Note: in the C++ API for creating an output memory primitive, you do not need to allocate buffer unless the output is needed in a user-defined format.
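The three steps above can be sketched with the 0.x C++ API for a user-visible fp32 NCHW tensor; the dimensions and the helper name are illustrative assumptions:

```cpp
// Sketch: create a memory primitive in three steps
// (Intel MKL-DNN 0.x C++ API). Sizes are arbitrary.
#include <vector>
#include "mkldnn.hpp"
using namespace mkldnn;

memory make_user_memory(engine &cpu_engine, std::vector<float> &buf) {
    // 1. Memory descriptor: dimensions, precision, and layout format.
    //    Using memory::format::any instead would let an operation
    //    primitive choose the optimal layout.
    memory::desc md({1, 96, 27, 27}, memory::data_type::f32,
                    memory::format::nchw);
    // 2. Memory primitive descriptor: memory descriptor + target engine.
    memory::primitive_desc mpd(md, cpu_engine);
    // 3. Memory primitive: attach the user-allocated data handle.
    return memory(mpd, buf.data());
}
```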

To create an operation primitive:

  1. Create a logical description of the operation. For example, the description of a convolution operation contains parameters such as sizes, strides, and propagation type. It also contains the input and output memory descriptors.
  2. Create a primitive descriptor by attaching the target engine to the logical description.
  3. Create an instance of the primitive and specify the input and output primitives.
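As a sketch of these three steps for a forward convolution (0.x C++ API): the memory descriptors and memory primitives passed in are assumed to exist already, and the strides and padding are illustrative:

```cpp
// Sketch: create a convolution primitive in three steps
// (Intel MKL-DNN 0.x C++ API).
#include "mkldnn.hpp"
using namespace mkldnn;

primitive make_conv(engine &cpu_engine,
                    const memory::desc &src_md,
                    const memory::desc &weights_md,
                    const memory::desc &bias_md,
                    const memory::desc &dst_md,
                    const memory &src, const memory &weights,
                    const memory &bias, const memory &dst) {
    // 1. Logical description: propagation kind, algorithm,
    //    input/output memory descriptors, strides, and padding.
    convolution_forward::desc conv_d(
        prop_kind::forward_inference, convolution_direct,
        src_md, weights_md, bias_md, dst_md,
        {1, 1} /* strides */, {1, 1} /* padding_l */,
        {1, 1} /* padding_r */, padding_kind::zero);
    // 2. Primitive descriptor: attach the target engine.
    convolution_forward::primitive_desc conv_pd(conv_d, cpu_engine);
    // 3. Primitive instance with its input and output primitives.
    return convolution_forward(conv_pd, src, weights, bias, dst);
}
```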

Examples

A walk-through example for implementing an AlexNet topology using the C++ API:

An introductory example to low-precision 8-bit computations:

The following examples are available in the /examples directory and provide more details about the API.

Performance Considerations

The following link provides a guide to Intel MKL-DNN verbose mode for profiling execution:
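As a sketch, verbose mode in the 0.x releases is controlled by the MKLDNN_VERBOSE environment variable; each executed primitive then prints a line with its kind, implementation, and execution time (`./app` below is a placeholder for your own binary):

```shell
# Print a trace line for every primitive execution.
MKLDNN_VERBOSE=1 ./app

# Level 2 additionally traces primitive creation.
MKLDNN_VERBOSE=2 ./app
```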

Operational Details

Auxiliary Types

Architecture and design of Intel MKL-DNN

To better understand the architecture and design of Intel MKL-DNN, as well as the concepts used in the library, please read the following topics:

Understanding Memory Formats


Legal information