This C++ API example demonstrates programming for Intel(R) Processor Graphics with Intel(R) MKL-DNN.
Example code: gpu_getting_started.cpp
To start using Intel MKL-DNN, we must first include the mkldnn.hpp header file in the application. We also include mkldnn_debug.h, which provides debugging facilities such as functions that return string representations of common Intel MKL-DNN C types.
All C++ API types and functions reside in the mkldnn namespace. For simplicity, the example imports this namespace.
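For illustration, the snippets on this page assume the following includes and namespace import; the extra standard headers are only what the later snippets here use.

```cpp
#include "mkldnn.hpp"      // Intel MKL-DNN C++ API
#include "mkldnn_debug.h"  // string representations for common MKL-DNN C types

#include <iostream>
#include <stdexcept>

// All C++ API types and functions reside in the mkldnn namespace.
using namespace mkldnn;
```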
All Intel MKL-DNN primitives and memory objects are attached to a particular mkldnn::engine, which is an abstraction of a computational device (see also Basic Concepts). The primitives are created and optimized for the device they are attached to, and the memory objects refer to memory residing on the corresponding device. In particular, this means that neither memory objects nor primitives created for one engine can be used with another.
To create engines, we must specify the mkldnn::engine::kind and the index of the device of the given kind. There is only one CPU engine and one GPU engine, so the index for both engines must be 0.
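A sketch of the engine creation; the names cpu_engine and gpu_engine are illustrative and are reused by the later snippets.

```cpp
// One engine per device kind; index 0 selects the first device of that kind.
auto cpu_engine = engine(engine::kind::cpu, 0);
auto gpu_engine = engine(engine::kind::gpu, 0);
```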
In addition to an engine, all primitives require a mkldnn::stream for the execution. The stream encapsulates an execution context and is tied to a particular engine.
In this example, a GPU stream is created.
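Creating such a stream for the GPU engine defined above might look like this:

```cpp
// The stream is tied to the GPU engine; all GPU primitives below run in it.
auto stream_gpu = stream(gpu_engine);
```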
We fill the data in CPU memory first and then move it to GPU memory with a reorder primitive.
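A sketch of these steps, using the engines defined above; the tensor shape and fill values are arbitrary and only serve the example. The reorder primitive is only created here; like the other primitives, it is executed later in the GPU stream.

```cpp
// Describe a small 4D tensor in NCHW layout.
memory::dims tz = {2, 3, 4, 5};
auto md = memory::desc(tz, memory::data_type::f32, memory::format_tag::nchw);

// One memory object attached to the CPU engine and one attached to the GPU engine.
auto m_cpu = memory(md, cpu_engine);
auto m_gpu = memory(md, gpu_engine);

// Fill the CPU buffer with some data, e.g. -1, 0, 1, 2, ...
float *p = static_cast<float *>(m_cpu.get_data_handle());
const size_t n = md.get_size() / sizeof(float);
for (size_t i = 0; i < n; ++i)
    p[i] = static_cast<float>(i) - 1.0f;

// A reorder primitive that copies the data from CPU memory to GPU memory.
auto reorder_cpu_gpu = reorder(m_cpu, m_gpu);
```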
Let's now create a ReLU primitive for GPU.
The library implements the ReLU primitive as a particular algorithm of a more general Eltwise primitive, which applies a specified function to each element of the source tensor.
Just as in the case of mkldnn::memory, a user should always go through (at least) three creation steps (which, however, can sometimes be combined thanks to C++11):
1. Initialize an operation descriptor (here, mkldnn::eltwise_forward::desc), which defines the operation parameters.
2. Create an operation primitive descriptor (here, mkldnn::eltwise_forward::primitive_desc) for the target engine; it is a lightweight descriptor of the actual algorithm that implements the operation.
3. Create a primitive (here, mkldnn::eltwise_forward) that represents the actual computational kernel.
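The code for these steps might look like the following sketch; it reuses gpu_engine and m_gpu from the snippets above, and the exact signatures may vary slightly between library versions.

```cpp
// 1. Operation descriptor: forward ReLU over the GPU memory's format.
auto relu_desc = eltwise_forward::desc(prop_kind::forward,
        algorithm::eltwise_relu, m_gpu.get_desc(),
        0.0f /* alpha: negative slope */);

// 2. Primitive descriptor: selects an implementation for the GPU engine.
auto relu_pd = eltwise_forward::primitive_desc(relu_desc, gpu_engine);

// 3. Primitive: the actual computational kernel.
auto relu = eltwise_forward(relu_pd);
```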
After the ReLU operation, the result must be moved back from GPU memory to CPU memory with another reorder.
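This reorder can be created the same way as the CPU-to-GPU one, for example:

```cpp
// A second reorder primitive that copies the result from GPU memory back to CPU memory.
auto reorder_gpu_cpu = reorder(m_gpu, m_cpu);
```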
Finally, let's execute all primitives and wait for their completion via the following sequence:
Reorder(CPU,GPU) -> ReLU -> Reorder(GPU,CPU).
A primitive is submitted for execution with its execute() method, which takes a stream and a <tag, memory> map. Each tag specifies what kind of tensor the corresponding memory object represents. All Eltwise primitives require the map to have two elements: a source memory object (input) and a destination memory object (output). When executing on the GPU engine, both the source and destination memory objects must use GPU memory. Note that all primitives here are executed in the same GPU stream (the first parameter of the execute() method).

Depending on the stream kind, an execution might be blocking or non-blocking. This means that we need to call mkldnn::stream::wait before accessing the results.
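Putting this together, the execution might look like the following sketch; all names come from the earlier snippets.

```cpp
// Submit the whole sequence to the same GPU stream.
// reorder also provides a convenience execute(stream, src, dst) overload.
reorder_cpu_gpu.execute(stream_gpu, m_cpu, m_gpu);

// ReLU is computed in place on the GPU memory object.
relu.execute(stream_gpu, {{MKLDNN_ARG_SRC, m_gpu}, {MKLDNN_ARG_DST, m_gpu}});

reorder_gpu_cpu.execute(stream_gpu, m_gpu, m_cpu);

// The execution may be non-blocking; wait before reading the result on the host.
stream_gpu.wait();
```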
Now that we have computed the result in CPU memory, let's validate that it is actually correct.
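A minimal check might look like this; it relies on the fill pattern used in the earlier snippet, so it is only a sketch.

```cpp
// After ReLU no element may be negative, and positive inputs must be unchanged.
float *result = static_cast<float *>(m_cpu.get_data_handle());
for (size_t i = 0; i < n; ++i) {
    float x = static_cast<float>(i) - 1.0f;  // original value
    float expected = x > 0 ? x : 0.0f;       // ReLU reference
    if (result[i] != expected)
        throw std::logic_error("Unexpected result after ReLU.");
}
```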
We now just call everything we prepared earlier.
Since we are using the Intel MKL-DNN C++ API, we use exceptions to handle errors (see C and C++ APIs). The Intel MKL-DNN C++ API throws exceptions of type mkldnn::error, which contains the error status (of type mkldnn_status_t) and a human-readable error message accessible through the regular what() method.
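For example, a main() function wrapping the steps above might look like this; the function name gpu_getting_started_tutorial is illustrative, not part of the library.

```cpp
// Hypothetical wrapper: assume the steps above live in this function.
void gpu_getting_started_tutorial();

int main() {
    try {
        gpu_getting_started_tutorial();
    } catch (mkldnn::error &e) {
        std::cerr << "Intel MKL-DNN error: " << e.what()
                  << " (status: " << static_cast<int>(e.status) << ")" << std::endl;
        return 1;
    }
    return 0;
}
```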
Upon compiling and running the example, the output should be just: