Deep Neural Network Library (DNNL)  1.1.3
Performance library for Deep Learning
Getting started on GPU with OpenCL extensions API

Full example text: gpu_opencl_interop.cpp

This C++ API example demonstrates programming for Intel(R) Processor Graphics with OpenCL* extensions API in DNNL. The workflow includes the following steps:

  1. Create a GPU engine and stream.
  2. Wrap data into a DNNL memory object.
  3. Initialize the data by executing a custom OpenCL kernel.
  4. Create and execute a ReLU primitive.
  5. Validate the results on the host.

Public headers

To start using DNNL, we must first include the dnnl.hpp header file in the application. We also include CL/cl.h for using OpenCL APIs and dnnl_debug.h, which contains some debugging facilities such as returning a string representation for common DNNL C types. All C++ API types and functions reside in the dnnl namespace. For simplicity of the example we import this namespace.

#include <dnnl.hpp>
#include <CL/cl.h>
// Optional header to access debug functions like `dnnl_status2str()`
#include "dnnl_debug.h"
#include <iostream>
#include <numeric>
#include <sstream>
using namespace dnnl;
using namespace std;

gpu_opencl_interop_tutorial() function

Engine and stream

All DNNL primitives and memory objects are attached to a particular dnnl::engine, which is an abstraction of a computational device (see also Basic Concepts). The primitives are created and optimized for the device to which they are attached, and the memory objects refer to memory residing on the corresponding device. In particular, that means neither memory objects nor primitives that were created for one engine can be used on another.

To create an engine, we must specify the dnnl::engine::kind and the index of the device of that kind. In this example we use the first available GPU engine, so the index is 0. This example assumes OpenCL is the runtime for the GPU. In that case, during engine creation an OpenCL context is also created and attached to the engine.

engine eng(engine::kind::gpu, 0);

In addition to an engine, all primitives require a dnnl::stream for the execution. The stream encapsulates an execution context and is tied to a particular engine.

In this example, a GPU stream is created. Because OpenCL is assumed to be the GPU runtime, an OpenCL command queue is also created during stream creation and attached to the stream.

dnnl::stream strm(eng);

Wrapping data into DNNL memory object

Next, we create a memory object. We need to specify dimensions of our memory by passing a memory::dims object. Then we create a memory descriptor with these dimensions, with the dnnl::memory::data_type::f32 data type, and with the dnnl::memory::format_tag::nchw memory format. Finally, we construct a memory object and pass the memory descriptor. The library allocates memory internally.

memory::dims tz_dims = {2, 3, 4, 5};
const size_t N = std::accumulate(tz_dims.begin(), tz_dims.end(), (size_t)1,
std::multiplies<size_t>());
memory::desc mem_d(
tz_dims, memory::data_type::f32, memory::format_tag::nchw);
memory mem(mem_d, eng);

Initialize the data by executing a custom OpenCL kernel

We are going to create an OpenCL kernel that initializes our data. This requires writing a bit of C code to build an OpenCL program from a string-literal source. The kernel fills the buffer with the sequence 0, -1, 2, -3, ...; that is, data[i] = (-1)^i * i.

const char *ocl_code
= "__kernel void init(__global float *data) {"
" int id = get_global_id(0);"
" data[id] = (id % 2) ? -id : id;"
"}";

Create and build the OpenCL kernel with the create_init_opencl_kernel() function. Refer to the full code example for its definition.

const char *kernel_name = "init";
cl_kernel ocl_init_kernel = create_init_opencl_kernel(
eng.get_ocl_context(), kernel_name, ocl_code);
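
As a rough illustration only (not the exact code from the example), a helper like create_init_opencl_kernel() typically compiles the source with the standard OpenCL API along these lines; error handling is reduced to a bare minimum here:

```cpp
#include <CL/cl.h>

// Hypothetical sketch: builds `ocl_code` for the given context and returns
// a kernel object named `kernel_name`.
cl_kernel create_init_opencl_kernel(
        cl_context ctx, const char *kernel_name, const char *ocl_code) {
    cl_int err;
    // A null `lengths` argument means the source string is null-terminated
    cl_program program
            = clCreateProgramWithSource(ctx, 1, &ocl_code, nullptr, &err);
    // Passing zero devices builds the program for all devices in the context
    err = clBuildProgram(program, 0, nullptr, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(program, kernel_name, &err);
    clReleaseProgram(program);
    return kernel;
}
```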

The next step is to execute our OpenCL kernel: set its arguments and enqueue it to an OpenCL queue. You can extract the underlying OpenCL buffer from the memory object using the interoperability interface dnnl::memory::get_ocl_mem_object(). Similarly, we extract the underlying OpenCL command queue from the stream created earlier and enqueue the kernel to this queue.

cl_mem ocl_buf = mem.get_ocl_mem_object();
OCL_CHECK(clSetKernelArg(ocl_init_kernel, 0, sizeof(ocl_buf), &ocl_buf));
cl_command_queue ocl_queue = strm.get_ocl_command_queue();
OCL_CHECK(clEnqueueNDRangeKernel(ocl_queue, ocl_init_kernel, 1, nullptr, &N,
nullptr, 0, nullptr, nullptr));

Create and execute a primitive

There are three steps to create an operation primitive in DNNL:

  1. Create an operation descriptor.
  2. Create a primitive descriptor.
  3. Create a primitive.

Let's create a primitive to perform the ReLU (rectified linear unit) operation: x = max(0, x). An operation descriptor has no dependency on a specific engine; it just describes an operation. In contrast, a primitive descriptor is attached to a specific engine and represents an implementation for that engine. A primitive object is a realization of a primitive descriptor, and its construction is usually much "heavier".

auto relu_d = eltwise_forward::desc(
prop_kind::forward, algorithm::eltwise_relu, mem_d, 0.0f);
auto relu_pd = eltwise_forward::primitive_desc(relu_d, eng);
auto relu = eltwise_forward(relu_pd);

Next, execute the primitive.

relu.execute(strm, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
strm.wait();



Note
Our memory object mem serves as both the input and the output parameter.
Primitive submission on GPU is asynchronous; however, the user can call dnnl::stream::wait() to synchronize the stream and ensure that all previously submitted primitives have completed.


Validate the results

To validate the results, we first need to access the OpenCL memory on the host. The simplest way is to map it to the host using the dnnl::memory::map_data() and dnnl::memory::unmap_data() APIs. After mapping, the data is directly accessible for reading or writing on the host, so we can run the validation there. While the data is mapped, no GPU-side operations on it are allowed. The data must be unmapped afterwards to release all resources associated with the mapping.

float *mapped_data = mem.map_data<float>();
for (size_t i = 0; i < N; i++) {
float expected = (i % 2) ? 0.0f : (float)i;
if (mapped_data[i] != expected)
throw std::string(
"Unexpected output, found a negative value after the ReLU "
"execution");
}
mem.unmap_data(mapped_data);

main() function

We now just call everything we prepared earlier.

Because we are using the DNNL C++ API, we use exceptions to handle errors (see C and C++ APIs). The DNNL C++ API throws exceptions of type dnnl::error, which contains the error status (of type dnnl_status_t) and a human-readable error message accessible through the regular what() method.

int main(int argc, char **argv) {
try {
gpu_opencl_interop_tutorial();
} catch (dnnl::error &e) {
std::cerr << "DNNL error: " << e.what() << std::endl
<< "Error status: " << dnnl_status2str(e.status) << std::endl;
return 1;
} catch (std::string &e) {
std::cerr << "Error in the example: " << e << std::endl;
return 2;
}
std::cout << "Example passes" << std::endl;
return 0;
}

Upon compiling and running the example, the output should be just:

Example passes