Quick Start

Before You Begin

oneDAL is located in the <install_dir>/dal directory, where <install_dir> is the directory in which the Intel® oneAPI Base Toolkit was installed.

The current version of oneDAL with SYCL is available for Linux* and Windows* 64-bit operating systems. The prebuilt oneDAL libraries can be found in the <install_dir>/dal/<version>/redist directory.

The following dependencies are needed to build and run the examples with SYCL extensions:

  • Intel® oneAPI DPC++/C++ Compiler 2021.1 release or later (for SYCL support)

  • OpenCL™ runtime 1.2 or later (to run the SYCL runtime)

  • GNU* Make on Linux*, nmake on Windows*

End-to-end Example

Below you can find a typical usage workflow for a oneDAL algorithm on GPU. The example uses the Principal Component Analysis (PCA) algorithm.

The following steps show how to:

  • Read the data from a CSV file

  • Run the training and inference operations for PCA

  • Access intermediate results obtained at the training stage

  1. Include the following header that makes all oneDAL declarations available.

    #include "oneapi/dal.hpp"
    
    /* Standard library headers required by this example */
    #include <cassert>
    #include <iostream>
    
  2. Create a SYCL* queue with the desired device selector. In this case, the GPU selector is used:

    const auto queue = sycl::queue{ sycl::gpu_selector_v };
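
    If no GPU is available, constructing a queue with the GPU selector throws an exception. A possible fallback to the default device is sketched below (the try/catch structure is an illustration, not part of the original example):

    sycl::queue queue;
    try {
       queue = sycl::queue{ sycl::gpu_selector_v };
    }
    catch (const sycl::exception&) {
       /* No GPU found: fall back to whatever device the default selector picks */
       queue = sycl::queue{ sycl::default_selector_v };
    }
    std::cout << "Running on: "
              << queue.get_device().get_info<sycl::info::device::name>()
              << std::endl;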
    
  3. Since all oneDAL declarations are in the oneapi::dal namespace, import the oneapi namespace so that dal can be used instead of oneapi::dal for brevity:

    using namespace oneapi;
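
    Alternatively, a namespace alias imports only the dal name rather than everything under oneapi (a purely stylistic variation on the line above):

    namespace dal = oneapi::dal;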
    
  4. Use a CSV data source to read the data from the CSV file into a table:

    const auto data = dal::read<dal::table>(queue, dal::csv::data_source{"data.csv"});
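
    After reading, you can check the table dimensions before running the algorithm. A minimal sanity check might look like this (it only assumes data.csv was read successfully):

    assert(data.has_data());
    std::cout << "Rows: " << data.get_row_count()
              << ", columns: " << data.get_column_count() << std::endl;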
    
  5. Create a PCA descriptor, configure its parameters, and run the training algorithm on the data loaded from CSV.

    const auto pca_desc = dal::pca::descriptor<float>{}
       .set_component_count(3)
       .set_deterministic(true);
    
    const dal::pca::train_result train_res = dal::train(queue, pca_desc, data);
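
    Training can also be run on the host CPU by using the overloads of dal::read and dal::train that do not take a queue. A hedged sketch of the host-only variant (names such as host_data are illustrative):

    /* Host (CPU) variant: read and train without a SYCL queue */
    const auto host_data = dal::read<dal::table>(dal::csv::data_source{"data.csv"});
    const auto host_res  = dal::train(pca_desc, host_data);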
    
  6. Print the learned eigenvectors:

    const dal::table eigenvectors = train_res.get_eigenvectors();
    
    const auto acc = dal::row_accessor<const float>{eigenvectors};
    for (std::int64_t i = 0; i < eigenvectors.get_row_count(); i++) {
    
       /* Get the i-th row from the table; the returned array references USM-allocated memory */
       const dal::array<float> eigenvector = acc.pull(queue, {i, i + 1});
       assert(eigenvector.get_count() == eigenvectors.get_column_count());
    
       std::cout << i << "-th eigenvector: ";
       for (std::int64_t j = 0; j < eigenvector.get_count(); j++) {
          std::cout << eigenvector[j] << " ";
       }
       std::cout << std::endl;
    }
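
    Other training-stage results can be accessed in the same way. For example, the eigenvalues can be printed with the same row accessor pattern (a short sketch that assumes get_eigenvalues() returns a one-row table):

    const dal::table eigenvalues = train_res.get_eigenvalues();

    const auto eig_acc = dal::row_accessor<const float>{eigenvalues};
    const dal::array<float> eig_row = eig_acc.pull(queue, {0, 1});

    std::cout << "Eigenvalues: ";
    for (std::int64_t j = 0; j < eig_row.get_count(); j++) {
       std::cout << eig_row[j] << " ";
    }
    std::cout << std::endl;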
    
  7. Use the trained model for inference to reduce the dimensionality of the data:

    const dal::pca::model model = train_res.get_model();
    
    const dal::table data_transformed =
       dal::infer(queue, pca_desc, model, data).get_transformed_data();
    
    assert(data_transformed.get_column_count() == 3);
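
    The transformed table can be inspected with the same row accessor used above, for example to print the first projected row (a minimal sketch that assumes the table is non-empty):

    const auto t_acc = dal::row_accessor<const float>{data_transformed};
    const dal::array<float> first_row = t_acc.pull(queue, {0, 1});

    std::cout << "First transformed row: ";
    for (std::int64_t j = 0; j < first_row.get_count(); j++) {
       std::cout << first_row[j] << " ";
    }
    std::cout << std::endl;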
    

Build and Run Examples

Perform the following steps to build and run the examples demonstrating the basic usage scenarios of oneDAL with DPC++. Go to <install_dir>/dal/<version> and then set up the environment as shown in the example below:

Note

All content below that starts with # is considered a comment and should not be run with the code.

  1. Set up the required environment for oneDAL (variables such as CPATH, LIBRARY_PATH, and LD_LIBRARY_PATH):

    On Linux, the required environment can be set up in one of the following ways: via the vars.sh script, via the setvars.sh script, or via modulefiles.

    • To set up the oneDAL environment via the vars.sh script, run source ./env/vars.sh.

    • To set up the oneDAL environment via the setvars.sh script, run source ./setvars.sh.

    • To set up the oneDAL environment via modulefiles:

      1. Initialize modules:

        source $MODULESHOME/init/bash
        

        Note

        Refer to the Environment Modules documentation for details.

      2. Provide modules with a path to the modulefiles directory:

        module use ./modulefiles
        
      3. Load the module:

        module load dal
        
  2. Copy ./examples/oneapi/dpc to a writable directory if necessary (building the examples creates temporary files):

    cp -r ./examples/oneapi/dpc ${WRITABLE_DIR}
    
  3. Set up the compiler environment for Intel® oneAPI DPC++/C++ Compiler. See Get Started with Intel® oneAPI DPC++/C++ Compiler for details.

  4. Build and run examples:

    Note

    You need to have write permissions to the examples folder to build the examples, and execute permissions to run them. Otherwise, copy the examples/oneapi/dpc and examples/oneapi/data folders to a directory with the right permissions. These two folders must be kept at the same directory level relative to each other.

    # Navigate to the examples directory and build the examples
    cd ./examples/oneapi/dpc
    cmake -G "Unix Makefiles" -DEXAMPLES_LIST=svm_two_class_thunder . # This would generate makefiles for the examples matching the passed name
    make               # This will compile and run the generated svm examples
    cmake -G "Unix Makefiles" -DONEDAL_LINK=static . # This would generate makefiles for the statically linked version of the examples
    make               # This will compile and run all the examples
    
  5. The resulting example binaries and log files are written into the _results directory.

    Note

    You should run the examples from the examples/oneapi/dpc folder, not from the _results folder. Most examples require the data to be stored in the examples/oneapi/data folder and reference it by a relative path that starts from the examples/oneapi/dpc folder.

    You can build the traditional C++ examples located in the examples/oneapi/cpp folder in a similar way.
