oneCCL Benchmark Collective User Guide#
The oneCCL benchmark provides performance measurements for the collective operations in oneCCL, such as:
allreduce
reduce
allgather
allgatherv
alltoall
alltoallv
reduce-scatter
broadcast
The benchmark is distributed with the oneCCL package. You can find it in the examples directory within the oneCCL installation path.
Build oneCCL Benchmark#
CPU-Only#
To build the benchmark, complete the following steps:
Configure your environment. Source the installed oneCCL library for CPU-only support:
source <oneCCL install dir>/ccl/latest/env/vars.sh --ccl-configuration=cpu
Navigate to
<oneCCL install dir>/share/doc/ccl/examples
Build the benchmark with the following command:
cmake -S . -B build -DCMAKE_INSTALL_PREFIX=$(pwd)/build/_install && cmake --build build -j $(nproc) -t install
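If the build succeeds, the benchmark binary is installed under the prefix passed to CMake. The exact sub-directory may vary between oneCCL versions, so a quick way to locate the binary is:
find build/_install -name benchmark -type f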
CPU-GPU#
Configure your environment.
Source the SYCL compiler. See the documentation for the instructions.
Source the installed oneCCL library for CPU-GPU support:
source <oneCCL install dir>/ccl/latest/env/vars.sh --ccl-configuration=cpu_gpu_dpcpp
Navigate to
<oneCCL install dir>/share/doc/ccl/examples
Build the SYCL benchmark with the following command:
cmake -S . -B build -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DCOMPUTE_BACKEND=dpcpp -DCMAKE_INSTALL_PREFIX=$(pwd)/build/_install && cmake --build build -j $(nproc) -t install
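Putting these steps together, and assuming a default oneAPI installation where /opt/intel/oneapi/setvars.sh sets up the SYCL compiler (adjust the paths for your system), the CPU-GPU build might look like this:
source /opt/intel/oneapi/setvars.sh
source <oneCCL install dir>/ccl/latest/env/vars.sh --ccl-configuration=cpu_gpu_dpcpp
cd <oneCCL install dir>/share/doc/ccl/examples
cmake -S . -B build -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DCOMPUTE_BACKEND=dpcpp -DCMAKE_INSTALL_PREFIX=$(pwd)/build/_install && cmake --build build -j $(nproc) -t install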
Run oneCCL Benchmark#
To run the benchmark, use the following command:
mpirun -n <N> -ppn <P> benchmark [arguments]
Where:
N is the overall number of processes
P is the number of processes within a node
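For example, a hypothetical run of the allreduce benchmark with four processes in total and two processes per node:
mpirun -n 4 -ppn 2 benchmark -l allreduce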
The benchmark reports:
#bytes - the message size in the number of bytes
elem_count - the message size in the number of elements
#repetitions - the number of iterations
t_min - the average time across iterations of the fastest process in each iteration
t_max - the average time across iterations of the slowest rank in each iteration
t_avg - the average time across processes and iterations
stddev - standard deviation
wait_t_avg - the average wait time after the collective call returns and until it completes. To enable, use the -x option.
Notice that t_min, t_max, and t_avg measure the total collective time: the timer starts before the collective is called and ends once the collective completes. In contrast, wait_t_avg measures only the wait time: the timer starts after the collective call returns and ends once the collective completes. Thus, wait_t_avg does not include the time the oneCCL runtime spends preparing the necessary tasks for the collective to execute, while t_min, t_max, and t_avg do include that time. Time is reported in μsec.
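For example, the following command requests the additional wait_t_avg column via -x; the auto value is an assumption about the values -x accepts and may differ across oneCCL versions:
mpirun -n 2 -ppn 2 benchmark -l allreduce -x auto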
Benchmark Arguments#
To see the benchmark arguments, use the --help
argument.
The benchmark accepts the following arguments:
Option | Description | Default Value |
---|---|---|
-b, --backend | Specify the backend. The possible values are host and sycl. For a CPU-only build, the backend is automatically set to host, and only the host option is available. For a CPU-GPU build, the host and sycl options are available, and sycl is the default value. The host value allocates buffers in the host (CPU) memory, while the sycl value allocates buffers in the host (CPU) or device (GPU) memory. | |
-i, --iters | Specify the number of iterations executed by the benchmark. | |
-w, --warmup_iters | Specify the number of warmup iterations, that is, the number of iterations the benchmark runs before it starts timing the iterations specified with the -i option. | |
-j, --iter_policy | Specify the iteration policy. The possible values are off and auto. When the iteration policy is off, the number of iterations is the same across the message sizes. When the iteration policy is auto, the number of iterations is reduced based on the message size of the collective operation. | |
-n, --buf_count | Specify the number of collective operations the benchmark calls in each iteration. Each collective uses different memory buffers. | |
-f, --min_elem_count | Specify the minimum number of elements used for the collective. | |
-t, --max_elem_count | Specify the maximum number of elements used for the collective. | |
-y, --elem_counts | Specify the list of element counts used for the collective, passed as a comma-separated list without whitespace characters. | |
-c, --check | Check for correctness. The possible values are off, last (check the last iteration), and all (check all the iterations). | |
-p, --cache | Specify whether to use persistent collectives (1) or not (0). Note: a collective is persistent when the same collective is called with the same parameters multiple times. oneCCL generates a schedule for each collective it runs and can apply optimizations when persistent collectives are used: the schedule is generated once and reused across the subsequent invocations, saving the time needed to generate the schedule. | |
-q, --inplace | Specify whether oneCCL uses in-place (1) or out-of-place (0) buffers. | |
-a, --sycl_dev_type | Specify the type of the SYCL device. The possible values are host, cpu, and gpu. | |
-g, --sycl_root_dev | Specify whether to use the root devices or the sub-devices. | |
-m, --sycl_mem_type | Specify the type of SYCL memory. The possible values are usm (unified shared memory) and buf (SYCL buffers). | |
-u, --sycl_usm_type | Specify the type of SYCL USM. The possible values are device and shared. | |
-e, --sycl_queue_type | Specify the type of the SYCL queue. The possible values are in_order and out_order. | |
-l, --coll | Specify the collectives to run, passed as a comma-separated list without whitespace characters. The available collectives are allreduce, reduce, allgather, allgatherv, alltoall, alltoallv, reduce-scatter, and broadcast. | |
-d, --dtype | Specify the datatypes to benchmark, passed as a comma-separated list without whitespace characters. The available types include int8, int32, int64, uint64, float16, float32, and bfloat16. | |
-r, --reduction | Specify the reduction operations to run, passed as a comma-separated list without whitespace characters. The available operations include sum, prod, min, and max. | |
-o, --csv_filepath | Store the output in the specified CSV file. The user provides the path to the file where the CSV-formatted data is written. | |
-x, --ext | Show additional information, such as wait_t_avg. | |
-h, --help | Show all of the supported options. | |
Note
The -t and -f options specify the count in the number of elements, so the total number of bytes is obtained by multiplying the number of elements by the number of bytes of the datatype the collective uses.
For instance, with -f 128 and the fp32 datatype, the total number of bytes is 512 (128 elements * 4 bytes per FP32 element).
The benchmark runs and reports time for the message sizes that correspond to the -t and -f arguments and for all message sizes that are powers of two in between these two numbers.
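For example, the following hypothetical two-process run sweeps the allreduce element counts 1024, 2048, 4096, and 8192 (that is, 4 KiB to 32 KiB of float32 data):
mpirun -n 2 -ppn 2 benchmark -l allreduce -d float32 -f 1024 -t 8192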
Example#
GPU#
The following example shows how to run the benchmark with the GPU buffers:
mpirun -n <N> -ppn <P> benchmark -a gpu -m usm -u device -l allreduce -i 20 -f 1024 -t 67108864 -j off -d float32 -p 0 -e in_order
The above command runs:
The allreduce benchmark
With a total of N processes
With P processes per node, allocating the memory in the GPU
Using SYCL Unified Shared Memory (USM) of the device type
20 iterations
With the element count from 1024 to 67108864 (the benchmark runs with all the powers of two in that range) of the float32 datatype, assuming the collective is not persistent and using a SYCL in-order queue
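A possible variant allocates shared USM instead of device USM; the shared value for -u follows the argument table above:
mpirun -n <N> -ppn <P> benchmark -a gpu -m usm -u shared -l allreduce -i 20 -f 1024 -t 67108864 -j off -d float32 -p 0 -e in_order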
Similarly, to run both allreduce and reduce:
mpirun -n <N> -ppn <P> benchmark -a gpu -m usm -u device -l allreduce,reduce -i 20 -f 1024 -t 67108864 -j off -d float32 -p 0 -e in_order
CPU#
mpirun -n <N> -ppn <P> benchmark -l allreduce -i 20 -f 1024 -t 67108864 -j off -d float32 -p 0
The above command runs:
The allreduce benchmark
With a total of N processes
With P processes per node
20 iterations
With the element count from 1024 to 67108864 (the benchmark runs with all the powers of two in that range) of the float32 datatype, assuming the collective is not persistent
Similarly, to run both allreduce and reduce:
mpirun -n <N> -ppn <P> benchmark -l allreduce,reduce -i 20 -f 1024 -t 67108864 -j off -d float32 -p 0
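To keep the results for later analysis, the output can also be written to a CSV file with the -o option described in the argument table; the file name below is only an example:
mpirun -n <N> -ppn <P> benchmark -l allreduce -i 20 -f 1024 -t 67108864 -j off -d float32 -p 0 -o results.csv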