Environment Variables#
Collective algorithms selection#
oneCCL supports collective operations for host (CPU) memory buffers and device (GPU) memory buffers. The sections below describe how to select the collective algorithm depending on the type of buffer used.
Device (GPU) Memory Buffers#
Collectives that use GPU buffers are implemented using two phases:
Scaleup phase. Communication between ranks/processes in the same node.
Scaleout phase. Communication between ranks/processes on different nodes.
SCALEUP#
Use the following environment variables to select the scaleup algorithm:
CCL_REDUCE_SCATTER_MONOLITHIC_KERNEL#
Syntax
CCL_REDUCE_SCATTER_MONOLITHIC_KERNEL=<value>
Arguments
<value> | Description
---|---
1 | Uses compute kernels to transfer data across GPUs for the ALLREDUCE, REDUCE, and REDUCE_SCATTER collectives
0 | Uses copy engines to transfer data across GPUs for the ALLREDUCE, REDUCE, and REDUCE_SCATTER collectives
Description
Set this environment variable to enable compute kernels for the ALLREDUCE, REDUCE, and REDUCE_SCATTER collectives using device (GPU) buffers.
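For example, a minimal launch sketch (the mpiexec options and the application name ./app are placeholders, not part of oneCCL; the other CCL_*_MONOLITHIC_* variables below follow the same 0/1 pattern):
# enable compute kernels for ALLREDUCE, REDUCE, and REDUCE_SCATTER on GPU buffers
CCL_REDUCE_SCATTER_MONOLITHIC_KERNEL=1 mpiexec -n 2 -ppn 2 ./app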
CCL_ALLGATHERV_MONOLITHIC_PIPELINE_KERNEL#
Syntax
CCL_ALLGATHERV_MONOLITHIC_PIPELINE_KERNEL=<value>
Arguments
<value> | Description
---|---
1 | Uses compute kernels to transfer data across GPUs for the ALLGATHERV collective
0 | Uses copy engines to transfer data across GPUs for the ALLGATHERV collective
Description
Set this environment variable to enable compute kernels for the ALLGATHERV collective using device (GPU) buffers.
CCL_REDUCE_SCATTER_MONOLITHIC_PIPELINE_KERNEL#
Syntax
CCL_REDUCE_SCATTER_MONOLITHIC_PIPELINE_KERNEL=<value>
Arguments
<value> | Description
---|---
1 | Uses compute kernels that pipeline data transfers for the ALLREDUCE, REDUCE, and REDUCE_SCATTER collectives
0 | Uses copy engines to transfer data across GPUs for the ALLREDUCE, REDUCE, and REDUCE_SCATTER collectives
Description
Set this environment variable to enable compute kernels that pipeline data transfers across tiles in the same GPU and across different GPUs for the ALLREDUCE, REDUCE, and REDUCE_SCATTER collectives using device (GPU) buffers.
CCL_ALLTOALLV_MONOLITHIC_KERNEL#
Syntax
CCL_ALLTOALLV_MONOLITHIC_KERNEL=<value>
Arguments
<value> | Description
---|---
1 | Uses compute kernels to transfer data across GPUs for the ALLTOALL and ALLTOALLV collectives
0 | Uses copy engines to transfer data across GPUs for the ALLTOALL and ALLTOALLV collectives
Description
Set this environment variable to enable compute kernels for the ALLTOALL and ALLTOALLV collectives using device (GPU) buffers.
SCALEOUT#
The following environment variables can be used to select the scaleout algorithm.
CCL_<coll_name>_SCALEOUT#
Syntax
To set a specific algorithm for scaleout for the device (GPU) buffers for the whole message size range:
CCL_<coll_name>_SCALEOUT=<algo_name>
To set a specific algorithm for scaleout for the device (GPU) buffers for a specific message size range:
CCL_<coll_name>_SCALEOUT="<algo_name_1>[:<size_range_1>][;<algo_name_2>:<size_range_2>][;...]"
Where:
<coll_name> is selected from the list of available collective operations (see Available collectives).
<algo_name> is selected from the list of available algorithms for the specific collective operation (see Available algorithms).
<size_range> is described by the left and right size borders in the <left>-<right> format. The size is specified in bytes. To specify the maximum message size, use the reserved word max.
oneCCL internally fills the algorithm selection table with sensible defaults. Your input complements the selection table.
To see the actual table values, set CCL_LOG_LEVEL=info.
Example
CCL_ALLREDUCE_SCALEOUT="recursive_doubling:0-8192;rabenseifner:8193-1048576;ring:1048577-max"
Available Collectives#
Available collective operations (<coll_name>):
ALLGATHERV
ALLREDUCE
ALLTOALL
ALLTOALLV
BARRIER
BCAST
REDUCE
REDUCE_SCATTER
Available algorithms#
Available algorithms for each collective operation (<algo_name>):
ALLGATHERV algorithms#
<algo_name> | Description
---|---
direct | Based on MPI_Iallgatherv
naive | Send to all, receive from all
flat | Alltoall-based algorithm
multi_bcast | Series of broadcast operations with different root ranks
ring | Ring-based algorithm
ALLREDUCE algorithms#
<algo_name> | Description
---|---
direct | Based on MPI_Iallreduce
rabenseifner | Rabenseifner's algorithm
nreduce | May be beneficial for imbalanced workloads
ring | reduce_scatter + allgather ring. Use CCL_RS_CHUNK_COUNT and CCL_RS_MIN_CHUNK_SIZE to control pipelining on the reduce_scatter phase.
double_tree | Double-tree algorithm
recursive_doubling | Recursive doubling algorithm
2d | Two-dimensional algorithm (reduce_scatter + allreduce + allgather). Only available for host (CPU) buffers.
ALLTOALL algorithms#
<algo_name> | Description
---|---
direct | Based on MPI_Ialltoall
naive | Send to all, receive from all
scatter | Scatter-based algorithm
ALLTOALLV algorithms#
<algo_name> | Description
---|---
direct | Based on MPI_Ialltoallv
naive | Send to all, receive from all
scatter | Scatter-based algorithm
BARRIER algorithms#
<algo_name> | Description
---|---
direct | Based on MPI_Ibarrier
ring | Ring-based algorithm
Note
The BARRIER collective does not support the CCL_BARRIER_SCALEOUT environment variable. To change the algorithm for BARRIER, use the CCL_BARRIER environment variable.
BCAST algorithms#
<algo_name> | Description
---|---
direct | Based on MPI_Ibcast
ring | Ring-based algorithm
double_tree | Double-tree algorithm
naive | Send to all from root rank
Note
The BCAST collective does not yet support the CCL_BCAST_SCALEOUT environment variable. To change the algorithm for BCAST, use the CCL_BCAST environment variable.
REDUCE algorithms#
<algo_name> | Description
---|---
direct | Based on MPI_Ireduce
rabenseifner | Rabenseifner's algorithm
tree | Tree algorithm
double_tree | Double-tree algorithm
REDUCE_SCATTER algorithms#
<algo_name> | Description
---|---
direct | Based on MPI_Ireduce_scatter_block
ring | Use CCL_RS_CHUNK_COUNT and CCL_RS_MIN_CHUNK_SIZE to control pipelining.
Note
The REDUCE_SCATTER collective does not yet support the CCL_REDUCE_SCATTER_SCALEOUT environment variable. To change the algorithm for REDUCE_SCATTER, use the CCL_REDUCE_SCATTER environment variable.
Host (CPU) Memory Buffers#
CCL_<coll_name>#
Syntax
To set a specific algorithm for the host (CPU) buffers for the whole message size range:
CCL_<coll_name>=<algo_name>
To set a specific algorithm for the host (CPU) buffers for a specific message size range:
CCL_<coll_name>="<algo_name_1>[:<size_range_1>][;<algo_name_2>:<size_range_2>][;...]"
Where:
<coll_name> is selected from the list of available collective operations (see Available collectives).
<algo_name> is selected from the list of available algorithms for the specific collective operation (see Available algorithms).
<size_range> is described by the left and right size borders in the <left>-<right> format. The size is specified in bytes. To specify the maximum message size, use the reserved word max.
oneCCL internally fills the algorithm selection table with sensible defaults. Your input complements the selection table.
To see the actual table values, set CCL_LOG_LEVEL=info.
Example
CCL_ALLREDUCE="recursive_doubling:0-8192;rabenseifner:8193-1048576;ring:1048577-max"
CCL_RS_CHUNK_COUNT#
Syntax
CCL_RS_CHUNK_COUNT=<value>
Arguments
<value> | Description
---|---
COUNT | The maximum number of chunks.
Description
Set this environment variable to specify the maximum number of chunks for the reduce_scatter phase in the ring allreduce algorithm.
CCL_RS_MIN_CHUNK_SIZE#
Syntax
CCL_RS_MIN_CHUNK_SIZE=<value>
Arguments
<value> | Description
---|---
SIZE | The minimum number of bytes in a chunk.
Description
Set this environment variable to specify the minimum number of bytes in a chunk for the reduce_scatter phase in the ring allreduce algorithm. It affects the actual value of CCL_RS_CHUNK_COUNT.
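For example, a hypothetical tuning sketch for the ring allreduce; the chunk values and the application name ./app are illustrative placeholders, not recommendations:
# cap reduce_scatter pipelining at 4 chunks of at least 64 KB each
CCL_ALLREDUCE=ring CCL_RS_CHUNK_COUNT=4 CCL_RS_MIN_CHUNK_SIZE=65536 mpiexec -n 4 ./app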
Workers#
The group of environment variables to control worker threads.
CCL_WORKER_COUNT#
Syntax
CCL_WORKER_COUNT=<value>
Arguments
<value> | Description
---|---
N | The number of worker threads for a oneCCL rank (the default is 1).
Description
Set this environment variable to specify the number of oneCCL worker threads.
CCL_WORKER_AFFINITY#
Syntax
CCL_WORKER_AFFINITY=<cpulist>
Arguments
<cpulist> | Description
---|---
auto | Workers are automatically pinned to the last cores of the pin domain. The pin domain depends on the process launcher: with an MPI launcher, it is the MPI process pin domain; otherwise, it is all cores on the node.
<cpulist> | A comma-separated list of core numbers and/or ranges of core numbers for all local workers, one number per worker. The i-th local worker is pinned to the i-th core in the list. For example, 3,4,5,6 pins four local workers to cores 3, 4, 5, and 6.
Description
Set this environment variable to specify the CPU affinity for oneCCL worker threads.
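For example, a minimal sketch that pins the local workers explicitly; the core numbers, mpiexec options, and the application name ./app are placeholders:
# pin the two local workers to cores 3 and 4
CCL_WORKER_COUNT=2 CCL_WORKER_AFFINITY=3,4 mpiexec -n 2 -ppn 1 ./app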
CCL_WORKER_MEM_AFFINITY#
Syntax
CCL_WORKER_MEM_AFFINITY=<nodelist>
Arguments
<nodelist> | Description
---|---
auto | Workers are automatically pinned to the NUMA nodes that correspond to the CPU affinity of the workers.
<nodelist> | A comma-separated list of NUMA node numbers for all local workers, one number per worker. The i-th local worker is pinned to the i-th NUMA node in the list. The number should not exceed the number of NUMA nodes available on the system.
Description
Set this environment variable to specify memory affinity for oneCCL worker threads.
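Continuing the previous sketch, the NUMA node numbers below are illustrative:
# keep the memory of both workers on NUMA node 0
CCL_WORKER_COUNT=2 CCL_WORKER_AFFINITY=3,4 CCL_WORKER_MEM_AFFINITY=0,0 mpiexec -n 2 -ppn 1 ./app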
ATL#
The group of environment variables to control ATL (abstract transport layer).
CCL_ATL_TRANSPORT#
Syntax
CCL_ATL_TRANSPORT=<value>
Arguments
<value> | Description
---|---
mpi | MPI transport (default).
ofi | OFI (libfabric*) transport.
Description
Set this environment variable to select the transport for inter-process communications.
CCL_ATL_HMEM#
Syntax
CCL_ATL_HMEM=<value>
Arguments
<value> | Description
---|---
1 | Enable heterogeneous memory support on the transport layer.
0 | Disable heterogeneous memory support on the transport layer (default).
Description
Set this environment variable to enable handling of HMEM/GPU buffers by the transport layer. Actual HMEM support depends on transport-level limitations and the system configuration.
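For example, a sketch that selects the OFI transport with HMEM enabled (the application name ./app is a placeholder):
CCL_ATL_TRANSPORT=ofi CCL_ATL_HMEM=1 mpiexec -n 2 ./app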
CCL_ATL_SHM#
Syntax
CCL_ATL_SHM=<value>
Arguments
<value> | Description
---|---
0 | Disables the OFI shared memory provider (default).
1 | Enables the OFI shared memory provider.
Description
Set this environment variable to enable the OFI shared memory provider for communication between ranks in the same node for host (CPU) buffers.
This capability requires OFI as the transport (CCL_ATL_TRANSPORT=ofi).
The OFI/SHM provider can utilize the Intel® Data Streaming Accelerator* (DSA). To run it with DSA, you need:
Linux* OS kernel support for the DSA* shared work queues
Libfabric* 1.17 or later
To enable DSA, set the following environment variables:
FI_SHM_DISABLE_CMA=1
FI_SHM_USE_DSA_SAR=1
Refer to the Libfabric* Programmer's Manual for additional details about DSA support in the SHM provider: https://ofiwg.github.io/libfabric/main/man/fi_shm.7.html.
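Putting it together, a hypothetical single-node launch with the shared memory provider and DSA enabled (the mpiexec options and the application name ./app are placeholders):
CCL_ATL_TRANSPORT=ofi CCL_ATL_SHM=1 FI_SHM_DISABLE_CMA=1 FI_SHM_USE_DSA_SAR=1 mpiexec -n 2 -ppn 2 ./app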
CCL_PROCESS_LAUNCHER#
Syntax
CCL_PROCESS_LAUNCHER=<value>
Arguments
<value> | Description
---|---
hydra | Uses the MPI hydra job launcher (default).
torch | Uses a torch job launcher.
pmix | Is used with the PALS job launcher, which uses the PMIx API. The mpiexec command should be similar to: CCL_PROCESS_LAUNCHER=pmix CCL_ATL_TRANSPORT=mpi mpiexec -np 2 -ppn 2 --pmi=pmix ...
none | No job launcher is used. You should specify the values for CCL_LOCAL_SIZE and CCL_LOCAL_RANK.
Description
Set this environment variable to specify the job launcher.
CCL_LOCAL_SIZE#
Syntax
CCL_LOCAL_SIZE=<value>
Arguments
<value> | Description
---|---
SIZE | The total number of ranks on the local host.
Description
Set this environment variable to specify the total number of ranks on the local host.
CCL_LOCAL_RANK#
Syntax
CCL_LOCAL_RANK=<value>
Arguments
<value> | Description
---|---
RANK | The rank number of the current process on the local host.
Description
Set this environment variable to specify the rank number of the current process on the local host.
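For example, a sketch that starts two ranks on one host without a job launcher; each process must be given the local size and its own local rank (the application name ./app is a placeholder):
CCL_PROCESS_LAUNCHER=none CCL_LOCAL_SIZE=2 CCL_LOCAL_RANK=0 ./app &
CCL_PROCESS_LAUNCHER=none CCL_LOCAL_SIZE=2 CCL_LOCAL_RANK=1 ./app &
wait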
Multi-NIC#
CCL_MNIC, CCL_MNIC_NAME, and CCL_MNIC_COUNT define filters to select multiple NICs. oneCCL workers will be pinned to the selected NICs in a round-robin way.
CCL_MNIC#
Syntax
CCL_MNIC=<value>
Arguments
<value> | Description
---|---
global | Select all NICs available on the node.
local | Select all NICs local to the NUMA node that corresponds to the process pinning.
none | Disable special NIC selection and use a single default NIC (default).
Description
Set this environment variable to control multi-NIC selection by NIC locality.
CCL_MNIC_NAME#
Syntax
CCL_MNIC_NAME=<namelist>
Arguments
<namelist> | Description
---|---
<namelist> | A comma-separated list of NIC full names or prefixes used to filter NICs. Use the ^ symbol to exclude NICs with the specified names or prefixes from the selection.
Description
Set this environment variable to control multi-NIC selection by NIC names.
CCL_MNIC_COUNT#
Syntax
CCL_MNIC_COUNT=<value>
Arguments
<value> | Description
---|---
N | The maximum number of NICs that should be selected for oneCCL workers. If not specified, it is equal to the number of oneCCL workers.
Description
Set this environment variable to specify the maximum number of NICs to be selected. The actual number of selected NICs may be smaller due to transport-level limitations or the system configuration.
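For example, a hypothetical selection of up to two NUMA-local NICs whose names start with mlx5 (the NIC names and the application name ./app are illustrative):
CCL_MNIC=local CCL_MNIC_NAME=mlx5_0,mlx5_1 CCL_MNIC_COUNT=2 mpiexec -n 4 ./app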
Low-precision datatypes#
The group of environment variables to control processing of low-precision datatypes.
CCL_BF16#
Syntax
CCL_BF16=<value>
Arguments
<value> | Description
---|---
avx512f | Select the implementation based on AVX512F instructions.
avx512bf | Select the implementation based on AVX512_BF16 instructions.
Description
Set this environment variable to select the implementation for BF16 <-> FP32 conversion in the reduction phase of collective operations. The default value depends on the instruction sets supported by the specific CPU. The AVX512_BF16-based implementation takes precedence over the AVX512F-based one.
CCL_FP16#
Syntax
CCL_FP16=<value>
Arguments
<value> | Description
---|---
f16c | Select the implementation based on F16C instructions.
avx512f | Select the implementation based on AVX512F instructions.
Description
Set this environment variable to select the implementation for FP16 <-> FP32 conversion in the reduction phase of collective operations. The default value depends on the instruction sets supported by the specific CPU. The AVX512F-based implementation takes precedence over the F16C-based one.
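For example, to force a specific conversion implementation on a CPU known to support it (only meaningful when the instruction set is available; ./app is a placeholder):
CCL_BF16=avx512f CCL_FP16=avx512f mpiexec -n 2 ./app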
CCL_LOG_LEVEL#
Syntax
CCL_LOG_LEVEL=<value>
Arguments
<value>
---
error (default)
warn
info
debug
trace
Description
Set this environment variable to control the logging level.
CCL_ITT_LEVEL#
Syntax
CCL_ITT_LEVEL=<value>
Arguments
<value> | Description
---|---
1 | Enable support for ITT profiling.
0 | Disable support for ITT profiling (default).
Description
Set this environment variable to specify Intel® Instrumentation and Tracing Technology (ITT) profiling level. Once the environment variable is enabled (value > 0), it is possible to collect and display profiling data for oneCCL using tools such as Intel® VTune™ Profiler.
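For example, a sketch that enables ITT instrumentation before collecting a trace with a profiler (the profiler invocation is tool-specific and omitted here; ./app is a placeholder):
CCL_ITT_LEVEL=1 mpiexec -n 2 ./app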
Fusion#
The group of environment variables to control fusion of collective operations.
CCL_FUSION#
Syntax
CCL_FUSION=<value>
Arguments
<value> | Description
---|---
1 | Enable fusion of collective operations.
0 | Disable fusion of collective operations (default).
Description
Set this environment variable to control fusion of collective operations. Whether operations are actually fused depends on the additional settings described below.
CCL_FUSION_BYTES_THRESHOLD#
Syntax
CCL_FUSION_BYTES_THRESHOLD=<value>
Arguments
<value> | Description
---|---
SIZE | The bytes threshold for a collective operation. If the size of a communication buffer in bytes is less than or equal to SIZE, oneCCL fuses the operation with other operations.
Description
Set this environment variable to specify the threshold of the number of bytes for a collective operation to be fused.
CCL_FUSION_COUNT_THRESHOLD#
Syntax
CCL_FUSION_COUNT_THRESHOLD=<value>
Arguments
<value> | Description
---|---
COUNT | The threshold for the number of collective operations. oneCCL can fuse together no more than COUNT collective operations at a time.
Description
Set this environment variable to specify the count threshold for collective operations to be fused.
CCL_FUSION_CYCLE_MS#
Syntax
CCL_FUSION_CYCLE_MS=<value>
Arguments
<value> | Description
---|---
MS | The frequency of checking for collective operations to be fused, in milliseconds.
Description
Set this environment variable to specify the frequency of checking for collective operations to be fused.
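For example, a hypothetical fusion configuration; the threshold values and the application name ./app are illustrative, not recommendations:
# fuse up to 64 operations of at most 8 KB each, checking every 0.2 ms
CCL_FUSION=1 CCL_FUSION_BYTES_THRESHOLD=8192 CCL_FUSION_COUNT_THRESHOLD=64 CCL_FUSION_CYCLE_MS=0.2 mpiexec -n 4 ./app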
CCL_PRIORITY#
Syntax
CCL_PRIORITY=<value>
Arguments
<value> | Description
---|---
direct | You have to explicitly specify the priority using the priority field of the operation attribute.
lifo | Priority is implicitly increased on each collective call. You do not have to specify the priority.
none | Disable prioritization (default).
Description
Set this environment variable to control the priority mode of collective operations.
CCL_MAX_SHORT_SIZE#
Syntax
CCL_MAX_SHORT_SIZE=<value>
Arguments
<value> | Description
---|---
SIZE | The bytes threshold for a collective operation (0 if not specified). If the size of a communication buffer in bytes is less than or equal to SIZE, the operation is not split between worker threads.
Description
Set this environment variable to specify the threshold, in bytes, below which a collective operation is not split between workers.
CCL_SYCL_OUTPUT_EVENT#
Syntax
CCL_SYCL_OUTPUT_EVENT=<value>
Arguments
<value> | Description
---|---
1 | Enable support for the SYCL output event (default).
0 | Disable support for the SYCL output event.
Description
Set this environment variable to control support for the SYCL output event.
Once the support is enabled, you can retrieve the SYCL output event from a oneCCL event using the get_native() method.
The oneCCL event must be associated with a oneCCL communication operation.
CCL_ZE_LIBRARY_PATH#
Syntax
CCL_ZE_LIBRARY_PATH=<value>
Arguments
<value> | Description
---|---
PATH/NAME | Specify the name and full path to the Level-Zero library for dynamic loading by oneCCL.
Description
Set this environment variable to specify the name and full path to the Level-Zero library. The path should be absolute and validated. Set this variable if Level-Zero is not located in the default path. By default, oneCCL uses the libze_loader.so name for dynamic loading.
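For example, if Level-Zero is installed outside the default search path (the path below is a placeholder):
CCL_ZE_LIBRARY_PATH=/opt/level-zero/lib/libze_loader.so ./app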