Decision Forest Classification and Regression (DF)¶
Decision Forest (DF) classification and regression algorithms are based on an ensemble of tree-structured classifiers, known as decision trees. A decision forest is built using the general technique of bagging (bootstrap aggregation) and a random choice of features. For more details, see [Breiman84] and [Breiman2001].
Mathematical formulation¶
Training¶
Given \(n\) feature vectors \(X=\{x_1=(x_{11},\ldots,x_{1p}),\ldots,x_n=(x_{n1},\ldots,x_{np})\}\) of size \(p\), their non-negative observation weights \(W=\{w_1,\ldots,w_n\}\), and \(n\) responses \(Y=\{y_1,\ldots,y_n\}\), where
- for classification, \(y_i \in \{0, \ldots, C-1\}\), where \(C\) is the number of classes;
- for regression, \(y_i \in \mathbb{R}\),
the problem is to build a decision forest classification or regression model.
The library uses the following algorithmic framework for the training stage. Let \(S = (X, Y)\) be the set of observations. Given positive integer parameters, such as the number of trees \(B\), the bootstrap parameter \(N = f \cdot n\), where \(f\) is the fraction of observations used for training each tree in the forest, and the number of features per node \(m\), the algorithm does the following for \(b = 1, \ldots, B\):
Selects randomly with replacement the set \(D_b\) of \(N\) vectors from the set \(S\). The set \(D_b\) is called a bootstrap set.
Trains a decision tree classifier \(T_b\) on \(D_b\) using parameter \(m\) for each tree.
Decision tree \(T\) is trained using the training set \(D\) of size \(N\). Each node \(t\) in the tree corresponds to the subset \(D_t\) of the training set \(D\), with the root node being \(D\) itself. Each internal node \(t\) represents a binary test (split) that divides the subset \(X_t\) into two subsets, \(X_{t_L}\) and \(X_{t_R}\), corresponding to its children \(t_L\) and \(t_R\).
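The bagging scheme above can be sketched in Python as follows. This is an illustration only, not the library's implementation; `train_tree` is a placeholder for any single-tree learner that accepts the per-node feature count \(m\).

```python
import numpy as np

def train_decision_forest(X, y, B, f, m, train_tree, seed=None):
    """Illustrative bagging loop: for b = 1..B, draw a bootstrap set D_b of
    N = f * n observations with replacement and train one tree on it.
    `train_tree(X_b, y_b, m)` is an assumed single-tree learner."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    N = int(f * n)
    forest = []
    for _ in range(B):
        idx = rng.integers(0, n, size=N)          # bootstrap set D_b
        forest.append(train_tree(X[idx], y[idx], m))
    return forest
```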
Training method: Dense¶
In the dense training method, all possible splits for each feature from the subset of features selected for the current node are evaluated when computing the best split.
Training method: Hist¶
In the hist training method, only a selected subset of splits is considered when computing the best split. This subset of splits is computed for each feature at the initialization stage of the algorithm. After the subset of splits is computed, each value from the initially provided data is replaced with the value of the corresponding bin. Bins are continuous intervals between the selected splits.
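As an illustration of this binning step (not the library's implementation; the bin count `max_bins` and the quantile-based choice of borders are assumptions), a minimal sketch:

```python
import numpy as np

def compute_bin_borders(column, max_bins):
    """Choose candidate split points for one feature, here at quantiles."""
    q = np.linspace(0.0, 1.0, max_bins + 1)[1:-1]
    return np.unique(np.quantile(column, q))

def bin_features(X, max_bins=256):
    """Replace every value with the index of its bin; during training only
    the bin borders are evaluated as split candidates."""
    borders = [compute_bin_borders(X[:, j], max_bins) for j in range(X.shape[1])]
    binned = np.column_stack(
        [np.searchsorted(borders[j], X[:, j]) for j in range(X.shape[1])]
    )
    return binned, borders
```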
Split Criteria¶
The metric for measuring the best split is called impurity, \(i(t)\). It generally reflects the homogeneity of responses within the subset \(D_t\) in the node \(t\).
Gini index is an impurity metric for classification, calculated as follows:
\[I_{\mathrm{Gini}}(D) = 1 - \sum_{i=0}^{C-1} p_i^2,\]
where
\(D\) is a set of observations that reach the node;
\(p_i\) is defined as follows:

Without sample weights: \(p_i\) is the observed fraction of observations that belong to class \(i\) in \(D\).

With sample weights: \(p_i\) is the observed weighted fraction of observations that belong to class \(i\) in \(D\):

\[p_i = \frac{\sum_{d \in \{d \in D \mid y_d = i \}} w_d}{\sum_{d \in D} w_d}\]
MSE is an impurity metric for regression, calculated as follows:

Without sample weights:

\[I_{\mathrm{MSE}}\left(D\right) = \frac{1}{W(D)} \sum _{i=1}^{W(D)}{\left(y_i - \frac{1}{W(D)} \sum _{j=1}^{W(D)} y_j \right)}^{2},\]

where \(W(S) = \sum_{s \in S} 1\), which is equivalent to the number of elements in \(S\).

With sample weights:

\[I_{\mathrm{MSE}}\left(D\right) = \frac{1}{W(D)} \sum _{i \in D}{w_i \left(y_i - \frac{1}{W(D)} \sum _{j \in D} w_j y_j \right)}^{2},\]

where \(W(S) = \sum_{s \in S} w_s\).
Let the impurity decrease in the node \(t\) be
\[\Delta i(t) = i(t) - \left(\frac{|D_{t_L}|}{|D_t|}\, i(t_L) + \frac{|D_{t_R}|}{|D_t|}\, i(t_R)\right).\]
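A minimal sketch of these impurity computations for classification (illustrative only; labels are assumed to be integers in \(0, \ldots, C-1\) and sample weights are omitted):

```python
import numpy as np

def gini(y, n_classes):
    """I_Gini(D) = 1 - sum_i p_i^2, with p_i the fraction of class i in D."""
    if len(y) == 0:
        return 0.0
    p = np.bincount(y, minlength=n_classes) / len(y)
    return 1.0 - np.sum(p ** 2)

def impurity_decrease(y, left_mask, n_classes):
    """Delta i(t) = i(t) - (|D_tL|/|D_t|) i(t_L) - (|D_tR|/|D_t|) i(t_R)."""
    n, n_left = len(y), int(left_mask.sum())
    return (gini(y, n_classes)
            - n_left / n * gini(y[left_mask], n_classes)
            - (n - n_left) / n * gini(y[~left_mask], n_classes))
```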
Termination Criteria¶
The library supports the following termination criteria of decision forest training:
- Minimal number of observations in a leaf node
  Node \(t\) is not processed if \(|D_t|\) is smaller than the predefined value. Splits that produce nodes with the number of observations smaller than that value are not allowed.
- Minimal number of observations in a split node
  Node \(t\) is not processed if \(|D_t|\) is smaller than the predefined value. Splits that produce nodes with the number of observations smaller than that value are not allowed.
- Minimum weighted fraction of the sum total of weights of all the input observations required to be at a leaf node
  Node \(t\) is not processed if the weighted sum of observations in \(D_t\) is smaller than the predefined fraction of the total weight of the input observations. Splits that produce nodes with a smaller weighted fraction are not allowed.
- Maximal tree depth
  Node \(t\) is not processed if its depth in the tree has reached the predefined value.
- Impurity threshold
  Node \(t\) is not processed if its impurity is smaller than the predefined threshold.
- Maximal number of leaf nodes
  Grow trees with a positive maximal number of leaf nodes in a best-first fashion. The best nodes are defined by the relative reduction in impurity. If the maximal number of leaf nodes equals zero, this criterion does not limit the number of leaf nodes, and trees grow in a depth-first fashion.
Tree Building Strategies¶
The maximal number of leaf nodes defines the tree building strategy: depth-first or best-first.
Depth-first Strategy¶
If the maximal number of leaf nodes equals zero, a decision tree is built using the depth-first strategy. In each terminal node \(t\), the following recursive procedure is applied:
Stop if the termination criteria are met.
Choose randomly without replacement \(m\) feature indices \(J_t \in \{0, 1, \ldots, p-1\}\).
For each \(j \in J_t\), find the best split \(s_{j,t}\) that partitions subset \(D_t\) and maximizes impurity decrease \(\Delta i(t)\).
A node is split if the split induces an impurity decrease greater than or equal to the predefined value. Take the best split \(s_t\) that maximizes the impurity decrease \(\Delta i\) among all \(s_{j,t}\) splits.
Apply this procedure recursively to \(t_L\) and \(t_R\).
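The recursion above can be sketched as follows. This is a schematic illustration, not the library's code; `should_stop` and `find_best_split` are assumed callbacks implementing the termination criteria and the per-node split search over \(m\) random features, and nodes are assumed to be mutable objects with `split`, `left`, and `right` attributes.

```python
def grow_depth_first(node, should_stop, find_best_split, min_impurity_decrease=0.0):
    """Depth-first growth: stop if a termination criterion is met, otherwise
    split the node on its best split and recurse into both children."""
    if should_stop(node):
        return node                                   # node stays a leaf
    split = find_best_split(node)                     # best s_t over m random features
    if split is None or split.impurity_decrease < min_impurity_decrease:
        return node
    node.split = split
    node.left = grow_depth_first(split.left_child, should_stop,
                                 find_best_split, min_impurity_decrease)
    node.right = grow_depth_first(split.right_child, should_stop,
                                  find_best_split, min_impurity_decrease)
    return node
```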
Best-first Strategy¶
If the maximal number of leaf nodes is positive, a decision tree is built using the best-first strategy. In each terminal node \(t\), the following steps are applied:
Stop if the termination criteria are met.
Choose randomly without replacement \(m\) feature indices \(J_t \in \{0, 1, \ldots, p-1\}\).
For each \(j \in J_t\), find the best split \(s_{j,t}\) that partitions subset \(D_t\) and maximizes impurity decrease \(\Delta i(t)\).
A node is split if the split induces an impurity decrease greater than or equal to the predefined value and the number of split nodes is less than or equal to \(\mathrm{maxLeafNodes} - 1\). Take the best split \(s_t\) that maximizes the impurity decrease \(\Delta i\) among all \(s_{j,t}\) splits.
Put the node into a sorted array, where the sort criterion is the improvement in impurity \(\Delta i(t) \cdot |D_t|\). The node with the maximal improvement is the first in the array. For a leaf node, the improvement in impurity is zero.
Apply this procedure to \(t_L\) and \(t_R\) and grow the tree by repeatedly taking the first element from the array until the array is empty.
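A minimal sketch of this best-first growth using a priority queue (illustrative only, not the library's implementation; `evaluate_split` is an assumed callback returning the improvement \(\Delta i(t) \cdot |D_t|\) of a node's best split together with its two children, or `None` if the node must stay a leaf, and nodes are assumed to allow assignment of `left` and `right` attributes):

```python
import heapq
from dataclasses import dataclass, field
from typing import Any

@dataclass(order=True)
class Candidate:
    neg_improvement: float                 # heapq is a min-heap, so negate the improvement
    node: Any = field(compare=False)
    children: Any = field(compare=False)

def grow_best_first(root, evaluate_split, max_leaf_nodes):
    """Repeatedly split the candidate node with the largest improvement
    Delta i(t) * |D_t| until max_leaf_nodes leaves have been produced."""
    heap, n_leaves = [], 1

    def push(node):
        result = evaluate_split(node)
        if result is not None:
            improvement, left, right = result
            heapq.heappush(heap, Candidate(-improvement, node, (left, right)))

    push(root)
    while heap and n_leaves < max_leaf_nodes:
        best = heapq.heappop(heap)
        best.node.left, best.node.right = best.children   # leaf becomes a split node
        n_leaves += 1                                      # one leaf is replaced by two
        for child in best.children:
            push(child)
    return root
```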
Inference¶
Given a decision forest classification or regression model and vectors \(x_1, \ldots, x_r\), the problem is to calculate the responses for those vectors.
Inference methods: Dense and Hist¶
The dense and hist inference methods perform prediction in the same way. To solve the problem for each given query vector \(x_i\), the algorithm does the following:
For classification: for each tree in the forest, it finds the leaf node that \(x_i\) reaches and takes that leaf's label as the tree's vote. The label \(y\) that the majority of trees in the forest vote for is chosen as the predicted label for the query vector \(x_i\).
For regression: for each tree in the forest, it finds the leaf node that \(x_i\) reaches and takes the mean of dependent variables in that leaf as the tree's response. The mean of responses from all trees in the forest is the predicted response for the query vector \(x_i\).
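A sketch of both aggregation rules (illustrative only; each tree is assumed to expose a `predict(x)` method returning a class label or a leaf mean):

```python
import numpy as np
from collections import Counter

def predict_classification(forest, x):
    """Majority vote over the labels returned by the individual trees."""
    votes = [tree.predict(x) for tree in forest]
    return Counter(votes).most_common(1)[0][0]

def predict_regression(forest, x):
    """Mean of the per-tree responses (leaf means)."""
    return np.mean([tree.predict(x) for tree in forest])
```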
Additional Characteristics Calculated by the Decision Forest¶
Decision forests can produce additional characteristics, such as an estimate of the generalization error and an importance measure (relative decisive power) of each of the \(p\) features (variables).
Outofbag Error¶
The estimate of the generalization error based on the training data can be calculated as follows:
For classification:
For each vector \(x_i\) in the dataset \(X\), predict its label \(\hat{y_i}\) as the majority vote of the trees that contain \(x_i\) in their out-of-bag (OOB) set \(\overline{D_b}\).
Calculate the OOB error of the decision forest \(T\) as the average misclassification rate:
\[OOB(T) = \frac{1}{|D'|}\sum _{y_i \in D'} I\{y_i \ne \hat{y_i}\}, \text{ where } D' = \bigcup_{b=1}^{B}\overline{D_b}.\]
If the OOB error value per observation is required, calculate the prediction error for \(x_i\): \(OOB(x_i) = I\{y_i \ne \hat{y_i}\}\).
For regression:
For each vector \(x_i\) in the dataset \(X\), predict its response \(\hat{y_i}\) as the mean of the predictions from the trees that contain \(x_i\) in their OOB set:
\(\hat{y_i} = \frac{1}{|B_i|}\sum _{T_b \in B_i}\hat{y}_{ib}\), where \(B_i = \{T_b : x_i \in \overline{D_b}\}\) and \(\hat{y}_{ib}\) is the prediction of \(T_b\) for \(x_i\).
Calculate the OOB error of the decision forest \(T\) as the Mean Squared Error (MSE):
\[OOB(T) = \frac{1}{|D'|}\sum _{y_i \in D'} (y_i - \hat{y_i})^{2}, \text{ where } D' = \bigcup_{b=1}^{B}\overline{D_b}.\]
If the OOB error value per observation is required, calculate the prediction error for \(x_i\):
\[OOB(x_i) = (y_i - \hat{y_i})^{2}\]
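An illustrative sketch of the OOB error for classification (not the library's implementation; each entry of `oob_masks` is assumed to mark the observations that tree \(b\) did not see during training, and trees expose a `predict(x)` method):

```python
from collections import Counter

def oob_error_classification(forest, oob_masks, X, y):
    """Average misclassification rate over observations that appear in at
    least one tree's out-of-bag set."""
    errors, counted = 0, 0
    for i, (x_i, y_i) in enumerate(zip(X, y)):
        votes = [tree.predict(x_i)
                 for tree, mask in zip(forest, oob_masks) if mask[i]]
        if not votes:                       # x_i appears in every bootstrap set
            continue
        y_hat = Counter(votes).most_common(1)[0][0]
        errors += int(y_hat != y_i)
        counted += 1
    return errors / counted
```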
Variable Importance¶
There are two main types of variable importance measures:
Mean Decrease Impurity importance (MDI)
Importance of the \(j\)th variable for predicting \(Y\) is the sum of weighted impurity decreases \(p(t) \Delta i(s_t, t)\) for all nodes \(t\) that use \(x_j\), averaged over all \(B\) trees in the forest:
\[MDI\left(j\right)=\frac{1}{B}\sum _{b=1}^{B} \sum _{t\in {T}_{b}: v\left({s}_{t}\right)=j}p\left(t\right)\Delta i\left({s}_{t},t\right),\]
where \(p\left(t\right)=\frac{|D_t|}{|D|}\) is the fraction of observations reaching node \(t\) in the tree \(T_b\), and \(v(s_t)\) is the index of the variable used in the split \(s_t\).
Mean Decrease Accuracy (MDA)
Importance of the \(j\)th variable for predicting \(Y\) is the average increase in the OOB error over all trees in the forest when the values of the \(j\)th variable are randomly permuted in the OOB set. For that reason, this latter measure is also known as permutation importance.
In more detail, the library calculates MDA importance as follows:
Let \(\pi (X,j)\) be the set of feature vectors where the \(j\)th variable is randomly permuted over all vectors in the set.
Let \(E_b\) be the OOB error calculated for \(T_b\) on its out-of-bag dataset \(\overline{D_b}\).
Let \(E_{b,j}\) be the OOB error calculated for \(T_b\) using \(\pi\left(\overline{X_b},j\right)\), that is, its out-of-bag dataset \(\overline{D_b}\) with the \(j\)-th variable permuted. Then
\({\delta }_{b,j}={E}_{b}-{E}_{b,j}\) is the OOB error increase for the tree \(T_b\).
\(\mathrm{RawMDA}\left(j\right)=\frac{1}{B}\sum _{b=1}^{B}{\delta }_{b,j}\) is the MDA importance.
\(\mathrm{ScaledMDA}\left(j\right)=\frac{\mathrm{RawMDA}\left(j\right)}{{\sigma }_{j}/\sqrt{B}}\), where \({\sigma }_{j}^{2}\) is the variance of \({\delta }_{b,j}\).
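An illustrative sketch of this permutation-importance computation (not the library's implementation; `oob_sets` holds each tree's out-of-bag data \((X, y)\), and `oob_error` is an assumed helper that evaluates a tree on a given dataset):

```python
import numpy as np

def mda_importance(forest, oob_sets, oob_error, j, seed=None):
    """Raw and scaled MDA for feature j: compare each tree's OOB error with
    the error on the same OOB set with feature j randomly permuted."""
    rng = np.random.default_rng(seed)
    deltas = []
    for tree, (X_oob, y_oob) in zip(forest, oob_sets):
        e_b = oob_error(tree, X_oob, y_oob)
        X_perm = X_oob.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])    # pi(X_b_bar, j)
        e_bj = oob_error(tree, X_perm, y_oob)
        deltas.append(e_b - e_bj)                       # delta_{b,j} as defined above
    deltas = np.asarray(deltas)
    raw = deltas.mean()
    scaled = raw / (deltas.std(ddof=1) / np.sqrt(len(deltas)))
    return raw, scaled
```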
Programming Interface¶
Refer to API Reference: Decision Forest Classification and Regression.
Distributed mode¶
The algorithm supports distributed execution in the SPMD mode (only on GPU).