Regression Decision Tree#

A regression decision tree is a kind of decision tree, as described in Classification and Regression > Decision Tree.

Details#

Given:

  • n feature vectors \(x_1 = (x_{11}, \ldots, x_{1p}), \ldots, x_n = (x_{n1}, \ldots, x_{np})\) of size p

  • The vector of responses \(y = (y_1, \ldots, y_n)\), where \(y_i \in \mathbb{R}\) is the dependent variable for the independent variables \(x_i\).

The problem is to build a regression decision tree.

Split Criterion#

The library provides the decision tree regression algorithm based on the mean-squared error (MSE) split criterion [Breiman84]:

\[\Delta {I}_{MSE}\left(D, \tau\right)={I}_{MSE}\left(D\right)-\sum _{v\in O\left(\tau\right)}\frac{|{D}_{v}|}{|D|}{I}_{MSE}\left({D}_{v}\right)\]

where

  • \(O(\tau)\) is the set of all possible outcomes of the test \(\tau\)

  • \(D_v\) is the subset of \(D\) for which the outcome of \(\tau\) is \(v\), that is, \({D}_{v}=\left\{d\in D \mid \tau\left(d\right)=v\right\}\)

  • \(I_{MSE}(D)\) is the MSE impurity of the set \(D\), the mean squared deviation of its responses from their mean: \({I}_{MSE}\left(D\right)=\frac{1}{|D|}\sum _{d\in D}{\left({y}_{d}-\overline{y}\left(D\right)\right)}^{2}\), where \(\overline{y}\left(D\right)=\frac{1}{|D|}\sum _{d\in D}{y}_{d}\)

The test used in the node is selected as \(\underset{\tau }{\mathrm{argmax}}\,\Delta {I}_{MSE}\left(D, \tau \right)\). For a binary decision tree with “true” and “false” branches,

\[\Delta {I}_{MSE}\left(D, \tau\right)={I}_{MSE}\left(D\right)-\frac{|{D}_{true}|}{|D|}{I}_{MSE}\left({D}_{true}\right)-\frac{|{D}_{false}|}{|D|}{I}_{MSE}\left({D}_{false}\right)\]
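
To make the criterion concrete, the following standalone C++ sketch computes \(\Delta I_{MSE}(D, \tau)\) for one binary split candidate. It is illustrative only: the function names mseImpurity and mseSplitGain are not part of the library. Training evaluates this gain for every candidate test and keeps the one with the largest value.

```cpp
#include <vector>

// MSE impurity of a set of responses: mean squared deviation from their mean.
static double mseImpurity(const std::vector<double> &y)
{
    if (y.empty()) return 0.0;
    double mean = 0.0;
    for (double v : y) mean += v;
    mean /= y.size();
    double sum = 0.0;
    for (double v : y) sum += (v - mean) * (v - mean);
    return sum / y.size();
}

// Impurity decrease Delta I_MSE(D, tau) for a binary split of D into the
// responses of the "true" branch (yTrue) and the "false" branch (yFalse).
double mseSplitGain(const std::vector<double> &yTrue, const std::vector<double> &yFalse)
{
    std::vector<double> all(yTrue);
    all.insert(all.end(), yFalse.begin(), yFalse.end());
    const double n = static_cast<double>(all.size());
    return mseImpurity(all)
         - (yTrue.size() / n) * mseImpurity(yTrue)
         - (yFalse.size() / n) * mseImpurity(yFalse);
}
```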

Training Stage#

The regression decision tree follows the algorithmic framework of decision tree training described in Decision Tree.

Prediction Stage#

The regression decision tree follows the algorithmic framework of decision tree prediction described in Decision Tree.

Given the regression decision tree and vectors \(x_1, \ldots, x_r\), the problem is to calculate the responses for those vectors.
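
For illustration, the following C++ sketch shows the prediction step for a single vector: starting from the root, the test stored in each split node is applied to the vector and the traversal descends to the “true” or “false” branch until a leaf is reached, whose stored response is returned. The Node structure and predictOne function are illustrative only and are not the library's model representation.

```cpp
#include <vector>

// Illustrative node of a trained binary regression tree (not the library's model type).
struct Node
{
    bool isLeaf;
    double response;      // mean of the training responses in the leaf (used if isLeaf)
    int featureIndex;     // feature tested in a split node
    double cutPoint;      // split test: x[featureIndex] <= cutPoint ?
    const Node *left;     // "true" branch
    const Node *right;    // "false" branch
};

// Predict the response for one feature vector x by walking from the root to a leaf.
double predictOne(const Node *node, const std::vector<double> &x)
{
    while (!node->isLeaf)
    {
        node = (x[node->featureIndex] <= node->cutPoint) ? node->left : node->right;
    }
    return node->response;
}
```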

Batch Processing#

Decision tree regression follows the general workflow described in Regression Usage Model.

Training#

At the training stage, decision tree regression has the following parameters:

Training Parameters for Decision Tree Regression (Batch Processing)#

  • algorithmFPType (default value: float)

    The floating-point type that the algorithm uses for intermediate computations. Can be float or double.

  • method (default value: defaultDense)

    The computation method used by decision tree regression. The only training method supported so far is the default dense method.

  • pruning (default value: reducedErrorPruning)

    Method to perform post-pruning. Available options for the pruning parameter:

      • reducedErrorPruning - reduced error pruning. Provide the dataForPruning and dependentVariablesForPruning inputs if you use pruning.

      • none - do not prune.

  • maxTreeDepth (default value: \(0\))

    Maximum tree depth. A zero value means unlimited depth. Can be any non-negative number.

  • minObservationsInLeafNodes (default value: \(5\))

    Minimum number of observations in a leaf node. Can be any positive number.

  • pruningFraction (default value: \(0.2\))

    Fraction of observations from the training data set to be used for post-pruning via random sampling. The remaining observations (a fraction of \(1 - pruningFraction\)) are used to build the decision tree. Can be any number in the interval (0, 1). If pruning is not used, all observations are used to build the decision tree regardless of this parameter value.

  • engine (default value: SharedPtr<engines::mt19937::Batch<> >())

    Pointer to the random number engine to be used for random sampling in reduced error post-pruning.
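
A hedged usage sketch for the training stage is given below. The input IDs dataForPruning and dependentVariablesForPruning and the parameter names match the table above; the remaining identifiers (the decision_tree::regression::training::Batch class, the data and dependentVariables input IDs, ResultPtr, the helper function trainDecisionTreeModel, and the way parameters are accessed) are assumptions that follow the common oneDAL batch pattern and should be checked against the installed headers.

```cpp
#include "daal.h"

using namespace daal;
using namespace daal::algorithms;
using namespace daal::data_management;

/* Hedged sketch: identifier spellings follow the common oneDAL batch pattern
   and should be verified against the installed headers. */
decision_tree::regression::training::ResultPtr trainDecisionTreeModel(
    const NumericTablePtr &trainData,
    const NumericTablePtr &trainResponses,
    const NumericTablePtr &pruneData,
    const NumericTablePtr &pruneResponses)
{
    decision_tree::regression::training::Batch<> algorithm;

    /* Training data and responses */
    algorithm.input.set(decision_tree::regression::training::data, trainData);
    algorithm.input.set(decision_tree::regression::training::dependentVariables, trainResponses);

    /* Pruning set: needed because pruning defaults to reducedErrorPruning */
    algorithm.input.set(decision_tree::regression::training::dataForPruning, pruneData);
    algorithm.input.set(decision_tree::regression::training::dependentVariablesForPruning, pruneResponses);

    /* Parameters from the table above (assumed to be public members of
       algorithm.parameter; some library versions expose them via parameter()) */
    algorithm.parameter.pruning                    = decision_tree::reducedErrorPruning;
    algorithm.parameter.maxTreeDepth               = 0;   /* unlimited depth */
    algorithm.parameter.minObservationsInLeafNodes = 5;
    algorithm.parameter.pruningFraction            = 0.2;

    /* Build and post-prune the tree */
    algorithm.compute();

    return algorithm.getResult();
}
```

If pruning is set to none, the two pruning inputs can be omitted and all observations are used to build the tree.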

Prediction#

At the prediction stage, decision tree regression has the following parameters:

Prediction Parameters for Decision Tree Regression (Batch Processing)#

  • algorithmFPType (default value: float)

    The floating-point type that the algorithm uses for intermediate computations. Can be float or double.

  • method (default value: defaultDense)

    The computation method used by decision tree regression. The only prediction method supported so far is the default dense method.
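
A matching hedged sketch for the prediction stage follows; the model argument is the trained model retrieved from the training result. As with the training sketch, the identifiers (decision_tree::regression::prediction::Batch, the data and model input IDs, ModelPtr, the prediction result ID, and the helper computeResponses) follow the common oneDAL batch pattern and are assumptions to be checked against the installed headers, not a verbatim library example.

```cpp
#include "daal.h"

using namespace daal;
using namespace daal::algorithms;
using namespace daal::data_management;

/* Hedged sketch: identifier spellings follow the common oneDAL batch pattern
   and should be verified against the installed headers. */
NumericTablePtr computeResponses(const decision_tree::regression::ModelPtr &model,
                                 const NumericTablePtr &testData)
{
    decision_tree::regression::prediction::Batch<> algorithm;

    /* Feature vectors x_1, ..., x_r and the trained model */
    algorithm.input.set(decision_tree::regression::prediction::data, testData);
    algorithm.input.set(decision_tree::regression::prediction::model, model);

    /* Compute the responses */
    algorithm.compute();

    /* The result is an r x 1 numeric table of predicted responses */
    return algorithm.getResult()->get(decision_tree::regression::prediction::prediction);
}
```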

Examples#

Batch Processing: