Objective function#
Some classification algorithms are designed to minimize a selected objective function. On each iteration, its gradient and sometimes its Hessian are calculated, and the model weights are updated using this information.
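As an illustration of this iterative scheme (not the library's API), a minimal gradient-descent loop on the logistic loss might look like the following. The function names, the learning rate, and the label convention \(y_i \in \{0, 1\}\) are assumptions for this sketch:

```python
import numpy as np

def sigmoid(z):
    # Logistic function mapping the linear predictor to a probability
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, n_iter=200):
    """Illustrative gradient descent on the logistic loss.

    X: (n, p) feature matrix, y: (n,) labels in {0, 1}.
    Returns the coefficient vector w of size p + 1 (w[0] is the intercept).
    """
    n = X.shape[0]
    Xt = np.hstack([np.ones((n, 1)), X])  # prepend intercept column for w_0
    w = np.zeros(Xt.shape[1])
    for _ in range(n_iter):
        grad = Xt.T @ (sigmoid(Xt @ w) - y)  # gradient of the logistic loss
        w -= lr * grad / n                   # update weights using the gradient
    return w
```

On linearly separable one-dimensional data, the learned slope moves in the direction that separates the classes.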
| Operation | Computational methods | Programming Interface |
| --------- | --------------------- | --------------------- |
| Computing | dense_batch           | compute               |
Supported objective functions#
Mathematical formulation#
Computing#
The algorithm takes as input a dataset \(X = \{ x_1, \ldots, x_n \}\) of \(n\) feature vectors of dimension \(p\), a vector of correct class labels \(y = \{ y_1, \ldots, y_n \}\), and a coefficient vector \(w = \{ w_0, \ldots, w_p \}\) of size \(p + 1\). It then calculates the logistic loss, its gradient, or its Hessian.
Value#
\(L(X, w, y)\) - the value of the objective function.
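The closed form of the loss is not reproduced here; a minimal NumPy sketch under the common cross-entropy convention (labels \(y_i \in \{0, 1\}\), \(w_0\) the intercept; function names are assumptions) might look like:

```python
import numpy as np

def logistic_loss(X, w, y):
    """Illustrative logistic loss L(X, w, y); w[0] is the intercept w_0."""
    z = w[0] + X @ w[1:]  # linear predictor, shape (n,)
    # Cross-entropy written stably: log(sigma) = -logaddexp(0, -z),
    # log(1 - sigma) = -logaddexp(0, z)
    return np.sum(y * np.logaddexp(0.0, -z) + (1.0 - y) * np.logaddexp(0.0, z))
```

Using `np.logaddexp` avoids overflow in `exp` for large linear predictors.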
Gradient#
\(\overline{grad} = \frac{\partial L}{\partial w}\) - the gradient of the objective function.
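Under the same cross-entropy convention as above, the gradient has the well-known form \(\tilde{X}^T (\sigma - y)\), where \(\tilde{X}\) is \(X\) with a prepended column of ones. A sketch (function name is an assumption):

```python
import numpy as np

def logistic_grad(X, w, y):
    """Illustrative gradient of the logistic loss; returns shape (p + 1,)."""
    n = X.shape[0]
    Xt = np.hstack([np.ones((n, 1)), X])  # prepend intercept column
    p = 1.0 / (1.0 + np.exp(-(Xt @ w)))   # predicted probabilities sigma
    return Xt.T @ (p - y)
```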
Hessian#
\(H = (h_{ij}) = \frac{\partial^2 L}{\partial w_i \partial w_j}\) - the Hessian of the objective function.
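For the logistic loss, the Hessian takes the form \(\tilde{X}^T D \tilde{X}\) with \(D = \mathrm{diag}(\sigma_i (1 - \sigma_i))\). A sketch under the same assumptions as the blocks above:

```python
import numpy as np

def logistic_hessian(X, w, y):
    """Illustrative Hessian of the logistic loss; (p + 1) x (p + 1), symmetric PSD."""
    n = X.shape[0]
    Xt = np.hstack([np.ones((n, 1)), X])  # prepend intercept column
    p = 1.0 / (1.0 + np.exp(-(Xt @ w)))   # predicted probabilities sigma
    S = p * (1.0 - p)                     # diagonal weights sigma * (1 - sigma)
    return (Xt * S[:, None]).T @ Xt
```

Note the Hessian does not depend on `y`; the parameter is kept only to mirror the signature \(H(X, w, y)\) used in the text.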
Computation method: dense_batch#
The method computes the value of the objective function, its gradient, or its Hessian for dense data. This is the default and the only supported method.
Programming Interface#
Refer to API Reference: Objective Function.
Distributed mode#
Currently, the algorithm does not support distributed execution in SPMD mode.
Examples: Logistic Loss
Batch Processing: