Quality Metrics for Multi-class Classification Algorithms#

For \(l\) classes \(C_1, \ldots, C_l\), given a vector \(X = (x_1, \ldots, x_n)\) of class labels computed at the prediction stage of the classification algorithm and a vector \(Y = (y_1, \ldots, y_n)\) of expected class labels, the problem is to evaluate the classifier by computing the confusion matrix and the related quality metrics: precision, error rate, and so on.

The QualityMetricsId for multi-class classification is confusionMatrix.

Details#

Further definitions use the following notations:

Notations for Quality Metrics for Multi-class Classification Algorithms#

• \(\text{tp}_i\) (true positive) - the number of correctly recognized observations for class \(C_i\)

• \(\text{tn}_i\) (true negative) - the number of correctly recognized observations that do not belong to the class \(C_i\)

• \(\text{fp}_i\) (false positive) - the number of observations that were incorrectly assigned to the class \(C_i\)

• \(\text{fn}_i\) (false negative) - the number of observations that were not recognized as belonging to the class \(C_i\)

The library uses the following quality metrics for multi-class classifiers:

Definitions of Quality Metrics for Multi-class Classification Algorithms#

• Average accuracy - \(\frac {\sum _{i = 1}^{l} \frac {\text{tp}_i + \text{tn}_i}{\text{tp}_i + \text{fn}_i + \text{fp}_i + \text{tn}_i}}{l}\)

• Error rate - \(\frac {\sum _{i = 1}^{l} \frac {\text{fp}_i + \text{fn}_i}{\text{tp}_i + \text{fn}_i + \text{fp}_i + \text{tn}_i}}{l}\)

• Micro precision (\(\text{Precision}_\mu\)) - \(\frac {\sum _{i = 1}^{l} \text{tp}_i} {\sum _{i = 1}^{l} (\text{tp}_i + \text{fp}_i)}\)

• Micro recall (\(\text{Recall}_\mu\)) - \(\frac {\sum _{i = 1}^{l} \text{tp}_i} {\sum _{i = 1}^{l} (\text{tp}_i + \text{fn}_i)}\)

• Micro F-score (\(\text{F-score}_\mu\)) - \(\frac {(\beta^2 + 1)(\text{Precision}_\mu \times \text{Recall}_\mu)}{\beta^2 \times \text{Precision}_\mu + \text{Recall}_\mu}\)

• Macro precision (\(\text{Precision}_M\)) - \(\frac {\sum _{i = 1}^{l} \frac {\text{tp}_i}{\text{tp}_i + \text{fp}_i}}{l}\)

• Macro recall (\(\text{Recall}_M\)) - \(\frac {\sum _{i = 1}^{l} \frac {\text{tp}_i}{\text{tp}_i + \text{fn}_i}}{l}\)

• Macro F-score (\(\text{F-score}_M\)) - \(\frac {(\beta^2 + 1)(\text{Precision}_M \times \text{Recall}_M)}{\beta^2 \times \text{Precision}_M + \text{Recall}_M}\)

For more details on these metrics, including their evaluation focus, refer to [Sokolova09].
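The formulas above translate directly into code. The following is a minimal standalone sketch (plain C++, independent of the library API) that computes all eight metrics from per-class counts \(\text{tp}_i\), \(\text{fp}_i\), \(\text{fn}_i\), \(\text{tn}_i\); the counts are hypothetical values consistent with a single 140-observation confusion matrix over three classes.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // Hypothetical per-class counts for l = 3 classes (140 observations in total)
    const std::vector<double> tp = {50, 40, 30};
    const std::vector<double> fp = {5, 7, 8};
    const std::vector<double> fn = {5, 10, 5};
    const std::vector<double> tn = {80, 83, 97};
    const double beta = 1.0;                // F-score parameter
    const std::size_t l = tp.size();

    double avgAccuracy = 0.0, errorRate = 0.0;
    double macroPrecision = 0.0, macroRecall = 0.0;
    double sumTp = 0.0, sumTpFp = 0.0, sumTpFn = 0.0;

    for (std::size_t i = 0; i < l; ++i) {
        const double all = tp[i] + fn[i] + fp[i] + tn[i];
        avgAccuracy    += (tp[i] + tn[i]) / all;    // per-class accuracy
        errorRate      += (fp[i] + fn[i]) / all;    // per-class error rate
        macroPrecision += tp[i] / (tp[i] + fp[i]);  // per-class precision
        macroRecall    += tp[i] / (tp[i] + fn[i]);  // per-class recall
        sumTp   += tp[i];
        sumTpFp += tp[i] + fp[i];
        sumTpFn += tp[i] + fn[i];
    }
    // Macro metrics and averaged rates: mean of the per-class values
    avgAccuracy    /= l;
    errorRate      /= l;
    macroPrecision /= l;
    macroRecall    /= l;

    // Micro metrics: pool the counts over all classes first
    const double microPrecision = sumTp / sumTpFp;
    const double microRecall    = sumTp / sumTpFn;

    const double b2 = beta * beta;
    const double microF = (b2 + 1) * microPrecision * microRecall / (b2 * microPrecision + microRecall);
    const double macroF = (b2 + 1) * macroPrecision * macroRecall / (b2 * macroPrecision + macroRecall);

    std::cout << "average accuracy = " << avgAccuracy << ", error rate = " << errorRate << '\n'
              << "micro P/R/F = " << microPrecision << '/' << microRecall << '/' << microF << '\n'
              << "macro P/R/F = " << macroPrecision << '/' << macroRecall << '/' << macroF << '\n';
    return 0;
}
```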

The following is the confusion matrix, where rows correspond to actual classes, columns correspond to predicted classes, and the element \(c_{ij}\) is the number of observations of actual class \(C_i\) that were classified as class \(C_j\):

Confusion Matrix for Multi-class Classification Algorithms#

\[\begin{array}{c|ccccc}
 & \text{Classified as Class } C_1 & \ldots & \text{Classified as Class } C_i & \ldots & \text{Classified as Class } C_l \\ \hline
\text{Actual Class } C_1 & c_{11} & \ldots & c_{1i} & \ldots & c_{1l} \\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\
\text{Actual Class } C_i & c_{i1} & \ldots & c_{ii} & \ldots & c_{il} \\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\
\text{Actual Class } C_l & c_{l1} & \ldots & c_{li} & \ldots & c_{ll}
\end{array}\]

The positives and negatives are defined in terms of the elements of the confusion matrix as follows:

\[\text{tp}_i = c_{ii}\]
\[\text{fp}_i = \sum _{n = 1}^{l} c_{ni} - \text{tp}_i\]
\[\text{fn}_i = \sum _{n = 1}^{l} c_{in} - \text{tp}_i\]
\[\text{tn}_i = \sum _{n = 1}^{l} \sum _{k = 1}^{l} c_{nk} - \text{tp}_i - \text{fp}_i - \text{fn}_i\]
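As a cross-check of the four formulas above, here is a minimal standalone sketch (plain C++, not the library's implementation) that builds the confusion matrix from hypothetical predicted and expected label vectors and then derives \(\text{tp}_i\), \(\text{fp}_i\), \(\text{fn}_i\), and \(\text{tn}_i\) for each class:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    const std::size_t l = 3;  // number of classes
    // Hypothetical label vectors: X (predicted) and Y (expected), values in [0, l)
    const std::vector<std::size_t> predicted = {0, 1, 1, 2, 2, 0, 1};
    const std::vector<std::size_t> expected  = {0, 1, 2, 2, 1, 0, 0};

    // c[i][j] = number of observations of actual class C_i classified as C_j
    std::vector<std::vector<std::size_t>> c(l, std::vector<std::size_t>(l, 0));
    for (std::size_t k = 0; k < predicted.size(); ++k)
        ++c[expected[k]][predicted[k]];

    const std::size_t total = predicted.size();
    for (std::size_t i = 0; i < l; ++i) {
        std::size_t rowSum = 0, colSum = 0;
        for (std::size_t n = 0; n < l; ++n) {
            rowSum += c[i][n];   // all observations that are actually C_i
            colSum += c[n][i];   // all observations classified as C_i
        }
        const std::size_t tp = c[i][i];
        const std::size_t fp = colSum - tp;              // classified as C_i, actually another class
        const std::size_t fn = rowSum - tp;              // actually C_i, classified as another class
        const std::size_t tn = total - tp - fp - fn;     // everything else
        std::cout << "class " << i << ": tp=" << tp << " fp=" << fp
                  << " fn=" << fn << " tn=" << tn << '\n';
    }
    return 0;
}
```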

Batch Processing#

Algorithm Input#

The quality metric algorithm for multi-class classifiers accepts the input described below. Pass the Input ID as a parameter to the methods that provide input for your algorithm. For more details, see Algorithms.

Algorithm Input for Quality Metrics for Multi-class Classification Algorithms (Batch Processing)#

• predictedLabels - Pointer to the \(n \times 1\) numeric table that contains the labels computed at the prediction stage of the classification algorithm. This input can be an object of any class derived from NumericTable except PackedSymmetricMatrix, PackedTriangularMatrix, and CSRNumericTable.

• groundTruthLabels - Pointer to the \(n \times 1\) numeric table that contains the expected labels. This input can be an object of any class derived from NumericTable except PackedSymmetricMatrix, PackedTriangularMatrix, and CSRNumericTable.

Algorithm Parameters#

The quality metric algorithm has the following parameters:

Algorithm Parameters for Quality Metrics for Multi-class Classification Algorithms (Batch Processing)#

• algorithmFPType (default: float) - The floating-point type that the algorithm uses for intermediate computations. Can be float or double.

• method (default: defaultDense) - Performance-oriented computation method, the only method supported by the algorithm.

• nClasses (default: \(0\)) - The number of classes (\(l\)).

• useDefaultMetrics (default: true) - A flag that specifies whether to compute the default metrics provided by the library.

• beta (default: \(1\)) - The \(\beta\) parameter of the F-score quality metric provided by the library.
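For example, with the default \(\beta = 1\) both F-scores reduce to the harmonic mean of precision and recall, \(\text{F-score}_\mu = \frac{2 \, \text{Precision}_\mu \times \text{Recall}_\mu}{\text{Precision}_\mu + \text{Recall}_\mu}\); values of \(\beta > 1\) weight recall more heavily, and values of \(\beta < 1\) weight precision more heavily.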

Algorithm Output#

The quality metric algorithm calculates the result described below. Pass the Result ID as a parameter to the methods that access the results of your algorithm. For more details, see Algorithms.

Algorithm Output for Quality Metrics for Multi-class Classification Algorithms (Batch Processing)#

• confusionMatrix - Pointer to the \(\text{nClasses} \times \text{nClasses}\) numeric table with the confusion matrix.

• multiClassMetrics - Pointer to the \(1 \times 8\) numeric table that contains the quality metrics, which you can access by the appropriate Multi-class Metrics ID:

  • averageAccuracy - average accuracy

  • errorRate - error rate

  • microPrecision - micro precision

  • microRecall - micro recall

  • microFscore - micro F-score

  • macroPrecision - macro precision

  • macroRecall - macro recall

  • macroFscore - macro F-score

Note

By default, each result is an object of the HomogenNumericTable class, but you can define it as an object of any class derived from NumericTable except PackedTriangularMatrix, PackedSymmetricMatrix, and CSRNumericTable.
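Putting the pieces together, the sketch below shows one way the batch interface described above might be driven from C++. The Input IDs (predictedLabels, groundTruthLabels), Result IDs (confusionMatrix, multiClassMetrics), the Metrics IDs, and the nClasses, useDefaultMetrics, and beta parameters come from the tables above; the surrounding namespaces, class names, and accessor methods (multi_class_classifier::quality_metric_set::Batch, getInputDataCollection(), getResultCollection(), and so on) are assumptions based on the library's usual quality-metric-set pattern, so consult the bundled examples referenced below for the exact signatures.

```cpp
#include <cstddef>
#include <iostream>

#include "daal.h" // assumption: the library's umbrella header

using namespace daal;
using namespace daal::algorithms;
using namespace daal::data_management;

/* Sketch only: the Input, Result, and Metrics IDs are the documented ones;
   the namespaces, class names, and accessors around them are assumptions,
   not verified signatures. */
void evaluateMultiClassQuality(const NumericTablePtr & predictedLabelsTable,
                               const NumericTablePtr & groundTruthLabelsTable,
                               size_t nClasses)
{
    namespace qms = multi_class_classifier::quality_metric_set;
    namespace cm  = classifier::quality_metric::multiclass_confusion_matrix;

    /* Create the quality metric set; nClasses is passed here, while
       useDefaultMetrics and beta are left at their defaults (true and 1) */
    qms::Batch qualityMetricSet(nClasses);

    /* Provide the two inputs by their Input IDs */
    cm::InputPtr input = qualityMetricSet.getInputDataCollection()->getInput(qms::confusionMatrix);
    input->set(cm::predictedLabels, predictedLabelsTable);
    input->set(cm::groundTruthLabels, groundTruthLabelsTable);

    /* Compute the confusion matrix and the default quality metrics */
    qualityMetricSet.compute();

    /* Retrieve the results by their Result IDs */
    cm::ResultPtr result = qualityMetricSet.getResultCollection()->getResult(qms::confusionMatrix);
    NumericTablePtr confusionMatrixTable = result->get(cm::confusionMatrix);
    NumericTablePtr metricsTable         = result->get(cm::multiClassMetrics);

    std::cout << "Confusion matrix size: " << confusionMatrixTable->getNumberOfRows()
              << " x " << confusionMatrixTable->getNumberOfColumns() << std::endl;

    /* Read the 1 x 8 metrics table and index it by the Multi-class Metrics IDs */
    BlockDescriptor<double> block;
    metricsTable->getBlockOfRows(0, 1, readOnly, block);
    const double * metrics = block.getBlockPtr();
    std::cout << "Average accuracy: " << metrics[cm::averageAccuracy] << std::endl;
    std::cout << "Macro F-score:    " << metrics[cm::macroFscore]     << std::endl;
    metricsTable->releaseBlockOfRows(block);
}
```

In practice, the same flow is applied to the prediction results of a concrete classifier; see the Examples section below for the complete, verified programs shipped with the library.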

Examples#