sklearn.metrics.multilabel_confusion_matrix
sklearn.metrics.multilabel_confusion_matrix(y_true, y_pred, *, sample_weight=None, labels=None, samplewise=False)
Compute a confusion matrix for each class or sample.
New in version 0.21.
Compute class-wise (default) or sample-wise (samplewise=True) multilabel confusion matrix to evaluate the accuracy of a classification, and output confusion matrices for each class or sample.
In multilabel confusion matrix \(MCM\), the count of true negatives is \(MCM_{:,0,0}\), false negatives is \(MCM_{:,1,0}\), true positives is \(MCM_{:,1,1}\) and false positives is \(MCM_{:,0,1}\).
Multiclass data will be treated as if binarized under a one-vs-rest transformation. Returned confusion matrices will be in the order of sorted unique labels in the union of (y_true, y_pred).
Read more in the User Guide.
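For instance, the per-class true negative, false positive, false negative and true positive counts can be pulled out of the returned array by indexing along the last two axes. A minimal sketch, using the same toy multilabel arrays as in the Examples below:
>>> import numpy as np
>>> from sklearn.metrics import multilabel_confusion_matrix
>>> y_true = np.array([[1, 0, 1], [0, 1, 0]])
>>> y_pred = np.array([[1, 0, 0], [0, 1, 1]])
>>> mcm = multilabel_confusion_matrix(y_true, y_pred)
>>> tn, fp, fn, tp = mcm[:, 0, 0], mcm[:, 0, 1], mcm[:, 1, 0], mcm[:, 1, 1]
>>> tp
array([1, 1, 0])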
- Parameters:
- y_true : {array-like, sparse matrix} of shape (n_samples, n_outputs) or (n_samples,)
Ground truth (correct) target values.
- y_pred : {array-like, sparse matrix} of shape (n_samples, n_outputs) or (n_samples,)
Estimated targets as returned by a classifier.
- sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
- labels : array-like of shape (n_classes,), default=None
A list of classes or column indices to select some (or to force inclusion of classes absent from the data).
- samplewise : bool, default=False
In the multilabel case, this calculates a confusion matrix per sample.
- Returns:
- multi_confusion : ndarray of shape (n_outputs, 2, 2)
A 2x2 confusion matrix corresponding to each output in the input. When calculating class-wise multi_confusion (default), then n_outputs = n_labels; when calculating sample-wise multi_confusion (samplewise=True), n_outputs = n_samples. If labels is defined, the results will be returned in the order specified in labels; otherwise the results will be returned in sorted order by default.
See also
confusion_matrix
Compute confusion matrix to evaluate the accuracy of a classifier.
Notes
The multilabel_confusion_matrix function calculates class-wise or sample-wise multilabel confusion matrices; in multiclass tasks, labels are binarized in a one-vs-rest fashion, whereas confusion_matrix calculates a single confusion matrix covering confusion between every pair of classes.
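To illustrate that difference, here is a minimal sketch on a small toy dataset comparing the single multiclass matrix with the per-class one-vs-rest matrices:
>>> from sklearn.metrics import confusion_matrix, multilabel_confusion_matrix
>>> y_true = ["a", "b", "a", "c"]
>>> y_pred = ["a", "a", "a", "c"]
>>> confusion_matrix(y_true, y_pred)  # one 3x3 matrix over all classes
array([[2, 0, 0],
       [1, 0, 0],
       [0, 0, 1]])
>>> multilabel_confusion_matrix(y_true, y_pred)  # one 2x2 matrix per class
array([[[1, 1],
        [0, 2]],

       [[3, 0],
        [1, 0]],

       [[3, 0],
        [0, 1]]])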
Examples
Multilabel-indicator case:
>>> import numpy as np
>>> from sklearn.metrics import multilabel_confusion_matrix
>>> y_true = np.array([[1, 0, 1],
...                    [0, 1, 0]])
>>> y_pred = np.array([[1, 0, 0],
...                    [0, 1, 1]])
>>> multilabel_confusion_matrix(y_true, y_pred)
array([[[1, 0],
        [0, 1]],

       [[1, 0],
        [0, 1]],

       [[0, 1],
        [1, 0]]])
Multiclass case:
>>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"] >>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"] >>> multilabel_confusion_matrix(y_true, y_pred, ... labels=["ant", "bird", "cat"]) array([[[3, 1], [0, 2]], [[5, 0], [1, 0]], [[2, 1], [1, 2]]])