Add Performance Calculations to a ClassifyResult Object or Calculate for a Pair of Factor Vectors
calcPerformance.Rd
If calcExternalPerformance is used, such as when a vector of known classes and a vector of predicted classes have been determined outside of the ClassifyR package, a single metric value is calculated. If calcCVperformance is used, it annotates the results of calling crossValidate, runTests or runTest with one of the user-specified performance measures.
Usage
# S4 method for factor,factor
calcExternalPerformance(
  actualOutcome,
  predictedOutcome,
  performanceTypes = "auto"
)

# S4 method for Surv,numeric
calcExternalPerformance(
  actualOutcome,
  predictedOutcome,
  performanceTypes = "auto"
)

# S4 method for factor,tabular
calcExternalPerformance(
  actualOutcome,
  predictedOutcome,
  performanceTypes = "auto"
)

# S4 method for ClassifyResult
calcCVperformance(result, performanceTypes = "auto")

performanceTable(
  resultsList,
  performanceTypes = "auto",
  aggregate = c("median", "mean")
)
Arguments
- actualOutcome
  A factor vector or survival information specifying each sample's known outcome.
- predictedOutcome
  A factor vector or survival information of the same length as actualOutcome specifying each sample's predicted outcome.
- performanceTypes
  Default: "auto". A character vector. If "auto", Balanced Accuracy will be used for a classification task and C-index for a time-to-event task. Must be one of the following options:
  - "Error": Ordinary error rate.
  - "Accuracy": Ordinary accuracy.
  - "Balanced Error": Balanced error rate.
  - "Balanced Accuracy": Balanced accuracy.
  - "Sample Error": Error rate for each sample in the data set.
  - "Sample Accuracy": Accuracy for each sample in the data set.
  - "Micro Precision": Sum of the number of correct predictions in each class, divided by the sum of the number of samples in each class.
  - "Micro Recall": Sum of the number of correct predictions in each class, divided by the sum of the number of samples predicted as belonging to each class.
  - "Micro F1": F1 score obtained by calculating the harmonic mean of micro precision and micro recall.
  - "Macro Precision": Sum of the ratios of the number of correct predictions in each class to the number of samples in each class, divided by the number of classes.
  - "Macro Recall": Sum of the ratios of the number of correct predictions in each class to the number of samples predicted to be in each class, divided by the number of classes.
  - "Macro F1": F1 score obtained by calculating the harmonic mean of macro precision and macro recall.
  - "Matthews Correlation Coefficient": Matthews Correlation Coefficient (MCC). A score between -1 and 1 indicating how concordant the predicted classes are with the actual classes. Only defined if there are two classes.
  - "AUC": Area Under the Curve. An area ranging from 0 to 1, under the ROC curve.
  - "C-index": For survival data, the concordance index, for models which produce risk scores. Ranges from 0 to 1.
  - "Sample C-index": Per-individual C-index.
- result
  An object of class ClassifyResult.
- resultsList
  A list of modelling results. Each element must be of type ClassifyResult.
- aggregate
  Default: "median". Can also be "mean". If there are multiple values, such as for repeated cross-validation, they are summarised to a single number using either the mean or median.
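To make a couple of the options above concrete, here is an illustrative base-R sketch of Balanced Accuracy and the Matthews Correlation Coefficient for a small pair of factor vectors. This follows the definitions listed above and is not ClassifyR's internal implementation.

```r
actual    <- factor(c("A", "A", "A", "A", "B", "B"))
predicted <- factor(c("A", "A", "A", "B", "B", "A"), levels = levels(actual))
confusion <- table(predicted, actual)   # rows: predicted, columns: actual

# Balanced Accuracy: the mean of the per-class recalls,
# so each class counts equally regardless of its size.
perClassRecall   <- diag(confusion) / table(actual)
balancedAccuracy <- mean(perClassRecall)   # (3/4 + 1/2) / 2 = 0.625

# Matthews Correlation Coefficient (only defined for two classes).
TP <- confusion[1, 1]; FN <- confusion[2, 1]
FP <- confusion[1, 2]; TN <- confusion[2, 2]
MCC <- (TP * TN - FP * FN) /
  sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))   # 0.25
```

With real ClassifyR objects, the same values would come from calcExternalPerformance rather than being computed by hand.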
Value
If calcCVperformance was run, an updated ClassifyResult object, with new metric values in the performance slot. If calcExternalPerformance was run, the performance metric value itself.
Details
All metrics except Matthews Correlation Coefficient are suitable for evaluating classification scenarios with more than two classes and are reimplementations of those available from Intel DAAL.
If crossValidate, runTests or runTest was run in resampling mode, one performance measure is produced for every resampling. Otherwise, if the leave-k-out mode was used, the predictions are concatenated and one performance measure is calculated for all classifications.
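The difference between the two modes can be sketched in base R with plain error rates. The data and the five-resampling split below are made up for illustration; this is not ClassifyR's internal code.

```r
set.seed(42)
actual     <- factor(rep(c("A", "B"), each = 10))
resampling <- rep(1:5, times = 4)   # which of five resamplings each sample is in
predicted  <- factor(sample(levels(actual), 20, replace = TRUE))

# Resampling mode: one error rate per resampling, later summarised
# (e.g. by median) into the reported performance value.
perResamplingError <- sapply(split(seq_along(actual), resampling),
                             function(idx) mean(actual[idx] != predicted[idx]))

# Leave-k-out mode: all predictions are concatenated first,
# so a single error rate covers every classification.
overallError <- mean(actual != predicted)
```

Because the resamplings here are equally sized, the mean of the per-resampling errors equals the pooled error; with the default median aggregation the two modes can report different values.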
"Balanced Error" calculates the balanced error rate and is better suited to class-imbalanced data sets than the ordinary error rate specified by "Error". "Sample Error" calculates the error rate of each sample individually. This may help to identify which samples contribute the most to the overall error rate, so that they can be checked for confounding factors. Precision, recall and F1 score have micro and macro summary versions. The macro versions are preferable because the metric will not have a good score if there is substantial class imbalance and the classifier predicts all samples as belonging to the majority class.
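This preference can be seen in a small base-R sketch (illustrative only, following the definitions given in the Arguments section) where a classifier predicts every sample as the majority class:

```r
actual    <- factor(c(rep("A", 9), "B"))            # 9:1 class imbalance
predicted <- factor(rep("A", 10), levels = levels(actual))
confusion <- table(actual, predicted)               # rows: actual, columns: predicted

correct        <- diag(confusion)                   # correct predictions per class
classSizes     <- rowSums(confusion)
predictedSizes <- colSums(confusion)

# Micro versions pool the counts, so the majority class dominates.
microF1 <- sum(correct) / sum(classSizes)           # precision = recall = F1 = 0.9

# Macro versions average per-class ratios, exposing the ignored minority class.
macroPrecision <- mean(correct / classSizes)        # (9/9 + 0/1) / 2 = 0.5
macroRecall    <- mean(ifelse(predictedSizes > 0, correct / predictedSizes, 0))
macroF1        <- 2 * macroPrecision * macroRecall /
                    (macroPrecision + macroRecall)  # about 0.47, much lower
```

The micro F1 of 0.9 looks respectable even though the minority class is never predicted, whereas the macro F1 is roughly halved, which is why the macro versions are recommended for imbalanced data.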
Examples
predictTable <- DataFrame(sample = paste("A", 1:10, sep = ''),
                          class = factor(sample(LETTERS[1:2], 50, replace = TRUE)))
actual <- factor(sample(LETTERS[1:2], 10, replace = TRUE))
result <- ClassifyResult(DataFrame(characteristic = "Data Set", value = "Example"),
                         paste("A", 1:10, sep = ''), paste("Gene", 1:50),
                         list(paste("Gene", 1:50), paste("Gene", 1:50)),
                         list(paste("Gene", 1:5), paste("Gene", 1:10)),
                         list(function(oracle){}), NULL, predictTable, actual)
result <- calcCVperformance(result)
performance(result)
result <- calcCVperformance(result)
performance(result)
#> $`Balanced Accuracy`
#> 1
#> 0.4380952
#>