BenchMetrics: A Systematic Benchmarking Method for Binary-Classification Performance Metrics

Gürol Canbek
This paper proposes a systematic benchmarking method called BenchMetrics to analyze and compare the robustness of binary-classification performance metrics derived from the confusion matrix of a crisp classifier. BenchMetrics, which introduces new concepts such as meta-metrics (metrics about metrics) and metric-space, has been tested on fifteen well-known metrics, including Balanced Accuracy, Normalized Mutual Information, Cohen's Kappa, and Matthews Correlation Coefficient (MCC), along with two recently proposed metrics, Optimized Precision and Index of Balanced Accuracy, in the...
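As a minimal illustration of two of the metrics named above (a sketch for orientation only, not part of the paper's BenchMetrics method), both Balanced Accuracy and MCC can be computed directly from the four confusion-matrix counts; the example counts below are hypothetical:

```python
import math

def balanced_accuracy(tp, fn, fp, tn):
    """Mean of the true-positive rate (sensitivity) and true-negative rate (specificity)."""
    tpr = tp / (tp + fn)  # sensitivity
    tnr = tn / (tn + fp)  # specificity
    return (tpr + tnr) / 2

def mcc(tp, fn, fp, tn):
    """Matthews Correlation Coefficient; ranges from -1 (total disagreement) to +1 (perfect prediction)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # convention: 0 when any marginal is empty

# Hypothetical confusion matrix: 50 TP, 10 FN, 5 FP, 35 TN
print(balanced_accuracy(50, 10, 5, 35))
print(mcc(50, 10, 5, 35))
```

Unlike plain accuracy, both metrics use all four cells of the confusion matrix, which is why they are common choices when class distributions are imbalanced.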