Universitätsbibliothek Heidelberg
Status: Bibliography entry

Availability
Location: ---
Copies: ---
heiBIB
 Online resource
Authors: Dimitriadis, Timo [Author]
 Gneiting, Tilmann [Author]
 Jordan, Alexander I. [Author]
 Vogel, Peter [Author]
Title: Evaluating probabilistic classifiers
Subtitle: the triptych
Statement of responsibility: Timo Dimitriadis, Tilmann Gneiting, Alexander I. Jordan, Peter Vogel
Electronic publication year: 2024
Year: July-September 2024
Extent: 22 pages
Illustrations: Illustrations
Notes: Available online: 4 November 2023, version of record: 31 May 2024
Source title: Contained in: International journal of forecasting
Source place: Amsterdam [etc.]: Elsevier Science, 1985
Source year: 2024
Source volume/issue: 40(2024), 3, July/Sept., pages 1101-1122
Source ISSN: 0169-2070
Abstract:Probability forecasts for binary outcomes, often referred to as probabilistic classifiers or confidence scores, are ubiquitous in science and society, and methods for evaluating and comparing them are in great demand. We propose and study a triptych of diagnostic graphics focusing on distinct and complementary aspects of forecast performance: Reliability curves address calibration, receiver operating characteristic (ROC) curves diagnose discrimination ability, and Murphy curves visualize overall predictive performance and value. A Murphy curve shows a forecast’s mean elementary scores, including the widely used misclassification rate, and the area under a Murphy curve equals the mean Brier score. For a calibrated forecast, the reliability curve lies on the diagonal, and for competing calibrated forecasts, the ROC and Murphy curves share the same number of crossing points. We invoke the recently developed CORP (Consistent, Optimally binned, Reproducible, and Pool-Adjacent-Violators (PAV) algorithm-based) approach to craft reliability curves and decompose a mean score into miscalibration (MCB), discrimination (DSC), and uncertainty (UNC) components. Plots of the DSC measure of discrimination ability versus the calibration metric MCB visualize classifier performance across multiple competitors. The proposed tools are illustrated in empirical examples from astrophysics, economics, and social science.
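For readers who want to connect the abstract's terminology to a computation, here is a minimal sketch of the CORP score decomposition (mean score = MCB - DSC + UNC) under the Brier score, with the Pool-Adjacent-Violators step realised via scikit-learn's IsotonicRegression. Function and variable names are illustrative assumptions, not the authors' reference implementation.

import numpy as np
from sklearn.isotonic import IsotonicRegression

def corp_decomposition(x, y):
    # Decompose the mean Brier score of probability forecasts x for binary
    # outcomes y into miscalibration (MCB), discrimination (DSC), and
    # uncertainty (UNC) components, so that mean score = MCB - DSC + UNC.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    brier = lambda p: np.mean((p - y) ** 2)  # mean Brier score of forecast p
    # PAV-recalibrated forecast: isotonic regression of outcomes on forecasts.
    pav = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    x_hat = pav.fit(x, y).predict(x)
    s = brier(x)                              # mean score of original forecast
    s_rc = brier(x_hat)                       # mean score after recalibration
    s_mg = brier(np.full_like(y, y.mean()))   # constant climatological forecast
    return {"score": s, "MCB": s - s_rc, "DSC": s_mg - s_rc, "UNC": s_mg}

# Toy check with a systematically shifted (hence miscalibrated) forecast.
rng = np.random.default_rng(1)
p = rng.uniform(size=1000)                    # true event probabilities
y = rng.binomial(1, p)                        # binary outcomes
print(corp_decomposition(np.clip(p + 0.1, 0.0, 1.0), y))

By construction the identity score = MCB - DSC + UNC holds exactly, and for the Brier score MCB and DSC are nonnegative, which is what makes the MCB-DSC plots mentioned in the abstract a meaningful device for comparing classifiers.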
DOI:doi:10.1016/j.ijforecast.2023.09.007
URL: Please note: This is a bibliography entry. Full-text access for members of the university is available only if there is a subscription to the corresponding journal/edited volume or the title is open access.

Free of charge: Publisher: https://www.sciencedirect.com/science/article/pii/S0169207023000997/pdfft?md5=bd26faa9dd0165399770a39be8802f6a&pid=1-s2. ...
 Free of charge: Resolving system: https://doi.org/10.1016/j.ijforecast.2023.09.007
 DOI: https://doi.org/10.1016/j.ijforecast.2023.09.007
Medium: Online resource
Language: eng
Subject headings: Calibration error
 Economic utility
 Logarithmic score
 MCB-DSC plot
 Misclassification loss
 Proper scoring rule
 Score decomposition
 Sharpness principle
Form heading: Journal article
K10plus-PPN:1891212710
Links: → Journal

Permanent link to this title (bookmarkable): https://katalog.ub.uni-heidelberg.de/titel/69275774