Universitätsbibliothek Heidelberg
Status: Bibliographic record

Availability
Location: ---
Copies: ---
heiBIB
 Online resource
Authors: Pasquato, Mario [author]
 Trevisan, Piero [author]
 Askar, Abbas [author]
 Lemos, Pablo [author]
 Carenini, Gaia [author]
 Mapelli, Michela [author]
 Hezaveh, Yashar [author]
Title: Interpretable machine learning for finding intermediate-mass black holes
Statement of responsibility: Mario Pasquato, Piero Trevisan, Abbas Askar, Pablo Lemos, Gaia Carenini, Michela Mapelli, and Yashar Hezaveh
E-year: 2024
Year: 2024 April 10
Extent: 15 pages
Illustrations: illustrations
Notes: Viewed on 2024-09-23
Source title: Contained in: The astrophysical journal
Source place: London : Institute of Physics Publ., 1995
Source year: 2024
Source volume/issue: 965 (2024), 1, article ID 89, pages 1-15
Source ISSN: 1538-4357
Abstract: Definitive evidence that globular clusters (GCs) host intermediate-mass black holes (IMBHs) is elusive. Machine-learning (ML) models trained on GC simulations can in principle predict IMBH host candidates based on observable features. This approach has two limitations: first, an accurate ML model is expected to be a black box due to its complexity; second, despite our efforts to simulate GCs realistically, the simulation physics or initial conditions may fail to reflect reality fully. Therefore, our training data may be biased, leading to a failure in generalization to observational data. Both the first issue (explainability/interpretability) and the second (out-of-distribution generalization and fairness) are active areas of research in ML. Here we employ techniques from these fields to address them: we use the anchors method to explain an Extreme Gradient Boosting (XGBoost) classifier; we also independently train a natively interpretable model using Certifiably Optimal RulE ListS (CORELS). The resulting model has a clear physical meaning but loses some performance with respect to XGBoost. We evaluate potential candidates in real data based not only on classifier predictions but also on their similarity to the training data, measured by the likelihood of a kernel density estimation model. This measures the realism of our simulated data and mitigates the risk that our models may produce biased predictions by working in extrapolation. We apply our classifiers to real GCs, obtaining a predicted classification, a measure of the confidence of the prediction, an out-of-distribution flag, a local rule explaining the prediction of XGBoost, and a global rule from CORELS.
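The following is a minimal Python sketch, not the authors' code, of the pipeline the abstract describes: an XGBoost classifier trained on simulated clusters, plus a kernel density estimate over the training features whose likelihood serves as the out-of-distribution flag for real clusters. The synthetic data, feature count, hyperparameters, and the 1st-percentile likelihood cut below are illustrative assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KernelDensity
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

# Stand-in for simulated globular-cluster features and IMBH-host labels
# (the real features would come from the GC simulations).
X_sim, y_sim = make_classification(n_samples=2000, n_features=6, random_state=0)

scaler = StandardScaler().fit(X_sim)
X_sim_s = scaler.transform(X_sim)

# Classifier trained on simulations (assumed hyperparameters).
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_sim_s, y_sim)

# KDE on the training features: a low log-likelihood means the observed
# cluster looks unlike the training set, so the prediction is extrapolation.
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X_sim_s)
ood_threshold = np.percentile(kde.score_samples(X_sim_s), 1)  # assumed cut

def classify_cluster(x_obs):
    """Return (IMBH-host probability, out-of-distribution flag) for one cluster."""
    x = scaler.transform(np.atleast_2d(x_obs))
    prob = clf.predict_proba(x)[0, 1]
    is_ood = kde.score_samples(x)[0] < ood_threshold
    return prob, bool(is_ood)

# Example: score one (here, simulated) cluster.
print(classify_cluster(X_sim[0]))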
DOI: 10.3847/1538-4357/ad2261
URL: Please note: this is a bibliographic record. Full-text access for members of the university is available only if there is a subscription to the corresponding journal/collected volume or the title is open access.

Free of charge: Full text: https://doi.org/10.3847/1538-4357/ad2261
 Free of charge: Full text: https://www.webofscience.com/api/gateway?GWVersion=2&SrcAuth=DOISource&SrcApp=WOS&KeyAID=10.3847%2F1538-4357%2Fad2261&De ...
 DOI: https://doi.org/10.3847/1538-4357/ad2261
Medium: Online resource
Language: English (eng)
Subject keywords: ASTROPHYSICAL IMPLICATIONS
 GLOBULAR-CLUSTERS
 HIGH-STAKES DECISIONS
 MILKY-WAY
 MOCCA CODE
 MONTE-CARLO SIMULATIONS
 NO EVIDENCE
 RUNAWAY COLLISIONS
 STAR CLUSTER SIMULATIONS
 SURVEY DATABASE I
K10plus PPN: 1903189942
Links: → Journal

Permanent link to this title (bookmarkable): https://katalog.ub.uni-heidelberg.de/titel/69255316