Universitätsbibliothek Heidelberg
Status: Bibliographic record

Availability
Location: ---
Copies: ---
heiBIB
Online resource
Author: D'Isanto, Antonio [author]
Title: Return of the features
Subtitle: Efficient feature selection and interpretation for photometric redshifts
Statement of responsibility: A. D'Isanto, S. Cavuoti, F. Gieseke, and K. L. Polsterer
Notes: Viewed on 27.02.2019
Source title: Contained in: Astronomy and Astrophysics
Source year: 2018
Source volume/issue: 616 (2018), article number A97, 21 pages
Source ISSN: 1432-0746
Abstract: <i>Context.</i> The explosion of data in recent years has generated an increasing need for new analysis techniques in order to extract knowledge from massive data sets. Machine learning has proved particularly useful for this task. Fully automated methods (e.g. deep neural networks) have recently gained great popularity, even though such methods often lack physical interpretability. In contrast, feature-based approaches can provide both well-performing models and understandable causalities with respect to the correlations found between features and physical processes. <i>Aims.</i> Efficient feature selection is an essential tool for boosting the performance of machine learning models. In this work, we propose a forward selection method to compute, evaluate, and characterize better-performing features for regression and classification problems. Given the importance of photometric redshift estimation, we adopt it as our case study. <i>Methods.</i> We synthetically created 4520 features by combining magnitudes, errors, radii, and ellipticities of quasars taken from the Sloan Digital Sky Survey (SDSS). We apply a forward selection process, a recursive method in which a huge number of feature sets is tested through a k-nearest-neighbours algorithm, leading to a tree of feature sets. The branches of the feature tree are then used to perform experiments with the random forest, in order to validate the best set with an alternative model. <i>Results.</i> We demonstrate that the feature sets determined with our approach significantly improve the performance of the regression models compared to the classic features from the literature. The found features are unexpected and surprising, being very different from the classic features. Therefore, a method to interpret some of the found features in a physical context is presented. <i>Conclusions.</i> The feature selection methodology described here is very general and can be used to improve the performance of machine learning models for any regression or classification task.
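The core procedure named in the abstract (greedy forward selection, scored with a k-nearest-neighbours regressor) can be sketched in a few lines. This is a generic illustration under assumed conventions, not the authors' code: the function names, the train/test split strategy, and the toy data are invented for the example.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """k-nearest-neighbours regression: predict the mean target of the
    k training points closest (Euclidean distance) to each test point."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return y_train[nearest].mean(axis=1)

def forward_select(X, y, n_keep=2, k=5, seed=0):
    """Greedy forward selection: at each step, add the single feature that
    most reduces the kNN root-mean-square error on a held-out split."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y))
    half = len(y) // 2
    train, test = order[:half], order[half:]
    selected = []
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < n_keep:
        best_feature, best_rmse = None, np.inf
        for f in remaining:
            cols = selected + [f]
            pred = knn_predict(X[train][:, cols], y[train],
                               X[test][:, cols], k)
            rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
            if rmse < best_rmse:
                best_feature, best_rmse = f, rmse
        selected.append(best_feature)
        remaining.remove(best_feature)
    return selected

# Toy demo: the target depends only on columns 0 and 2 of six noisy features,
# so forward selection should recover exactly those two columns.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = X[:, 0] + 2.0 * X[:, 2] + rng.normal(scale=0.05, size=200)
selected = forward_select(X, y, n_keep=2, k=5)
```

The paper's actual pipeline is far larger (4520 candidate features, a tree of feature sets, and random-forest validation of the best branches); this sketch only shows the inner greedy loop that grows one branch.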
DOI: 10.1051/0004-6361/201833103
URL: Please note: this is a bibliographic record. Full-text access for members of the university is available here only if a subscription exists for the corresponding journal/collected volume or if it is an open-access title.

Publisher: http://dx.doi.org/10.1051/0004-6361/201833103
DOI: https://doi.org/10.1051/0004-6361/201833103
Medium: Online resource
Language: eng
K10plus-PPN: 1588165914
Links: → Journal

Permanent link to this record (bookmarkable): https://katalog.ub.uni-heidelberg.de/titel/68363864