Universitätsbibliothek Heidelberg
Status: Bibliographic record

Availability
Location: ---
Copies: ---
heiBIB
 Online resource
Authors: Durstewitz, Daniel [author]
 Koppe, Georgia [author]
 Thurm, Max [author]
Title: Reconstructing computational system dynamics from neural data with recurrent neural networks
Statement of responsibility: Daniel Durstewitz, Georgia Koppe & Max Ingo Thurm
Electronic publication year: 2023
Year: November 2023
Extent: 18 pages
Illustrations: illustrations
Notes: Published 4 October 2023; accessed 20 November 2023
Source title: Contained in: Nature reviews. Neuroscience
Source place: London: Nature Publ. Group, 2000
Source year: 2023
Source volume/issue: 24 (2023), no. 11, November, pages 693-710
Source ISSN: 1471-0048
Abstract:Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
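
The abstract describes training RNNs directly on measured time series so that the trained network becomes a simulable surrogate of the underlying dynamical system. As a rough illustration of that general idea only (not the authors' specific method or architecture), the following minimal Python/PyTorch sketch trains a small RNN by next-step prediction on a toy Lorenz trajectory standing in for measured data and then lets it run autonomously; all names, architecture choices and hyperparameters here are illustrative assumptions.

    # Minimal sketch of RNN-based dynamical system reconstruction (illustrative only):
    # fit a small RNN to predict the next observed state, then run it autonomously
    # as a surrogate of the underlying dynamics. A Lorenz trajectory stands in for
    # measured neural data; all hyperparameters are arbitrary assumptions.
    import numpy as np
    import torch
    import torch.nn as nn

    def lorenz_trajectory(n_steps=1500, dt=0.01, sigma=10.0, rho=28.0, beta=8/3):
        """Euler-integrate the Lorenz equations as a toy 'measured' time series."""
        x = np.empty((n_steps, 3), dtype=np.float32)
        x[0] = (1.0, 1.0, 1.0)
        for t in range(n_steps - 1):
            dx = sigma * (x[t, 1] - x[t, 0])
            dy = x[t, 0] * (rho - x[t, 2]) - x[t, 1]
            dz = x[t, 0] * x[t, 1] - beta * x[t, 2]
            x[t + 1] = x[t] + dt * np.array([dx, dy, dz], dtype=np.float32)
        return (x - x.mean(0)) / x.std(0)   # z-score the observations

    class RNNSurrogate(nn.Module):
        """RNN hidden state carries the latent dynamics; a linear readout maps to observations."""
        def __init__(self, obs_dim=3, hidden_dim=64):
            super().__init__()
            self.rnn = nn.RNN(obs_dim, hidden_dim, batch_first=True)
            self.readout = nn.Linear(hidden_dim, obs_dim)

        def forward(self, x, h0=None):
            h, hT = self.rnn(x, h0)
            return self.readout(h), hT

    data = torch.from_numpy(lorenz_trajectory())             # shape (T, 3)
    inputs, targets = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)

    model = RNNSurrogate()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(100):
        # Teacher forcing: feed the observed states, predict the next state.
        opt.zero_grad()
        pred, _ = model(inputs)
        loss = nn.functional.mse_loss(pred, targets)
        loss.backward()
        opt.step()

    # Autonomous generation: feed the model's own predictions back in and let it
    # evolve freely -- this trajectory comes from the reconstructed surrogate system.
    with torch.no_grad():
        x_t, h_t = data[:1].reshape(1, 1, 3), None
        generated = []
        for _ in range(1000):
            x_t, h_t = model(x_t, h_t)
            generated.append(x_t.squeeze().numpy())
    print("generated trajectory shape:", np.array(generated).shape)

In a reconstruction setting one would then compare the geometry and temporal statistics of the generated trajectory with the measured data rather than only the one-step prediction error, as the abstract's discussion of evaluation and validation suggests.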
DOI: 10.1038/s41583-023-00740-7
URL: Please note: this is a bibliographic record. Full-text access for university members is available only if there is a subscription for the corresponding journal or edited volume, or if the title is open access.

Full text: https://doi.org/10.1038/s41583-023-00740-7
 Full text: https://www.nature.com/articles/s41583-023-00740-7
Medium: Online resource
Language: English
Subject headings: Dynamical systems
 Learning algorithms
K10plus-PPN: 1870652924
Links: → Journal

Permanent link to this record (bookmarkable): https://katalog.ub.uni-heidelberg.de/titel/69144316