Universitätsbibliothek Heidelberg
Status: Bibliographic record
Location: ---
Copies: ---
heiBIB
 Online resource
Authors: Rombach, Robin [author]
 Esser, Patrick [author]
 Ommer, Björn [author]
Title: Network-to-network translation with conditional invertible neural networks
Statement of responsibility: Robin Rombach, Patrick Esser, Björn Ommer
Edition: Version v2
E-year: 2020
Year: 9 Nov 2020
Extent: 24 pages
Notes: Version 1 of 27 May 2020, version 2 of 9 November 2020; viewed on 6 October 2022
Source title: Contained in: De.arxiv.org
Source place: [S.l.] : Arxiv.org, 1991
Source year: 2020
Source volume/issue: (2020), article ID 2005.13580, pages 1-24
Abstract: Given the ever-increasing computational costs of modern machine learning models, we need to find new ways to reuse such expert models and thus tap into the resources that have been invested in their creation. Recent work suggests that the power of these massive models is captured by the representations they learn. Therefore, we seek a model that can relate different existing representations and propose to solve this task with a conditionally invertible network. This network demonstrates its capability by (i) providing generic transfer between diverse domains, (ii) enabling controlled content synthesis by allowing modification in other domains, and (iii) facilitating diagnosis of existing representations by translating them into interpretable domains such as images. Our domain transfer network can translate between fixed representations without having to learn or finetune them. This allows users to utilize various existing domain-specific expert models from the literature that had been trained with extensive computational resources. Experiments on diverse conditional image synthesis tasks, competitive image modification results and experiments on image-to-image and text-to-image generation demonstrate the generic applicability of our approach. For example, we translate between BERT and BigGAN, state-of-the-art text and image models, to provide text-to-image generation, which neither expert can perform on its own.
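The abstract describes translating between fixed representations with a conditionally invertible network. A minimal, hypothetical sketch of that idea is shown below: affine coupling layers whose scale and shift are predicted from a conditioning representation, so a target code can be mapped to a latent and inverted exactly without retraining either expert model. All names (ConditionalCoupling, ConditionalINN, dim, cond_dim) are illustrative assumptions, not the authors' reference implementation.

```python
# Illustrative sketch only: a conditional invertible network (cINN) built from
# affine coupling layers, conditioned on a fixed "source" representation.
import torch
import torch.nn as nn


class ConditionalCoupling(nn.Module):
    """Affine coupling layer whose scale/shift depend on a conditioning vector."""

    def __init__(self, dim, cond_dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        # Subnetwork predicts scale and shift for the second half of z
        # from the first half concatenated with the condition.
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z, cond, reverse=False):
        z1, z2 = z[:, : self.half], z[:, self.half :]
        params = self.net(torch.cat([z1, cond], dim=1))
        log_s, t = params.chunk(2, dim=1)
        log_s = torch.tanh(log_s)  # bound the scaling for numerical stability
        if not reverse:
            z2 = z2 * torch.exp(log_s) + t
        else:
            z2 = (z2 - t) * torch.exp(-log_s)
        return torch.cat([z1, z2], dim=1)


class ConditionalINN(nn.Module):
    """Stack of coupling layers with fixed channel permutations in between."""

    def __init__(self, dim, cond_dim, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            [ConditionalCoupling(dim, cond_dim) for _ in range(n_blocks)]
        )
        # Fixed random permutations (chosen at construction) mix dimensions
        # between coupling layers while keeping the transform invertible.
        self.perms = [torch.randperm(dim) for _ in range(n_blocks)]

    def forward(self, z, cond, reverse=False):
        pairs = list(zip(self.blocks, self.perms))
        if reverse:
            for block, perm in reversed(pairs):
                inv = torch.argsort(perm)          # undo the permutation
                z = block(z[:, inv], cond, reverse=True)
            return z
        for block, perm in pairs:
            z = block(z, cond)[:, perm]
        return z


if __name__ == "__main__":
    # Toy usage: encode a 128-dim "target" code conditioned on a 64-dim
    # "source" representation (e.g. a frozen text embedding), then invert.
    model = ConditionalINN(dim=128, cond_dim=64)
    target_code = torch.randn(8, 128)
    source_repr = torch.randn(8, 64)
    z = model(target_code, source_repr)             # forward pass
    recon = model(z, source_repr, reverse=True)     # exact inverse
    print(torch.allclose(recon, target_code, atol=1e-4))  # True
```

In this sketch the conditioning network is never inverted, only the coupling layers are, which is what allows translation between representations that themselves stay fixed.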
DOI: 10.48550/arXiv.2005.13580
URL: Please note: This is a bibliographic record. Full-text access for members of the university is available only if a subscription exists for the corresponding journal/collected volume or the title is open access.

Full text; publisher: https://doi.org/10.48550/arXiv.2005.13580
 Full text: http://arxiv.org/abs/2005.13580
 DOI: https://doi.org/10.48550/arXiv.2005.13580
Carrier: Online resource
Language: eng
Subject headings: Computer Science - Computer Vision and Pattern Recognition
 Computer Science - Machine Learning
K10plus-PPN:1818127288
Links: → collected work

Permanent link to this record (bookmarkable): https://katalog.ub.uni-heidelberg.de/titel/68970893