Universitätsbibliothek Heidelberg
Status: Bibliographic record

Availability
Location: ---
Copies: ---
heiBIB
 Online resource
Authors:Hong, Danfeng [author]
 Zhang, Bing [author]
 Li, Hao [author]
 Li, Yuxuan [author]
 Yao, Jing [author]
 Li, Chenyu [author]
 Werner, Martin [author]
 Chanussot, Jocelyn [author]
 Zipf, Alexander [author]
 Zhu, Xiao Xiang [author]
Title:Cross-city matters
Subtitle:a multimodal remote sensing benchmark dataset for cross-city semantic segmentation using high-resolution domain adaptation networks
Statement of responsibility:Danfeng Hong, Bing Zhang, Hao Li, Yuxuan Li, Jing Yao, Chenyu Li, Martin Werner, Jocelyn Chanussot, Alexander Zipf, Xiao Xiang Zhu
E-year:2023
Year:15 December 2023
Extent:17 pages
Illustrations:Illustrations
Notes:Available online 30 October 2023, version of record 30 October 2023; viewed on 24.01.2024
Source title:Contained in: Remote sensing of environment
Source place:Amsterdam [et al.] : Elsevier Science, 1969
Source year:2023
Source volume/issue:299 (2023), Dec., article ID 113856, pages 1-17
Source ISSN:1879-0704
Abstract:Artificial intelligence (AI) approaches have achieved remarkable success in single-modality-dominated remote sensing (RS) applications, especially those focused on individual urban environments (e.g., single cities or regions). Yet these AI models tend to hit a performance bottleneck in case studies across cities or regions, owing to the lack of diverse RS information and of cutting-edge solutions with high generalization ability. To this end, we build a new set of multimodal remote sensing benchmark datasets (including hyperspectral, multispectral, and SAR data) for the cross-city semantic segmentation task (called the C2Seg dataset), which consists of two cross-city scenes, i.e., Berlin-Augsburg (in Germany) and Beijing-Wuhan (in China). Going beyond the single city, we propose a high-resolution domain adaptation network, HighDAN for short, to promote the AI model's generalization ability across multi-city environments. HighDAN not only retains the spatially topological structure of the studied urban scene well through parallel high-to-low resolution fusion, but also closes the gap arising from the enormous differences in RS image representations between cities by means of adversarial learning. In addition, the Dice loss is used in HighDAN to alleviate the class imbalance caused by cross-city factors. Extensive experiments on the C2Seg dataset show the superiority of our HighDAN in terms of segmentation performance and generalization ability compared to state-of-the-art competitors. The C2Seg dataset and the semantic segmentation toolbox (including the proposed HighDAN) will be made publicly available at https://github.com/danfenghong/RSE_Cross-city.
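The abstract mentions using the Dice loss to alleviate class imbalance. As a minimal sketch of the general idea (a soft Dice loss on per-class probabilities; the function name, shapes, and smoothing term are illustrative assumptions, not taken from the article):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss.

    pred:   predicted class probabilities, same shape as target
    target: one-hot ground-truth labels
    eps:    smoothing term to avoid division by zero (illustrative choice)
    """
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    # Dice coefficient is 1 for a perfect match, 0 for no overlap;
    # the loss is its complement.
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

# A perfect prediction yields a loss near 0; a fully wrong one, near 1.
target = np.array([[1.0, 0.0], [0.0, 1.0]])
print(soft_dice_loss(target, target))      # near 0.0
print(soft_dice_loss(1.0 - target, target))  # near 1.0
```

Because the loss is driven by the overlap ratio rather than per-pixel counts, rare classes are not drowned out by dominant ones, which is why it is a common remedy for class imbalance in segmentation.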
DOI:doi:10.1016/j.rse.2023.113856
URL:Please note: this is a bibliographic record. Full-text access for members of the university is available only if the corresponding journal/collected volume is covered by a subscription or the title is open access.

Full text: https://doi.org/10.1016/j.rse.2023.113856
 Full text: https://www.sciencedirect.com/science/article/pii/S0034425723004078
 DOI: https://doi.org/10.1016/j.rse.2023.113856
Carrier:Online resource
Language:eng
Subject headings:Cross-city
 Deep learning
 Dice loss
 Domain adaptation
 High-resolution network
 Land cover
 Multimodal benchmark datasets
 Remote sensing
 Segmentation
K10plus-PPN:1878862448
Links:→ Journal

Permanent link to this title (bookmarkable):  https://katalog.ub.uni-heidelberg.de/titel/69165079