Universitätsbibliothek Heidelberg
Status: Bibliographic record

Availability
Location: ---
Copies: ---
heiBIB
 Online resource
Authors: Kades, Klaus [Author]
 Sellner, Jan [Author]
 Köhler, Gregor [Author]
 Full, Peter M. [Author]
 Lai, T. Y. Emmy [Author]
 Kleesiek, Jens Philipp [Author]
 Maier-Hein, Klaus H. [Author]
Title: Adapting Bidirectional Encoder Representations from Transformers (BERT) to assess clinical semantic textual similarity
Subtitle: algorithm development and validation study
Statement of responsibility: Klaus Kades, MSc; Jan Sellner, MSc; Gregor Koehler, MSc; Peter M Full, BSc; TY Emmy Lai, MSc; Jens Kleesiek, MD, PhD; Klaus H Maier-Hein, PhD
Year (electronic): 2021
Date: 3 February 2021
Extent: 13 pages
Notes: Viewed on 21 July 2021
Source title: Contained in: JMIR Medical Informatics
Source place: Toronto : [publisher not identifiable], 2013
Source year: 2021
Source volume/issue: 9(2021), 2, article ID e22795, pages 1-13
Source ISSN: 2291-9694
Abstract:
Background: Natural language understanding enables the automatic extraction of relevant information from clinical text data, which are acquired every day in hospitals. In 2018, the language model Bidirectional Encoder Representations from Transformers (BERT) was introduced, generating new state-of-the-art results on several downstream tasks. The National NLP Clinical Challenges (n2c2) is an initiative that strives to tackle such downstream tasks on domain-specific clinical data. In this paper, we present the results of our participation in the 2019 n2c2 and related work completed thereafter.
Objective: The objective of this study was to optimally leverage BERT for the task of assessing the semantic textual similarity of clinical text data.
Methods: We used BERT as an initial baseline and analyzed the results, which we used as a starting point to develop 3 different approaches where we (1) added additional, handcrafted sentence similarity features to the classifier token of BERT and combined the results with more features in multiple regression estimators, (2) incorporated a built-in ensembling method, M-Heads, into BERT by duplicating the regression head and applying an adapted training strategy to facilitate the focus of the heads on different input patterns of the medical sentences, and (3) developed a graph-based similarity approach for medications, which allows extrapolating similarities across known entities from the training set. The approaches were evaluated with the Pearson correlation coefficient between the predicted scores and the ground truth of the official training and test datasets.
Results: We improved the performance of BERT on the test dataset from a Pearson correlation coefficient of 0.859 to 0.883 using a combination of the M-Heads method and the graph-based similarity approach. We also show differences between the test and training datasets and how the two datasets influenced the results.
Conclusions: We found that using a graph-based similarity approach has the potential to extrapolate domain-specific knowledge to unseen sentences. We observed that it is easily possible to obtain deceptive results from the test dataset, especially when the distribution of the data samples differs between the training and test datasets.
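The abstract's evaluation metric, the Pearson correlation coefficient between predicted similarity scores and ground-truth annotations, can be sketched as follows. This is a minimal illustration of the metric only, not the authors' code; the score lists and the 0-5 similarity scale are hypothetical examples.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance and standard deviations, each computed around the mean.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical predicted vs. ground-truth similarity scores on a 0-5 scale,
# as commonly used in clinical semantic textual similarity tasks:
predicted = [4.2, 1.1, 3.5, 0.4, 2.8]
ground_truth = [4.0, 1.5, 3.0, 0.5, 3.2]
print(round(pearson(predicted, ground_truth), 3))
```

A value of 1.0 indicates perfectly linearly correlated predictions; the paper reports an improvement from 0.859 to 0.883 on the official test dataset.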
DOI: 10.2196/22795
URL: Please note: This is a bibliographic record. Full-text access is available to members of the university only if a subscription exists for the corresponding journal or edited volume, or if the title is open access.

Full text: https://doi.org/10.2196/22795
 Full text: https://medinform.jmir.org/2021/2/e22795
 DOI: https://doi.org/10.2196/22795
Medium: Online resource
Language: eng
K10plus-PPN: 1754996647
Links: → Journal

Permanent link to this record (bookmarkable): https://katalog.ub.uni-heidelberg.de/titel/68724050