Universitätsbibliothek Heidelberg
Location: ---
Copies: ---
Authors: Ziebart, Andreas [author]
 Stadniczuk, Denis [author]
 Roos, Veronika [author]
 Ratliff, Miriam [author]
 Deimling, Andreas von [author]
 Hänggi, Daniel [author]
 Enders, Frederik [author]
Title: Deep neural network for differentiation of brain tumor tissue displayed by confocal laser endomicroscopy
Statement of responsibility: Andreas Ziebart, Denis Stadniczuk, Veronika Roos, Miriam Ratliff, Andreas von Deimling, Daniel Hänggi and Frederik Enders
Year: 11 May 2021
Extent: 10 pages
Notes: Viewed on 27.05.2021
Source title: Contained in: Frontiers in Oncology
Source place: Lausanne : Frontiers Media, 2011
Source year: 2021
Source volume/issue: 11 (2021), 11 May, article ID 668273, pages 1-10
Source ISSN: 2234-943X
Abstract: Background: Reliable on-site classification of resected tumor specimens remains a challenge. Implementation of high-resolution confocal laser endomicroscopy (CLE) during fluorescence-guided brain tumor surgery is a new tool for intraoperative visualization of tumor tissue. To overcome observer-dependent errors, we aimed to predict tumor type by applying a deep learning model to image data obtained by CLE. Methods: Human brain tumor specimens from 25 patients with brain metastases, glioblastoma, and meningioma were evaluated in this study. In addition to routine histopathological analysis, tissue samples were stained with fluorescein ex vivo and analyzed with CLE. We trained two convolutional neural networks and built a prediction level for the outputs. Results: Multiple CLE images were obtained from each specimen, for a total of 13,972 fluorescein-based images. A test accuracy of 90.9% was achieved for two-class prediction of glioblastoma versus brain metastases, with an area under the curve (AUC) of 0.92. For three-class prediction, our model achieved a correct-label ratio of 85.8% on the test set, which was confirmed by five-fold cross-validation, without a confidence threshold. Applying a confidence rate of 0.999 increased prediction accuracy to 98.6% when images with substantial artifacts were excluded before the analysis; 36.3% of all images met the output criteria. Conclusions: We trained a residual network model that allows automated, on-site analysis of resected tumor specimens based on CLE image datasets. Further in vivo studies are required to assess the clinical benefit CLE can have.
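The confidence gating described in the abstract (a prediction is accepted only when the network's top class probability reaches 0.999; all other images abstain) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the toy softmax outputs, and the class ordering are assumptions.

```python
import numpy as np

def confident_predictions(probs, threshold=0.999):
    """Return the indices of samples whose top softmax probability
    meets the confidence threshold, together with their predicted
    class labels; samples below the threshold abstain."""
    probs = np.asarray(probs, dtype=float)
    top = probs.max(axis=1)          # highest class probability per sample
    keep = top >= threshold          # confidence gate
    labels = probs.argmax(axis=1)    # predicted class per sample
    return np.flatnonzero(keep), labels[keep]

# Toy batch of three-class softmax outputs
# (e.g. glioblastoma / metastasis / meningioma)
probs = [
    [0.9995, 0.0003, 0.0002],  # confident -> kept, class 0
    [0.60,   0.30,   0.10],    # uncertain -> abstains
    [0.0001, 0.0004, 0.9995],  # confident -> kept, class 2
]
idx, labels = confident_predictions(probs)
print(idx.tolist(), labels.tolist())  # -> [0, 2] [0, 2]
```

Under such a gate, accuracy is computed only over the retained samples, which is why the abstract reports both a higher accuracy (98.6%) and a coverage figure (36.3% of images meeting the output criteria).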
Links: → Journal
