| Online resource |
Written by: | Wiest, Isabella [author]  |
| Ferber, Dyke [author]  |
| Zhu, Jiefu [author]  |
| van Treeck, Marko [author]  |
| Meyer, Sonja K. [author]  |
| Juglan, Radhika [author]  |
| Carrero, Zunamys I. [author]  |
| Paech, Daniel [author]  |
| Kleesiek, Jens Philipp [author]  |
| Ebert, Matthias [author]  |
| Truhn, Daniel [author]  |
| Kather, Jakob Nikolas [author]  |
Title: | Privacy-preserving large language models for structured medical information retrieval |
Statement of responsibility: | Isabella Catharina Wiest, Dyke Ferber, Jiefu Zhu, Marko van Treeck, Sonja K. Meyer, Radhika Juglan, Zunamys I. Carrero, Daniel Paech, Jens Kleesiek, Matthias P. Ebert, Daniel Truhn & Jakob Nikolas Kather |
E-year: | 2024 |
Year: | 20 September 2024 |
Extent: | 9 pages |
Illustrations: | Illustrations |
Footnotes: | Viewed on 21.10.2024 |
Source title: | Contained in: npj digital medicine |
Source place: | [Basingstoke] : Macmillan Publishers Limited, 2016 |
Source year: | 2024 |
Source volume/issue: | 7(2024), 1, pages 1-9 |
Source ISSN: | 2398-6352 |
Abstract: | Most clinical information is encoded as free text and is therefore not accessible to quantitative analysis. This study presents an open-source pipeline using the local large language model (LLM) “Llama 2” to extract quantitative information from clinical text and evaluates its performance in identifying features of decompensated liver cirrhosis. The LLM identified five key clinical features in a zero- and one-shot manner from 500 patient medical histories in the MIMIC IV dataset. We compared LLMs of three sizes and various prompt engineering approaches, with predictions compared against ground truth from three blinded medical experts. Our pipeline achieved high accuracy, detecting liver cirrhosis with 100% sensitivity and 96% specificity. High sensitivities and specificities were also achieved for detecting ascites (95%, 95%), confusion (76%, 94%), abdominal pain (84%, 97%), and shortness of breath (87%, 97%) using the 70 billion parameter model, which outperformed smaller versions. Our study demonstrates the capability of locally deployed LLMs to extract clinical information from free text with low hardware requirements. |
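The abstract outlines a concrete technique: zero-shot prompting of a locally hosted Llama 2 model to flag binary clinical features in free-text histories, scored by sensitivity and specificity against expert ground truth. The following Python sketch only illustrates that idea; it is not the authors' released pipeline, and the llama-cpp-python backend, the checkpoint path, the prompt wording, and the helper names (extract_feature, sensitivity_specificity) are assumptions made for illustration.

# Minimal sketch, not the published pipeline: zero-shot extraction of binary
# clinical features with a locally hosted Llama 2 model, then sensitivity and
# specificity against expert ground truth. Backend, checkpoint path, prompt
# wording, and helper names are assumptions.
from llama_cpp import Llama

FEATURES = ["liver cirrhosis", "ascites", "confusion",
            "abdominal pain", "shortness of breath"]

# Hypothetical local GGUF checkpoint of Llama 2; adapt to the actual deployment.
llm = Llama(model_path="./llama-2-70b-chat.Q4_K_M.gguf", n_ctx=4096, verbose=False)

def extract_feature(medical_history: str, feature: str) -> bool:
    """Zero-shot prompt asking for a strict yes/no answer for one feature."""
    prompt = (
        "You are a medical information extraction system.\n"
        f"Patient history:\n{medical_history}\n\n"
        f"Does the text report {feature}? Answer with exactly 'yes' or 'no'."
    )
    out = llm(prompt, max_tokens=4, temperature=0.0)
    return out["choices"][0]["text"].strip().lower().startswith("yes")

def sensitivity_specificity(predictions, ground_truth):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(p and g for p, g in zip(predictions, ground_truth))
    tn = sum(not p and not g for p, g in zip(predictions, ground_truth))
    fp = sum(p and not g for p, g in zip(predictions, ground_truth))
    fn = sum(not p and g for p, g in zip(predictions, ground_truth))
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec

Looping extract_feature over the patient histories and the five features, then comparing the predictions to the blinded expert annotations with sensitivity_specificity, yields per-feature figures of the kind quoted in the abstract.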
DOI: | doi:10.1038/s41746-024-01233-2 |
URL: | Please note: This is a bibliography entry. Full-text access for university members is only available here if there is a subscription for the corresponding journal/edited volume or if it is an open-access title.
Free of charge: Full text: https://doi.org/10.1038/s41746-024-01233-2 |
| Free of charge: Full text: https://www.nature.com/articles/s41746-024-01233-2 |
| DOI: https://doi.org/10.1038/s41746-024-01233-2 |
Data carrier: | Online resource |
Language: | eng |
Subject headings: | Digestive signs and symptoms |
| Health care |
| Liver diseases |
K10plus-PPN: | 1906325219 |
Links: | → Journal |