Combining deep learning with token selection for patient phenotyping from electronic health records
Yang, Zhen; Dehmer, Matthias; Yli-Harja, Olli; Emmert-Streib, Frank (2020)
This publication is copyrighted. You may download, display, and print it for your own personal use. Commercial use is prohibited.
The permanent address of the publication is
Artificial intelligence provides the opportunity to reveal important information buried in large amounts of complex data. Electronic health records (eHRs) are a source of such big data that provide a multitude of health-related clinical information about patients. However, text data from eHRs, e.g., discharge summary notes, are challenging to analyze because these notes are free-form texts whose writing formats and styles vary considerably between different records. For this reason, in this paper we study deep learning neural networks in combination with natural language processing to analyze text data from clinical discharge summaries. We provide a detailed analysis of patient phenotyping, i.e., the automatic prediction of ten patient disorders, by investigating the influence of network architectures, sample sizes, and information content of tokens. Importantly, for patients suffering from Chronic Pain, the disorder that is the most difficult to classify, we find the largest performance gain for a combined word- and sentence-level input convolutional neural network (ws-CNN). As a general result, we find that the combination of data quality and data quantity of the text data plays a crucial role in using more complex network architectures that improve significantly beyond a word-level input CNN model. From our investigations of learning curves and token selection mechanisms, we conclude that such a transition requires larger sample sizes because the amount of information per sample is quite small and is carried by only a few tokens and token categories. Interestingly, we found that the token frequencies in the eHRs follow a Zipf law, and we utilized this behavior to investigate the information content of tokens by defining a token selection mechanism. The latter also addresses issues of explainable AI.
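The Zipf-law observation above can be checked empirically on any tokenized corpus: if token frequency is roughly proportional to 1/rank, then log-frequency plotted against log-rank is close to a line of slope −1. The sketch below is purely illustrative (the function name `zipf_slope` and the toy token data are assumptions, not taken from the paper), fitting that slope by least squares with the Python standard library:

```python
from collections import Counter
import math

def zipf_slope(tokens):
    """Fit log(frequency) ~ slope * log(rank) by ordinary least squares.

    A slope close to -1 suggests Zipf-like behavior of the token
    frequencies; this is a rough diagnostic, not a formal test.
    """
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Toy example: synthetic tokens constructed so that the word of rank r
# occurs 1000 // r times (hypothetical data, not the paper's eHR corpus).
tokens = []
for rank, word in enumerate(["pain", "chronic", "patient", "note", "drug"], 1):
    tokens += [word] * (1000 // rank)

print(round(zipf_slope(tokens), 2))  # close to -1.0 for Zipf-like data
```

A slope-based check like this could also motivate a simple token selection rule, e.g., discarding tokens whose frequency lies far off the fitted line.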
Showing items with similar titles, authors, or keywords.
Smirnov, Sergey; Gotchev, Atanas; Hannuksela, Miska
IEEE International Conference on Multimedia and Expo (2013)

Joint de-noising and fusion of 2D video and depth map sequences sensed by low-powered ToF range sensor
Georgiev, Mihail; Gotchev, Atanas; Hannuksela, Miska
IEEE International Conference on Multimedia and Expo (Institute of Electrical and Electronics Engineers IEEE, 2013)

Georgiev, Mihail; Gotchev, Atanas; Hannuksela, Miska
IEEE International Symposium on Circuits and Systems (Institute of Electrical and Electronics Engineers IEEE, 2013)