Automatic assignment of radiology examination protocols using pre-trained language models with knowledge distillation

W Lau, L Aaltonen, M Gunn… - AMIA Annual …, 2022 - pmc.ncbi.nlm.nih.gov
Selecting a radiology examination protocol is a repetitive and time-consuming process. In this paper, we present a deep learning approach that automatically assigns protocols to computed tomography examinations by pre-training a domain-specific BERT model (BERTrad). To handle the high class imbalance across exam protocols, we used a knowledge distillation approach that up-sampled the minority classes through data augmentation. We compared the classification performance of this approach against n-gram models using Support Vector Machine (SVM), Gradient Boosting Machine (GBM), and Random Forest (RF) classifiers, as well as against the BERTbase model. SVM, GBM, and RF achieved macro-averaged F1 scores of 0.45, 0.45, and 0.60, while BERTbase and BERTrad achieved 0.61 and 0.63. Knowledge distillation boosted performance on the minority classes, achieving an F1 score of 0.66.
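The n-gram baselines mentioned in the abstract can be sketched as a standard text-classification pipeline. The snippet below is a minimal illustration, not the paper's implementation: the example exam orders, protocol labels, and the choice of TF-IDF weighting with a linear SVM are assumptions for demonstration; the abstract does not specify the feature weighting or hyperparameters.

```python
# Hypothetical sketch of an n-gram + SVM protocol classifier, evaluated
# with the macro-averaged F1 score used in the abstract. All data below
# is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy CT exam orders and their (assumed) protocol labels.
orders = [
    "ct chest with contrast rule out pulmonary embolism",
    "ct head without contrast trauma",
    "ct abdomen pelvis with contrast appendicitis",
    "ct head without contrast headache",
]
protocols = ["chest_pe", "head_plain", "abd_pelvis", "head_plain"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word unigrams and bigrams
    LinearSVC(),
)
clf.fit(orders, protocols)

preds = clf.predict(orders)
# Macro averaging weights every protocol class equally, which is why it
# is sensitive to the minority-class problem the paper targets.
macro_f1 = f1_score(protocols, preds, average="macro")
print(macro_f1)
```

Macro averaging computes F1 per class and takes the unweighted mean, so rare protocols count as much as common ones; this is the metric the abstract reports for all models.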