
TCNN

TCNN: A Transformer Convolutional Neural Network for artifact classification in whole slide images (https://doi.org/10.1016/j.bspc.2023.104812)

The TCNN codebase is provided in this repository for academic research purposes. TCNN automates the detection of artifacts in pathological images. By treating artifact detection as a binary classification task, TCNN identifies the unwanted patterns that may arise during slide processing. This reduces the need for manual labeling by laboratory technicians and lowers the risk of erroneous data being sent for analysis to pathologists and physicians. If artifact patches are not identified and excluded, they can compromise the accuracy of Computer-Aided Diagnosis (CAD) systems.
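As an illustration of this binary formulation, the sketch below shows a minimal hybrid convolution + Transformer-encoder classifier for RGB tiles in PyTorch. It is only an example of the general idea, not the published TCNN architecture; the `HybridArtifactClassifier` name, layer sizes, and input resolution are assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class HybridArtifactClassifier(nn.Module):
    """Illustrative CNN + Transformer-encoder binary tile classifier.
    NOT the published TCNN architecture; see the paper for details."""

    def __init__(self, embed_dim=256, num_heads=4, num_layers=2):
        super().__init__()
        # Convolutional stem: extracts local features and downsamples the tile.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=4, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, embed_dim, kernel_size=3, stride=4, padding=1),
            nn.ReLU(inplace=True),
        )
        # Transformer encoder: models global context across spatial positions.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Binary head: artifact vs. non-artifact.
        self.head = nn.Linear(embed_dim, 2)

    def forward(self, x):
        feats = self.stem(x)                       # (B, C, H', W')
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H'*W', C)
        tokens = self.encoder(tokens)
        return self.head(tokens.mean(dim=1))       # pooled logits, shape (B, 2)

model = HybridArtifactClassifier()
logits = model(torch.randn(4, 3, 224, 224))  # four 224x224 RGB tiles
print(logits.shape)  # torch.Size([4, 2])
```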

Figure: The proposed Transformer Convolutional Neural Network (TCNN) architecture.

Figure: Pictorial overview of the paper. (a) A whole slide image tiled using the QuPath software, (b) the result of tiling, and (c) non-artifact tiles (the clean dataset).
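After tiling, each tile can be passed through the trained classifier and artifact tiles discarded to build the clean dataset. The sketch below shows this filtering step; the `filter_clean_tiles` helper is hypothetical and not part of this repository, and the assumption that class index 1 denotes artifacts is only for illustration.

```python
import torch

def filter_clean_tiles(model, tiles, device="cpu", artifact_class=1):
    """Keep only tiles predicted as non-artifact.

    `model` is any binary tile classifier (e.g., the sketch above);
    `tiles` is an iterable of (tile_id, tensor) pairs with tensors of
    shape (3, H, W). Class index 1 = artifact is an assumption.
    """
    model.eval().to(device)
    clean = []
    with torch.no_grad():
        for tile_id, tile in tiles:
            logits = model(tile.unsqueeze(0).to(device))
            if logits.argmax(dim=1).item() != artifact_class:
                clean.append(tile_id)
    return clean
```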

Should you utilize any concepts from the model, we kindly request that you acknowledge our work with a proper citation: Shakarami, A., Nicolè, L., Terreran, M., Dei Tos, A. P., & Ghidoni, S. (2023). TCNN: A transformer convolutional neural network for artifact classification in whole slide images. Biomedical Signal Processing and Control, 84, 104812. https://www.sciencedirect.com/science/article/pii/S1746809423002458

For inquiries, please contact Dr. Ashkan Shakarami ([email protected], [email protected]) or any of the other authors listed in the paper.