Before running the scripts, download the folders from the Google Drive URL below and replace the corresponding folders in the GitHub repository with them; otherwise the scripts will not produce meaningful results.
Furthermore, download the "edf" folder of TUH's TUSZ Corpus from the link below and replace the "edf" folder in the GitHub repository with it. (Note that registration with the TUH database is required, since they track the institutions/organizations of their users.) We used v1.5.2 of the TUSZ release during training.
Temple University Hospital TUSZ Dataset URL
Temple University Hospital All Datasets URL
Before running the scripts, please install the dependencies listed in requirements.txt.
Once the initial setup is complete, main.py can be run directly to reproduce the evaluation results in the paper. The whole workflow is embedded in main.py, so there is no need to deal with individual functions/scripts. Time-consuming steps are commented out so that running main.py alone yields the results; if you are interested in the initial steps we followed, feel free to uncomment the corresponding code inside main.py. For further details about the processing steps, please see main.py.
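To illustrate this structure, here is a hypothetical outline of how such a workflow can be staged; all function names below are placeholders for illustration, not the repository's actual identifiers.

```python
# Hypothetical outline of a main.py-style workflow; every name here is a
# placeholder, not an identifier from the repository.

def extract_features():
    """Time-consuming preprocessing (e.g., EDF parsing, PSD estimation)."""
    ...

def load_precomputed_features():
    """Load the precomputed features shipped via the Google Drive folders."""
    return []

def evaluate_model(features):
    """Run the trained model and report evaluation metrics."""
    print(f"evaluated on {len(features)} feature windows")

if __name__ == "__main__":
    # Slow steps stay commented out, as in main.py; uncomment to redo them.
    # features = extract_features()
    features = load_precomputed_features()
    evaluate_model(features)
```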
The model architecture is implemented in swd_model.py. Note that the tensorflow_addons library is required to run the script without errors.
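As a quick check that the extra dependency is installed and usable, here is a minimal Keras model that touches tensorflow_addons; this is not the architecture from swd_model.py, just a sketch.

```python
# Minimal sketch using tensorflow_addons; NOT the paper's architecture.
import tensorflow as tf
import tensorflow_addons as tfa

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 1)),
    tf.keras.layers.Conv1D(16, 5, padding="same", activation="relu"),
    tfa.layers.GroupNormalization(groups=4),  # layer provided by tensorflow_addons
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# AdamW (Adam with decoupled weight decay) also lives in tensorflow_addons.
model.compile(optimizer=tfa.optimizers.AdamW(weight_decay=1e-4, learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
```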
swd_utils.py contains most of the important functions, such as multitaper PSD estimation, a configuration function that provides leave-N-out cross-validation, metric calculation, etc. Each function is documented inside the script right after its definition; feel free to investigate further.
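Conceptually, a multitaper PSD averages periodograms computed with orthogonal DPSS (Slepian) tapers. The sketch below, built on scipy, illustrates the idea; it is a simplified estimator and not necessarily identical to the implementation in swd_utils.py.

```python
# Simplified multitaper PSD estimate via DPSS tapers; illustrative only.
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, nw=4.0, n_tapers=7):
    """Average periodograms of a 1-D signal over DPSS tapers."""
    n = len(x)
    tapers = dpss(n, nw, n_tapers)                      # shape: (n_tapers, n)
    # Taper the signal, FFT each tapered copy, then average the power.
    spectra = np.abs(np.fft.rfft(tapers * x, axis=-1)) ** 2
    psd = spectra.mean(axis=0) / fs
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd
```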
Since the data obtained from the TUSZ Corpus cannot be fed into our neural network directly, some file-handling functions are implemented in read_TUSZ.py. These functions can be generalized to further applications and are not limited to the absence (absz) seizures inside the corpus; a number of arguments can modify their main purpose. Researchers are free to use our functions according to their needs by citing our paper. We use only two montages of the multi-channel EEG data, as shown in the Figure. Please note that pyedflib is required to run this script.
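As a starting point, here is a minimal sketch of reading an EDF record and deriving a bipolar montage channel with pyedflib; the file path and channel names are hypothetical placeholders, not the exact montages used in the paper.

```python
# Minimal EDF reading sketch with pyedflib; path and channel names are placeholders.
import pyedflib

def read_bipolar_pair(edf_path, ch_a, ch_b):
    """Return the difference signal ch_a - ch_b and its sampling frequency."""
    f = pyedflib.EdfReader(edf_path)
    try:
        labels = f.getSignalLabels()
        i, j = labels.index(ch_a), labels.index(ch_b)
        sig = f.readSignal(i) - f.readSignal(j)  # simple bipolar montage
        fs = f.getSampleFrequency(i)
    finally:
        f.close()
    return sig, fs

# Example usage (hypothetical file and channel names):
# sig, fs = read_bipolar_pair("edf/record.edf", "EEG FP1-REF", "EEG F7-REF")
```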
For the rat data, likewise download the folders from the Google Drive URL below and replace the corresponding folders in the GitHub repository with them before running the scripts.
Raw and Parsed data
The first section of the EDF_LABELLER.m script can be modified to parse new EEG records; the existing version parses the shared records based on the seizure-occurrence information given in the Excel sheets. The dependencies in requirements.txt should be installed.
Utility scripts, except create_training_data.py, are coded directly into the main training scripts. The training scripts RatTrainPSD.py and RatTrainTime.py are ready to run, and they write their results to the JSON and CSV directories.
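For reference, the following hypothetical sketch shows one way such results could be persisted; the metric names, values, and file names are placeholders, not outputs of the actual scripts.

```python
# Hypothetical results persistence; metric names/values are placeholders only.
import csv, json, os

results = {"fold": 1, "accuracy": 0.0, "f1": 0.0}  # placeholder values

os.makedirs("JSON", exist_ok=True)
os.makedirs("CSV", exist_ok=True)

with open("JSON/fold1_results.json", "w") as fp:
    json.dump(results, fp, indent=2)

with open("CSV/fold1_results.csv", "w", newline="") as fp:
    writer = csv.DictWriter(fp, fieldnames=results.keys())
    writer.writeheader()
    writer.writerow(results)
```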
Please cite our paper below when using or referring to our work.
@article{BASER2022103726,
title = {Automatic detection of the spike-and-wave discharges in absence epilepsy for humans and rats using deep learning},
journal = {Biomedical Signal Processing and Control},
volume = {76},
pages = {103726},
year = {2022},
issn = {1746-8094},
doi = {10.1016/j.bspc.2022.103726},
url = {https://www.sciencedirect.com/science/article/pii/S1746809422002488},
author = {Oguzhan Baser and Melis Yavuz and Kutay Ugurlu and Filiz Onat and Berken Utku Demirel},
keywords = {Electroencephalography (EEG), Spike-and-wave (SWD), Absence epilepsy, Power spectral density, Deep learning}
}