Please make sure you have manually tuned the learning rate to a good value.
After one training session, we strongly recommend reloading the saved weights and lowering the learning rate for another session. This can be viewed as a manual lr_scheduler.
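As a rough illustration of this two-session scheme (the stand-in model, checkpoint name, and learning-rate values below are placeholders, not the repo's actual settings):

```python
import torch

model = torch.nn.Linear(4, 1)  # stand-in for the segmentation network
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# --- session 1: train with the hand-tuned initial lr, then save weights ---
for _ in range(10):
    x = torch.randn(8, 4)
    loss = model(x).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
torch.save(model.state_dict(), "session1.pth")

# --- session 2: reload the weights and continue with a lowered lr ---
model.load_state_dict(torch.load("session1.pth"))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr lowered by 10x
```

The lowered learning rate lets the second session fine-tune from the first session's optimum instead of overshooting it.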
Code for our extended journal paper, which builds on Super-Resolution Based Patch-Free 3D Image Segmentation with High-Frequency Guidance.
This is an extension of the PFSeg repo. We investigated the flaws of the original method and propose three new improvements to address them. The problems include:
- The scale inconsistency introduced by concatenating the LR main input with the HR guidance patch.
- The limitation of central cropping.
- The underrated potential of TFM.
We introduce a new multi-scale network structure and design an efficient algorithm for high-speed guidance patch searching. TFM is also employed to achieve better model generalization.
Details will be released upon paper publication.
Please follow the instructions on https://braintumorsegmentation.org/ to get your copy of the BRATS2020 dataset.
Our code should work with Python >= 3.5 and PyTorch >= 0.4.1.
Please make sure you also have the following libraries installed on your machine:
- PyTorch
- NumPy
- MedPy
- tqdm
Optional libraries (only needed when running with the -v flag):
- opencv-python
Firstly, clone our code by running

```shell
git clone git@github.com:Dootmaan/PFSeg-Full.git
```
Normally you only have to specify the path to BRATS2020 dataset to run the code.
For example, you can use the following command (after changing directory to ./PFSeg-Full):

```shell
CUDA_VISIBLE_DEVICES=0 nohup python3 -u train_PFSeg-Full.py -dataset_path "/home/somebody/BRATS2020/" > train_PFSeg.log 2>&1 &
```
You can also add the -v flag for verbose output. Our code supports multi-GPU environments; all you have to do is specify the indices of the available GPUs via CUDA_VISIBLE_DEVICES. Running train_PFSeg-Full.py directly will use all the GPUs on your machine with the default parameter settings. The minimum requirement for running our code is a single GPU with 11 GB of video memory.
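For reference, multi-GPU execution in PyTorch is commonly handled with nn.DataParallel; the following is only a minimal sketch with a stand-in model, not the repo's actual wrapping code:

```python
import torch

model = torch.nn.Linear(4, 1)  # stand-in for the segmentation network
if torch.cuda.device_count() > 1:
    # Replicates the module across all GPUs visible via CUDA_VISIBLE_DEVICES
    model = torch.nn.DataParallel(model)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

out = model(torch.randn(2, 4).to(device))  # the batch is split across GPUs
```

With CUDA_VISIBLE_DEVICES unset, all GPUs are used; setting it to a comma-separated list (e.g. 0,1) restricts the run to those devices.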
Click here to download the pretrained weights for our framework.
Please note that the code uses a 6/2/2 train/val/test split by default, while the results in our paper are reported with an 8/2 train/test split. If you would like to reproduce the reported results, please make sure you change the default dataset split. We also performed 5-fold cross-validation for our method, and the results are quite stable, as shown in the rebuttal part of our paper.
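A minimal sketch of how such a split change can be expressed (the helper function and the case list below are hypothetical; the repo's actual dataset loader may split the data differently):

```python
def split_cases(cases, ratios):
    """Split an ordered list of case IDs by integer ratios, e.g. (6, 2, 2) or (8, 2)."""
    total = sum(ratios)
    bounds, acc = [0], 0
    for r in ratios:
        acc += r
        bounds.append(round(len(cases) * acc / total))
    return [cases[bounds[i]:bounds[i + 1]] for i in range(len(ratios))]

cases = [f"case_{i:03d}" for i in range(10)]     # placeholder case IDs
train, val, test = split_cases(cases, (6, 2, 2))  # default 6/2/2 split
train2, test2 = split_cases(cases, (8, 2))        # the paper's 8/2 split
```

Note that with a fixed case order, the 8/2 train set is a superset of the 6/2/2 train set, since only the boundary indices move.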
More code and information may be added later.