Key facts of LKU-Net:

- LKU-Net is inspired by RepVGG, RepLK-ResNet, IC-Net, SYM-Net, and TransMorph.
- LKU-Net ranks first on the 2021 MICCAI Learn2Reg Challenge Task 3 validation set as of 18 September 2022.
- LKU-Net is also the winning entry of the 2022 MICCAI Learn2Reg Challenge Task 1.
- LKU-Net-Affine ranks 2nd on the 2023 ISBI RnR-ExM Challenge.
Please see our poster or watch the video for further information.
If you find this repo helpful, please consider citing our work.
```bibtex
@article{jia2022lkunet,
  title={U-Net vs Transformer: Is U-Net Outdated in Medical Image Registration?},
  author={Jia, Xi and Bartlett, Joseph and Zhang, Tianyang and Lu, Wenqi and Qiu, Zhaowen and Duan, Jinming},
  journal={arXiv preprint arXiv:2208.04939},
  year={2022}
}
```
Note that the data used in this work can be publicly accessed from the MICCAI 2021 L2R Challenge and TransMorph IXI.
Thanks to a kind reminder from @Junyu, the lead author of TransMorph (now accepted by MIA), we note that using bilinear interpolation improves Dice scores even when warping image labels.
Hence, in line with TransMorph, we updated all the results on the IXI data (see the table below) by changing the interpolation of LKU-Net from 'nearest' to 'bilinear'.
| Dice | Nearest | Bilinear |
| --- | --- | --- |
| TransMorph | 0.746±0.128 | 0.753±0.123 |
| TransMorph-Bayes | 0.746±0.123 | 0.754±0.124 |
| TransMorph-bspl | 0.752±0.128 | 0.761±0.122 |
| TransMorph-diff | 0.599±0.156 | 0.594±0.163 |
| U-Net4 | 0.727±0.126 | 0.733±0.125 |
| U-Net-diff4 | 0.744±0.123 | 0.751±0.123 |
| LKU-Net4,5 | 0.752±0.131 | 0.758±0.130 |
| LKU-Net-diff4,5 | 0.746±0.133 | 0.752±0.132 |
| LKU-Net8,5 | 0.757±0.128 | 0.765±0.129 |
| LKU-Net-diff8,5 | 0.753±0.132 | 0.760±0.132 |
| LKU-Net-diff16,5 | 0.757±0.132 | 0.764±0.131 |
Note that we list the results of TransMorph and LKU-Net only; the results of the other compared methods can be found on the IXI page of TransMorph.
Additionally, we added one more result, LKU-Net-diff16,5; the model can be downloaded from this Google Drive Link.
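The effect of the interpolation mode can be seen in a minimal label-warping sketch. This is an illustrative PyTorch snippet, not the exact spatial transformer used in this repo: labels are one-hot encoded, warped with `grid_sample`, and recovered with an argmax (the `warp` helper and the identity grid are assumptions for the demo).

```python
# Illustrative sketch: why interpolation mode matters when warping label maps.
# Not the repo's exact transformer; names here are hypothetical.
import torch
import torch.nn.functional as F

def warp(label_onehot, grid, mode):
    # label_onehot: (N, C, H, W) one-hot labels; grid: (N, H, W, 2) sampling grid
    return F.grid_sample(label_onehot, grid, mode=mode, align_corners=True)

N, C, H, W = 1, 4, 8, 8
labels = torch.randint(0, C, (N, H, W))
onehot = F.one_hot(labels, C).permute(0, 3, 1, 2).float()

# Identity grid stands in for a predicted deformation field.
grid = F.affine_grid(torch.eye(2, 3).unsqueeze(0), (N, C, H, W), align_corners=True)

# With mode='nearest', each warped voxel copies one source label.
# With mode='bilinear', one-hot channels are interpolated and the final
# label is recovered by argmax, which tends to yield slightly higher Dice.
warped = warp(onehot, grid, mode='bilinear').argmax(dim=1)
```

With the identity grid, `warped` reproduces `labels`; with a real deformation, the two modes differ along label boundaries.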
We directly used the preprocessed IXI data and followed the exact training, validation, and testing protocol.
Before training and testing, please change the image loading and model saving directories accordingly.
The kernel size of LKU-Net can be changed in the `LK_encoder` class in `Models.py`.
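For orientation, a large-kernel encoder block in the spirit of `LK_encoder` can be sketched as below. This is a simplified assumption based on the paper's RepVGG-style parallel-branch design (large kernel + 3×3 + 1×1 + identity, summed); the actual layer configuration in `Models.py` may differ.

```python
# Simplified sketch of a large-kernel encoder block (assumption, not the
# exact LK_encoder from Models.py): parallel large-kernel, 3x3, and 1x1
# convolutions plus an identity shortcut, summed RepVGG-style.
import torch
import torch.nn as nn

class LKEncoderSketch(nn.Module):
    def __init__(self, channels, large_kernel=5):
        super().__init__()
        pad = large_kernel // 2  # 'same' padding so all branches align
        self.large = nn.Conv2d(channels, channels, large_kernel, padding=pad)
        self.small = nn.Conv2d(channels, channels, 3, padding=1)
        self.one = nn.Conv2d(channels, channels, 1)
        self.act = nn.PReLU()

    def forward(self, x):
        # Sum of all branches keeps the spatial size and channel count.
        return self.act(self.large(x) + self.small(x) + self.one(x) + x)

block = LKEncoderSketch(8, large_kernel=5)
out = block(torch.randn(1, 8, 16, 16))
```

Changing `large_kernel` here corresponds to the kernel-size choice discussed in the paper (e.g. 5 for 3D IXI, 7 for the 2D experiments).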
```shell
python train.py --start_channel 8 --using_l2 2 --lr 1e-4 --checkpoint 403 --iteration 403001 --smth_labda 5.0
python infer.py --start_channel 8 --using_l2 2 --lr 1e-4 --checkpoint 403 --iteration 403001 --smth_labda 5.0  # or infer_bilinear.py with the same arguments
python compute_dsc_jet_from_quantiResult.py
```
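The evaluation step reports per-label Dice scores; a minimal sketch of that computation (an illustrative NumPy version, not necessarily how `compute_dsc_jet_from_quantiResult.py` aggregates) is:

```python
# Illustrative per-label Dice computation (assumption: mean over the labels
# present; the repo's evaluation script may aggregate differently).
import numpy as np

def dice(pred, gt, labels):
    scores = []
    for k in labels:
        p, g = pred == k, gt == k
        denom = p.sum() + g.sum()
        if denom > 0:  # skip labels absent from both volumes
            scores.append(2.0 * np.logical_and(p, g).sum() / denom)
    return float(np.mean(scores))

pred = np.array([[1, 1, 0], [2, 2, 0]])
gt   = np.array([[1, 0, 0], [2, 2, 0]])
score = dice(pred, gt, labels=[1, 2])  # label 1: 2/3, label 2: 1.0
```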
Using the commands above, one can easily reproduce our results. Additionally, we provide the trained models for directly computing the reported results.
Pretrained models can be downloaded from this Google Drive Link.
- Batch size. In the paper, we used `batch size = 1`; this parameter was, however, not carefully tuned. The only reason we used `batch size = 1` is to eliminate effects caused by GPU memory, as some lighter GPUs may not fit larger batch sizes. One may try different, larger batch sizes if more memory is available.
- Kernel size. In the 2D experiments, as reported in the paper, `LK==7` produces the best results. However, `LK==5` gives better results on 3D IXI. We believe this may be caused by the limited training data; if larger datasets are available, one may try a larger kernel size.