U-KAN Makes Strong Backbone for Medical Image Segmentation and Generation
Chenxin Li*, Xinyu Liu*, Wuyang Li*, Cheng Wang*, Hengyu Liu, Yixuan Yuan✉
The Chinese University of Hong Kong
Contact: [email protected]
You can change the PyTorch and CUDA versions to match your device.
conda create --name UKAN python=3.10
conda activate UKAN
conda install cudatoolkit=11.3
pip install -r requirement.txt
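As an optional sanity check that the environment sees the GPU, you can run a minimal snippet like the one below (this is only a sketch, not part of the repository):

import torch

print("PyTorch:", torch.__version__)                  # installed PyTorch version
print("CUDA available:", torch.cuda.is_available())   # True if a GPU is visible
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))   # name of the first GPU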
Download the pre-processed dataset from OneDrive and unzip it into the project folder. The data is pre-processed by the scripts in tools; a minimal sketch of the resizing step follows the directory layout below.
Diffusion_UKAN
└─ data
   ├─ cvc
   │  └─ images_64
   ├─ busi
   │  └─ images_64
   └─ glas
      └─ images_64
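For reference, the core of the pre-processing is presumably resizing the raw images to 64x64 (judging by the images_64 folder names). The sketch below illustrates that step only; the input path, file handling, and interpolation mode are assumptions, and the scripts in tools remain the authoritative pipeline.

import os
from PIL import Image

src_dir = "raw/cvc"             # hypothetical folder of raw images
dst_dir = "data/cvc/images_64"  # matches the layout above
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    img = Image.open(os.path.join(src_dir, name)).convert("RGB")
    img = img.resize((64, 64), Image.BICUBIC)  # downsample to 64x64
    img.save(os.path.join(dst_dir, name))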
Download released_models from OneDrive and unzip it into the project folder.
Diffusion_UKAN
└─ released_models
   ├─ ukan_cvc
   │  ├─ FinalCheck  # generated toy images (see next section)
   │  ├─ Gens        # the generated images used for evaluation in our paper
   │  ├─ Tmp         # generated images saved every 50 epochs during training
   │  ├─ Weights     # the final checkpoint
   │  ├─ FID.txt     # raw evaluation data
   │  └─ IS.txt      # raw evaluation data
   ├─ ukan_busi
   └─ ukan_glas
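FID.txt and IS.txt contain the raw evaluation numbers. As a point of reference, a comparable FID between the real and generated images can be computed with the third-party pytorch-fid package (this choice of tooling is an assumption; the reported numbers come from the repository's own evaluation):

pip install pytorch-fid
python -m pytorch_fid data/cvc/images_64 released_models/ukan_cvc/Gens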
Images will be generated in released_models/ukan_cvc/FinalCheck by running:
python Main_Test.py
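To take a quick look at the generated toy images, a small sketch like the following can assemble some of them into a grid (the .png extension and the 4x4 grid size are assumptions):

import glob
from PIL import Image

paths = sorted(glob.glob("released_models/ukan_cvc/FinalCheck/*.png"))[:16]  # first 16 samples
grid = Image.new("RGB", (4 * 64, 4 * 64))                                    # 4x4 grid of 64x64 tiles
for i, p in enumerate(paths):
    grid.paste(Image.open(p).resize((64, 64)), ((i % 4) * 64, (i // 4) * 64))
grid.save("samples_grid.png")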
Please refer to the training_scripts folder. In addition, you can experiment with different network variants by setting MODEL according to the following dictionary:
model_dict = {
    'UNet': UNet,
    'UNet_ConvKan': UNet_ConvKan,
    'UMLP': UMLP,
    'UKan_Hybrid': UKan_Hybrid,
    'UNet_Baseline': UNet_Baseline,
}
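For example, setting MODEL = 'UKan_Hybrid' resolves to the corresponding class through the dictionary; this is only a usage sketch, since the actual constructor arguments are defined in the training scripts:

MODEL = 'UKan_Hybrid'
model_cls = model_dict[MODEL]        # resolves to the UKan_Hybrid class
# model = model_cls(**model_kwargs)  # instantiate with the arguments used in training_scripts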
We sincerely appreciate these excellent projects.
If you find this work helpful for your project, please consider citing the following paper:
@article{li2024ukan,
  title={U-KAN Makes Strong Backbone for Medical Image Segmentation and Generation},
  author={Li, Chenxin and Liu, Xinyu and Li, Wuyang and Wang, Cheng and Liu, Hengyu and Yuan, Yixuan},
  journal={arXiv preprint arXiv:2406.02918},
  year={2024}
}