This repository provides the code to reproduce the results of our paper *Semantically Self-Aligned Network for Text-to-Image Part-aware Person Re-identification*.
- CUHK-PEDES

  Download the CUHK-PEDES dataset from here, then organize it in the `dataset` folder as follows:

  ```
  |-- dataset/
  |   |-- <CUHK-PEDES>/
  |       |-- imgs/
  |           |-- cam_a/
  |           |-- cam_b/
  |           |-- ...
  |       |-- reid_raw.json
  ```

  Then run `process_CUHK_data.py` as follows:

  ```
  cd SSAN
  python ./dataset/process_CUHK_data.py
  ```
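Before running the processing script, it can help to sanity-check the annotation file. The sketch below is a minimal, hypothetical example of iterating over `reid_raw.json`-style entries; the exact field names (`id`, `file_path`, `split`, `captions`) are an assumption about the dataset release, not something this repository guarantees.

```python
from collections import Counter

# Hypothetical in-memory sample mirroring the assumed reid_raw.json
# entry layout (field names are an assumption, not guaranteed).
annotations = [
    {"id": 1, "file_path": "cam_a/0001.png", "split": "train",
     "captions": ["A man wearing a black jacket and blue jeans."]},
    {"id": 2, "file_path": "cam_b/0002.png", "split": "test",
     "captions": ["A woman in a red coat carrying a handbag."]},
]

# Each entry pairs one image path and identity id with its split tag
# and one or more free-text captions.
split_counts = Counter(item["split"] for item in annotations)
print(split_counts)
```

In practice the list would come from `json.load(open(".../reid_raw.json"))` rather than being written inline.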
- ICFG-PEDES

  Please request the ICFG-PEDES database from [email protected], then organize it in the `dataset` folder as follows:

  ```
  |-- dataset/
  |   |-- <ICFG-PEDES>/
  |       |-- imgs/
  |           |-- test/
  |           |-- train/
  |       |-- ICFG_PEDES.json
  ```

  Note that ICFG-PEDES was collected from MSMT17, so we keep MSMT17's storage structure to avoid losing information such as camera labels and shooting times. Therefore, the `test` and `train` folders here do not reflect how ICFG-PEDES is divided; the exact division is determined by `ICFG_PEDES.json`, which is organized like the `reid_raw.json` in CUHK-PEDES.

  Then run `process_ICFG_data.py` as follows:

  ```
  cd SSAN
  python ./dataset/process_ICFG_data.py
  ```
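Since the `train`/`test` folders do not define the split, the division has to be read out of the JSON itself. A minimal sketch, assuming `ICFG_PEDES.json` follows the same per-entry layout as `reid_raw.json` with a `split` field naming the partition (field names are an assumption):

```python
def split_annotations(entries):
    """Group annotation entries by their split tag, ignoring folder layout."""
    splits = {}
    for item in entries:
        splits.setdefault(item["split"], []).append(item)
    return splits

# Hypothetical sample entries; real ones would come from
# json.load(open(".../ICFG_PEDES.json")).
entries = [
    {"id": 101, "file_path": "train/0001_001.jpg", "split": "train",
     "captions": ["A man in a grey hoodie and dark trousers."]},
    {"id": 102, "file_path": "test/0002_003.jpg", "split": "test",
     "captions": ["A woman in a white shirt with a backpack."]},
]
splits = split_annotations(entries)
print(sorted(splits))  # ['test', 'train']
```

The point is that an image's partition is taken from its annotation entry, never from the `train`/`test` directory it happens to sit in.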
Training:

```
sh experiments/CUHK-PEDES/train.sh
sh experiments/ICFG-PEDES/train.sh
```

Testing:

```
sh experiments/CUHK-PEDES/test.sh
sh experiments/ICFG-PEDES/test.sh
```
Our Results on CUHK-PEDES dataset
Our Results on ICFG-PEDES dataset
If this work is helpful for your research, please cite our work:
```
@article{ding2021semantically,
  title={Semantically Self-Aligned Network for Text-to-Image Part-aware Person Re-identification},
  author={Ding, Zefeng and Ding, Changxing and Shao, Zhiyin and Tao, Dacheng},
  journal={arXiv preprint arXiv:2107.12666},
  year={2021}
}
```