
[AAAI 2025] Official implementation of the paper "Exploring Semantic Consistency and Style Diversity for Domain Generalized Semantic Segmentation"


🎉🎉🎉 SCSD (Accepted by AAAI 2025)

Exploring Semantic Consistency and Style Diversity for Domain Generalized Semantic Segmentation

Hongwei Niu, Linhuang Xie, Jianghang Lin, Shengchuan Zhang

Xiamen University

[Paper] [Demo] [BibTeX]


💡 Abstract

Domain Generalized Semantic Segmentation (DGSS) seeks to utilize source domain data exclusively to enhance the generalization of semantic segmentation across unknown target domains. Prevailing studies predominantly concentrate on feature normalization and domain randomization, yet these approaches exhibit significant limitations. Feature normalization-based methods tend to confuse semantic features while constraining the feature space distribution, resulting in classification misjudgment. Domain randomization-based methods frequently incorporate domain-irrelevant noise due to the uncontrollability of style transformations, resulting in segmentation ambiguity. To address these challenges, we introduce a novel framework, named SCSD, for Semantic Consistency prediction and Style Diversity generalization. It comprises three pivotal components: Firstly, a Semantic Query Booster is designed to enhance the semantic awareness and discrimination capabilities of object queries in the mask decoder, enabling cross-domain semantic consistency prediction. Secondly, we develop a Text-Driven Style Transform module that utilizes domain difference text embeddings to controllably guide the style transformation of image features, thereby increasing inter-domain style diversity. Lastly, to prevent the collapse of similar domain feature spaces, we introduce a Style Synergy Optimization mechanism that fortifies the separation of inter-domain features and the aggregation of intra-domain features by synergistically weighting a style contrastive loss and a style aggregation loss. Extensive experiments demonstrate that the proposed SCSD significantly outperforms existing state-of-the-art methods. Notably, SCSD trained on GTAV achieves an average of 49.11 mIoU on the four unseen domain datasets, surpassing the previous state-of-the-art method by +4.08 mIoU.
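The Style Synergy Optimization described above weights a style contrastive loss against a style aggregation loss. The toy sketch below illustrates only that weighting idea in pure Python; the function names, the InfoNCE-style contrastive form, and the weights `lam_c`/`lam_a` are our illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def style_contrastive_loss(anchor, positive, negatives, tau=0.1):
    # InfoNCE-style term: pull same-style features together,
    # push features from other styles apart.
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

def style_aggregation_loss(features, centroid):
    # Mean squared distance of intra-domain features to their style centroid.
    return sum(
        sum((a - c) ** 2 for a, c in zip(f, centroid)) for f in features
    ) / len(features)

def style_synergy_loss(contrastive, aggregation, lam_c=1.0, lam_a=0.5):
    # Hypothetical synergy weighting of the two terms.
    return lam_c * contrastive + lam_a * aggregation
```

The contrastive term shrinks as the anchor aligns with its positive, while the aggregation term shrinks as intra-domain features tighten around their centroid; the weighted sum trades off inter-domain separation against intra-domain compactness.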


📋 Table of Contents

  1. 🛠️ Installation
  2. 🎯 Model Zoo
  3. 🧰 Usage
    1. Prepare Datasets
    2. Training
    3. Evaluation
    4. Inference
  4. 🔍 Citation
  5. 📜 License
  6. 💖 Acknowledgement

🛠️ Installation

conda create --name scsd python=3.9 -y
conda activate scsd
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113

pip install -U opencv-python
git clone git@github.com:facebookresearch/detectron2.git
python -m pip install -e detectron2
pip install git+https://github.com/mcordts/cityscapesScripts.git

git clone https://github.com/nhw649/SCSD.git
cd SCSD
pip install -r requirements.txt
cd scsd/modeling/pixel_decoder/ops
sh make.sh
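After installation, a quick import check can confirm the environment is usable. This is a convenience snippet, not part of the repository; it only reports whether each required package imports.

```python
def check(mod):
    # Try to import a module by name and report the result.
    try:
        __import__(mod)
        return f"{mod}: OK"
    except ImportError:
        return f"{mod}: MISSING"

for m in ("torch", "torchvision", "detectron2", "cv2"):
    print(check(m))
```

If any line prints `MISSING`, revisit the corresponding install step above before training.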

🎯 Model Zoo

GTAV -> Others

| Name | Backbone | Cityscapes | BDD | Mapillary | Synthia | Average | Download |
| ---- | -------- | ---------- | --- | --------- | ------- | ------- | -------- |
| SCSD | ResNet50 | 51.72 | 44.67 | 56.98 | 43.08 | 49.11 | ckpt |

Cityscapes -> Others

| Name | Backbone | BDD | Mapillary | GTAV | Synthia | Average | Download |
| ---- | -------- | --- | --------- | ---- | ------- | ------- | -------- |
| SCSD | ResNet50 | 52.25 | 62.51 | 51.00 | 39.77 | 51.38 | ckpt |

GTAV+Synthia -> Others

| Name | Backbone | Cityscapes | BDD | Mapillary | Average | Download |
| ---- | -------- | ---------- | --- | --------- | ------- | -------- |
| SCSD | ResNet50 | 52.43 | 45.25 | 56.58 | 51.42 | ckpt |

GTAV -> ACDC

| Name | Backbone | Night | Snow | Rain | Fog | Average | Download |
| ---- | -------- | ----- | ---- | ---- | --- | ------- | -------- |
| SCSD | ResNet50 | 15.06 | 41.37 | 42.77 | 43.43 | 35.66 | ckpt |

🧰 Usage

  1. Please follow this to prepare the datasets for training. The data should be organized as follows:
datasets/
    acdc/
        gt/
        rgb_anon/
    bdd/
        images/
        labels/
    cityscapes/
        gtFine/
        leftImg8bit/
    gta/
        images/
        labels/
    mapillary/
        training/
        validation/
        testing/
        labels_detectron2/
    synthia/
        RGB/
        Depth/
        GT/
        labels_detectron2/
  2. To train a model, use
# Train on GTAV(G)
python train_net.py --num-gpus 2 --config-file configs/gtav/scsd_R50_bs2_20k.yaml
# Train on Cityscapes(C)
python train_net.py --num-gpus 2 --config-file configs/cityscapes/scsd_R50_bs2_20k.yaml
# Train on GTAV+Synthia(G+S)
python train_net.py --num-gpus 2 --config-file configs/gtav_synthia/scsd_R50_bs2_20k.yaml
  3. To evaluate a model's performance, use
# G -> C, B, M, S
python train_net.py --config-file configs/gtav/scsd_R50_bs2_20k.yaml --eval-only MODEL.WEIGHTS /path/to/checkpoint_file
# C -> B, M, G, S
python train_net.py --config-file configs/cityscapes/scsd_R50_bs2_20k.yaml --eval-only MODEL.WEIGHTS /path/to/checkpoint_file
# G+S -> C, B, M
python train_net.py --config-file configs/gtav_synthia/scsd_R50_bs2_20k.yaml --eval-only MODEL.WEIGHTS /path/to/checkpoint_file
# G -> ACDC
python train_net.py --config-file configs/acdc/scsd_R50_bs2_20k.yaml --eval-only MODEL.WEIGHTS /path/to/checkpoint_file
  4. To run an inference demo with pre-trained models, use
python demo/demo.py --config-file configs/gtav/scsd_R50_bs2_20k.yaml \
                    --input input_dir/ \
                    --output output_dir/ \
                    --opts MODEL.WEIGHTS /path/to/checkpoint_file
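The dataset layout expected in step 1 can be scaffolded with a short helper so downloads can be unpacked into place. This is a convenience sketch, not part of the repository; the `LAYOUT` mapping simply mirrors the directory tree shown above.

```python
import os

# Dataset -> expected subdirectories, mirroring the tree in step 1.
LAYOUT = {
    "acdc": ["gt", "rgb_anon"],
    "bdd": ["images", "labels"],
    "cityscapes": ["gtFine", "leftImg8bit"],
    "gta": ["images", "labels"],
    "mapillary": ["training", "validation", "testing", "labels_detectron2"],
    "synthia": ["RGB", "Depth", "GT", "labels_detectron2"],
}

for name, subdirs in LAYOUT.items():
    for sub in subdirs:
        # Create each expected directory; no-op if it already exists.
        os.makedirs(os.path.join("datasets", name, sub), exist_ok=True)
```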

🔍 Citation

@article{niu2024scsd,
  title={Exploring Semantic Consistency and Style Diversity for Domain Generalized Semantic Segmentation},
  author={Niu, Hongwei and Xie, Linhuang and Lin, Jianghang and Zhang, Shengchuan},
  journal={arXiv preprint arXiv:2412.12050},
  year={2024}
}

📜 License

SCSD is released under the Apache 2.0 license. Please review the LICENSE file carefully, especially if you intend to use our code for commercial purposes.

💖 Acknowledgement
