A Multidimensional Analysis of Social Biases in Vision Transformers

This is the official implementation of "A Multidimensional Analysis of Social Biases in Vision Transformers" (Brinkmann et al., 2023).

The embedding spaces of image models have been shown to encode a range of social biases such as racism and sexism. Here, we investigate the factors that contribute to the emergence of these biases in Vision Transformers (ViTs). To this end, we measure the impact of training data, model architecture, and training objective on the social biases in the learned representations of ViTs. Our findings indicate that counterfactual augmentation training using diffusion-based image editing can mitigate biases, but does not eliminate them. Moreover, we find that larger models are less biased than smaller models, and that models trained with joint-embedding objectives are less biased than reconstruction-based models. In addition, we observe inconsistencies in the learned social biases: to our surprise, ViTs can exhibit opposite biases when trained on the same dataset with different self-supervised objectives. Our findings give insights into the factors that contribute to the emergence of social biases and suggest that substantial fairness gains could be achieved through model design choices.
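The bias measurements follow the embedding association test paradigm of iEAT, an image analogue of WEAT. Below is a minimal sketch of a WEAT-style effect size computed over precomputed image embeddings; the function names are illustrative and not this repository's API.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): mean similarity of w to attribute set A minus attribute set B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    # Differential association of target sets X, Y with attribute sets A, B,
    # normalized by the pooled standard deviation (Cohen's d style).
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)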

Requirements

To install requirements:

pip install -r requirements.txt

Datasets and Models

We use ImageNet-1k for the counterfactual augmentation training and the iEAT dataset to measure social biases in the embeddings. To generate textual descriptions of each image, we use CLIP Interrogator. Then, we generate counterfactual descriptions using the gender term pairs of UCLA NLP and use those to generate counterfactual images with Diffusion-based Semantic Image Editing using Mask Guidance (see the HuggingFace space).
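As a rough illustration of the counterfactual description step, the sketch below swaps gendered terms in a caption using a small list of word pairs; the pair list and function name are illustrative placeholders, not the exact UCLA NLP resource.

import re

# Illustrative subset of (male, female) term pairs.
GENDER_PAIRS = [("man", "woman"), ("he", "she"), ("father", "mother"), ("boy", "girl")]

def counterfactual_caption(caption: str) -> str:
    # Swap each gendered term with its counterpart, in both directions.
    swap = {}
    for male, female in GENDER_PAIRS:
        swap[male] = female
        swap[female] = male
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, swap)) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: swap[m.group(0).lower()], caption)

print(counterfactual_caption("a photo of a man and a boy fishing"))
# -> "a photo of a woman and a girl fishing"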

We adopt HuggingFace's Transformers and Ross Wightman's timm to support a range of Vision Transformers. Models from the HuggingFace Hub are downloaded automatically in the code. You can download the MoCo-v3 checkpoint at MoCo-v3.
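For reference, the sketch below extracts image embeddings from a pretrained ViT with HuggingFace Transformers; the checkpoint name and the [CLS]-token pooling are illustrative choices, and a timm equivalent is noted in a comment.

import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

# Illustrative checkpoint; any ViT checkpoint from the HuggingFace Hub works similarly.
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = AutoModel.from_pretrained("google/vit-base-patch16-224-in21k")

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Use the [CLS] token of the last hidden state as the image embedding.
embedding = outputs.last_hidden_state[:, 0]

# timm alternative (pooled features, no classifier head):
# model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)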

Citation

@article{brinkmann2023socialbiases,
    title   = {A Multidimensional Analysis of Social Biases in Vision Transformers},
    author  = {Brinkmann, Jannik and Swoboda, Paul and Bartelt, Christian},
    journal = {arXiv preprint arXiv:2308.01948},
    year    = {2023}
}
