AV-OOD-GZSL

This is the official implementation of our paper: Audio-Visual Out-Of-Distribution for Generalized Zero-shot Learning, which has been accepted to the 35th British Machine Vision Conference (BMVC 2024).


Requirements

Install the required packages using the following command:

    conda env create -f AVOOD_env.yml
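
Once the environment is created, activate it before running any of the commands below. The environment name used here is an assumption: it is whatever the `name:` field in AVOOD_env.yml specifies, so substitute that value if it differs.

    # Activate the conda environment created from AVOOD_env.yml.
    # "AVOOD" is an assumed name; use the value of the "name:" field in the YAML file.
    conda activate AVOOD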

Downloading the Datasets

We adopt the same datasets as AVCA-GZSL; see the AVCA-GZSL repository for the download links.

The unzipped files should be placed in the avgzsl_benchmark_datasets/ folder in the root directory of the project.
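
As a rough guide, one possible layout after unzipping is sketched below. The subfolder names are assumptions inferred from the three config files (ucf, activity, vgg); keep whatever names the downloaded archives actually use and the configs expect.

    avgzsl_benchmark_datasets/
    ├── UCF/          # UCF-GZSL features (assumed name)
    ├── ActivityNet/  # ActivityNet-GZSL features (assumed name)
    └── VGGSound/     # VGGSound-GZSL features (assumed name)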

Training and Testing

To train and test the model, run the command for the corresponding dataset:

    python main.py config/ucf_test.yaml
    python main.py config/activity_test.yaml
    python main.py config/vgg_test.yaml

Alternatively, modify the run_avood.sh script as needed and run it.
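
The contents of run_avood.sh are not reproduced here; the sketch below shows one minimal form such a script could take, assuming it simply runs the three configurations in sequence. Adapt it to the script actually shipped with the repository.

    #!/usr/bin/env bash
    # Hypothetical sketch of run_avood.sh: run each dataset configuration in turn.
    set -e
    for cfg in config/ucf_test.yaml config/activity_test.yaml config/vgg_test.yaml; do
        echo "Running ${cfg}"
        python main.py "${cfg}"
    done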

References

If you find this code useful, please consider citing our paper:

    @inproceedings{wen2024bmvc,
      title={Audio-Visual Out-Of-Distribution for Generalized Zero-shot Learning},
      author={Wen, Liuyuan},
      booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
      year={2024}
    }