To install all required dependencies, run:

    pip install -r requirements.txt
To obtain the fine-tuned bart-large model and the required tokenizer, download them from:
https://drive.google.com/drive/folders/1JxGfoBGCEiFpzlP12u2hwoa17WOzVGQx?usp=sharing
and put them in the fine-tuning folder.
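Below is a minimal sketch of loading the downloaded checkpoint with Hugging Face transformers, assuming the files land directly in a fine-tuning directory (the exact path and subfolder layout are assumptions):

    # Sketch: load the fine-tuned bart-large model and tokenizer from disk.
    # The directory name is an assumption; adjust it to match where the
    # Google Drive download was extracted.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    model_dir = "fine-tuning"
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSequenceClassification.from_pretrained(model_dir)
    model.eval()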
Includes all the data used for training, testing, and fine-tuning the models.
Includes all the data output by the models for analysis and results.
These can be used to run the code without having to re-run the models each time.
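For example, a saved results file can be loaded directly (the path below is a placeholder, not the repo's actual file name):

    # Sketch: reuse saved model outputs instead of re-running inference.
    import pandas as pd

    results = pd.read_csv("outputs/bart_large_results.csv")  # placeholder path
    print(results.head())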
Contains the three different models (see the sketch after this list):
- Bart-large
- Deberta-v3
- Deberta-base
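A minimal sketch of instantiating all three as zero-shot NLI classifiers; the public checkpoint names below are assumptions and may differ from the ones actually used here:

    # Sketch: instantiate the three NLI models via zero-shot pipelines.
    # The checkpoint names are assumptions; substitute the repo's actual ones.
    from transformers import pipeline

    checkpoints = {
        "bart-large": "facebook/bart-large-mnli",
        "deberta-v3": "cross-encoder/nli-deberta-v3-base",
        "deberta-base": "cross-encoder/nli-deberta-base",
    }
    classifiers = {name: pipeline("zero-shot-classification", model=ckpt)
                   for name, ckpt in checkpoints.items()}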
Contains helper functions used by the models.
Includes the initial testing of the three models:
- Bart-large
- Deberta-v3
- Deberta-base
This includes using rationales as hypotheses, experimenting with different hypotheses, and investigating neutral cases.
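A hedged sketch of the premise/hypothesis setup this describes, using a generic MNLI-style checkpoint (an assumption; label order varies between checkpoints, so it is read from the model config):

    # Sketch: score a premise against a custom hypothesis with an NLI model.
    # The checkpoint is an assumption; the fine-tuned weights downloaded
    # above can be substituted via their local path.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    ckpt = "facebook/bart-large-mnli"
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForSequenceClassification.from_pretrained(ckpt)

    premise = "The article describes a religious festival."  # example text
    hypothesis = "This text is about religion."              # custom hypothesis

    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze()
    for idx, label in model.config.id2label.items():
        print(f"{label}: {probs[idx]:.3f}")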
Generates CSV outputs of the results for the bart-large and deberta-v3 models using custom-curated hypotheses.
Also runs fine-tuning of the bart-large model using manually annotated religion data.
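A minimal sketch of what such a fine-tuning run could look like with the Hugging Face Trainer; the column names, label mapping, and hyperparameters are all assumptions:

    # Sketch: fine-tune an NLI model on manually annotated premise/hypothesis
    # pairs. Data, label mapping, and hyperparameters are assumptions.
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    ckpt = "facebook/bart-large-mnli"  # assumed base checkpoint
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForSequenceClassification.from_pretrained(ckpt)

    # Hypothetical annotated example; label 2 is "entailment" for this
    # checkpoint's label mapping (verify against model.config.id2label).
    data = Dataset.from_dict({
        "premise": ["The festival is celebrated at the mosque."],
        "hypothesis": ["This text is about religion."],
        "label": [2],
    })

    def tokenize(batch):
        return tokenizer(batch["premise"], batch["hypothesis"],
                         truncation=True, padding="max_length", max_length=128)

    train_ds = data.map(tokenize, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="fine-tuning",
                               num_train_epochs=3,
                               per_device_train_batch_size=8),
        train_dataset=train_ds,
    )
    trainer.train()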
Contains miscellaneous analysis, such as the frequency distribution of hypothesis lengths in the NLI training data.
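For instance, the length distribution can be reproduced along these lines (file and column names are assumptions):

    # Sketch: frequency distribution of hypothesis lengths (in whitespace
    # tokens) in NLI training data. File and column names are assumptions.
    import pandas as pd

    train = pd.read_csv("data/nli_train.csv")            # placeholder path
    lengths = train["hypothesis"].str.split().str.len()  # tokens per hypothesis
    print(lengths.value_counts().sort_index())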
Contains all F1 scores, AUC scores, graphs, and accuracy breakdowns for the test and training results.
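As a sketch, these metrics can be recomputed from saved predictions with scikit-learn (the file and column names are assumptions about the output format):

    # Sketch: compute accuracy, F1, and AUC from saved predictions.
    # The CSV layout below is an assumption, not the repo's actual schema.
    import pandas as pd
    from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

    df = pd.read_csv("outputs/bart_large_results.csv")  # placeholder path
    y_true, y_pred = df["label"], df["prediction"]

    print("accuracy:", accuracy_score(y_true, y_pred))
    print("macro F1:", f1_score(y_true, y_pred, average="macro"))
    # AUC assumes a binary task with a predicted probability column.
    print("AUC:", roc_auc_score(y_true, df["entailment_prob"]))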