
PEFT Fine-tune of DistilBERT on Pubmed

This is a fine-tune of DistilBERT on the PubMed dataset. The model was fine-tuned on an L4 GPU.

The pubmed-torch version is a full fine-tune on the PubMed dataset. The LoRA version is only an adapter fine-tuned on the same data; following the suggestion in the original LoRA paper, only the query and value weights of the attention layers are LoRA-tuned.
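A minimal sketch of what such an adapter setup could look like with the peft library: the q_lin/v_lin names follow DistilBERT's attention module naming, and the rank, alpha, and dropout values are illustrative rather than the repo's exact settings.

```python
# Illustrative LoRA adapter setup: query/value projections only, as suggested
# in the original LoRA paper. Hyperparameters here are placeholders.
from transformers import AutoModelForMaskedLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

lora_config = LoraConfig(
    r=8,                                 # illustrative rank
    lora_alpha=16,                       # illustrative scaling factor
    target_modules=["q_lin", "v_lin"],   # DistilBERT query and value projections
    lora_dropout=0.1,
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()       # only the adapter weights are trainable
```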

In Progress

  • Fixing the summarizer module to support the LoRA-adapted model.

Completed

  • Fully fine-tuned model
  • LoRA fine-tuned model
  • Training and evaluation scripts
  • Model cards - auto-pushed via training callbacks
  • TF version of the model
  • PEFT early stopping

Future TODOs

  • Add more datasets
  • Add more models
  • Experiment with parameter sweeps over LoRA rank and alpha values
  • Experiment with LoRA-tuning different layers

Model Cards

The TF version uses TF-specific training and evaluation modules. The torch version uses the generic Trainer and TrainingArguments from transformers; a rough sketch of that setup is shown below.
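The sketch below assumes a masked-language-modeling objective; the data file, column names, and hyperparameters are placeholders, not the repo's actual configuration.

```python
# Rough sketch of the torch-side training loop with the generic Trainer /
# TrainingArguments API. Dataset and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

# Placeholder corpus: a plain-text file standing in for PubMed abstracts.
dataset = load_dataset("text", data_files={"train": "pubmed_abstracts.txt"})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

training_args = TrainingArguments(
    output_dir="bert-pubmed-base",
    per_device_train_batch_size=16,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```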
