
Fine-tune Question-Answering model on our own data #622

Open
3 tasks
FrancescoCasalegno opened this issue Aug 31, 2022 · 0 comments
Labels
↩️ question-answering Attribute values extraction using QA models


Context

Actions

  • Fine-tune the best-performing QA model(s) on our own QA dataset, using k-fold cross-validation.
  • Also investigate results when the holdout (validation) split is built by removing all samples from one source (e.g. WvG, PS, HM, ...) and training on the remaining sources (see the split sketch after this list).
  • If our results beat the baseline, also compute training curves (i.e. progressively increase the training set size and track accuracy).
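
A minimal sketch of how the two kinds of validation splits and the learning-curve subsets could be built. This assumes each annotated sample carries a "source" field identifying its origin (WvG, PS, HM, ...) and uses scikit-learn's splitters; both the field name and the scikit-learn approach are assumptions for illustration, not something decided in this issue.

```python
"""Sketch of the evaluation splits described above (assumptions noted inline)."""
import numpy as np
from sklearn.model_selection import KFold, LeaveOneGroupOut


def kfold_splits(samples, n_splits=5, seed=0):
    """Standard k-fold cross-validation indices over the whole dataset."""
    kfold = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, valid_idx in kfold.split(samples):
        yield train_idx, valid_idx


def leave_one_source_out_splits(samples):
    """Holdout splits where the validation set is exactly one source
    (e.g. WvG) and training uses all the remaining sources.
    Assumes each sample is a dict with a "source" key."""
    groups = np.array([s["source"] for s in samples])
    logo = LeaveOneGroupOut()
    for train_idx, valid_idx in logo.split(samples, groups=groups):
        held_out_source = groups[valid_idx][0]
        yield held_out_source, train_idx, valid_idx


def learning_curve_subsets(train_idx, fractions=(0.25, 0.5, 0.75, 1.0), seed=0):
    """Nested training subsets of increasing size, for the training curves."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(train_idx)
    for frac in fractions:
        yield frac, shuffled[: max(1, int(frac * len(shuffled)))]
```

Each (train_idx, valid_idx) pair would then feed one fine-tuning run of the selected QA model; the learning-curve subsets reuse the same validation fold so that accuracy numbers remain comparable across training-set sizes.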

Dependencies
