Can you provide a script or an explanation to reproduce the scores in your paper? #5
Also, if you can specify the parameters for the other dataset scores, it would be very helpful. For example: BC5CDR, DDI, DrugProt, and others.
Hi @dongheechoi , The score that you display is a typed score, and is located in 'out_biorex_results.txt'. To evaluate the predictions of our models, you can consider our latest leaderboard (https://codalab.lisn.upsaclay.fr/competitions/16381). For the different dataset + BioRED experiments, I used the same parameters.
I have used https://ftp.ncbi.nlm.nih.gov/pub/lu/BioREx/datasets.zip for the BioREx dataset.
Hi @dongheechoi , I wanted to clarify a few points regarding BioREx.
In the paper (https://arxiv.org/abs/2306.11189), you reported the scores below. Can you kindly provide a way to reproduce them?
For example, with the model you provided in the repo BioREx PubMedBERT model (Original) and BioREx BioLinkBERT model (Preferred), what score can I get? And how can I get the score?
When I run the BioREx PubMedBERT model (Original) using the code you suggest,
bash scripts/run_test_pred.sh
I got
Overall 966 652 263 314 0.7125683060109289 0.6749482401656315 0.6932482721956407
in the file specified by the "out_result_file" parameter.
I think the last three numbers are precision, recall, and F1 score, but then I am not sure how I can get the 79.6 reported in your paper (BioRED+8 datasets) in this case.
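For what it's worth, the integer columns in that "Overall" line are consistent with being (gold count, TP, FP, FN) followed by precision, recall, and F1 — that column interpretation is my assumption, not something documented in the repo. A quick sanity check:

```python
# Assumed interpretation of the "Overall" output line (not documented):
#   gold  tp  fp  fn  precision  recall  f1
gold, tp, fp, fn = 966, 652, 263, 314

precision = tp / (tp + fp)   # 652 / 915
recall = tp / (tp + fn)      # 652 / 966; note gold == tp + fn
f1 = 2 * precision * recall / (precision + recall)

print(precision, recall, f1)
```

The computed values match the three floats in the output line, which supports reading them as precision, recall, and F1.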
If I misunderstood something, please let me know.
And again, if you could provide the specific parameters to reproduce the scores in the paper (including the baseline approaches like TL (transfer learning) or MTL (multi-task learning)), it would be a great help for me as well.