k-shot consistency C score is different from what is mentioned in the paper #5
Comments
Hi, thanks for your interest in our work.
Thanks for your reply. I used the exact command that you specified in the README file, only without --cuda (it shouldn't have any effect on the result, right?), but the Entl_b is not close to 0.2.
Yes, it shouldn't affect the result.
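(For context, here is a minimal sketch of how a --cuda flag is usually wired in PyTorch training scripts; the flag name, model, and data below are placeholders, not this repository's code. Such a flag only selects the device, so the reported metrics should match CPU runs up to floating-point nondeterminism.)

```python
# Minimal sketch (not this repository's code): a --cuda flag typically only
# selects the device; the computation, and therefore the reported metrics,
# is the same on CPU and GPU up to floating-point nondeterminism.
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--cuda", action="store_true", help="run on GPU if available")
args = parser.parse_args()

device = torch.device("cuda" if args.cuda and torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(10, 2).to(device)   # placeholder model
batch = torch.randn(4, 10).to(device)       # placeholder batch
logits = model(batch)                       # identical math on either device
print(logits.shape)
```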
The metrics logged during training were: epoch, loss, Peplexity, Entl_b, Bleu_b. Thanks again for your time.
I checked other baselines and the results were quite strange! For both runs the logged columns were the same: epoch, loss, Peplexity, Entl_b, Bleu_b. Also, I didn't change the code; these results were obtained by running the exact code in your repo.
Hello, what is the full name of "Entl_b"? Why not just name it "c_score" in the code, as in the paper?
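(For readers with the same question: "Entl_b" presumably refers to an entailment-based score on the decoded outputs, i.e. the NLI-based consistency (C) score. The sketch below shows one common way such a score is computed; the label mapping and the nli_predict helper are illustrative assumptions, not taken from this repository.)

```python
# Hypothetical sketch of an NLI-based consistency (C) score. The label
# mapping and the nli_predict() helper are illustrative assumptions,
# not this repository's implementation.
def consistency_score(condition_sentences, responses, nli_predict):
    """Average entailment score over all (condition, response) pairs,
    where condition_sentences is e.g. the persona/profile text."""
    label_to_score = {"entailment": 1, "neutral": 0, "contradiction": -1}
    scores = [
        label_to_score[nli_predict(premise, response)]
        for response in responses
        for premise in condition_sentences
    ]
    return sum(scores) / len(scores) if scores else 0.0
```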
I also cannot reproduce the scores in the paper based on the code in this GitHub repository.
First, I would like to thank you for your contribution.
I trained your model exactly as described in the documentation, but I got Entl_b = 0.0879 in the printed results. I checked and found that this is the C score (am I right?). The problem is that in the paper the C score is reported as 0.2.
By the way, the Entl_b that I mentioned above was for the checkpoint with a loss of 46.5833 (the last checkpoint).
Thanks in advance