
Why does training with ELMo instead of BERT not improve results? #17

Open

DbrRoxane opened this issue Jun 19, 2019 · 0 comments
DbrRoxane commented Jun 19, 2019

Hello,

I have tried to use ELMo instead of BERT, as you can see on my fork.
The training works, but the results are very similar to training without any contextual embeddings (just GloVe).
Do you have any idea why, or how to fix it?
I think I might have forgotten something in my code...

Moreover, I notice that x_cemb and ques_cemb are never instantiated; they are always None. Could this be part of the issue?
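
For reference, here is a minimal self-contained sketch of how the ELMo vectors could be produced and checked (this assumes AllenNLP's `Elmo` module and its standard small pretrained files; `x_cemb`/`ques_cemb` follow this repo's naming, everything else is illustrative):

```python
# Sanity check that ELMo vectors are actually produced and concatenated
# with GloVe, assuming allennlp's Elmo module. The names x_cemb and
# ques_cemb mirror this repo; everything else here is illustrative.
import torch
from allennlp.modules.elmo import Elmo, batch_to_ids

# Small pretrained ELMo published by AllenNLP (an assumption; use the
# option/weight files your fork was actually trained with).
OPTIONS = ("https://allennlp.s3.amazonaws.com/elmo/2x1024_128_2048cnn_1xhighway/"
           "elmo_2x1024_128_2048cnn_1xhighway_options.json")
WEIGHTS = ("https://allennlp.s3.amazonaws.com/elmo/2x1024_128_2048cnn_1xhighway/"
           "elmo_2x1024_128_2048cnn_1xhighway_weights.hdf5")
elmo = Elmo(OPTIONS, WEIGHTS, num_output_representations=1, dropout=0.0)

context = [["The", "cat", "sat", "on", "the", "mat", "."]]
question = [["Where", "did", "the", "cat", "sit", "?"]]

# batch_to_ids turns token lists into ELMo character ids.
x_cemb = elmo(batch_to_ids(context))["elmo_representations"][0]
ques_cemb = elmo(batch_to_ids(question))["elmo_representations"][0]
assert x_cemb is not None and ques_cemb is not None
print(x_cemb.shape)  # (1, 7, 256) for this small model

# If x_cemb stays None, the model silently falls back to GloVe alone;
# the ELMo vectors should be concatenated onto the word embeddings
# before the first encoder layer:
x_glove = torch.zeros(1, 7, 300)  # stand-in for the real GloVe lookup
x_input = torch.cat([x_glove, x_cemb], dim=-1)  # (1, 7, 300 + 256)
```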

Thanks in advance

DbrRoxane changed the title from "x_cemb and x_cemb_mid" to "Why does training with ELMo instead of BERT not improve results?" on Jun 20, 2019