Preprocessed units for the segments #4
Hello!
I'm currently running some experiments on NMSQA, and the code for DUAL provided here is really helpful. However, I ran into some difficulty building the units with the provided scripts to reproduce the results. In particular, I'm trying to extract the units for each segment of the context, while the preprocessed ones currently provided in the repo are already concatenated for each article following the standard QA scheme (using merge_passage.py, I guess). May I know if the preprocessed units for each segment could be provided?
Thank you!
P.S. I just saw you and we had a chat at your poster at Interspeech. The work is really impressive and useful for us :)

Comments

Hi @mutiann! Thanks for the question. Here are the segment HuBERT units, taken from the 22nd layer of HuBERT-large with 128 clusters.
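For reference, extracting units in this setup could look roughly like the sketch below. It is only an illustration under assumptions: the HuggingFace transformers HubertModel checkpoint `facebook/hubert-large-ll60k`, torchaudio for audio loading, and a pre-fitted 128-cluster scikit-learn k-means model (`km_128.bin` is a hypothetical path) stand in for whatever checkpoint and quantizer the repo's own extraction scripts actually use.

```python
# Minimal sketch: discrete HuBERT units for one audio segment.
# Assumptions (not from this repo): HuggingFace `transformers` HubertModel,
# torchaudio for loading, and a pre-fitted scikit-learn k-means model with
# 128 clusters standing in for the quantizer actually used by DUAL.
import torch
import torchaudio
import joblib  # hypothetical: a pre-fitted sklearn KMeans saved with joblib
from transformers import HubertModel

model = HubertModel.from_pretrained("facebook/hubert-large-ll60k")
model.eval()

kmeans = joblib.load("km_128.bin")  # hypothetical 128-cluster k-means model

def segment_units(wav_path: str) -> list[int]:
    wav, sr = torchaudio.load(wav_path)  # (channels, samples)
    wav = torchaudio.functional.resample(wav, sr, 16000).mean(dim=0, keepdim=True)
    # hubert-large-ll60k expects zero-mean, unit-variance input
    wav = (wav - wav.mean()) / (wav.std() + 1e-7)
    with torch.no_grad():
        out = model(wav, output_hidden_states=True)
    # hidden_states[0] is the conv feature projection, so index 22 is the
    # 22nd transformer layer; shape (frames, 1024) for HuBERT-large
    feats = out.hidden_states[22].squeeze(0)
    units = kmeans.predict(feats.numpy())  # frame-level cluster IDs in [0, 127]
    return units.tolist()

# e.g. units for a single segment file such as context-0_0_1.wav
# print(segment_units("context-0_0_1.wav"))
```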
Thank you very much! Let me have a look at them.
Thank you very much! Actually, I am looking for the HuBERT units for each segment (e.g., context-0_0_1, context-0_0_2, ...), while it seems that the units provided above and in the README are for each paragraph, as in standard SQuAD (e.g., context-0_0, context-0_1, context-0_2, ...), i.e. merged from the per-segment units. May I know if the units for each segment could be provided? By the way, out of curiosity, have you tried any experiment that works on each segment instead of taking a view over the whole paragraph? (Or do you have any numbers for the performance of DUAL with the view only on each segment?) Thanks in advance!
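For context, the paragraph-level units discussed above appear to be concatenations of the per-segment unit sequences. The sketch below illustrates such a merge under assumed file naming (one plain-text unit file per segment, e.g. context-0_0_1.txt); it is not the repo's actual merge_passage.py.

```python
# Hypothetical illustration of merging per-segment unit sequences
# (context-0_0_1, context-0_0_2, ...) into one paragraph-level sequence
# (context-0_0). The file layout is an assumption, not the repo's
# actual merge_passage.py logic.
from collections import defaultdict
from pathlib import Path

def merge_segments(unit_dir: str) -> dict[str, list[int]]:
    """Concatenate per-segment unit sequences into paragraph-level ones."""
    segments = defaultdict(list)  # paragraph id -> [(segment index, units), ...]
    for path in sorted(Path(unit_dir).glob("context-*_*_*.txt")):
        # e.g. "context-0_0_2" -> paragraph "context-0_0", segment index 2
        para_id, seg_idx = path.stem.rsplit("_", 1)
        units = [int(u) for u in path.read_text().split()]
        segments[para_id].append((int(seg_idx), units))
    # merge each paragraph's segments in segment order
    return {
        para_id: [u for _, seg_units in sorted(segs) for u in seg_units]
        for para_id, segs in segments.items()
    }

# merged = merge_segments("nmsqa_units/")  # hypothetical directory of unit files
# merged["context-0_0"] would then be the concatenation of the units of
# context-0_0_1, context-0_0_2, ...
```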