
difference between just classification and pretraining+fine-tuning #66

Open
harrykwon0524 opened this issue Dec 5, 2023 · 1 comment

Comments

@harrykwon0524

Hello, I wanted to work on the pretraining+fine-tuning part and had some questions along the way:

1. I got a warning and an error like these:

   [screenshots: warning and error messages]

   I did not get any errors on the baseline datasets used for pretraining. Is there any way I could fix this?

2. When I perform just classification versus pretraining+fine-tuning, the dataset input sizes come out different. For instance, performing just classification on SpokenArabicDigits gives the shapes shown below (a shape-check sketch follows this comment):

   [screenshots: just_arabic and pretraining_arabic shapes]

   The two pictures show different sample sizes being fed in for the different tasks. What is the logic behind this difference?

It would be greatly appreciated if you could answer these questions. Thank you!
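A minimal sketch for reproducing the sample-count question independently of this repository, assuming sktime is installed (`load_UCR_UEA_dataset` is sktime's loader, not part of this codebase): it prints the raw train/test instance counts of SpokenArabicDigits, which can then be compared against what each pipeline reports.

```python
# Minimal sketch (assumption: sktime is installed; load_UCR_UEA_dataset is
# sktime's UEA/UCR loader, not part of this repository). It prints the raw
# sample counts of SpokenArabicDigits for each official split.
from sktime.datasets import load_UCR_UEA_dataset

# Load the official train and test splits of the UEA SpokenArabicDigits dataset.
X_train, y_train = load_UCR_UEA_dataset("SpokenArabicDigits", split="train", return_X_y=True)
X_test, y_test = load_UCR_UEA_dataset("SpokenArabicDigits", split="test", return_X_y=True)

# X is a nested pandas DataFrame of shape (n_instances, n_channels);
# individual series lengths vary per instance for this dataset.
print("train:", X_train.shape, "test:", X_test.shape)
print("total instances:", X_train.shape[0] + X_test.shape[0])
```

If, for example, the pretraining run's sample count matches train+test combined while the classification run matches the train split alone, that alone would explain the different input sizes; checking the raw counts this way separates a data-split difference from a model-code difference.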
@seay-12

seay-12 commented Mar 28, 2024

Hello! I am a computer science student and would like to study this code, but the classification dataset released by the author can no longer be opened. If it is convenient for you, could you add my contact details and send me a packaged copy? Thank you very much.
