After fine-tuning, the model outputs repetitive phrases #89
Can you share the inference script you used to run inference with the fine-tuned LoRA weights?
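For context, a minimal sketch of one common way to run inference with LoRA weights: attach the adapter to the base checkpoint with Hugging Face `peft` and merge it in. The paths below are placeholders, and VideoLLaMA2 ships its own loading utilities, so treat this only as an illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_PATH = "path/to/videollama2-7b-base"    # placeholder: base checkpoint
LORA_PATH = "path/to/lora-finetune-output"   # placeholder: LoRA adapter dir

tokenizer = AutoTokenizer.from_pretrained(BASE_PATH)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_PATH, torch_dtype=torch.float16, device_map="auto"
)

# Attach the LoRA adapter, then merge it into the base weights so that
# inference no longer needs the peft wrapper.
model = PeftModel.from_pretrained(base_model, LORA_PATH)
model = model.merge_and_unload()
model.eval()
```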
I am performing LoRA fine-tuning based on videollama2-7b, and the script is as follows (only the parts I changed are shown; the fallback assignment inside the `if` block is inferred from the surrounding variable names):

```bash
#!/bin/bash

# Environment Variables
ARG_WORLD_SIZE=${1:-1}

# Multiple conditions
if [ ! -n "$WORLD_SIZE" ] || [ ! -n "$NPROC_PER_NODE" ]; then
    WORLD_SIZE=$ARG_WORLD_SIZE    # fall back to the positional arg when unset
fi
echo "WORLD_SIZE: $WORLD_SIZE"

# Training Arguments
GLOBAL_BATCH_SIZE=8

# Log Arguments
export TRANSFORMERS_OFFLINE=1
```
I also have this problem. Did you solve it?
Hello, I'm a PhD student at ZJU. I also use VideoLLaMA2 in my own research. We created a WeChat group to discuss VideoLLaMA2 issues and help each other; would you like to join us? Please contact me: WeChat number LiangMeng19357260600, phone number +86 19357260600, e-mail [email protected].
Thanks for your great work.
I am trying to fine-tune the VideoLLaMA2 model with my own data. However, after fine-tuning, the model repeatedly outputs the same content. Could you help me solve this issue?
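Repetition after fine-tuning often points to over-fitting on a small dataset or a prompt-template mismatch, but as a quick decoding-side check you can also penalize repeats at generation time. Below is a minimal sketch using standard Hugging Face `generate()` arguments; `model` and `tokenizer` stand for the fine-tuned model and its tokenizer, and the prompt is a placeholder:

```python
import torch

# Assumes `model` and `tokenizer` are already loaded (e.g., via the
# LoRA-loading sketch above). The prompt below is a placeholder.
inputs = tokenizer("Describe the video.", return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.2,   # >1.0 down-weights tokens already generated
        no_repeat_ngram_size=3,   # forbids repeating any 3-gram verbatim
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

If the output is still degenerate with these settings, the cause is more likely in the training data or the prompt template used at inference time than in the decoding parameters.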