[Usage] Issue encountered when fine-tuning llava-v1.6-mistral using LoRA #1772

yuwang4321 opened this issue Nov 16, 2024 · 0 comments

Describe the issue

Issue:
I fine-tuned llava-v1.6-mistral-7b with LoRA. The model loads and works fine when trained for 1 epoch (epoch=1), but it produces no output at all when trained for 10 epochs (epoch=10). Has anyone encountered the same issue? How can I troubleshoot and resolve it?
Command:

deepspeed llava/train/train_mem.py \
    --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
    --deepspeed ./scripts/zero2.json \
    --model_name_or_path /root/autodl-tmp/checkpoints/llava-v1.6-mistral-7b-1114 \
    --version v1 \
    --data_path /root/autodl-tmp/code_wy/dataset/fine_tuned_llava/MMQA_finetuned_data.jsonl \
    --image_folder /root/autodl-tmp/final_dataset_images \
    --vision_tower /root/autodl-tmp/checkpoints/clip-vit-large-patch14-336 \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --group_by_modality_length True \
    --bf16 True \
    --output_dir /root/autodl-tmp/checkpoints/llava-v1.6-mistral-7b-hf-task-lora \
    --num_train_epochs 10 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 8 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50000 \
    --save_total_limit 1 \
    --learning_rate 2e-4 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to wandb

python scripts/merge_lora_weights.py --model-path "/root/autodl-tmp/checkpoints/llava-v1.6-mistral-7b-hf-task-lora" \
       --model-base "/root/autodl-tmp/checkpoints/llava-v1.6-mistral-7b-1114" \
       --save-model-path "/root/autodl-tmp/checkpoints/llava-v1.6-mistral-7b-hf-merged"
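
For context, a minimal script to test the merged checkpoint with LLaVA's stock eval helper would look roughly like this (this is a sketch, not my exact inference script; the prompt and image file are placeholders):

```python
# Sketch only, not my exact inference script: the query and image_file below
# are placeholders; the checkpoint path matches the merge command above.
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

model_path = "/root/autodl-tmp/checkpoints/llava-v1.6-mistral-7b-hf-merged"

args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,  # weights are already merged, so no base checkpoint
    "model_name": get_model_name_from_path(model_path),
    "query": "Describe the image.",  # placeholder prompt
    "conv_mode": None,
    "image_file": "example.jpg",     # placeholder image
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)  # prints the generated answer to stdout
```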

Log:

/root/miniconda3/envs/llava/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:392: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
  warnings.warn(
/root/miniconda3/envs/llava/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:397: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `None` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.
  warnings.warn(
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
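
The first two warnings just reflect greedy decoding with `temperature=0` / `top_p=None` still set, and the last one comes from `generate` being called without an explicit `attention_mask` / `pad_token_id`. In case it matters for debugging, the generate call can be adjusted roughly as below to silence them (a sketch only: `tokenizer`, `model`, `input_ids`, and the image tensors are whatever the inference script already builds and are not defined here):

```python
# Sketch of a generate call with the warnings addressed. `tokenizer`, `model`,
# and `input_ids` are assumed to come from the existing inference code; the
# LLaVA-specific image arguments are unchanged and therefore elided.
import torch

attention_mask = torch.ones_like(input_ids)  # single unpadded prompt

output_ids = model.generate(
    input_ids,
    attention_mask=attention_mask,        # addresses the attention-mask warning
    pad_token_id=tokenizer.eos_token_id,  # addresses the pad_token_id warning
    do_sample=False,                      # greedy decoding; no temperature/top_p
    max_new_tokens=512,
    # images=..., image_sizes=...         # passed exactly as before
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```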

Screenshots:

epoch=10
![screenshot-20241116-110518](https://github.com/user-attachments/assets/865ab513-e54d-4f11-bfff-deed11a028ef)

epoch=1
![screenshot-20241116-110552](https://github.com/user-attachments/assets/73a805aa-1108-49fc-876c-92dbdbb97853)
