CPU/QLoRA-FineTuning #9406
Comments
Do we need to
I think it's ok, I'll add it to the readme file.
Maybe you can try
Thanks for your reply. In fact, the python command never worked for me; it only works using llm-convert, llm-cli, etc.
Please check your conda env based on https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/QLoRA-FineTuning .
I have followed the configuration since the beginning.
Hi @ernleite, what do you mean by "the python command never works"? Have you tried taskset -c 0-27 to use more cores? Could you please share the commands you use to run this QLoRA fine-tuning? We will try to check and reproduce it.
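For reference, pinning the fine-tuning process to a range of logical cores with taskset would look like this (the script name is illustrative; adjust the core range to your machine):

```shell
# Restrict the process to logical cores 0-27 so it can use up to 28 cores.
# qlora_finetuning_cpu.py is a placeholder for your actual training script.
taskset -c 0-27 python qlora_finetuning_cpu.py
```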
@glorysdj here is my configuration. I would be so happy if this could work; here is an unresolved issue I explained a few weeks ago: [https://github.com//issues/8936]. Thanks!
@ernleite - a quick question: are you able to run bigdl-llm using these python commands on your local PC (either Windows or Linux)?
I have a laptop running on Windows 11. Let me try. I will let you know.
@jason-dai I used my laptop. I have two GPUs in it but was not able to use my Intel Iris Xe with 16GB. I tried many configurations, but the QLoRA GPU version does not work. Are we sure it works with Python 3.9? So my question is: does the GPU version work on Windows? Thanks again.
@ernleite Do you have a GPU on your machine? I tried to reproduce the issue and found that after converting the current model to sym_int4 format, the fine-tuning program ran on the GPU. So you can try disabling the GPU when you fine-tune on CPU, and make sure you use the CPU version of the bigdl-llm package. Hope this helps.
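A minimal sketch of a CPU-only setup, assuming an NVIDIA GPU is what gets picked up (Intel GPUs are selected through a different mechanism, so the environment variable below may not apply to them):

```shell
# Install the CPU build of bigdl-llm (the exact extras may differ for your setup).
pip install --pre --upgrade "bigdl-llm[all]"
# Hide NVIDIA GPUs from the process so frameworks fall back to CPU.
export CUDA_VISIBLE_DEVICES=""
```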
Currently it's not supported yet.
We fixed this issue (only using one core) last week; it is related to this PR. When the CPU does not support bf16, QLoRA will automatically use only one core. You can try this command.
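One way to check whether your CPU advertises bf16 support on Linux is to look for the relevant feature flags in /proc/cpuinfo (flag names vary by platform; this is a quick heuristic, not a guarantee):

```shell
# Print the first bf16-related CPU flag found, or a notice if none exists.
grep -o -m1 -E 'avx512_bf16|amx_bf16' /proc/cpuinfo || echo "no bf16 support"
```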
Wow! Amazing, thanks.
Hello

I am trying to fine-tune a Llama 2 model.
The fine-tuning process is taking a very long time, so I had to cancel it: it is using only one core on my machine (Dell R730 with 2 CPUs / 56 logical cores).
I tried `accelerate config` but it is not working.
Any idea?
Thanks !!
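As an aside, a multi-process CPU launch with accelerate (mentioned above) can be sketched as follows; the script name is a placeholder, and the process count should match your hardware:

```shell
# Force CPU execution and spread the work across 2 processes.
# qlora_finetuning_cpu.py stands in for the actual training script.
accelerate launch --cpu --num_processes 2 qlora_finetuning_cpu.py
```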