(gh_qlora-chinese-LLM) ub2004@ub2004-B85M-A0:~/llm_dev/qlora-chinese-LLM$ python3 qlora.py --model_name="chatglm" --model_name_or_path="/data-ssd-1t/hf_model/chatglm-6b" --trust_remote_code=True --dataset="msra" --source_max_len=128 --target_max_len=64 --do_train --save_total_limit=1 --padding_side="left" --per_device_train_batch_size=8 --do_eval --bits=4 --save_steps=10 --gradient_accumulation_steps=1 --learning_rate=1e-5 --output_dir="./output/chatglm-6b/" --lora_r=8 --lora_alpha=32
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
bin /home/ub2004/anaconda3/envs/gh_qlora-chinese-LLM/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/home/ub2004/anaconda3/envs/gh_qlora-chinese-LLM/lib/python3.10/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
/home/ub2004/anaconda3/envs/gh_qlora-chinese-LLM/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
CUDA SETUP: Loading binary /home/ub2004/anaconda3/envs/gh_qlora-chinese-LLM/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
loading base model /data-ssd-1t/hf_model/chatglm-6b...
Traceback (most recent call last):
File "/home/ub2004/llm_dev/qlora-chinese-LLM/qlora.py", line 1011, in
train()
File "/home/ub2004/llm_dev/qlora-chinese-LLM/qlora.py", line 836, in train
model = get_accelerate_model(args, checkpoint_dir)
File "/home/ub2004/llm_dev/qlora-chinese-LLM/qlora.py", line 375, in get_accelerate_model
model = model_class[args.model_name].from_pretrained(
File "/home/ub2004/llm_dev/qlora-chinese-LLM/transformers/src/transformers/models/auto/auto_factory.py", line 479, in from_pretrained
return model_class.from_pretrained(
File "/home/ub2004/llm_dev/qlora-chinese-LLM/transformers/src/transformers/modeling_utils.py", line 2819, in from_pretrained
raise ValueError(
ValueError:
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you want to dispatch the model on the CPU or the disk while keeping
these modules in 32-bit, you need to set `load_in_8bit_fp32_cpu_offload=True` and pass a custom
`device_map` to `from_pretrained`. Check
https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu
for more details.
(gh_qlora-chinese-LLM) ub2004@ub2004-B85M-A0:~/llm_dev/qlora-chinese-LLM$
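For context: the ValueError is raised because `from_pretrained` (via accelerate) wants to place some modules on the CPU or disk while loading the quantized model, and the log above also shows bitsandbytes loaded its CPU-only binary ("compiled without GPU support"), so little or nothing can actually sit on the GPU. The remedy the error message points to looks roughly like the sketch below. This is only a sketch following the linked quantization docs, not code from qlora.py: it assumes a transformers version that exposes `BitsAndBytesConfig` with `llm_int8_enable_fp32_cpu_offload`, and the module names in the `device_map` are illustrative and depend on the checkpoint.

```python
# Sketch of the remedy suggested by the ValueError (see the linked quantization docs).
# Assumptions: transformers provides BitsAndBytesConfig with llm_int8_enable_fp32_cpu_offload;
# the checkpoint path and the device_map module names below are illustrative.
from transformers import AutoModel, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,  # keep CPU-offloaded modules in fp32
)

# Custom device_map: keep what fits on GPU 0 and explicitly send the rest to the CPU.
device_map = {
    "transformer.word_embeddings": 0,
    "transformer.layers": 0,
    "transformer.final_layernorm": "cpu",
    "lm_head": "cpu",
}

model = AutoModel.from_pretrained(
    "/data-ssd-1t/hf_model/chatglm-6b",
    quantization_config=bnb_config,
    device_map=device_map,
    trust_remote_code=True,
)
```

Note that the error text refers to the 8-bit path (`load_in_8bit_fp32_cpu_offload`), while the command above runs with `--bits=4`; whether CPU offload applies equally to the 4-bit setting is a separate question.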