
Inference problem: TypeError: sat.model.transformer.BaseTransformer() got multiple values for keyword argument 'parallel_output' #26

Open
cccccrj opened this issue Jul 25, 2024 · 4 comments



cccccrj commented Jul 25, 2024

Hello, I encountered the following error when trying to load the model for inference: TypeError: sat.model.transformer.BaseTransformer() got multiple values for keyword argument 'parallel_output'.
I loaded the model from the command line with: python cli_demo.py --from_pretrained /root/autodl-tmp/transgpt-mm-v1/1 --prompt_zh 图中的标志表示什么含义? ("What does the sign in the image mean?")
The following is my file structure:
1
|-- gitattributes
|-- latest
|-- model_config.json
|-- mp_rank_00_model_states.pt
`-- mp_rank_00_model_states.pt.lock

I modified "THUDM/chatglm-6b" to: tokenizer = AutoTokenizer.from_pretrained(args.from_pretrained, trust_remote_code=True).

Could you please provide more information? Thanks!

The specific error information is:
[2024-07-25 23:05:06,615] [INFO] building FineTuneVisualGLMModel model ...
[2024-07-25 23:05:06,619] [INFO] [RANK 0] > initializing model parallel with size 1
[2024-07-25 23:05:06,620] [INFO] [RANK 0] You didn't pass in LOCAL_WORLD_SIZE environment variable. We use the guessed LOCAL_WORLD_SIZE=1. If this is wrong, please pass the LOCAL_WORLD_SIZE manually.
[2024-07-25 23:05:06,620] [INFO] [RANK 0] You are using model-only mode.
For torch.distributed users or loading model parallel models, set environment variables RANK, WORLD_SIZE and LOCAL_RANK.
[rank0]: Traceback (most recent call last):
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/cli_demo.py", line 105, in
[rank0]: main()
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/cli_demo.py", line 30, in main
[rank0]: model, model_args = AutoModel.from_pretrained(
[rank0]: File "/root/miniconda3/envs/transgpt/lib/python3.10/site-packages/sat/model/base_model.py", line 342, in from_pretrained
[rank0]: return cls.from_pretrained_base(name, args=args, home_path=home_path, url=url, prefix=prefix, build_only=build_only, overwrite_args=overwrite_args, **kwargs)
[rank0]: File "/root/miniconda3/envs/transgpt/lib/python3.10/site-packages/sat/model/base_model.py", line 334, in from_pretrained_base
[rank0]: model = get_model(args, model_cls, **kwargs)
[rank0]: File "/root/miniconda3/envs/transgpt/lib/python3.10/site-packages/sat/model/base_model.py", line 420, in get_model
[rank0]: model = model_cls(args, params_dtype=params_dtype, **kwargs)
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/finetune_visualglm.py", line 14, in init
[rank0]: super().init(args, transformer=transformer, **kw_args)
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/model/visualglm.py", line 34, in init
[rank0]: self.add_mixin("eva", ImageMixin(args))
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/model/visualglm.py", line 18, in init
[rank0]: self.model = BLIP2(args.eva_args, args.qformer_args)
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/model/blip2.py", line 56, in init
[rank0]: self.vit = EVAViT(EVAViT.get_args(**eva_args))
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/model/blip2.py", line 21, in init
[rank0]: super().init(args, transformer=transformer, parallel_output=parallel_output, **kwargs)
[rank0]: File "/root/miniconda3/envs/transgpt/lib/python3.10/site-packages/sat/model/official/vit_model.py", line 111, in init
[rank0]: super().init(args, transformer=transformer, **kwargs)
[rank0]: File "/root/miniconda3/envs/transgpt/lib/python3.10/site-packages/sat/model/base_model.py", line 93, in init
[rank0]: self.transformer = BaseTransformer(
[rank0]: TypeError: sat.model.transformer.BaseTransformer() got multiple values for keyword argument 'parallel_output'
[rank0]:[W725 23:05:10.281982969 ProcessGroupNCCL.cpp:1168] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
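
For context, this TypeError is Python's generic complaint when the same keyword reaches a callable twice: once as an explicit keyword argument and once inside **kwargs. A minimal sketch of the mechanism (the names here are illustrative, not sat's real signature):

    def base_transformer(num_layers, parallel_output=False, **kwargs):
        """Stand-in for sat.model.transformer.BaseTransformer."""
        pass

    extra = {"parallel_output": True}   # e.g. carried along in a config dict
    base_transformer(12, parallel_output=False, **extra)
    # TypeError: base_transformer() got multiple values for keyword argument 'parallel_output'

Judging from the traceback, parallel_output is passed explicitly in blip2.py (line 21) and also travels down through **kwargs on the way to BaseTransformer, so it arrives twice.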

@corkiyao

Has this been solved? I'm hitting the same error, even on the latest SwissArmyTransformer version.

@Hakur0uken

Has this been solved? I'm hitting the same error, even on the latest SwissArmyTransformer version.

Downgrade sat with: pip install SwissArmyTransformer==0.3.6
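(You can confirm which version is installed before and after with pip show SwissArmyTransformer. Presumably the pin helps because TransGPT's multi_modal code was written against the older sat API; newer sat releases appear to forward parallel_output internally as well, which produces the duplicate.)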


corkiyao commented Feb 1, 2025 via email

@Hakur0uken

The original model carries this stray parameter; deleting it fixes the problem.

Yes, exactly, you're right: the code passes parallel_output more than once in several places; deleting the duplicates fixes it.
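
For anyone else landing here, a sketch of that edit, reconstructed from the traceback above (verify against your local copy; the surrounding signature of EVAViT.__init__ is an assumption):

    # multi_modal/model/blip2.py, line 21, currently reads roughly:
    #   super().__init__(args, transformer=transformer,
    #                    parallel_output=parallel_output, **kwargs)
    # With newer sat versions the keyword seems to also be forwarded
    # internally on the way to BaseTransformer, so it arrives twice.
    # Dropping the explicit copy removes the collision:
    super().__init__(args, transformer=transformer, **kwargs)

Alternatively, pinning SwissArmyTransformer==0.3.6 as suggested above avoids touching the code at all.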
