Hello, I encountered the following error when trying to load the model for inference: `TypeError: sat.model.transformer.BaseTransformer() got multiple values for keyword argument 'parallel_output'`.
I launched it from the command line with `python cli_demo.py --from_pretrained /root/autodl-tmp/transgpt-mm-v1/1 --prompt_zh 图中的标志表示什么含义?` (the Chinese prompt asks "What does the sign in the image mean?").
The following is my file structure:
1
|-- .gitattributes
|-- latest
|-- model_config.json
|-- mp_rank_00_model_states.pt
`-- mp_rank_00_model_states.pt.lock
I modified the `"THUDM/chatglm-6b"` line to: `tokenizer = AutoTokenizer.from_pretrained(args.from_pretrained, trust_remote_code=True)`.
Could you please provide more information? Thanks!
The specific error information is:
[2024-07-25 23:05:06,615] [INFO] building FineTuneVisualGLMModel model ...
[2024-07-25 23:05:06,619] [INFO] [RANK 0] > initializing model parallel with size 1
[2024-07-25 23:05:06,620] [INFO] [RANK 0] You didn't pass in LOCAL_WORLD_SIZE environment variable. We use the guessed LOCAL_WORLD_SIZE=1. If this is wrong, please pass the LOCAL_WORLD_SIZE manually.
[2024-07-25 23:05:06,620] [INFO] [RANK 0] You are using model-only mode.
For torch.distributed users or loading model parallel models, set environment variables RANK, WORLD_SIZE and LOCAL_RANK.
[rank0]: Traceback (most recent call last):
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/cli_demo.py", line 105, in <module>
[rank0]: main()
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/cli_demo.py", line 30, in main
[rank0]: model, model_args = AutoModel.from_pretrained(
[rank0]: File "/root/miniconda3/envs/transgpt/lib/python3.10/site-packages/sat/model/base_model.py", line 342, in from_pretrained
[rank0]: return cls.from_pretrained_base(name, args=args, home_path=home_path, url=url, prefix=prefix, build_only=build_only, overwrite_args=overwrite_args, **kwargs)
[rank0]: File "/root/miniconda3/envs/transgpt/lib/python3.10/site-packages/sat/model/base_model.py", line 334, in from_pretrained_base
[rank0]: model = get_model(args, model_cls, **kwargs)
[rank0]: File "/root/miniconda3/envs/transgpt/lib/python3.10/site-packages/sat/model/base_model.py", line 420, in get_model
[rank0]: model = model_cls(args, params_dtype=params_dtype, **kwargs)
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/finetune_visualglm.py", line 14, in __init__
[rank0]: super().__init__(args, transformer=transformer, **kw_args)
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/model/visualglm.py", line 34, in __init__
[rank0]: self.add_mixin("eva", ImageMixin(args))
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/model/visualglm.py", line 18, in __init__
[rank0]: self.model = BLIP2(args.eva_args, args.qformer_args)
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/model/blip2.py", line 56, in __init__
[rank0]: self.vit = EVAViT(EVAViT.get_args(**eva_args))
[rank0]: File "/root/autodl-tmp/TransGPT-main/multi_modal/model/blip2.py", line 21, in __init__
[rank0]: super().__init__(args, transformer=transformer, parallel_output=parallel_output, **kwargs)
[rank0]: File "/root/miniconda3/envs/transgpt/lib/python3.10/site-packages/sat/model/official/vit_model.py", line 111, in __init__
[rank0]: super().__init__(args, transformer=transformer, **kwargs)
[rank0]: File "/root/miniconda3/envs/transgpt/lib/python3.10/site-packages/sat/model/base_model.py", line 93, in __init__
[rank0]: self.transformer = BaseTransformer(
[rank0]: TypeError: sat.model.transformer.BaseTransformer() got multiple values for keyword argument 'parallel_output'
[rank0]:[W725 23:05:10.281982969 ProcessGroupNCCL.cpp:1168] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
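For context on the error itself: Python raises this `TypeError` whenever the same keyword reaches a callable both explicitly and inside forwarded `**kwargs`. A minimal sketch of the pattern visible at `blip2.py` line 21 (stand-in names, not sat's actual code):

```python
def base_transformer(*, parallel_output=False, **kwargs):
    # Stand-in for sat's BaseTransformer constructor.
    return parallel_output

def evavit_init(**kwargs):
    # Mirrors blip2.py line 21: parallel_output is passed explicitly
    # while the same key is still present in the forwarded kwargs.
    return base_transformer(parallel_output=True, **kwargs)

try:
    evavit_init(parallel_output=False)
except TypeError as exc:
    print(exc)  # ... got multiple values for keyword argument 'parallel_output'
```

This suggests a newer sat release started putting `parallel_output` into the kwargs that `EVAViT` already passes explicitly, which is consistent with the downgrade advice below.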
---Original---
Date: Fri, Jan 31, 2025 10:18 AM
Subject: Re: [DUOMO/TransGPT] Inference problem: TypeError: sat.model.transformer.BaseTransformer() got multiple values for keyword argument 'parallel_output' (Issue #26)
Has this been resolved? I ran into the same error, and I'm already on the latest SwissArmyTransformer version.
Downgrade sat with `pip install SwissArmyTransformer==0.3.6`.
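Since the fix is version-sensitive, a small stdlib-only guard (a hypothetical helper, not part of TransGPT) could surface the mismatch at startup instead of deep inside the model build:

```python
from importlib.metadata import PackageNotFoundError, version

def check_version(package="SwissArmyTransformer", required="0.3.6"):
    """Return the installed version of `package`, warning if it differs
    from the version reported to work with this checkpoint."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        # Fail fast with an actionable message instead of a later ImportError.
        raise SystemExit(f"{package} is not installed; "
                         f"run: pip install {package}=={required}")
    if installed != required:
        print(f"Warning: {package} {installed} found; {required} is the "
              "version reported to work.")
    return installed
```

Calling `check_version()` at the top of `cli_demo.py` would flag an incompatible sat install before the traceback above is ever reached.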