(MagicQuill) F:\TOOL\MagicQuill>python gradio_run.py
Total VRAM 4096 MB, total RAM 16234 MB
pytorch version: 2.1.2+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1050 Ti : native
Using pytorch cross attention
['F:\TOOL\MagicQuill', 'C:\Users\Fanny Firman Syah\.conda\envs\MagicQuill\python310.zip', 'C:\Users\Fanny Firman Syah\.conda\envs\MagicQuill\DLLs', 'C:\Users\Fanny Firman Syah\.conda\envs\MagicQuill\lib', 'C:\Users\Fanny Firman Syah\.conda\envs\MagicQuill', 'C:\Users\Fanny Firman Syah\.conda\envs\MagicQuill\lib\site-packages', 'editable.llava-1.2.2.post1.finder.path_hook', 'F:\TOOL\MagicQuill\MagicQuill']
Traceback (most recent call last):
File "C:\Users\Fanny Firman Syah.conda\envs\MagicQuill\lib\site-packages\transformers\utils\hub.py", line 385, in cached_file
resolved_file = hf_hub_download(
File "C:\Users\Fanny Firman Syah.conda\envs\MagicQuill\lib\site-packages\huggingface_hub\utils_validators.py", line 106, in inner_fn
validate_repo_id(arg_value)
File "C:\Users\Fanny Firman Syah.conda\envs\MagicQuill\lib\site-packages\huggingface_hub\utils_validators.py", line 160, in validate_repo_id
raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must use alphanumeric chars or '-', '', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'F:\TOOL\MagicQuill\models\llava-v1.5-7b-finetune-clean'.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "F:\TOOL\MagicQuill\gradio_run.py", line 24, in
llavaModel = LLaVAModel()
File "F:\TOOL\MagicQuill\MagicQuill\llava_new.py", line 26, in init
self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
File "F:\TOOL\MagicQuill\MagicQuill\LLaVA\llava\model\builder.py", line 116, in load_pretrained_model
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
File "C:\Users\Fanny Firman Syah.conda\envs\MagicQuill\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 758, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "C:\Users\Fanny Firman Syah.conda\envs\MagicQuill\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 590, in get_tokenizer_config
resolved_config_file = cached_file(
File "C:\Users\Fanny Firman Syah.conda\envs\MagicQuill\lib\site-packages\transformers\utils\hub.py", line 450, in cached_file
raise EnvironmentError(
OSError: Incorrect path_or_model_id: 'F:\TOOL\MagicQuill\models\llava-v1.5-7b-finetune-clean'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
Hi, check this issue #54. A screenshot of your folder at F:\TOOL\MagicQuill\models\llava-v1.5-7b-finetune-clean would help us find the problem.
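In the meantime, a quick self-check you can run yourself: transformers only treats that string as a local folder if the directory actually exists on disk; otherwise it falls through to the Hub, and a Windows path can never pass repo-id validation, which is exactly the HFValidationError above. Below is a minimal diagnostic sketch in plain Python; the file names it checks for are the standard Hugging Face checkpoint layout, assumed here for this LLaVA checkpoint.

import os

# Path copied from the traceback above.
model_path = r"F:\TOOL\MagicQuill\models\llava-v1.5-7b-finetune-clean"

# If this prints False, AutoTokenizer.from_pretrained() treats the string
# as a Hub repo id, and a Windows path fails repo-id validation.
print("folder exists:", os.path.isdir(model_path))

if os.path.isdir(model_path):
    print("contents:", sorted(os.listdir(model_path)))
    # Tokenizer loading reads tokenizer_config.json from this folder;
    # config.json is needed for the model itself. (Standard HF layout,
    # assumed here; your download may name the weight files differently.)
    for name in ("config.json", "tokenizer_config.json", "tokenizer.model"):
        print(name, "present:", os.path.isfile(os.path.join(model_path, name)))

If the folder does not exist or is empty, the checkpoint most likely was not downloaded to the location the code expects, so re-downloading it into F:\TOOL\MagicQuill\models should resolve the error.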
I have the same error as above (identical traceback). How do I solve this?