Can't run starchat: fails with AttributeError: module 'global_vars' has no attribute 'gen_config'
#75
Comments
Wiped my conda env and tried to install another model (alpacoom-7b); I'm getting the same error.
There might be some other error before this one. Maybe the model could not be loaded because of memory limits. Could you check the error log?
Here is part of the error detail before the error. I have made a few changes to the file to run it in Google Colab, but the error is the same as reported earlier: File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2969, in _load_pretrained_model
What kind of changes have you made?
Also, note that I make updates very frequently, so please
I have used Google Colab so
I would not recommend using Colab, since the connection is not stable. This error is caused by the unstable connection, which eventually gets terminated (I don't know why).
It works with t5-vicuna, though.
Yeah, since t5-vicuna is the smallest model, it doesn't take too much time to load up.
I thought as much, so I tried Vicuna, it being a smaller model. Thanks for your response!
T5-Vicuna is a 3B model, hence :)
I got the same error when running on Gitpod (the 16 GB container), trying to run in CPU mode; the only changes: using
In my case, download_complete was receiving model_name and model_base with HTML tags, so I cleaned them up with the following:

```python
import re

def clean_up(model_base):
    # Pull an "org/model" identifier out of a label like "Base: org/model";
    # if no such pattern is found, return the string unchanged.
    pattern = r":\s*(\w+/[\w-]+)"
    match = re.search(pattern, model_base)
    return match.group(1) if match else model_base

print(f"model_name: {model_name}")
print(f"model_base: {model_base}")

# Strip the leftover <h2> tags from the model name and clean the rest.
model_name = model_name.replace("<h2>", "").replace("</h2>", "").strip()
model_base = clean_up(model_base)
model_ckpt = clean_up(model_ckpt)
model_gptq = clean_up(model_gptq)
```
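To exercise the clean_up idea in isolation, here is a minimal, self-contained sketch; the input strings are hypothetical stand-ins for the tag-polluted values described above:

```python
import re

def clean_up(model_base):
    # Extract an "org/model" identifier that follows a colon,
    # e.g. "Base: someorg/some-model" -> "someorg/some-model".
    pattern = r":\s*(\w+/[\w-]+)"
    match = re.search(pattern, model_base)
    return match.group(1) if match else model_base

# Hypothetical examples:
print(clean_up("Base: someorg/some-model"))  # -> someorg/some-model
print(clean_up("someorg/some-model"))        # unchanged: no colon prefix
```

Note that the regex only fires when a colon precedes the identifier, so already-clean values pass through untouched.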
Trying to run starchat and getting an error. The model downloaded, but when I click "Confirm" I just get an error.
Update: Also getting this error when trying to download models. Might be an env issue, going to nuke my conda env and try again.