Inference in Colab fails #13
Comments
Fixed -- retry.
The issue is in the
After that there are new issues:
And using deepspeed (replacing python with deepspeed and adding the
) won't help:
The problem is that
Edit: after solving the local rank issue and the fp16 attention issue (rough fixes: added a dummy local_rank parser to generate.py, and in attention.py manually converted to fp16 and back to the original x.dtype), a new issue arises:
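The two rough fixes described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual patch: the argument names besides `--local_rank` and the `sparse_attention` call are placeholders, and it has not been run against the real generate.py or attention.py.

```python
import argparse

# Sketch of the "dummy local_rank parser" workaround: the deepspeed launcher
# invokes the script with --local_rank=N, which the stock argument parser
# rejects. Accepting (and ignoring) the flag lets the same script run under
# both `python` and `deepspeed`.
parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0,
                    help='dummy argument consumed when launched via deepspeed')
args, _unknown = parser.parse_known_args(['--local_rank', '0'])

# The attention.py half of the fix is the same idea as this (pseudocode in
# comments, since it depends on the repo's internals):
#   orig_dtype = x.dtype
#   x = x.half()              # sparse-attention kernels expect fp16
#   out = sparse_attention(x)
#   out = out.to(orig_dtype)  # convert back so the rest of the model is unchanged
print(args.local_rank)  # → 0
```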
@johnpaulbin Sorry, not trying to be a pest; please advise on this issue.
Hi, I think this issue is due to this line in the second code cell in the Colab notebook, which fails (silently, as for some reason the output is cleared later in the cell!):
It seems you need to specify the full path to the output file for wget, like this:
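A minimal sketch of such a wget invocation, assuming the Dropbox checkpoint URL quoted later in this thread; the destination path is illustrative, not taken from the notebook:

```shell
# Without an explicit output path, wget names the file after the URL's query
# string ("...?dl=1"), so later cells cannot find the checkpoint.
# -O pins the download to a known location.
wget "https://www.dropbox.com/s/8mmgnromwoilpfm/16L_64HD_8H_512I_128T_cc12m_cc3m_3E.pt?dl=1" \
  -O /content/16L_64HD_8H_512I_128T_cc12m_cc3m_3E.pt
```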
I don't know who has access to that notebook, but if it could be updated that would be great.
Sure, that will solve this issue, but there are new issues after that; I've mentioned those above. It is a matmul import error from deepspeed.
@johnpaulbin @PrithivirajDamodaran can you please share how this issue was resolved? I'm facing it on my GPU.
On the DeepSpeed Sparse Attention doc page, there's this:
I have access to a V100 and ran the notebook on it (after adding the fixes), but I encountered the same issue. I ran it on CUDA 11.4 rather than 11.1, so that may be an issue; Colab is on 11.1. Is it possible to run the notebook on an older version of DeepSpeed?
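If downgrading is attempted, pinning DeepSpeed to a release contemporary with the notebook might look like the following. The version numbers are a guess and have not been verified against this notebook:

```shell
# Unverified sketch: install an older DeepSpeed together with the triton
# release its sparse-attention kernels were pinned to at the time.
pip install "deepspeed==0.3.16" "triton==0.2.3"
```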
Full message follows (I added some print statements to debug and removed clear_output). Please advise:
chosen_model: https://www.dropbox.com/s/8mmgnromwoilpfm/16L_64HD_8H_512I_128T_cc12m_cc3m_3E.pt?dl=1
folder_ /content/outputs/Cucumber_on_a_brown_wooden_chair/
Traceback (most recent call last):
  File "/content/dalle-pytorch-pretrained/DALLE-pytorch/generate.py", line 18, in <module>
    from dalle_pytorch import DiscreteVAE, OpenAIDiscreteVAE, VQGanVAE, DALLE
  File "/content/dalle-pytorch-pretrained/DALLE-pytorch/dalle_pytorch/__init__.py", line 1, in <module>
    from dalle_pytorch.dalle_pytorch import DALLE, CLIP, DiscreteVAE
  File "/content/dalle-pytorch-pretrained/DALLE-pytorch/dalle_pytorch/dalle_pytorch.py", line 11, in <module>
    from dalle_pytorch.vae import OpenAIDiscreteVAE, VQGanVAE
  File "/content/dalle-pytorch-pretrained/DALLE-pytorch/dalle_pytorch/vae.py", line 14, in <module>
    from taming.models.vqgan import VQModel, GumbelVQ
ImportError: cannot import name 'GumbelVQ' from 'taming.models.vqgan' (/usr/local/lib/python3.7/dist-packages/taming/models/vqgan.py)
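This ImportError usually means the installed taming-transformers package predates the GumbelVQ class. One possible fix, assuming a Colab-style environment and not verified against this notebook, is to replace the pip release with the current repo version:

```shell
# The PyPI release of taming-transformers lacks GumbelVQ; the class exists in
# the upstream repository, so install directly from GitHub instead.
pip uninstall -y taming-transformers
pip install git+https://github.com/CompVis/taming-transformers.git
```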