RuntimeError: Expected dim to be between 0 and 2 but got -1 #2
Hi @dbl001, thanks for reaching out. I think this comes from batch decoding. I got that code from huggingface/transformers#21080. For cumsum, the input should in theory be a PyTorch tensor. I don't have an M-series processor with me, so I cannot try it myself. QQ: could you try with another PyTorch version? -- Andrea
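For context, the batch-decoding code referenced above derives position ids from the attention mask with a cumulative sum along the last dimension. This is a hedged sketch of that pattern (not the exact code from the linked PR); the `dim=-1` call is the one that triggers the reported RuntimeError on the MPS backend:

```python
import torch

# Left-padded batch: 0 marks padding, 1 marks real tokens.
attention_mask = torch.tensor([[0, 0, 1, 1],
                               [1, 1, 1, 1]])

# Count real tokens along the sequence (last) dimension; dim=-1
# is where the MPS backend raises "Expected dim to be between 0 and 2".
position_ids = attention_mask.cumsum(dim=-1) - 1
position_ids.masked_fill_(attention_mask == 0, 0)
print(position_ids)
# tensor([[0, 0, 0, 1],
#         [0, 1, 2, 3]])
```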
The PyTorch tensor.cumsum() does not take -1 as an argument.
You should be able to verify this without 'mps' or an M1.
Actually, torch.cumsum(-1) works with device='cpu'.
This may be an 'mps' bug.
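A minimal CPU check confirms this: negative dims are valid for torch.cumsum, and PyTorch normalizes dim=-1 to the last dimension.

```python
import torch

x = torch.arange(6, dtype=torch.float32).reshape(2, 3)
# dim=-1 is normalized to x.dim() - 1 == 1, i.e. the last axis
out = x.cumsum(dim=-1)
print(out)
# tensor([[ 0.,  1.,  3.],
#         [ 3.,  7., 12.]])
```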
Let me know if you can figure it out. I'll close the issue.
I am trying to run FSB on PyTorch Version: 2.0.0a0+gitf8b2879 on MacOS Ventura 13.1 using the 'MPS' backend (not Cuda).
I'm getting an exception here:
Here's my command:
Here's the stack trace:
The attention mask is:
Correct me if I'm mistaken, but isn't cumsum(-1) for a numpy array (not a tensor)?
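On that question: both NumPy and PyTorch accept a negative axis/dim here, so cumsum(-1) is not NumPy-specific. A quick illustration:

```python
import numpy as np
import torch

a = np.array([[1, 2, 3]])
t = torch.tensor([[1, 2, 3]])

# NumPy uses axis=, PyTorch uses dim=; both treat -1 as the last axis.
print(np.cumsum(a, axis=-1))   # [[1 3 6]]
print(t.cumsum(dim=-1))        # tensor([[1, 3, 6]])
```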