Some ops that take f16 tensor inputs require GPU to run in e2e tests #1669
Copying over @powderluv's message for reference:
> @joker-eph, do you know who we can talk to to get access to a GPU in the CI?

> I think we can get a VM with GPUs on Google Cloud, but the LLVM org has to add a self-hosted runner.

> The gpu builder is online as
There are some ops in PyTorch that don't have a CPU implementation for `f16` inputs. For example:

Because currently the CIs only use CPUs, there is no way of testing `f16` support of these ops e2e. We should have a CI that has access to a GPU, and add support to the e2e testing library for specifying the device, in order to ensure the correctness of the `f16` implementations.
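The proposal above (letting e2e tests declare a target device, and skipping GPU-only `f16` tests on CPU-only runners) could be sketched as a small device-aware test registry. This is a hypothetical illustration; `register_test`, `run_all`, and the test names are assumptions, not the project's actual e2e harness API.

```python
# Hypothetical sketch: a device-parametrized e2e test registry.
# On a GPU CI runner this flag would be detected at startup
# (e.g. by probing for a CUDA device); hardcoded here for illustration.
GPU_AVAILABLE = False

_tests = []

def register_test(fn=None, *, device="cpu"):
    """Register an e2e test to run on the given device (hypothetical API)."""
    def wrap(f):
        _tests.append((f, device))
        return f
    return wrap(fn) if fn is not None else wrap

@register_test
def matmul_f32_cpu():
    pass  # would compile and run an f32 op on CPU

@register_test(device="cuda")
def matmul_f16_gpu():
    pass  # would exercise an f16 op that only has a GPU kernel

def run_all():
    """Run registered tests, skipping GPU-only ones on CPU-only runners."""
    return {
        fn.__name__: ("skipped (no GPU)"
                      if device == "cuda" and not GPU_AVAILABLE
                      else f"ran on {device}")
        for fn, device in _tests
    }
```

With this shape, a CPU-only CI still gets a green run (GPU-only `f16` tests are reported as skipped rather than failing), while a GPU runner exercises the full suite.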