
AttributeError: module 'cuequivariance_ops_torch._ext' has no attribute 'fused_tensor_product_fwd_fp32_fp64_fp32_fp32_fp32' #89

Open
chiang-yuan opened this issue Mar 2, 2025 · 3 comments

Comments

@chiang-yuan

chiang-yuan commented Mar 2, 2025

Hi @mariogeiger and team,

Thank you for developing this package. It is very useful and well-documented. I found that some fused kernels are not implemented and wonder whether implementing them is planned; if not, is there a fallback strategy that does not require uninstalling cuequivariance_ops_torch?

An example of the error I get is:

AttributeError: module 'cuequivariance_ops_torch._ext' has no attribute 'fused_tensor_product_fwd_fp32_fp64_fp32_fp32_fp32'

Below is my version info:

torch                                2.6.0
cuequivariance                       0.2.0
cuequivariance-ops-torch-cu12        0.2.0
cuequivariance-torch                 0.2.0

nvidia-cublas-cu12                   12.4.5.8
nvidia-cuda-cupti-cu12               12.4.127
nvidia-cuda-nvrtc-cu12               12.4.127
nvidia-cuda-runtime-cu12             12.4.127
nvidia-cudnn-cu12                    9.1.0.70
nvidia-cufft-cu12                    11.2.1.3
nvidia-curand-cu12                   10.3.5.147
nvidia-cusolver-cu12                 11.6.1.9
nvidia-cusparse-cu12                 12.3.1.170
nvidia-cusparselt-cu12               0.6.2
nvidia-nccl-cu12                     2.21.5
nvidia-nvjitlink-cu12                12.4.127
nvidia-nvtx-cu12                     12.4.127
@chiang-yuan
Author

Based on the name of this error, I can try to identify where float64 is used. The traceback is not very informative, but I will try.

@mariogeiger
Collaborator

Hi @chiang-yuan,

It looks like not all of your inputs have the same data type. It should work if all of your inputs are fp32 or all are fp64.
We are progressively adding support for arbitrary combinations of input data types.
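In the meantime, the workaround is to cast everything to one dtype before calling the layer. A minimal sketch, assuming the inputs come in with mixed dtypes; the cast helper is a hypothetical illustration and the commented-out `tp(...)` call stands in for whatever cuequivariance-torch module you built, not a specific API:

```python
import torch

def cast_to_common_dtype(*tensors, dtype=torch.float32):
    # Hypothetical helper: the fused kernels are only built for uniform dtype
    # combinations, so cast every operand to the same floating dtype
    # (all fp32 or all fp64) before the tensor product is evaluated.
    return tuple(t.to(dtype) for t in tensors)

# x1 was accidentally created as float64, x2 as float32:
x1 = torch.randn(16, 8, dtype=torch.float64)
x2 = torch.randn(16, 8, dtype=torch.float32)

x1, x2 = cast_to_common_dtype(x1, x2, dtype=torch.float32)
assert x1.dtype == x2.dtype == torch.float32
# out = tp(x1, x2)  # tp = your cuequivariance_torch tensor-product module
```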

@chiang-yuan
Author

Thanks @mariogeiger. Yes, I realized that and identified one input that was initialized as float64. It took some time to track down, because torch.set_default_dtype() does not seem to propagate to all tensors.
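For anyone hitting the same thing, here is a small sketch of where the default dtype does and does not apply: torch.set_default_dtype() only governs tensors whose floating dtype is inferred from Python floats, while NumPy conversions and explicit dtype arguments keep their own dtype.

```python
import numpy as np
import torch

torch.set_default_dtype(torch.float32)

a = torch.tensor([1.0, 2.0])                # float32: default dtype applies
b = torch.from_numpy(np.array([1.0, 2.0]))  # float64: NumPy dtype is preserved
c = torch.zeros(3, dtype=torch.float64)     # float64: explicit dtype wins

print(a.dtype, b.dtype, c.dtype)  # torch.float32 torch.float64 torch.float64
```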

I got another error when computing gradients with autograd. I believe this one is not a dtype issue:

RuntimeError: Batching rule not implemented for cuequivariance_ops_torch::fused_tensor_product_bwd_primitive. We could not generate a fallback.
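For what it's worth, that message usually comes from torch.func.vmap (or the older functorch) being applied over a custom op that has no registered batching rule. A sketch of a (slower) workaround under that assumption, with compute_energy as a hypothetical stand-in for the real per-sample function:

```python
import torch

# Hypothetical stand-in for the function whose per-sample gradient is needed.
def compute_energy(x):
    return x.pow(2).sum()

batch = torch.randn(4, 8)

# If the error comes from something like
#   torch.func.vmap(torch.func.grad(compute_energy))(batch)
# an explicit loop over the batch avoids the missing batching rule for the
# custom backward op, at the cost of speed:
grads = torch.stack([torch.func.grad(compute_energy)(x) for x in batch])
```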
