Hi @mariogeiger and team,
Thank you for developing this package. It is very useful and well documented. I found that some fused kernels are not implemented and wonder whether support for them is planned; if not, is there a fallback strategy that does not require uninstalling cuequivariance_ops_torch?
The example error I got is
AttributeError: module 'cuequivariance_ops_torch._ext' has no attribute 'fused_tensor_product_fwd_fp32_fp64_fp32_fp32_fp32'
Below is the version info
It looks like not all of your inputs have the same data type. It should work if all your inputs are fp32 or all fp64.
We are progressively moving towards supporting any combination of input data types.
Thanks @mariogeiger. Yes, I realized that and identified one tensor that was initialized as float64. It took some time to find, as torch.set_default_dtype() does not seem to propagate to all tensors.
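For reference, here is a minimal sketch (not specific to cuequivariance; the nn.Linear model and shapes are placeholders) of the kind of audit that helped me locate the stray float64 tensor: list every parameter and buffer whose dtype differs from float32, then cast the whole module and its inputs to a single dtype so the fused kernel sees a uniform fp32 signature.

import torch
import torch.nn as nn

# Placeholder module standing in for the model that wraps the
# cuequivariance tensor-product layer (the real module is not shown here).
model = nn.Linear(8, 8)

# Simulate a stray float64 parameter like the one described above.
model.weight.data = model.weight.data.double()

# Audit: report every parameter/buffer that is not float32.
for name, t in list(model.named_parameters()) + list(model.named_buffers()):
    if t.dtype != torch.float32:
        print(f"{name}: {t.dtype}")

# Fix: cast the whole module and the inputs to one dtype.
model = model.to(torch.float32)
x = torch.randn(4, 8, dtype=torch.float32)
out = model(x)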
I got another error when computing the autograd. This one, I believe, is not a dtype issue:
RuntimeError: Batching rule not implemented for cuequivariance_ops_torch::fused_tensor_product_bwd_primitive. We could not generate a fallback.
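Assuming this comes from wrapping the layer in torch.func.vmap (or functorch), e.g. for per-sample gradients, a possible interim workaround is to fall back to an explicit Python loop, which never asks vmap to batch through the custom backward op. The function f and the shapes below are placeholders, not the actual model.

import torch
from torch.func import grad, vmap

# Placeholder for a function whose backward would hit the fused
# tensor-product backward op; a plain differentiable function is used here.
def f(x):
    return (x ** 2).sum()

xs = torch.randn(16, 8)

# This pattern triggers "Batching rule not implemented" when f internally
# calls a custom op without a vmap rule:
# per_sample_grads = vmap(grad(f))(xs)

# Fallback: compute the same per-sample gradients with an explicit loop.
# Slower, but it avoids the missing batching rule entirely.
per_sample_grads = torch.stack([grad(f)(x) for x in xs])
print(per_sample_grads.shape)  # torch.Size([16, 8])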