Currently, fp16 models require the CUDA build of PyTorch, since some fp16 ops are not supported on CPU; as a result, the model cannot be traced on CPU.

I'd like to write a pass that converts fp32 weights to fp16 and propagates the types forward. Would you happen to have any suggestions? @silvasean @ramiro050
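For context, the weight conversion being proposed can be sketched at the PyTorch eager level (a minimal illustration only, not the proposed compiler pass; the module and shapes here are arbitrary examples):

```python
import torch

# Minimal sketch: cast a model's fp32 parameters to fp16.
# Illustrative only -- the issue proposes doing this as a compiler pass
# that also propagates the new dtypes through the traced program.
model = torch.nn.Linear(4, 2)   # arbitrary example module (fp32 by default)
model_fp16 = model.half()       # casts parameters and buffers to torch.float16

print(all(p.dtype == torch.float16 for p in model_fp16.parameters()))  # True
```

The catch, as noted above, is that actually running or tracing such a model on CPU can fail because some fp16 kernels are CUDA-only.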
We don't want such a pass upstream. As discussed in the developer hour today, this is probably a feature request for PyTorch to support program capture for fp16 operators without CUDA.